
Quality Diversity


[In Progress] While diversity alone can help discover the optimal solution of a deceptive problem, for particular search spaces this is not enough. When the search space is boundless, or when the diversity measure is entirely uncorrelated with the desired behaviours, some push towards quality is needed. Several solutions have been proposed to address this issue, such as combining novelty with objectives (Mouret, 2011; Gomes et al., 2015), the minimal criteria novelty search algorithm by Lehman and Stanley (2010), or the constrained novelty search algorithm by Liapis et al. (2015). However, a more aligned combination of divergence and convergence might enable evolutionary search to discover high-performing and diverse solutions in the same run. In fact, the solutions mentioned above combine novelty with a global competition on the objective, which may conflict with the motivation of divergent search and make convergence preeminent again.
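As a rough illustration of the "novelty combined with objectives" idea, the sketch below scores a candidate by a convex combination of its fitness and its k-nearest-neighbour novelty. It is a minimal sketch only: it assumes fitness and novelty are already normalised to comparable ranges, and the function names, the weight `rho`, and the neighbourhood size are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def novelty(candidate_behaviour, archive_behaviours, k=15):
    """Average behavioural distance to the k nearest behaviours in the archive."""
    dists = np.linalg.norm(np.asarray(archive_behaviours) - candidate_behaviour, axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def combined_score(fitness, novelty_score, rho=0.5):
    """Convex combination of (normalised) fitness and novelty; rho sets the push towards diversity."""
    return (1.0 - rho) * fitness + rho * novelty_score
```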

Nature can be a source of inspiration to solve this problem. Natural evolution has discovered an impressive number of diverse solutions (i.e., organisms) that can adapt to different conditions and constraints. Take, for example, the problem of locomotion: nature has evolved different ways to ambulate, such as crawling, swimming and walking, and organisms with different numbers and configurations of legs. Each such configuration can lead to an effective yet different solution to movement. An interesting parallel can be traced back to the rise of dinosaurs. Recent theories (Langer et al., 2017) highlight that at the beginning of the Triassic period (the early stage of their evolutionary history), dinosaurs were not as widespread as previously believed, but were overshadowed by other species that were larger and more diversified. Dinosaurs persisted as a marginal group until the end of the Triassic period, when a random event caused the extinction of their direct competitors. From the Jurassic period onwards, dinosaurs began to spread and to outperform other species in terms of agility, size and strength. It therefore seems that protecting promising niches in the search space can lead to advantages in global performance, as they preserve interesting and potentially useful characteristics that enable further evolvability (Lehman and Stanley, 2011c).

Such an approach can lead to several advantages if transferred to an optimization scenario. Given a set of solutions to a problem, an algorithm can use a repertoire of alternatives in case the selected solution does not perform as expected, i.e., in case the simulated results do not transfer well to reality. This problem, known as the reality gap (Koos et al., 2013), can be addressed efficiently by keeping several good solutions in an archive and employing backup solutions when needed (Cully et al., 2015). In evolutionary robotics, an archive of diverse and good solutions makes it possible to evolve controllers able to solve multiple tasks, which is more efficient than evolving a separate controller for each task. A further advantage is that diversity and quality can influence each other synergistically towards an even better solution: rewarding diversity might help to find the necessary stepping stones towards higher-performing areas of the search space, as they circumvent potential deceptive traps of the fitness landscape.
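The "archive as repertoire of backups" idea can be sketched as below: try archived solutions in decreasing order of simulated fitness and keep the first one that performs acceptably on the real system. This is only a greedy illustration; the Intelligent Trial and Error approach of Cully et al. (2015) instead searches the archive with Bayesian optimisation, which is not shown here. The data layout and `evaluate_on_target` are assumptions for the sketch.

```python
def select_with_fallback(archive, evaluate_on_target, threshold):
    """Deploy archived solutions in decreasing order of simulated fitness and keep
    the first one whose real-world performance crosses the acceptance threshold."""
    # archive: iterable of (genome, sim_fitness) pairs
    for genome, sim_fitness in sorted(archive, key=lambda s: s[1], reverse=True):
        real_fitness = evaluate_on_target(genome)
        if real_fitness >= threshold:
            return genome, real_fitness
    return None, None  # no archived solution transfers well enough
```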

Inspired by these advantageous properties, Pugh et al. (2016) propose the quality diversity challenge. This family of algorithms searches for both quality and diversity at the same time, following the traditional approach within computational creativity of seeking outcomes characterized by both quality (or value) and novelty (Ritchie, 2007). Such evolutionary algorithms have been named quality diversity (QD) algorithms (Pugh et al., 2015, 2016) and aim to find a maximally diverse population of high-performing individuals. Examples of such algorithms include novelty search with local competition by Lehman and Stanley (2011b) and MAP-Elites by Mouret and Clune (2015) and Cully et al. (2015), as well as algorithms that constrain the feasible space of solutions (thereby forcing high-quality solutions) while searching for divergence, such as constrained novelty search by Liapis et al. (2015).
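To make the MAP-Elites idea concrete, here is a minimal sketch with a one-dimensional behaviour descriptor in [0, 1] (matching the one-dimensional space in the figure further below). The functions `random_genome`, `mutate` and `evaluate`, the number of bins, and the initialisation budget are placeholders, not part of the original formulation.

```python
import random

def map_elites(random_genome, mutate, evaluate, n_bins=50, iterations=10_000):
    """Minimal MAP-Elites loop: `evaluate(genome)` returns (fitness, descriptor in [0, 1])."""
    archive = {}  # bin index -> (genome, fitness): one elite per behavioural bin

    def try_insert(genome):
        fitness, descriptor = evaluate(genome)
        cell = min(int(descriptor * n_bins), n_bins - 1)
        # keep the genome only if its cell is empty or it beats the current elite
        if cell not in archive or fitness > archive[cell][1]:
            archive[cell] = (genome, fitness)

    for _ in range(100):            # random initialisation
        try_insert(random_genome())
    for _ in range(iterations):     # mutate elites drawn uniformly from the archive
        parent, _ = random.choice(list(archive.values()))
        try_insert(mutate(parent))
    return archive
```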

It is worth mentioning that this new paradigm stresses the importance of the diversity of the solutions, while performance becomes of secondary importance, in contrast with the traditional formulation of EC (Pugh et al., 2016). While many methods have been proposed in the literature to search for good and diverse solutions, they usually focus first on the performance of the solutions rather than on their diversity. For instance, in multi-modal function optimization (MMFO) (Goldberg et al., 1987; Mahfoud, 1995; Preuss, 2015), the objective is to find the local optima of a function. MMFO accomplishes this task by spreading the solutions over the search space, e.g., through clustering, but the diversity is usually maintained in the genotypic space. This has the disadvantage that potentially good solutions cannot be reached if the fitness function is mono-modal. Behavioural diversity is in general different from genotypic diversity, especially when there is an indirect encoding between the genotype and the phenotype (Stanley and Miikkulainen, 2002); one such phenomenon is genotypic aliasing, where different genotypes behave in the same way in the phenotypic space.
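As a concrete instance of diversity maintenance in genotypic space, the sketch below implements fitness sharing in the spirit of Goldberg et al. (1987): each individual's fitness is divided by a niche count computed from pairwise genotypic distances, so crowded regions of genotype space are penalised. The sharing radius `sigma_share` and the linear sharing function are illustrative choices.

```python
import numpy as np

def shared_fitness(fitnesses, genomes, sigma_share=0.1):
    """Fitness sharing: divide each fitness by the individual's niche count,
    where niches are defined by pairwise distances in *genotypic* space."""
    genomes = np.asarray(genomes, dtype=float)
    dists = np.linalg.norm(genomes[:, None, :] - genomes[None, :, :], axis=-1)
    sharing = np.where(dists < sigma_share, 1.0 - dists / sigma_share, 0.0)
    niche_counts = sharing.sum(axis=1)          # each row includes the individual itself
    return np.asarray(fitnesses, dtype=float) / niche_counts
```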

Another possible approach to the quality diversity challenge is multi-objective evolutionary algorithms (MOEAs) (Deb, 2001), explained earlier in Section 2.2.1. MOEAs try to optimize more than one objective and return a selection of high-performing solutions called the Pareto front. However, MOEAs are intrinsically convergent (Cully and Demiris, 2018), as their primary use is the optimization of multiple objectives, while divergence is used only as an aid to the exploration of the fronts (e.g., via the crowding distance). Admittedly, the quality diversity paradigm has been inspired by the idea of solving single-objective optimization problems via multi-objective algorithms. Such approaches have been successfully applied to different problems in the literature. For instance, Gong et al. (2015) propose using a MOEA to optimize two conflicting objectives, the reconstruction error and the sparsity of deep artificial neural networks. In Li et al. (2016), multi-objective self-paced learning decomposes a hard problem into simpler problems which can be optimized more easily. In Qian et al. (2015a), a multi-objective approach is proposed for ensemble learning to obtain the best performance with the fewest learners. In Qian et al. (2015b), feature selection is performed through evolutionary Pareto optimization to select the best possible subset of features for a deep neural network (feature learning). Finally, in Qian et al. (2017) MOEAs are used for influence maximization in a constrained scenario.
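For reference, the Pareto front mentioned above is simply the set of non-dominated solutions. A naive quadratic-time sketch is given below; it assumes all objectives are to be maximised and that `objectives` is a user-supplied function, and it ignores the selection machinery (e.g., NSGA-II's crowding distance) that an actual MOEA would add.

```python
def pareto_front(solutions, objectives):
    """Return the non-dominated solutions, assuming every objective is maximised.
    `objectives(s)` returns a tuple of objective values for solution `s`."""
    front = []
    for a in solutions:
        fa = objectives(a)
        dominated = any(
            all(x >= y for x, y in zip(objectives(b), fa))
            and any(x > y for x, y in zip(objectives(b), fa))
            for b in solutions
            if b is not a
        )
        if not dominated:
            front.append(a)
    return front
```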

A shift towards divergent algorithms is necessary. The argument is similar to the one used previously for divergent search: given a definition of the objective function, a bias is introduced in the optimization process that leads to the problems described in Section 2.3. Searching for diversity and then archiving the best solutions helps the search process to be independent of poorly designed fitness functions. However, the definition of the diversity measure can influence the performance of a QD algorithm as well. As shown in (Pugh et al., 2016; Preuss et al., 2014), depending on the diversity measure employed, specific algorithms can perform better or worse, based on the alignment between the selected behaviour characterization and the performance measure. While the diversity measure affects the performance of a QD algorithm, another critical aspect to take into consideration is how this diversity information is exploited: novelty search with local competition by Lehman and Stanley (2011b) uses the diversity measure defined in Eq. (2.3), whereas MAP-Elites by Mouret and Clune (2015) subdivides the behavioural space into bins of predefined size.
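The contrast between the two ways of exploiting diversity can be illustrated by the per-individual scores used in novelty search with local competition (the MAP-Elites binning is sketched earlier, after the paragraph introducing QD algorithms). The sketch below assumes the k-nearest-neighbour novelty of Eq. (2.3); the novelty archive and the multi-objective selection of the full algorithm are omitted, and the neighbourhood size is illustrative.

```python
import numpy as np

def nslc_scores(behaviours, fitnesses, k=15):
    """Per-individual novelty and local-competition scores.
    Novelty: mean behavioural distance to the k nearest neighbours.
    Local competition: fraction of those neighbours with lower fitness."""
    behaviours = np.asarray(behaviours, dtype=float)
    fitnesses = np.asarray(fitnesses, dtype=float)
    k = min(k, len(behaviours) - 1)
    novelty, local_comp = [], []
    for i in range(len(behaviours)):
        dists = np.linalg.norm(behaviours - behaviours[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the individual itself
        novelty.append(dists[neighbours].mean())
        local_comp.append(float(np.mean(fitnesses[neighbours] < fitnesses[i])))
    return novelty, local_comp
```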

Figure: A high-level diagram of the Quality Diversity paradigm. Given a behaviour characterization (or space descriptor), here depicted as a one-dimensional space, and the desired quality, the goal is to find a set of diverse and high-performing solutions (red points). Image inspired by Cully and Demiris (2018).
