
Niching in Genetic Algorithms

Updated on September 23, 2011

Niching is a term often used in the Evolutionary Algorithms literature, and its significance and implications may become clear only after the researcher has worked her way through some of that literature. My aim with this post is to informally define the term, hint at its meaning and significance, and, more importantly, consolidate the foundational literature in this area.

Since my niche is Genetic Algorithms (GAs), the discussion here will be limited to them. However, as the literature shows, niching methods originally proposed within the GA community have found applications in other Evolutionary Algorithms (EAs) as well. Niching methods extend canonical Genetic Algorithms to domains that require finding and maintaining multiple solutions. They are traditionally used where multiple solutions are desired, such as in a multi-modal optimization task, or where maintaining diversity helps reach a single final solution in a difficult optimization task. A niching method must be able to form multiple diverse solutions and preserve them for the entire duration of the GA run. Under the effect of niching, the population of solutions remains dynamically stable under selection pressure.

From [3]: “Niching involves the formation of distinct species exploiting different niches (resources) in the environment. It can promote cooperative populations that work together to solve a problem. Whereas the simple GA is purely competitive, with the best individuals quickly taking over the population, niched GAs converge to a population of diverse species (niches) that together cover a set of resources or rewards.” The resources or rewards, of course, vary from one application domain to another. In a multi-modal optimization task, for example, the resources would be the multiple optima.

Classification of Niching methods

In [1] and [2], Mahfoud suggests a classification based on the way multiple niches are found in a GA (note that this classification is independent of the number of physical processors being used):

  1. Spatial or Parallel Niching methods: methods in this category find and maintain multiple niches simultaneously within a single population. Examples are sharing, the crowding-function approach and the clearing method (a small sketch of sharing follows this list).
  2. Temporal or Sequential Niching methods: these methods find multiple niches iteratively, or temporally; the Sequential Niching method, for example, locates one niche after another over successive runs.
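
To make the parallel category concrete, here is a minimal sketch of fitness sharing in Python. This is my own illustration, not code from [1] or [2]; sigma_share and alpha are placeholder values of the kind typically used.

import math

def euclidean(a, b):
    """Distance between two real-coded individuals."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
    """Fitness sharing: divide each raw fitness by the niche count
    m_i = sum_j sh(d_ij), where sh(d) = 1 - (d / sigma_share)**alpha if
    d < sigma_share and 0 otherwise. Crowded peaks are penalised, so
    several niches can coexist in one population (maximization assumed)."""
    shared = []
    for xi, fi in zip(population, raw_fitness):
        niche_count = sum(
            1.0 - (euclidean(xi, xj) / sigma_share) ** alpha
            for xj in population
            if euclidean(xi, xj) < sigma_share
        )
        shared.append(fi / niche_count)  # niche_count >= 1: xi shares with itself
    return shared

# Example: three individuals crowd one niche, one sits alone in another.
print(shared_fitness([[0.10], [0.11], [0.12], [0.90]], [1.0, 1.0, 1.0, 1.0],
                     sigma_share=0.05))

Under sharing, selection then operates on the shared fitness values, which is what keeps the population spread across niches instead of collapsing onto the single best one.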

Frequently Asked Questions (answered in [1])

  • Why do niching?
  • Why maintain niches?
  • Why not locate multiple solutions individually, by iterating the GA?

Essence of Niching

To better understand the essence of and need for niching, here is a demo showing a function optimization task where no effort is made to maintain multiple solutions, and a second one where an explicit multiple-solution preserving mechanism is in place (for details refer here and here).

(Please note that the demo was created with an algorithm that does not use a niching mechanism; it nevertheless shows why niching is needed and what happens in the absence of a multiple-solution preserving mechanism such as niching.)

Without a multiple-solution preserving mechanism

With a multiple-solution preserving mechanism

Use of Niching in Maintaining diverse feasible solutions in constrained optimization

The idea of niching is also applicable to constrained optimization problems. In such problems, maintaining diverse feasible solutions is desirable so as to prevent the accumulation of solutions in only one part of the feasible space, especially in problems containing disconnected patches of feasible regions. Prof. Deb, in his constraint handling paper [5], suggests one such use of niching.
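
To make this concrete, the comparison rule at the heart of [5] is a feasibility-first binary tournament, roughly as sketched below (the function names are placeholders of my own, not the paper's notation, and this is a sketch rather than the paper's exact procedure):

def tournament_winner(a, b, fitness, violation):
    """Feasibility-first binary tournament for minimisation, in the spirit of [5]:
    1) a feasible solution beats an infeasible one,
    2) between two feasible solutions the better objective value wins,
    3) between two infeasible solutions the smaller total constraint
       violation wins.
    violation(x) is assumed to return 0 for feasible solutions."""
    va, vb = violation(a), violation(b)
    if va == 0 and vb > 0:
        return a
    if vb == 0 and va > 0:
        return b
    if va == 0 and vb == 0:
        return a if fitness(a) <= fitness(b) else b
    return a if va <= vb else b

One simple way to layer a niching flavour on top of this is to hold the tournament only when the two candidates lie within some normalized distance of each other in the variable space, so that feasible solutions sitting on different disconnected patches are not forced to compete and wipe each other out.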

References:

References [1] and [3] are the most exhaustive treatments of niching I have come across in the literature.

  1. Niching Methods for Genetic Algorithms
  2. A Comparison of Parallel and Sequential Niching Methods
  3. The Nature of Niching: Genetic Algorithms and the Evolution of Optimal, Cooperative Populations
  4. Niching methods, specifically in the context of multi-modal function optimization, are discussed in the book “Multi-objective Optimization using Evolutionary Algorithms”
  5. An Efficient Constraint Handling Method for Genetic Algorithms

GA Demo: Multiple global optima of the Vincent function

The Vincent function is defined as follows: f(\vec{x}) = -\sum_{i=1}^{n} \sin(10 \log(x_i)), with each x_i varying in [0.25, 10]. This is a plot of the function in one dimension:

Vincent Function - 1 Dimension

As you can see, there are 6 global minima. The function is separable, so in n dimensions it has 6^n global minima. For example, in 2D it has 36 minima:

Vincent Function - 2D
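
If you want to evaluate the function and locate the 1-D minima yourself, here is a quick sketch (my own illustration, not the demo code). It uses the fact that f(x) = -sin(10 log x) attains its minimum value of -1 exactly where 10 log x = π/2 + 2kπ.

import math

def vincent_1d(x):
    """1-D Vincent function as used in the post: f(x) = -sin(10 * log(x))."""
    return -math.sin(10.0 * math.log(x))

# Global minima satisfy 10 * log(x) = pi/2 + 2*k*pi, i.e. x = exp(pi/20 + k*pi/5).
minima = [math.exp(math.pi / 20.0 + k * math.pi / 5.0) for k in range(-10, 11)]
minima = [x for x in minima if 0.25 <= x <= 10.0]

print(len(minima), "global minima in [0.25, 10]:")
print([round(x, 4) for x in minima])   # six of them, each with f(x) = -1
# The n-dimensional function is a sum over independent coordinates,
# so the number of global minima grows as 6**n (36 in 2-D, as in the plot above).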

The job of a multi-modal optimizer is to find all the global optima of a multi-modal function. See this demo:

See the interesting bit? As the population of the GA evolves, it settles into the 36 minima, forming beautiful clusters around them.

The algorithm used here is based on our work: Deb, K. and Saha, A., “Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach” (accepted as a full paper at GECCO-2010). (Let me know if you want to take a look at the paper.)

You may also be interested in my past two demos:

Simulated Evolution And Learning (SEAL-2010)

Simulated Evolution and Learning SEAL-2010 (http://www.iitk.ac.in/kangal/seal10/) is the eighth biennial conference in the highly successful conference series that aims at exploring these two forms of adaptation and their roles and interactions in adaptive systems. Any paper involving evolution as a vehicle for adaptive and artificial problem-solving tasks, and any form of computational and machine learning procedure for developing and analyzing adaptive or artificial systems, will be of interest to this conference. Cross-fertilisation between evolutionary learning and other machine learning approaches, such as neural network learning, reinforcement learning, decision tree learning, fuzzy system learning, etc., is encouraged. The other major theme of the conference is optimization problem solving by evolutionary or hybrid evolutionary approaches. The topics of interest include, but are not limited to, the following: Evolutionary Learning; Evolutionary Optimization (single- and multi-objective); Hybrid Learning; Hybrid Optimization; Adaptive Systems; Theoretical Issues in Evolutionary Computation; Real-World Applications of Evolutionary Computation and Learning Techniques.

All accepted papers presented at the conference will be included in the conference proceedings, published as a volume of the Lecture Notes in Computer Science series, Springer. Selected best papers will be invited for further revision and extension, for possible publication in evolutionary computing related journals.

Deadlines:

05 July 2010: Submission of full papers (<= 10 pages, PDF only) (LaTeX template, Word template)
16 August 2010: Notification of acceptance
03 September 2010: Camera-ready copies of accepted papers
1-4 December 2010: Conference sessions (including tutorials and workshops)

For further details visit: http://www.iitk.ac.in/kangal/seal10/

GA Demo: Unimodal Function Optimization

Here is a demo of using a Genetic Algorithm for function optimization. Vital information:

  • Real coded Genetic Algorithm
  • Population size: 100, Generations: 100
  • Crossover: SBX Crossover
  • Mutation: Polynomial Mutation (a rough sketch of both operators follows this list)
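
For readers unfamiliar with the two variation operators named above, here is a rough per-variable sketch of both (my own illustration with typical distribution indices, not the code actually used in the demo; the bounded SBX variant used in practice adds a few more details):

import random

def sbx_crossover(p1, p2, eta_c=15.0):
    """Simulated Binary Crossover (basic, unbounded form) on one real variable.
    Larger eta_c keeps the children closer to their parents."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lower, upper, eta_m=20.0):
    """Polynomial mutation (basic form): perturb x by a polynomially
    distributed step scaled to the variable's range, then clip to the bounds."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(max(x + delta * (upper - lower), lower), upper)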

What do you see?

A. We start with an initial population that is randomly distributed throughout the function landscape. If you have keen eyes, like my friend Shivam, you will notice that the initial population is not so random. If you increase the population size, the initial population will cover almost all areas of the fitness landscape, as shown in this figure:

With an increased population size, the fitness landscape is well "covered"

B. As the GA evolves, you see more individuals settling in on the two global minima, and finally all the individuals settle into just one of them. (A vanilla GA will in most cases converge to a single best solution in the fitness landscape; to capture multiple solutions, we have to use one of the GA methods specially developed for that purpose, loosely called GAs for multimodal optimization.)

C. An interesting thing to see here is that which of the two minima the individuals end up in is completely unpredictable, since our fitness or cost function here is simply the function value. Because both minima are global, the GA is equally likely to converge to either, though the exact behaviour varies with the GA parameter settings.

Code

I used an existing RGA implementation and simply hooked it up with gnuplot and xvidcap to capture the evolution of the population. If you want to take a look at the code, please let me know and I shall be happy to discuss it with you.
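
For the curious, the plumbing is roughly this (a sketch of the general idea rather than the actual script, and it assumes gnuplot is available on your PATH): every generation, write one (x, f(x)) pair per individual and pipe the data to gnuplot; xvidcap then records the plot window.

import subprocess

def plot_generation(xs, fs, generation):
    """Send one generation's population to gnuplot as inline data."""
    gp = subprocess.Popen(["gnuplot", "-persist"], stdin=subprocess.PIPE, text=True)
    gp.stdin.write(f"set title 'Generation {generation}'\n")
    gp.stdin.write("plot '-' using 1:2 with points pt 7 title 'population'\n")
    for x, f in zip(xs, fs):
        gp.stdin.write(f"{x} {f}\n")
    gp.stdin.write("e\n")   # 'e' terminates gnuplot's inline data block
    gp.stdin.close()
    gp.wait()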

Wishful wish

Such visualizations often help in understanding how the population evolves, which is otherwise not so easy. Sadly, we can only visualize in three dimensions (a limitation I hope will not always hold). Once that changes, working with GAs will be real fun: imagine a population of individuals evolving in tens of dimensions, and you watching exactly where they are going :D

Book Review: Evolutionary Computation: A Unified Approach

My background: I am currently working full time on research problems in Genetic Algorithms. My current work is on using GAs for multi-modal optimization. My experience in this field is 6 months; my interest, infinite :)

Evolutionary Computation: A Unified Approach brings together a summarized view of three distinct fields of Evolutionary Computing (EC): Evolution Strategies (ES), pioneered by Rechenberg and Schwefel; Evolutionary Programming (EP), pioneered by Fogel; and Genetic Algorithms (GA), pioneered by John Holland. A fundamental book such as this one helps the EA researcher sit back and identify the fundamental principles underlying these different algorithms. A unified view of the fields is presented throughout, and the book is thus a must-read for the evolutionary computing researcher, whether a novice like me or an experienced one. It takes a very experimental approach to the study of these algorithms and is therefore easy to follow, almost like reading a novel for the EA researcher.

In Chapter 1, the fundamental characteristics of a simple evolutionary system are presented. EV, a simple evolutionary system, is introduced, and sample runs over 1-dimensional and 2-dimensional landscapes are shown. The first ideas of evolutionary systems as problem solvers appear in this chapter. A key idea presented here is that simulated evolutionary systems draw inspiration from biological evolution to design better algorithms for complex computational problems, rather than mimic it. The graphical plots of the results demonstrate the idea of convergence and, though this is not explicitly mentioned, the EC researcher easily recognizes such plots as a useful way to show the progress of an evolutionary algorithm.

In Chapter 2, a step back is taken to look at the historical development of the three fields of ES, EP and GA, putting the three research communities into the perspective of their historical development.

In Chapter 3, the simple EV system introduced in Chapter 1 is extended to the generic EV(m, n) system with a parent population of size m and an offspring population of size n. This generalization allows the canonical EP, ES and GA to be discussed within one common framework.

Chapter 4 presents the first detailed unified treatment of the three algorithms. The following common pattern of evolutionary dynamics is presented (a small skeleton of this loop follows the list):

  1. a fixed-size population of m individuals is evolved over time,
  2. the current population is used as a parent population to produce n offspring, and
  3. the expanded population is reduced from m+n back to m individuals.
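
That pattern is compact enough to write down. Here is a skeleton in Python; the names init, evaluate and vary are placeholders of my own, and truncating the combined population is only one of the survivor-selection choices the book goes on to discuss:

import random

def evolve(init, evaluate, vary, m=20, n=40, generations=100):
    """Generic EV(m, n)-style loop: keep m parents, create n offspring,
    then cut the combined m + n population back down to m (minimisation)."""
    population = [init() for _ in range(m)]
    for _ in range(generations):
        offspring = [vary(random.choice(population)) for _ in range(n)]
        combined = population + offspring
        combined.sort(key=evaluate)      # best (lowest) objective values first
        population = combined[:m]        # survivor selection by truncation
    return population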

It is apparent that EA researchers have several design choices to make: appropriate values of m and n, the selection of the parents that take part in producing offspring, and the selection of the m survivors from the combined m+n population. These questions are addressed in the rest of the chapter through discussions of population sizing, selection mechanisms, reproductive mechanisms and representation issues.

So far, the book has addressed the ideas of simulating simple evolutionary systems and their temporal dynamics. How such systems can be used to solve computational and engineering problems is addressed in Chapter 5. The use of EAs for search, for single- and multi-objective optimization, for machine learning and for automatic programming is discussed here. This chapter will be useful to readers beyond the EA research community, since it provides insight into using EAs for practical problem solving in complex, high-dimensional search spaces where not much information is available about the problem at hand.

Chapter 6 is of utmost importance and interest to the EA researcher, as it describes some of the ways in which EAs can be theoretically analyzed. Depending on which property of an EA you are analyzing, the tools for analysis also vary:

  • EA population dynamics -> dynamical systems tools
  • Stochastic properties of EAs -> Markov processes
  • Aggregate properties of EAs -> Statistical mechanics tools
  • Computational complexity  of EAs -> Algorithmic Analysis tools
  • Optimization properties of EAs -> Mathematical tools

I am yet to grok this chapter in its full capacity and power, so I shall get back to it after a detailed reading.

The second-to-last chapter of the book, Chapter 7, takes a look at some advanced EA topics such as self-adaptive EAs, multi-objective EAs, memetic EAs, EAs applied to dynamic landscapes, and others.

The book concludes in Chapter 8 by urging more work on the quest the book started with: further unification of the different fields of Evolutionary Computation.