Programming and writing about it.

echo $RANDOM


Explainer: Evolutionary Algorithms

Whenever you undertake an activity that seeks to minimise or maximise some quantity, whether a well-defined one such as distance or a vague one such as the right amount of sleep, you are optimising.

As I have mentioned elsewhere, I like to introduce complex (and not so complex) concepts in a popular-science fashion, so that even a high-school kid can digest them. Hence, when I was contacted by Bella from The Conversation, I was excited to write about Evolutionary Algorithms, optimization and what we as a group work on.

The article is now live here. Hope you enjoy the read. Many thanks to the team at The Conversation for the final touches on the article.

Snowy, your support in various ways is always appreciated.



Book Review: Essentials of Metaheuristics

Getting the book:

I picked up a copy of this book from the man himself, Sean Luke, at IEEE CEC 2011. I had been “aware” of this book for a while, so I thought it might be a good idea to pick up a print copy for light reading during my travels post-conference. Here is a brief review of the book:


As the author states, the book is a compilation of undergraduate lecture notes on Metaheuristics. It focuses on the application of Metaheuristics to optimization problems, including Multi-objective optimization, Combinatorial optimization and Policy optimization. Depending on your experience with Metaheuristics, this book will serve a different purpose for you:

  1. If you are already well versed in Metaheuristics, this book makes for nice light reading, with interesting bits and pieces throughout
  2. If you are starting out with Metaheuristics, or want to, this book gives a nice, well-rounded view of the state of the art


The book starts with an overview of gradient-based optimization methods in Chapter 1, gradually moving to stochastic methods such as randomized hill-climbing, tabu search and simulated annealing in Chapter 2.
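For flavour, here is what a bare-bones version of one of these Chapter 2 methods, simulated annealing, looks like in Python (the parameter choices are illustrative, not taken from the book):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000):
    """Minimise f with a bare-bones simulated-annealing loop."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)      # random neighbour
        fc = f(cand)
        # always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                                # geometric cooling
    return best, fbest

random.seed(1)
x, fx = simulated_annealing(lambda v: (v - 2.0) ** 2, x0=8.0)
```

With the temperature cooled geometrically, the loop behaves like a random-restart-free hill-climber towards the end of the run.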

Chapter 3 introduces population methods — Evolution Strategies, Genetic Algorithms, Differential Evolution and Particle Swarm Optimization.

Over these first three chapters, the author introduces some fundamental concepts: the choice of representation of solutions, the issue of exploration vs. exploitation, and local optima traps.

Chapters 4-10 each discuss one specific topic. For example, Chapter 4 is dedicated to the representation of solutions — vectors, direct-encoded graphs, program trees and rulesets. Chapter 5 discusses parallel methods for metaheuristics, and Chapter 7 talks about Multi-objective optimization. Chapters 8 and 10 talk about combinatorial optimization and policy optimization respectively. So, if you are looking for anything specific, you can jump directly to the relevant chapter (assuming, of course, that you have the prerequisite knowledge). As you can see in the ToC, most of the chapters from 4-10 depend on Chapters 3 & 4.

The book finally concludes with some descriptions of test problems and statistical tests that researchers often use to test their algorithms. The very important issue of selecting a proper random number generator is discussed in this chapter.


This book, along with Evolutionary Computation: A Unified Approach (you may be interested in my review), is great for getting a holistic view of Metaheuristic methods, especially if you are experienced with only one of them.


IEEE CEC 2011: Post-conference Thoughts

I am currently sitting by the window of my 10th-floor hotel room, and New Orleans looks beautiful at this time of night. The neon glow of the hotels and shops around, and the lights of those huge wooden/steel bodies on the mighty Mississippi, are quite a spectacle for my bespectacled eyes. The IEEE Congress on Evolutionary Computation 2011 concluded today. Over three days of paper and poster presentations, plenary lectures, a cruise dinner on the steamboat Natchez and the sumptuous banquet last night, it was an awesome conference. Thank you, Dr. Alice Smith, and congratulations on the wonderful conference, which must have cost you a lot of sleepless nights.

Here are some rough notes/thoughts/rants on the conference:

Plenary lectures

Each of the three days of the conference began with a plenary lecture. Natalio Krasnogor delivered the lecture on the first day, talking about his work at the confluence of the Natural sciences and Evolutionary Algorithms. Holger Hoos delivered the lecture on the second day, where he had a lot of interesting things to say about his research, mostly on automating software development, building more degrees of freedom into software, and algorithms that select algorithms. Hod Lipson delivered the last of the plenary lectures, demonstrating his work on Evolutionary robotics and his super tool, Eureqa. A lot to take home from each of these lectures. Enlightening and inspiring.

Interesting ideas/papers presented

  • A lot of work is being done on Genetic Programming, mainly as a tool — in varied domains, from edge detection to blog network modeling. Once the IEEE CEC proceedings are indexed in IEEE Xplore, it would be very interesting to go through these papers. Available here.
  • Multi-view classification
  • Representation plays a key role in EAs, and Daniel Ashlock’s tutorial on this topic was (or was supposed to be) quite enlightening, but I think I was busy doing something else. However, I intend to go through the slides he used and get an idea of the variety of representation schemes for different applications.


I usually tend to set high standards for myself and more often than not fail to achieve them, which of course doesn’t deter me from setting them in the first place. Seeing a lot of “well known” people in this field present trivial work at a premier conference was quite disheartening. One good thing it does is make me feel that maybe I should be a little gentler with myself.

I wanted to change the world, but they lied to us

I don’t know about you, but whenever I do something cool/nice/interesting (according to myself, of course), I almost always imagine myself on the front page of the major world newspapers, or their nearest domain equivalent. I thought writing a paper titled “How does the good old GA perform at Real World Optimization?” would irk a lot of people and elicit reactions out of them. I guess nothing really matters. Sigh.

Well, anyway I am taking back a lot from this conference at the Big Easy. Good bye Poboy’s and Gumbo!

Prof. Deb’s podcast on Science Watch

Hear Prof. Kalyanmoy Deb speak about his research on Multi-objective Optimization, Evolutionary Algorithms, Innovisation, Multi-modal Optimization in this ScienceWatch podcast:

Kalyanmoy Deb’s podcast


A Bi-criterion Approach to Multimodal Optimization: Self-adaptive Approach


In a multimodal optimization task, the main purpose is to find multiple optimal solutions, so that the user can have better knowledge of the different optimal solutions in the search space and, as and when needed, the current solution may be replaced by another optimal solution. Recently, we proposed a novel and successful evolutionary multi-objective approach to multimodal optimization. Our work, however, made use of three different parameters which had to be set properly for the optimal performance of the proposed algorithm. In this paper, we have eliminated one of the parameters and made the other two self-adaptive. This makes the proposed multimodal optimization procedure devoid of user-specified parameters (other than the parameters required for the evolutionary algorithm). We present successful results on a number of different multimodal optimization problems of up to 16 variables to demonstrate the generic applicability of the proposed algorithm.

The full paper is available from Springer’s website here

Paper at GECCO: Check.

After the highly unsuccessful attempt at getting a paper published at GECCO 2008 with Yanhua Sun, GECCO 2010 gives me the opportunity to scribble a “check” against GECCO in my mental “gotta be done” list.

My work with Prof. Deb, “Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach”, will be presented at GECCO 2010 in the coming days. In our work, we propose a new approach to multimodal optimization tasks using an Evolutionary Multi-objective approach, and demonstrate the success of our algorithm on high-dimensional unconstrained and constrained optimization problems.

The paper is not yet available on the ACM web pages. If you do not have access to the ACM, please email me for a PDF of the same. Here is the abstract:

In a multimodal optimization task, the main purpose is to find multiple optimal (global and local) solutions associated with a single objective function. Starting with the preselection method suggested in 1970, most of the existing evolutionary algorithms based methodologies employ variants of niching in an existing single-objective evolutionary algorithm framework so that similar solutions in a population are de-emphasized in order to focus and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different and generic strategy in which a single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem so that all local and global optimal solutions become members of the resulting weak Pareto-optimal set. We solve up to 16-variable test-problems having as many as 48 optima and also demonstrate successful results on constrained multimodal test-problems, suggested for the first time. The concept of using multi-objective optimization for solving single-objective multimodal problems seems novel and interesting, and importantly opens further avenues for research.
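To see intuitively why such a conversion can work, here is a generic 1-D sketch (not necessarily the exact second objective used in the paper): pair f(x) with the magnitude of its gradient. Every local or global optimum is a stationary point, so it attains the minimum possible value of the second objective and thereby becomes weakly Pareto-optimal:

```python
import math

def grad_norm(f, x, h=1e-6):
    """Central finite-difference estimate of |f'(x)| for scalar x."""
    return abs((f(x + h) - f(x - h)) / (2.0 * h))

def bi_objective(f, x):
    """Map a 1-D multimodal problem to two objectives (both minimised):
    (f(x), |f'(x)|). Any optimum of f has |f'| = 0, so every local and
    global optimum is weakly Pareto-optimal in the bi-objective problem.
    A generic illustration, not the paper's exact formulation."""
    return f(x), grad_norm(f, x)

f = lambda x: math.sin(5.0 * x)                  # many optima on the real line
f1, f2 = bi_objective(f, 3.0 * math.pi / 10.0)   # a minimiser of sin(5x)
```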

Here is a demo showing our approach in action:

To learn more about Evolutionary Multimodal Optimization:

Niching in Genetic Algorithms

Updated on September 23, 2011

Niching is a term often used in the Evolutionary Algorithms literature, and its significance and implications may become clear only after the researcher has worked her way through some of that literature. My aim with this post is to informally define the term, hint at its meaning and significance and, more importantly, consolidate the foundational literature in this area.

Since my niche is Genetic Algorithms (GAs), the discussion here will be limited to them. However, as the literature shows, Niching methods originally proposed within the GA community have found applications in other Evolutionary Algorithms (EAs) as well. Niching methods extend canonical Genetic Algorithms to domains that require the finding and maintenance of multiple solutions. They are traditionally used in domains where multiple solutions are desired, such as in a multi-modal optimization task, or where diversity must be maintained so as to lead to one single, final solution in a difficult optimization task. A niching method must be able to form and maintain multiple diverse solutions and preserve them for the entire duration of the GA run. Under the effect of niching, the population of solutions is dynamically stable under the selection pressure.

From [3]: “Niching involves the formation of distinct species exploiting different niches (resources) in the environment. It can promote cooperative populations that work together to solve a problem. Whereas the simple GA is purely competitive, with the best individuals quickly taking over the population, niched GAs converge to a population of diverse species (niches) that together cover a set of resources or rewards.” The resources or rewards, of course, vary from one application domain to another. In a multi-modal optimization task, for example, the resources would be the multiple optima.

Classification of Niching methods

In [1] and [2], Mahfoud suggests a classification based on the way multiple niches are found in a GA: (Note that this classification is independent of the number of physical computer processors being used)

  1. Spatial or Parallel Niching methods: Niching methods in this category find and maintain multiple niches simultaneously in a single population. Examples of parallel niching methods are Sharing, the Crowding function approach and the Clearing method.
  2. Temporal or Sequential Niching methods: These niching methods find multiple niches iteratively or temporally. For example, the Sequential Niching method finds multiple niches one after another.
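As a concrete example of a parallel niching method, here is a minimal Python sketch of Goldberg’s fitness sharing for a 1-D population (the niche radius `sigma` is an illustrative choice):

```python
def sharing_function(d, sigma=0.1, alpha=1.0):
    """Goldberg's triangular sharing function: full share at distance 0,
    decaying to zero at the niche radius sigma."""
    return 1.0 - (d / sigma) ** alpha if d < sigma else 0.0

def shared_fitness(pop, fitness, sigma=0.1):
    """Divide each individual's raw fitness by its niche count,
    penalising individuals in crowded regions of the search space."""
    shared = []
    for xi, fi in zip(pop, fitness):
        niche_count = sum(sharing_function(abs(xi - xj), sigma) for xj in pop)
        shared.append(fi / niche_count)   # niche_count >= 1 (self-share)
    return shared

pop = [0.0, 0.01, 0.02, 1.0]              # three clustered points, one isolated
raw = [1.0, 1.0, 1.0, 1.0]
sf = shared_fitness(pop, raw)
```

The isolated individual keeps its full fitness while the clustered three have theirs scaled down, so selection pressure no longer lets one niche take over the population.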

Frequently Answered Questions (Answered in [1])

  • Why do niching?
  • Why maintain niches?
  • Why not locate multiple solutions individually, by iterating the GA?

Essence of Niching

To better understand the essence of and need for niching, here are two demos of a function optimization task: one where no effort is made to maintain multiple solutions, and a second where an explicit multiple-solution preserving mechanism is in place (for details refer here and here).

(Please note that the first demo was created with an algorithm which does not use a niching mechanism, but it shows why niching is needed and what happens in the absence of a multiple-solution preserving mechanism, such as niching.)

Without multiple-solution preserving mechanism

With multiple-solution preserving mechanism



Use of Niching in Maintaining diverse feasible solutions in constrained optimization

The idea of niching is also applicable to the optimization of constrained problems. In such problems, maintaining diverse feasible solutions is desirable to prevent the accumulation of solutions in only one part of the feasible space, especially in problems containing disconnected patches of feasible regions. Prof. Deb, in his constraint handling paper [5], suggests one such use of niching.


References [1] and [3] below are the most exhaustive treatments of Niching I have come across in the literature:

  1. Niching methods for Genetic Algorithms
  2. A Comparison of Parallel and Sequential Niching Methods
  3. The Nature of Niching: Genetic Algorithms and the Evolution of Optimal, Cooperative Populations
  4. Niching methods, specifically in the context of Multi-modal function optimization, are discussed in the book “Multi-objective Optimization using Evolutionary Algorithms”
  5. An Efficient Constraint Handling Method for Genetic Algorithms

Satisfying the function evaluation constraints in Evolutionary Algorithms

Function evaluation will mean both Objective function evaluation and Constraint function evaluation

In a recent work for the IEEE CEC 2010 competition on Constrained Real-parameter optimization problems, we (along with Prof. Deb and Ritu) coupled an idea from the Gradient Projection (GP) method for handling equality constraints with our Genetic Algorithm (GA) based algorithm for finding the global optima of the problems. The approach worked wonderfully well for us. There was a problem, however: the competition limited the maximum number of function evaluations, and under this constraint our algorithm did not perform well on some of the problems! (As if the actual constraints were not enough!! :D) Originally, we applied the GP method to every individual in every generation. So what we did was, instead of applying it from the start, we began applying it only after some generations had passed. With this, we achieved decent results within the specified maximum number of function evaluations. There could be a form of evolution control [1] in play here, which we didn’t explore during the course of our work because of time constraints.
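For illustration, here is a minimal single-constraint sketch of the repair idea behind such gradient-based methods — a Newton-style step that moves a point along the constraint gradient to reduce the violation (a generic sketch, not the exact method we used):

```python
def repair_equality(x, h, grad_h, steps=5):
    """Reduce |h(x)| for one equality constraint h(x) = 0 by repeatedly
    stepping along grad_h (a 1-constraint sketch of gradient-based repair)."""
    for _ in range(steps):
        hx = h(x)
        g = grad_h(x)
        g2 = sum(gi * gi for gi in g)
        if g2 == 0.0:
            break
        # Newton-style step: x <- x - h(x) * grad_h(x) / ||grad_h(x)||^2
        x = [xi - hx * gi / g2 for xi, gi in zip(x, g)]
    return x

# example constraint: the unit circle h(x) = x0^2 + x1^2 - 1
h = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_h = lambda x: [2.0 * x[0], 2.0 * x[1]]
x = repair_equality([2.0, 0.0], h, grad_h)   # repair a point off the circle
```

Each repaired individual costs extra constraint (and gradient) evaluations, which is exactly why applying such a repair to every individual in every generation eats into a function-evaluation budget.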

This is a common problem when you are working with EAs such as GAs. Since a population of solutions is processed each generation, the function evaluation count goes up every generation. Besides the huge number of function evaluations, in some complex problems the time taken for each evaluation is so high that parallel methods have to be adopted in order to solve them. (In a very recent playing-around, the resident CUDA guru here parallelized just the objective function evaluation in the Innovization guru’s code, and they achieved a huge speedup.)

Using approximation methods: Since parallelization is not always the best option, one way to avoid expensive function evaluations (or to comply with function evaluation limits, where parallelization won’t help, of course) is to use approximation models (or meta-models). That is, build up a model using a set of true data (samples), and then use this model to obtain an approximation for an unknown input sample, thus saving the actual function evaluation(s). The important consideration, however, is how true the approximation model is, also called the fidelity of the model. If the model is not a very good approximation, the EA might only be able to return a sub-optimal result, which is not what we want.
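As a toy illustration of the idea (not any particular meta-model from the papers below), here is an inverse-distance-weighted surrogate built from a handful of true samples:

```python
def idw_surrogate(samples, x, power=2.0, eps=1e-12):
    """Inverse-distance-weighted approximation of f(x) from true samples.
    A crude meta-model: good near the samples, unreliable far from them."""
    num = den = 0.0
    for xs, fs in samples:
        d = abs(x - xs)
        if d < eps:
            return fs                 # exact hit: return the true value
        w = 1.0 / d ** power
        num += w * fs
        den += w
    return num / den

f = lambda x: x * x                   # stand-in for an expensive function
samples = [(x / 10.0, f(x / 10.0)) for x in range(-10, 11)]  # 21 true evals
approx = idw_surrogate(samples, 0.55) # estimate f(0.55) without calling f
```

Once the 21 true evaluations are paid for, every further query of the surrogate is free — but its fidelity degrades away from the sampled region, which is precisely the risk discussed above.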

Following are some papers I recently read and will be useful to anyone looking to investigate this field:

  1. Yaochu Jin ,  Markus Olhofer ,  Bernhard Sendhoff, “A Framework for Evolutionary Optimization With Approximate Fitness Functions
  2. El-Beltagy, M.A. and Keane, A.J. (1999), “Metamodeling Techniques For Evolutionary Optimization of Computationally Expensive Problems : Promises and Limitations”
  3. Y. Jin. “A comprehensive survey of fitness approximation in evolutionary computation.”
  4. Won, Ray, Tai, “A Framework for Optimization Using Approximate Functions”


Some notes on [1] above:

  • Individual- and generation-based evolution control: evolution proceeds based on fitness evaluations using not only the approximate fitness model but also the original fitness function
  • Interactive evolutionary computation and the usage of approximate models therein
  • Use in multi-modal test function optimization: smoothing the rough fitness function can prevent stagnation in a local optimum
  • Two approaches to evolution control: (1) in every generation, only certain individuals are evaluated using the original fitness function; (2) all individuals are evaluated using the original fitness function in N generations out of every M generations. The determination of the number of individuals in (1) and of generations in (2) has been answered via simulations
  • The frequency at which the original function is called and the approximate model is updated should be determined by the local fidelity of the approximate model. The lower the fidelity, the more frequently the original function should be called. *Local fidelity* is stressed
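A minimal sketch of the generation-based scheme, (2) above, might look like this (the selection and mutation operators here are illustrative placeholders, not from the paper):

```python
import random

def evolve(pop, true_f, approx_f, generations=30, control_every=5):
    """Generation-based evolution control sketch (minimisation): rank
    with the cheap approximate model most generations, but rank with the
    original fitness function every `control_every`-th generation.
    Returns the final population and the count of true evaluations."""
    true_evals = 0
    for gen in range(generations):
        use_true = (gen % control_every == 0)
        f = true_f if use_true else approx_f
        ranked = sorted(pop, key=f)               # one evaluation per member
        if use_true:
            true_evals += len(pop)
        # placeholder operators: truncation selection + Gaussian mutation
        parents = ranked[: len(pop) // 2]
        pop = [p + random.gauss(0.0, 0.1) for p in parents for _ in range(2)]
    return pop, true_evals

random.seed(0)
true_f = lambda x: x * x                  # the 'expensive' function
approx_f = lambda x: x * x + 0.05 * x     # a hypothetical cheap, biased model
pop = [random.uniform(-5.0, 5.0) for _ in range(20)]
pop, true_evals = evolve(pop, true_f, approx_f)
```

With 30 generations of 20 individuals, plain evolution would cost 600 true evaluations; the controlled run above spends only 120, with the periodic true-fitness generations correcting the surrogate’s bias.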

I hope to experiment with approximation models for the work I mentioned at the beginning. The challenge would be to find the perfect evolution control strategy.

Evolutionary Multi-modal Optimization (EMMO)

I have created a Wikipedia article on Evolutionary Multi-modal optimization here. Please help to make it complete.

EMO is already taken: Evolutionary Multi-objective Optimization, so EMMO it is.

An optimization task that involves finding all or most of the multiple solutions (as opposed to a single best solution) is termed Multi-modal optimization. Knowledge of multiple solutions to an optimization task is especially helpful in Engineering, when, due to physical (and/or cost) constraints, the best result may not always be realizable. In such a scenario, if multiple solutions (local and global) are known, the implementation can be quickly switched to another solution while still obtaining optimal system performance.

Classical techniques of optimization would need multiple restart points and multiple runs, in the hope that a different solution may be discovered every run, with no guarantee, however. Evolutionary algorithms (EAs), due to their population-based approach, provide a natural advantage over classical optimization techniques. They maintain a population of possible solutions, which are processed every generation, and if the multiple solutions can be preserved over all these generations, then at the termination of the algorithm we will have multiple good solutions, rather than only the best one. Note that this is against the natural tendency of EAs, which will always converge to the best solution, or a sub-optimal solution (in a rugged, “badly behaving” function). The finding and maintenance of multiple solutions is where the challenge of using EAs for multi-modal optimization lies.

The field of EAs today encompasses Genetic Algorithms (GAs), Differential Evolution (DE) and Particle Swarm Optimization (PSO), among others. Attempts have been made to solve multi-modal optimization in all these realms. Since my main area of research involves GAs, I shall mainly talk about them.

Multi-modal optimization using GAs

Niching is a generic term for techniques that preserve multiple niches — favorable parts of the solution space, possibly around multiple solutions — so as to prevent convergence to a single solution. Petrowski’s clearing method, Goldberg’s sharing function approach, restricted mating and the maintenance of multiple subpopulations are some of the popular approaches, proposed many years ago. The first two methods are very well studied and respected in the GA community. For a detailed reference to all these and many more papers, please refer to the end of this post.
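To make one of these concrete, here is a minimal 1-D sketch of the clearing idea with a niche capacity of one (the radius and the sample values are illustrative):

```python
def clearing(pop, fitness, radius=0.2):
    """Petrowski-style clearing with niche capacity 1 (maximisation):
    the best individual of each niche keeps its fitness; every other
    individual within `radius` of a niche winner is 'cleared' to zero."""
    order = sorted(range(len(pop)), key=lambda i: -fitness[i])
    cleared = list(fitness)
    winners = []                          # niche centres chosen so far
    for i in order:                       # process by decreasing fitness
        if any(abs(pop[i] - pop[w]) < radius for w in winners):
            cleared[i] = 0.0              # dominated within an existing niche
        else:
            winners.append(i)             # best of a newly discovered niche
    return cleared, [pop[w] for w in winners]

pop = [0.00, 0.05, 1.00, 1.02]            # two clusters of two individuals
fit = [3.0, 2.0, 5.0, 4.0]
cleared, centres = clearing(pop, fit)
```

Selection then acts on the cleared fitness, so each niche contributes only its best member to the reproductive pool instead of letting the globally fittest cluster take over.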

Very recently, we have proposed an approach to obtain multiple solutions by using principles from Multi-objective optimization. You can see some demos showing our approach in action:

Our approach doesn’t implement niching to maintain multiple solutions. Instead, the multi-objective technique helps maintain multiple solutions, thanks to a property called weak Pareto-optimality.

Multi-modal Optimization using DE

For a comprehensive treatment of Multi-modal optimization methods in DE, refer to this Ph.D. thesis: Ronkkonen, J. (2009). Continuous Multimodal Global Optimization with Differential Evolution Based Methods.

Resources to look up:

This post will see further updates.

GA Demo: Multiple global optima of the Vincent function

The Vincent function is defined as follows: f(\vec{X}) = -\sum_{i=1}^{n} \sin(10 \log x_i), with each component of \vec{X} varying in [0.25, 10]. This is a plot of the function in 1 dimension:

Vincent Function- 1 Dimension

As you can see, there are 6 global minima. This function is such that in D dimensions it has 6^D global minima. For example, in 2D it has 36 minima:

Vincent Function - 2D
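The 6^D claim is easy to sanity-check in code: the global minima lie where sin(10 log x) = 1, i.e. at x = exp((π/2 + 2kπ)/10) for integer k, and exactly six of these fall inside [0.25, 10]:

```python
import math

def vincent(x):
    """Vincent function (minimisation); each coordinate in [0.25, 10]."""
    return -sum(math.sin(10.0 * math.log(xi)) for xi in x)

# global minima occur where sin(10*log(x)) = 1, i.e. at
# x = exp((pi/2 + 2*k*pi) / 10) for integer k
candidates = [math.exp((math.pi / 2.0 + 2.0 * k * math.pi) / 10.0)
              for k in range(-2, 4)]
minima = [m for m in candidates if 0.25 <= m <= 10.0]
```

In two dimensions, every pair of these six coordinates is a global minimum, giving the 6² = 36 clusters seen in the demo.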

The job of a multi-modal optimizer is to find all the global optima in a multi-modal function.  See this demo:

See the interesting bit? As the population of the GA evolves, it settles into the 36 minima, forming beautiful clusters around them.

The algorithm used here is based on our work: Deb, K., Saha, A., “Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach” (accepted as a full paper at GECCO-2010). (Let me know if you want to take a look at the paper.)

You may also be interested in my past two demos: