Programming and writing about it.



Prof. Deb’s podcast on Science Watch

Hear Prof. Kalyanmoy Deb speak about his research on Multi-objective Optimization, Evolutionary Algorithms, Innovization, and Multi-modal Optimization in this ScienceWatch podcast:

Kalyanmoy Deb’s podcast


SEAL 2010 Papers I liked reading

SEAL 2010 is happening at IIT Kanpur, India from the 1st to the 4th of December, 2010. As someone who took part in the initial arrangements and belongs to the lab hosting it, I am sure it is going to be legendary, what with people like Narendra Karmarkar delivering keynote lectures. Needless to say, it would have been great to be present.

The papers are up on the Springer website and here are some papers I liked reading:

  • Improving Differential Evolution by Altering Steps in EC: This is a very approachable paper in which the authors describe their experiments modifying a standard DE algorithm by incorporating relevant ideas from another EA, G3-PCX. The bigger picture is to move towards a unified approach to Evolutionary Computing
  • Bayesian Reliability Analysis under Incomplete Information Using Evolutionary Algorithms
  • Metamodels for Fast Multi-objective Optimization: Trading off Global Exploration and Local Exploitation
  • Generating Sequential Space-Filling Designs Using Genetic Algorithms and Monte Carlo Methods
  • And a paper which I would surely have liked, had I understood it fully: Beyond Convexity: New Perspectives in Computational Optimization

The papers are available online at


MIT ECJ Paper is in

The work under review on Evolutionary Multimodal Optimization in the MIT Press journal Evolutionary Computation has been accepted! Prof. Deb just wrote me an email saying “Our Paper is in”. Wonderful! I am guessing it will take quite a while for the paper to be visible to the external world. For some background reading, refer to my page at

In other news, this is a time of learning for me: human learning as well as machine learning. As a start, I am attending the Machine Learning Summer School at the ANU campus in Canberra.

Paper at GECCO: Check.

After the highly unsuccessful attempt at getting a paper published at GECCO 2008 with Yanhua Sun, GECCO 2010 gives me the opportunity to scribble a “check” against GECCO in my mental “gotta be done” list.

My work with Prof. Deb, “Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach”, will be presented at GECCO 2010 during the coming days. In our work, we propose a new approach to multimodal optimization tasks using an Evolutionary Multi-objective approach. We demonstrate the success of our algorithm on high-dimensional unconstrained and constrained optimization problems.

The paper is not yet available on the ACM web pages. If you do not have access to the ACM, please email me for a PDF of the same. Here is the abstract:

In a multimodal optimization task, the main purpose is to find multiple optimal (global and local) solutions associated with a single objective function. Starting with the preselection method suggested in 1970, most of the existing evolutionary algorithms based methodologies employ variants of niching in an existing single-objective evolutionary algorithm framework so that similar solutions in a population are de-emphasized in order to focus and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different and generic strategy in which a single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem so that all local and global optimal solutions become members of the resulting weak Pareto-optimal set. We solve up to 16-variable test-problems having as many as 48 optima and also demonstrate successful results on constrained multimodal test-problems, suggested for the first time. The concept of using multi-objective optimization for solving single-objective multimodal problems seems novel and interesting, and importantly opens further avenues for research.
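To make the bi-objective idea concrete, here is a small sketch in Python on a toy 1-D function. This is not the algorithm from our paper: the second objective here is simply the gradient magnitude, one possible choice under which every stationary point of the original function becomes weakly Pareto-optimal (note that this admits maxima too, which a full method must filter out). All function names and constants below are illustrative.

```python
import math

def f(x):
    # illustrative multimodal function on [0, 3]
    return math.sin(5 * x) + 0.5 * x

def df(x):
    # analytical derivative of f
    return 5 * math.cos(5 * x) + 0.5

def second_obj(x, tol=1e-8):
    # second objective: gradient magnitude, clamped to exactly 0 near
    # stationary points so they cannot strictly dominate one another
    g = abs(df(x))
    return 0.0 if g < tol else g

def weak_pareto(points):
    # a point is weakly Pareto-optimal unless some other point is
    # STRICTLY better in both objectives at once
    return [p for p in points
            if not any(q[1] < p[1] and q[2] < p[2] for q in points)]

# sample the domain, and add the exact stationary points of f
a = math.acos(-0.1)  # 5*cos(5x) + 0.5 = 0  =>  5x = +/-a + 2k*pi
stationary = [x for k in range(4) for s in (a, -a)
              for x in [(s + 2 * math.pi * k) / 5] if 0 <= x <= 3]
samples = [i / 100 for i in range(301)] + stationary
front = weak_pareto([(x, f(x), second_obj(x)) for x in samples])

# every stationary point (all local optima included) survives:
optima_like = sorted(x for (x, f1, f2) in front if f2 == 0.0)
```

Under standard (strong) Pareto-optimality, only the global optimum would survive, since it has equal second objective and a strictly better first one; this is exactly why the weak Pareto-optimal set is the right object here.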

Here is a demo showing our approach in action:

To learn more about Evolutionary Multimodal Optimization:

Niching in Genetic Algorithms

Updated on September 23, 2011

Niching is a term often used in the Evolutionary Algorithms literature, and its significance and implications may become clear only after the researcher has worked her way through some of the literature. My aim with this post is to informally define the term, hint at its meaning and significance, and more importantly consolidate the foundational literature in this area.

Since my niche is Genetic Algorithms (GAs), the discussion here will be limited to them. However, as the literature will show, niching methods originally proposed in the GA community have found applications in other Evolutionary Algorithms (EAs) as well. Niching methods extend canonical Genetic Algorithms to domains that require finding and maintaining multiple solutions. They are traditionally used in domains where finding multiple solutions is desired, such as in a multi-modal optimization task, or where diversity must be maintained on the way to one single, final solution in a difficult optimization task. A niching method must be able to form and maintain multiple diverse solutions and preserve them for the entire duration of the GA run. Under the effect of niching, the population of solutions remains dynamically stable under selection pressure.

From [3], “Niching involves the formation of distinct species exploiting different niches (resources) in the environment. It can promote cooperative populations that work together to solve a problem. Whereas the simple GA is purely competitive, with the best individuals quickly taking over the population, niched GAs converge to a population of diverse species (niches) that together cover a set of resources or rewards”. The resources or rewards, of course, vary from one application domain to another. In a multi-modal optimization task, for example, the resources would be the multiple optima.

Classification of Niching methods

In [1] and [2], Mahfoud suggests a classification based on the way multiple niches are found in a GA: (Note that this classification is independent of the number of physical computer processors being used)

  1. Spatial or Parallel Niching methods: Niching methods belonging to this category find and maintain multiple niches simultaneously in a single population. Examples of parallel niching methods are the sharing function approach, the crowding approach, and the clearing method.
  2. Temporal or Sequential Niching methods: These niching methods find multiple niches iteratively over time; the Sequential Niching method is an example.
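As a concrete illustration of a parallel niching method, here is a minimal Python sketch of Goldberg’s sharing function approach. The parameter values and the tiny population are illustrative; a real GA would apply this every generation before selection.

```python
def sharing(d, sigma_share=0.1, alpha=1.0):
    # sharing function: full penalty at distance 0, none beyond sigma
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(population, raw_fitness, sigma_share=0.1):
    # divide each individual's fitness (to be maximized) by its niche
    # count: crowded niches are de-emphasized, sparse ones favored
    result = []
    for xi, fi in zip(population, raw_fitness):
        niche_count = sum(sharing(abs(xi - xj), sigma_share)
                          for xj in population)
        result.append(fi / niche_count)
    return result

# two individuals crowd one niche; a third sits alone and keeps its
# full fitness, so selection will not let its niche die out
pop, raw = [0.00, 0.01, 0.90], [1.0, 1.0, 1.0]
shared = shared_fitness(pop, raw)
```

The effect is exactly the “dynamic stability” described above: once a niche becomes crowded, its members’ shared fitness drops, and selection pressure shifts toward the less populated niches.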

Frequently Answered Questions (Answered in [1])

  • Why do niching?
  • Why maintain niches?
  • Why not locate multiple solutions individually, by iterating the GA?

Essence of Niching

To better understand the essence of and need for niching, here is a demo showing a function optimization task where no effort is made to maintain multiple solutions, and a second one where an explicit multiple-solution preserving mechanism is in place (for details, refer here and here).

(Please note that the demo has been created with an algorithm which does not use a niching mechanism, but it shows why niching is needed and what happens in the absence of a multiple-solution preserving mechanism, such as niching.)

Without multiple-solution preserving mechanism

With multiple-solution preserving mechanism



Use of Niching in Maintaining diverse feasible solutions in constrained optimization

The idea of niching is applicable to constrained optimization problems as well. In such problems, maintaining diverse feasible solutions is desirable, so as to prevent accumulation of solutions in only one part of the feasible space, especially in problems containing disconnected patches of feasible regions. Prof. Deb, in his constraint handling paper [5], suggests one such use of niching.


References [1] and [3] are the most exhaustive treatments of niching I have come across in the literature.

  1. Niching methods for Genetic Algorithms
  2. A Comparison of Parallel and Sequential Niching Methods
  3. The Nature of Niching: Genetic Algorithms and the Evolution of Optimal, Cooperative Populations
  4. Niching methods, specifically in the context of multi-modal function optimization, are discussed in the book “Multi-objective Optimization using Evolutionary Algorithms”
  5. An Efficient Constraint Handling Method for Genetic Algorithms

Satisfying the function evaluation constraints in Evolutionary Algorithms

Throughout this post, “function evaluation” will mean both objective function evaluation and constraint function evaluation.

In a recent work for the IEEE CEC 2010 competition on Constrained Real parameter optimization problems, we (along with Prof. Deb and Ritu) coupled an idea from the Gradient Projection (GP) method for handling equality constraints with our Genetic Algorithms (GA) based algorithm for finding the global optima of the problems. The approach worked wonderfully well for us. There was a problem, however: there were limits on the maximum number of function evaluations, and with this constraint our algorithm did not perform well on some of the problems. (As if the actual constraints were not enough!! :D) So instead of applying the GP method every generation, we started applying it only once every few generations; whenever we applied it, we applied it to every individual. With this we achieved decent results within the specified maximum number of function evaluations. A form of evolution control [1] could be in play here, which we didn’t explore during the course of our work because of time constraints.
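For readers unfamiliar with the GP idea, here is a generic Python sketch of projecting a solution back onto an equality constraint h(x) = 0 via repeated Gauss-Newton steps. This is only the projection ingredient, not our competition algorithm; the constraint and all names below are made up for illustration.

```python
def project(x, h, grad_h, steps=20):
    # pull x back onto the surface h(x) = 0 with repeated
    # Gauss-Newton steps: x <- x - grad_h(x) * h(x) / ||grad_h(x)||^2
    for _ in range(steps):
        hx, g = h(x), grad_h(x)
        norm2 = sum(gi * gi for gi in g)
        x = [xi - gi * hx / norm2 for xi, gi in zip(x, g)]
    return x

# made-up equality constraint: the unit circle
h = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_h = lambda x: [2 * x[0], 2 * x[1]]

# an infeasible individual, as GA variation operators might produce
x = project([2.0, 1.0], h, grad_h)  # lands on the circle
```

Each projection costs constraint (and gradient) evaluations, which is exactly why applying it every generation blows the evaluation budget, and applying it only every few generations was the compromise.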

This is a common problem when you are working with EAs such as GAs. Since a population of solutions is processed at each generation, the function evaluation count goes up every generation. Besides the huge number of function evaluations, sometimes in complex problems the time taken for each evaluation is so high that parallel methods have to be adopted in order to solve them. (In a very recent playing-around, the resident CUDA guru here parallelized just the objective function evaluation in the Innovization guru’s code, and they achieved a huge speedup.)

Using approximation methods: Since parallelization is not always the best option, one way to avoid expensive function evaluations (or to comply with function evaluation limits, where parallelization won’t help, of course) is to use approximation models (or meta-models). That is, using a set of true data (samples), build up a model, and then use this model to obtain an approximation for an unknown input sample, thus saving the actual function evaluation(s). The important consideration, however, is how faithful the approximation model is, also called the fidelity of the model. If the model is not a very good approximation, then the EA might only be able to return a sub-optimal result, which is not what we want.
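Here is a deliberately tiny Python sketch of the meta-model idea: spend a few true evaluations to build a model, optimize the model for free, and use one more true evaluation to verify. The names are illustrative, and the toy function below is itself quadratic, so the model has perfect fidelity; real problems are precisely where that assumption breaks.

```python
def expensive_f(x):
    # stand-in for a costly simulation; count every true evaluation
    expensive_f.calls += 1
    return (x - 1.3) ** 2 + 0.5
expensive_f.calls = 0

def surrogate_min(xs, ys):
    # vertex of the quadratic interpolating three samples: this is
    # the meta-model's prediction of the minimizer, at zero true cost
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    num = (y0 * (x1**2 - x2**2) + y1 * (x2**2 - x0**2)
           + y2 * (x0**2 - x1**2))
    den = 2 * (y0 * (x1 - x2) + y1 * (x2 - x0) + y2 * (x0 - x1))
    return num / den

xs = [0.0, 1.0, 2.0]
ys = [expensive_f(x) for x in xs]   # 3 true evaluations
x_star = surrogate_min(xs, ys)      # model-based, 0 true evaluations
y_star = expensive_f(x_star)        # 1 true evaluation to verify
```

Four true evaluations locate the optimum here only because the model matches the function class exactly; with low fidelity the predicted minimizer can be badly off, which is why an EA must keep checking candidates against the true function.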

Following are some papers I recently read, which will be useful to anyone looking to investigate this field:

  1. Yaochu Jin, Markus Olhofer, Bernhard Sendhoff, “A Framework for Evolutionary Optimization With Approximate Fitness Functions”
  2. El-Beltagy, M.A. and Keane, A.J. (1999), “Metamodeling Techniques For Evolutionary Optimization of Computationally Expensive Problems: Promises and Limitations”
  3. Y. Jin. “A comprehensive survey of fitness approximation in evolutionary computation.”
  4. Won, Ray, Tai, “A Framework for Optimization Using Approximate Functions”


Some notes on [1] above:

  • Individual- and generation-based evolution control: evolution proceeds based on fitness evaluations using not only the approximate fitness model but also the original fitness function
  • interactive evolutionary computation and usage of approximate models therein
  • use in multi-modal test function optimization: smoothen the rough fitness function, and thus prevent stagnation in a local optimum
  • Two approaches to evolution control: (1) in every generation, only certain individuals are evaluated using the original fitness function; (2) all individuals are evaluated using the original fitness function, but only in some N generations out of every M generations. The determination of the number of individuals in (1) and of the generation frequency in (2) has been answered via simulations
  • The frequency at which the original function is called and the approximate model is updated should be determined by the local fidelity of the approximate model: the lower the fidelity, the more frequently the original function should be called. The notion of a *local fidelity model* is stressed
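A minimal Python sketch of generation-based evolution control, in the spirit of approach (2) above. The toy objective, the deliberately biased surrogate, and the simple truncation-selection loop are all illustrative, not any published algorithm.

```python
import random

def true_f(x):
    # the original (expensive) fitness function, minimized; counted
    true_f.calls += 1
    return (x - 2.0) ** 2
true_f.calls = 0

def surrogate(x):
    # a cheap, slightly biased approximate model of true_f
    return (x - 2.1) ** 2

def evolve(generations=30, control_every=5):
    random.seed(0)
    pop = [random.uniform(-5, 5) for _ in range(20)]
    for gen in range(generations):
        # generation-based control: use the original function only
        # every `control_every` generations, the surrogate otherwise
        fit = true_f if gen % control_every == 0 else surrogate
        parents = sorted(pop, key=fit)[:10]
        pop = parents + [p + random.gauss(0, 0.3) for p in parents]
    return min(pop, key=true_f)

best = evolve()  # true_f.calls is 140 instead of 620 without control
```

The periodic true evaluations keep the surrogate’s bias from dragging the population away from the real optimum, which is the whole point of evolution control.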

I hope to experiment with approximation models for the work I mentioned at the beginning. The challenge would be to find the perfect evolution control strategy.

Evolutionary Multi-modal Optimization (EMMO)

I have created a Wikipedia article on Evolutionary Multi-modal optimization here. Please help to make it complete.

EMO is already taken: Evolutionary Multi-objective Optimization, so EMMO it is.

An optimization task that involves finding all or most of the multiple solutions (as opposed to a single best one) is termed multi-modal optimization. Knowledge of multiple solutions to an optimization task is especially helpful in engineering, when, due to physical (and/or cost) constraints, the best result may not always be realizable. In such a scenario, if multiple solutions (local and global) are known, the implementation can be quickly switched to another solution while still obtaining optimal system performance.

Classical techniques of optimization would need multiple restart points and multiple runs, in the hope that a different solution may be discovered every run, with no guarantee however. Evolutionary algorithms (EAs), due to their population-based approach, provide a natural advantage over classical optimization techniques. They maintain a population of possible solutions, which are processed every generation, and if the multiple solutions can be preserved over all these generations, then at termination of the algorithm we will have multiple good solutions, rather than only the best one. Note that this works against the natural tendency of EAs, which will always converge to the best solution, or a sub-optimal solution (in a rugged, “badly behaving” function). Finding and maintaining multiple solutions is where the challenge of using EAs for multi-modal optimization lies.
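For contrast, here is what the classical multi-restart approach looks like in a Python sketch: run a local optimizer (plain gradient descent here) from many starting points and merge the results, with no guarantee that the chosen starts cover every basin. The objective and all constants are illustrative.

```python
import math

def f(x):
    # multimodal objective: minima wherever sin(3x) = -1
    return math.sin(3 * x)

def gradient_descent(x, lr=0.01, steps=2000):
    # classical local optimizer: finds only the minimum of the basin
    # that the starting point happens to fall in
    for _ in range(steps):
        x -= lr * 3 * math.cos(3 * x)  # x <- x - lr * f'(x)
    return x

# restart from many points and hope each run lands somewhere new
starts = [i * 0.5 for i in range(13)]  # spread over [0, 6]
found = sorted(gradient_descent(x0) for x0 in starts)

distinct = []  # merge runs that converged to the same minimum
for x in found:
    if not distinct or abs(x - distinct[-1]) > 0.1:
        distinct.append(x)
```

Thirteen independent runs collapse to a handful of distinct minima, with the per-run cost repeated for every restart; an EA with a multiple-solution preserving mechanism amortizes this over one population.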

The field of EAs today encompasses Genetic Algorithms (GAs), Differential Evolution (DE), and Particle Swarm Optimization (PSO), among others. Attempts have been made to solve multi-modal optimization in all these realms. Since my main area of research involves GAs, I shall mainly talk about them.

Multi-modal optimization using GAs

Niching is the generic term for techniques that preserve multiple niches, favorable parts of the solution space possibly around multiple solutions, so as to prevent convergence to a single solution. Petrowski’s clearing method, Goldberg’s sharing function approach, restricted mating, and maintaining multiple subpopulations are some of the popular approaches proposed many years ago. The first two methods are very well studied and respected in the GA community. For detailed references to all these and many more papers, please refer to the end of this post.

Very recently, we have proposed an approach to obtain multiple solutions by using principles from Multi-objective optimization. You can see some demos showing our approach in action:

Our approach doesn’t implement niching to maintain multiple solutions. Instead, the multi-objective technique helps maintain multiple solutions due to a property called weak Pareto-optimality.

Multi-modal Optimization using DE

For a comprehensive treatment of multi-modal optimization methods in DE, refer to this Ph.D. thesis: Ronkkonen, J. (2009). Continuous Multimodal Global Optimization with Differential Evolution Based Methods.

Resources to look up:

This post will see further updates.

GA Demo: Multiple global optima of the Vincent function

The Vincent function is defined as follows: f(\vec{X}) = -\sum_{i=1}^{n} \sin(10 \cdot \log(x_i)), with each x_i varying in [0.25, 10]. This is a plot of the function in 1 dimension:

Vincent Function- 1 Dimension

As you can see, there are 6 global minima. This function is such that in D dimensions it will have 6^D global minima. For example, in 2D it has 36 minima:

Vincent Function - 2D
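A quick Python check of these counts (assuming the natural logarithm in the definition, which is what matches the 6 minima visible in the 1-D plot):

```python
import math

def vincent(xs):
    # f(X) = -sum(sin(10 * ln(x_i))), each x_i in [0.25, 10]
    return -sum(math.sin(10 * math.log(x)) for x in xs)

# global minima in 1-D: sin(10 * ln x) = 1, i.e.
# x = exp((pi/2 + 2*k*pi) / 10) for integer k, clipped to the domain
minima_1d = [x for k in range(-3, 5)
             for x in [math.exp((math.pi / 2 + 2 * k * math.pi) / 10)]
             if 0.25 <= x <= 10]

print(len(minima_1d))       # 6 minima in 1-D, each at depth -1
print(len(minima_1d) ** 2)  # 36 in 2-D: one per pair of coordinates
```

Since every minimum has the same depth, all 6^D of them are global, which is what makes this function a nice stress test for a multi-modal optimizer.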

The job of a multi-modal optimizer is to find all the global optima in a multi-modal function.  See this demo:

See the interesting bit? As the population of the GA evolves, it settles into the 36 minima, forming beautiful clusters around them.

The algorithm used here is based on our work  Deb, K., Saha, A., Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach (Accepted to be presented as full paper in GECCO-2010). (Let me know if you want to take a look at the paper).

You may also be interested in my past two demos:

Simulated Evolution And Learning (SEAL-2010)

Simulated Evolution and Learning (SEAL-2010) is the eighth biennial conference in the highly successful conference series that aims at exploring these two forms of adaptation and their roles and interactions in adaptive systems. Any paper involving evolution as a vehicle for adaptive and artificial problem-solving tasks, and any form of computational and machine learning procedure for developing and analyzing adaptive or artificial systems, will be of interest to this conference. Cross-fertilisation between evolutionary learning and other machine learning approaches, such as neural network learning, reinforcement learning, decision tree learning, fuzzy system learning, etc., is encouraged by the conference. The other major theme of the conference is optimization problem solving by evolutionary approaches or hybrid evolutionary approaches. The topics of interest to this conference include but are not limited to the following: Evolutionary Learning; Evolutionary Optimization (single and multi-objective); Hybrid Learning; Hybrid Optimization; Adaptive Systems; Theoretical Issues in Evolutionary Computation; Real-World Applications of Evolutionary Computation and Learning Techniques.

All accepted papers which are presented at the conference will be included in the conference proceedings, published as a volume of the Lecture Notes in Computer Science series, Springer. Selected best papers will be invited for further revisions and extensions for possible publication in evolutionary computing related journals.


05 July 2010: Deadline for submission of full papers (<= 10 pages, PDF only) (LaTeX template, Word template)
16 August 2010: Notification of acceptance
03 September 2010: Deadline for camera-ready copies of accepted papers
1-4 December 2010: Conference sessions (including tutorials and workshops)

For further details visit:

GA Demo: Multimodal Function Optimization

In my last GA demo, we saw how the population of individuals in a Genetic Algorithm converged to the globally best part of the search space. What if we want all the best individuals, i.e. the locally and globally best ones? The problem then becomes a multimodal optimization problem. See how it happens:

  • Uses the NSGA-II algorithm
  • Population size: 100, Generations: 150
  • Crossover: SBX Crossover
  • Mutation: Polynomial Mutation
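For the curious, the SBX operator listed above can be sketched in a few lines of Python for a single variable. This is a minimal version without the variable-boundary handling a full implementation needs; eta and the parent values are illustrative.

```python
import random

def sbx_crossover(p1, p2, eta=15.0):
    # Simulated Binary Crossover for one real variable: children are
    # spread around the parents with a polynomial distribution; a
    # larger eta keeps the children closer to the parents
    u = random.random()
    if u <= 0.5:
        beta = (2 * u) ** (1.0 / (eta + 1))
    else:
        beta = (1.0 / (2 * (1 - u))) ** (1.0 / (eta + 1))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

random.seed(1)
children = [sbx_crossover(1.0, 3.0) for _ in range(1000)]
# every child pair preserves the parents' mean: c1 + c2 == p1 + p2
```

The mean-preserving, parent-centric spread is what lets the population form the tight clusters around individual optima that you see in the demo.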

Here is another demo with another function:

  • Uses the NSGA-II algorithm
  • Population size: 60, Generations: 100
  • Crossover: SBX Crossover
  • Mutation: Polynomial Mutation

The algorithm used here is based on our work  Deb, K., Saha, A., Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach (Accepted to be presented as full paper in GECCO-2010). (Let me know if you want to take a look at the paper).