Evolutionary Algorithms (EAs for short) are optimisation algorithms, meaning they search for the optimal solution to a problem. Strictly speaking, EAs perform heuristic optimisation: they don't guarantee the optimal solution but deliver a near-optimal or "probably optimal" one. This makes them faster and less complex than classic optimisation algorithms. Accordingly, EAs are often used for repetitive problems that regularly need to be solved with a reasonably good solution, and for problems that need to adapt to change quickly. That is exactly the kind of business case we are currently working on with an evolutionary algorithm, but more about that at the end.

 

Let’s look at an optimisation problem together, the classic travelling salesperson problem: a salesperson needs to visit several cities on their route, they cannot visit the same city twice, and ideally they take the shortest route, perhaps because they are cycling from city to city. A classic optimisation algorithm might build a table of distances between every pair of cities, construct a route from it with more or less complex maths, and then improve that route.


As the name suggests, Evolutionary Algorithms are inspired by the biological concept of evolution. Instead of working with a single solution, EAs work with a population of possible solutions which “live” within an environment of limited resources. This leads to survival of the fittest and, in turn, to the evolution of better solutions. The many variations of evolutionary algorithms differ in how solutions are represented and how they become parents, reproduce, mutate and survive.
This article is not a complete guide and only provides information on some techniques and modules of EAs. For more information, we recommend “Introduction to Evolutionary Computing” by Eiben and Smith or this lecture by Tizhoosh.


Before we create a population for our example, we need a way to represent the solutions to our problem. We want a solution to resemble a biological chromosome consisting of genes with specific alleles (values). For our example, a solution is an array containing the different cities. In our case, the possible alleles are the cities Berlin, Cologne, Frankfurt, Hamburg and Munich. The position of a gene within the chromosome, also known as its locus, determines the order in which the cities are visited. This is called a permutation representation, often chosen when optimising the order of things.

 

This image depicts a chromosome for the example of the travelling salesperson problem. A chromosome is made up of many genes. A gene consists of a locus (the position of the gene) and an allele (the value of the gene). In the case of the chromosome [Berlin, Cologne, Frankfurt, Hamburg, Munich], the first gene has the locus 1 and the allele Berlin.
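To make this concrete, here is a minimal sketch of a permutation chromosome in Python; the variable name is purely illustrative and not tied to any specific library.

```python
# A chromosome is an ordered list of cities: the index is the locus
# (0-based here, while the text counts loci from 1) and the value is the allele.
chromosome = ["Berlin", "Cologne", "Frankfurt", "Hamburg", "Munich"]

# The first gene (locus 1 in the text, index 0 here) carries the allele "Berlin".
print(chromosome[0])  # -> Berlin
```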


Now we can create a population of solutions, either with some domain knowledge in mind or randomly. In our example, it could look like this: [Cologne, Berlin, Munich, Hamburg, Frankfurt], [Cologne, Berlin, Frankfurt, Munich, Hamburg], [Berlin, Cologne, Frankfurt, Hamburg, Munich]. The number of solutions in our population is determined by a parameter called the population size; in this case, our population size is three. Typically, the population size is much larger. It can also happen that two solutions are identical, which is no problem unless, of course, all solutions are identical.
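A random initialisation could be sketched roughly like this; the helper name and the population size of three are just taken from the example above.

```python
import random

CITIES = ["Berlin", "Cologne", "Frankfurt", "Hamburg", "Munich"]

def create_population(population_size: int) -> list[list[str]]:
    """Create a population of random permutations of the cities."""
    return [random.sample(CITIES, k=len(CITIES)) for _ in range(population_size)]

population = create_population(3)
# e.g. [['Cologne', 'Berlin', 'Munich', 'Hamburg', 'Frankfurt'], ...]
```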


Finally, evolution begins. For a set number of generations, we let the solutions evolve. Within one generation, the first thing that happens is that we select parents to produce offspring, as would occur in nature. This selection can happen randomly or with one of many parent selection operators; a simple one is rank-based parent selection. For this, the fitness of all solutions is determined. A solution’s fitness is given by a specific evaluation criterion; in our example, this is the length of the route, which needs to be calculated for each solution in the population. Fitness can either be minimised or maximised. In our case, we prefer the shortest route, so it is minimised. The solutions are then ranked by their fitness. If we want to choose two parents, we simply select the first two solutions in the ranking. In reality, the number of parents would be larger, but we will keep it simple here.

 

For the three solutions in our population, the ranked fitnesses are [Berlin, Cologne, Frankfurt, Hamburg, Munich] with 2.075 km, [Cologne, Berlin, Frankfurt, Munich, Hamburg] with 2.265 km and [Cologne, Berlin, Munich, Hamburg, Frankfurt] with 2.472 km. This way, we can choose the best two solutions as parents.
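A fitness function and the simple rank-based parent selection described above could be sketched as follows. The distance table is purely illustrative and does not reproduce the exact figures from the example.

```python
# Illustrative pairwise distances in km (rounded, for demonstration only).
DISTANCES = {
    frozenset(["Berlin", "Cologne"]): 575, frozenset(["Berlin", "Frankfurt"]): 545,
    frozenset(["Berlin", "Hamburg"]): 290, frozenset(["Berlin", "Munich"]): 585,
    frozenset(["Cologne", "Frankfurt"]): 190, frozenset(["Cologne", "Hamburg"]): 425,
    frozenset(["Cologne", "Munich"]): 575, frozenset(["Frankfurt", "Hamburg"]): 495,
    frozenset(["Frankfurt", "Munich"]): 395, frozenset(["Hamburg", "Munich"]): 775,
}

def route_length(route: list[str]) -> float:
    """Fitness to be minimised: total length of the route in visiting order."""
    return sum(DISTANCES[frozenset(pair)] for pair in zip(route, route[1:]))

def rank_based_parent_selection(population: list[list[str]], num_parents: int = 2) -> list[list[str]]:
    """Rank all solutions by route length (shorter is fitter) and keep the best ones."""
    ranked = sorted(population, key=route_length)
    return ranked[:num_parents]
```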

 

Once we have gathered all future parents, they can be recombined with one of many recombination operators, which also depend on the chosen representation. Order crossover is used for permutation representations and works as follows: 


We start with two parents and two random positions/loci (plural of locus).

The two parents [Berlin, Cologne, Frankfurt, Hamburg, Munich] and [Cologne, Berlin, Frankfurt, Munich, Hamburg] are split after locus 1 and again after locus 3.


In the first step, each offspring is filled with the segment between the two loci from one parent. In the second step, we fill the empty genes with the genes of the other parent, skipping alleles that already occur in the offspring and wrapping around to the beginning of the chromosome once we reach the end. This way, two new offspring are produced and added to the population.

 

The offspring creation in the two steps described in the text: step 1 results in [blank, Cologne, Frankfurt, blank, blank] and [blank, Berlin, Frankfurt, blank, blank]; step 2 results in [Berlin, Cologne, Frankfurt, Munich, Hamburg] and [Cologne, Berlin, Frankfurt, Hamburg, Munich].
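A sketch of order crossover for our permutation representation might look like this. The function name and the fixed cut points are only for illustration; the example call reproduces the two offspring shown above.

```python
def order_crossover(parent1: list[str], parent2: list[str],
                    cut1: int, cut2: int) -> tuple[list[str], list[str]]:
    """Order crossover (OX): cut1 and cut2 are the indices right after the chosen loci."""
    size = len(parent1)

    def make_offspring(keep: list[str], fill: list[str]) -> list[str]:
        child = [None] * size
        child[cut1:cut2] = keep[cut1:cut2]  # step 1: copy the segment between the cuts
        # Step 2: walk through the other parent starting after the second cut,
        # wrap around at the end, and place every allele not yet in the child.
        genes = [fill[(cut2 + i) % size] for i in range(size)
                 if fill[(cut2 + i) % size] not in child]
        slots = [(cut2 + i) % size for i in range(size)
                 if child[(cut2 + i) % size] is None]
        for slot, gene in zip(slots, genes):
            child[slot] = gene
        return child

    return make_offspring(parent1, parent2), make_offspring(parent2, parent1)

offspring = order_crossover(
    ["Berlin", "Cologne", "Frankfurt", "Hamburg", "Munich"],
    ["Cologne", "Berlin", "Frankfurt", "Munich", "Hamburg"],
    cut1=1, cut2=3,  # split after locus 1 and again after locus 3
)
# -> (['Berlin', 'Cologne', 'Frankfurt', 'Munich', 'Hamburg'],
#     ['Cologne', 'Berlin', 'Frankfurt', 'Hamburg', 'Munich'])
```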


While recombination is more similar to the exploitation occurring in classic optimisation, mutation is more similar to exploration, meaning it happens more randomly. Solutions are therefore chosen at random from the population and have one or more of their genes mutated. One mutation operator is swap mutation, where the alleles/values of two random genes are exchanged. The new solution is also added to the population.

 

The mutation of the alleles at loci two and four, creating the solution [Berlin, Hamburg, Frankfurt, Cologne, Munich].
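Swap mutation itself can be sketched in a few lines; again, the names are only illustrative.

```python
import random

def swap_mutation(solution: list[str]) -> list[str]:
    """Return a copy of the solution with the alleles of two random loci swapped."""
    mutant = solution.copy()
    i, j = random.sample(range(len(mutant)), 2)
    mutant[i], mutant[j] = mutant[j], mutant[i]
    return mutant

# Swapping loci 2 and 4 (indices 1 and 3) of
# ["Berlin", "Cologne", "Frankfurt", "Hamburg", "Munich"]
# yields ["Berlin", "Hamburg", "Frankfurt", "Cologne", "Munich"].
```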


At the end of one evolution step (generation), the survivors must be determined so that our population size remains stable. This can happen at random, based on the solutions’ age, or based on their fitness, where we first determine the fitness of all solutions and then remove the worst ones from the population.

 

The ranking of the population for survivor selection with the following results: 1. [Berlin, Hamburg, Frankfurt, Cologne, Munich], 2. [Berlin, Cologne, Frankfurt, Munich, Hamburg] with 1.953 km, 3. [Berlin, Cologne, Frankfurt, Hamburg, Munich] with 2.075 km, 4. [Cologne, Berlin, Frankfurt, Munich, Hamburg] with 2.265 km, 5. [Cologne, Berlin, Frankfurt, Hamburg, Munich] with 2.386 km and 6. [Cologne, Berlin, Munich, Hamburg, Frankfurt] with 2.472 km.
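Fitness-based survivor selection can be sketched as a simple truncation; the fitness function is passed in as a parameter so the sketch stands on its own.

```python
from typing import Callable

def survivor_selection(population: list[list[str]], population_size: int,
                       fitness: Callable[[list[str]], float]) -> list[list[str]]:
    """Keep only the fittest solutions (smallest fitness) so the population size stays stable."""
    ranked = sorted(population, key=fitness)
    return ranked[:population_size]
```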


After our set number of iterations, we are left with a population of solutions that is likely better than the population we started with, which you can already observe in our example. One or more of these solutions have the best fitness, meaning they are our near-optimal, possibly even truly optimal solutions. Because we naturally end up with a set of solutions to choose from, EAs are also well suited to multi-objective optimisation. Overall, evolutionary algorithms are relatively easy to understand, implement and compute. However, they are not useful if you require the exact optimal solution: depending on the parameters and components, an EA might not deliver the actual best solution.


The current challenge we are addressing aligns well with the strengths of an evolutionary algorithm (EA): frequent updates, high complexity in finding the optimal solution and multiple objectives. For this use case, we are looking into ways to upgrade the current system that schedules ad spots into ad breaks on linear TV.


Our colleagues from the advertising business have various preferences, some more important than others, in addition to the basic rules of ad scheduling, making this a multi-objective optimisation case. But most importantly, we need to stay within budget.


Inefficient ad scheduling can lead to financial losses for ad sellers if ads are over-delivered or under-delivered, potentially resulting in contract breaches. As numerous campaigns span across channels, the scheduling challenge rapidly becomes more complex. While the optimal solution is desirable, adaptability is necessary due to frequent program changes. Therefore, we have high expectations for our EA in addressing these requirements and look forward to sharing more insights about this use case in the future.