
Optimization involving discrete variables

We take a page from nature’s playbook and look at an algorithm inspired by evolution that is well-suited to solving very difficult discrete optimization problems.

Introduction

Another class of optimization problems involves variables that are not continuous. There are many examples of things that can only exist in certain states: obviously all integer variables are limited to whole numbers and a switch can only be on or off. Or consider a DNA sequence: it consists of a serial combination of only four bases. These kinds of optimization problems are tricky because we can no longer “climb the hill” by making incremental changes to our variable set and traveling along the gradient.

The DNA example provides us with a starting point for a numerical method to solve these kinds of problems. You might think of evolution as nature’s discrete optimization algorithm: the fitness of a given “solution” depends largely on its genetic code, which can only take defined states. So, how does nature continuously improve its designs?

There are a few key elements to nature’s algorithm, but the main idea is that a population is involved. Instead of starting at some point and incrementally improving a single solution, nature improves by mixing and matching different solutions within a large population. And that will be our approach. We will evolve optimum (or near optimum) solutions from a set of many initial solutions, some good and some bad.

This class of methods falls under the heading of evolutionary computation, of which genetic algorithms are the most important example.
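
To make the idea concrete, here is a minimal, self-contained sketch of a genetic algorithm applied to a toy discrete problem: evolving a bit string of all 1s (the classic “OneMax” exercise). It is written in Java to match the lab code, but the problem, class name, and operator choices (tournament selection, one-point crossover) are illustrative assumptions only and are not the ones used in the lab.

    import java.util.Random;

    // Toy example: evolve a bit string of all 1s ("OneMax") with a bare-bones genetic algorithm.
    public class OneMaxGA {
        static final int GENES = 50, POP = 100, GENERATIONS = 200;
        static final double MUTATION_RATE = 0.01;
        static final Random rng = new Random();

        public static void main(String[] args) {
            int[][] pop = new int[POP][GENES];
            for (int[] g : pop)                              // random initial population, some good and some bad
                for (int i = 0; i < GENES; i++) g[i] = rng.nextInt(2);

            for (int gen = 0; gen < GENERATIONS; gen++) {
                int[] fit = new int[POP];
                int best = 0;
                for (int p = 0; p < POP; p++) {              // score every candidate
                    fit[p] = fitness(pop[p]);
                    best = Math.max(best, fit[p]);
                }
                if (gen % 20 == 0) System.out.println("generation " + gen + ", best fitness " + best);

                int[][] next = new int[POP][];
                for (int p = 0; p < POP; p++) {              // breed the next generation
                    int[] mom = select(pop, fit), dad = select(pop, fit);
                    next[p] = mutate(crossover(mom, dad));
                }
                pop = next;                                  // the new generation replaces the old
            }
        }

        static int fitness(int[] g) {                        // count the 1s: higher is better
            int s = 0;
            for (int x : g) s += x;
            return s;
        }

        static int[] select(int[][] pop, int[] fit) {        // tournament of two: fitter genomes win more often
            int a = rng.nextInt(POP), b = rng.nextInt(POP);
            return fit[a] >= fit[b] ? pop[a] : pop[b];
        }

        static int[] crossover(int[] mom, int[] dad) {       // one-point crossover: mix and match two parents
            int cut = rng.nextInt(GENES);
            int[] child = new int[GENES];
            for (int i = 0; i < GENES; i++) child[i] = (i < cut) ? mom[i] : dad[i];
            return child;
        }

        static int[] mutate(int[] g) {                       // flip each gene with a small probability
            for (int i = 0; i < GENES; i++)
                if (rng.nextDouble() < MUTATION_RATE) g[i] = 1 - g[i];
            return g;
        }
    }

The Robby the Robot lab below follows the same overall loop (evaluate, select parents, cross over, mutate, repeat), just with a different encoding, a larger alphabet, and different operator choices.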

References

Most of the material in this section of the course is based on the work of Melanie Mitchell and two excellent books she published on genetic algorithms and complexity theory:

  • Mitchell, M. (1996). An Introduction to Genetic Algorithms. MIT Press.
  • Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press.

The discrete optimization example involving a wind farm is based on a series of papers from the literature, beginning with the original paper of Mosetti et al.

  • Mosetti, G., Poloni, C., & Diviacco, B. “Optimization of wind turbine positioning in large wind farms by means of a genetic algorithm”. Journal of Wind Engineering and Industrial Aerodynamics, Volume 51, Issue 1, 1994, pp 105–116.
  • Grady, S.A., Hussaini, M.Y., & Abdullah, M.M. “Placement of wind turbines using genetic algorithms”. Renewable Energy, Volume 30, 2005, pp 259–270.
  • Marmidis, G., Lazarou, S., & Pyrgioti, E. “Optimal placement of wind turbines in a wind park using Monte Carlo simulation”. Renewable Energy, Volume 33, Issue 7, 2008, pp 1455–1460.

Lecture slides

 


Tasks
  • Design a ruleset using the workshop at https://compute.dawsoncollege.qc.ca/apps/robby_the_robot/workshop. Click the ‘Submit’ button to get a blank starter strategy. Click the ‘Update’ button to see it run.
  • When designing your ruleset, start by thinking about what the robot should do in the following situations:
    • There is a can in the current square;
    • There is a can in an adjacent square;
    • There are no cans nearby.
  • Once you have fine-tuned your ruleset to your liking, copy the strategy and submit it at https://compute.dawsoncollege.qc.ca/apps/robby_the_robot/submit. The app will test your strategy against 10,000 randomly generated rooms and return the average score. Don’t forget to hit ‘Save’!

 

Robby the Robot

The objective of this lab is to design an optimal strategy for a room-cleaning robot using a genetic algorithm.

Code

There are two pieces of code today.

  1. GAStudents.java is where you will implement the genetic operators. I have already taken care of the other pieces (encoding the problem, randomly generating an initial population, evaluating the fitness of a given strategy, and a roulette wheel for selecting parents; a generic sketch of roulette-wheel selection follows this list). I encourage you to browse through the code before getting started to get an overall picture of how things work.
  2. DrawStudents.java allows us to test our optimal strategy under various conditions that will give us some insight into how the optimal strategy works. You do not need it until you have submitted your best GA strategy online and are satisfied with its score.
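
For reference, roulette-wheel (fitness-proportionate) selection generally works like the sketch below. This is a generic illustration with made-up names, not the actual implementation in GAStudents.java: each parent is chosen with a probability proportional to its share of the population’s total fitness.

    import java.util.Random;

    // Generic roulette-wheel selection: an individual's chance of being picked as a parent
    // is proportional to its share of the population's total fitness.
    public class RouletteSketch {
        private static final Random rng = new Random();

        // Returns the index of the selected individual; assumes fitness values are non-negative.
        public static int select(double[] fitness) {
            double total = 0;
            for (double f : fitness) total += f;

            double spin = rng.nextDouble() * total;          // spin the wheel
            double running = 0;
            for (int i = 0; i < fitness.length; i++) {
                running += fitness[i];                       // walk around the wheel slice by slice
                if (spin <= running) return i;               // the slice the ball lands in
            }
            return fitness.length - 1;                       // guard against floating-point rounding
        }

        public static void main(String[] args) {
            double[] fitness = {1.0, 3.0, 6.0};              // dummy scores: index 2 should win ~60% of the time
            int[] counts = new int[fitness.length];
            for (int trial = 0; trial < 10000; trial++) counts[select(fitness)]++;
            System.out.println(java.util.Arrays.toString(counts));
        }
    }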

In the lab

Implementing the genetic operators

Task 1

Implement two-point crossover starting on line 141 of GAStudents.java.
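
If you want a reminder of the idea before diving into the lab code, here is a minimal sketch of two-point crossover. It assumes parent strategies are plain int arrays of equal length; the class, method name, and signature are placeholders, so adapt the idea to whatever GAStudents.java actually expects at line 141.

    import java.util.Arrays;
    import java.util.Random;

    public class TwoPointCrossoverSketch {
        // Two-point crossover: pick two cut points, keep the outer segments from each parent,
        // and swap the segment between the cuts to produce two children.
        static int[][] crossover(int[] mom, int[] dad, Random rng) {
            int n = mom.length;
            int a = rng.nextInt(n), b = rng.nextInt(n);
            int lo = Math.min(a, b), hi = Math.max(a, b);

            int[] child1 = mom.clone(), child2 = dad.clone();
            for (int i = lo; i < hi; i++) {                  // swap the middle segment
                child1[i] = dad[i];
                child2[i] = mom[i];
            }
            return new int[][] { child1, child2 };
        }

        public static void main(String[] args) {
            int[] mom = {0, 0, 0, 0, 0, 0, 0, 0};
            int[] dad = {6, 6, 6, 6, 6, 6, 6, 6};
            int[][] kids = crossover(mom, dad, new Random());
            System.out.println(Arrays.toString(kids[0]));    // e.g. [0, 0, 6, 6, 6, 0, 0, 0]
            System.out.println(Arrays.toString(kids[1]));
        }
    }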

Task 2

Implement mutation starting on line 171 of GAStudents.java. Remember the alphabet is 0 through 6 (not just 0 and 1). Make sure the operator is applied to every gene in the sequence of a given strategy!
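
Again as an illustration only (the real method belongs at line 171 of GAStudents.java, and the mutation rate here is a made-up value), per-gene mutation over the 0–6 alphabet might look like this:

    import java.util.Arrays;
    import java.util.Random;

    public class MutationSketch {
        // Mutation over the 0-6 alphabet: visit EVERY gene in the strategy and, with a small
        // probability, replace it with an action drawn uniformly from {0, 1, ..., 6}.
        static void mutate(int[] strategy, double mutationRate, Random rng) {
            for (int i = 0; i < strategy.length; i++) {
                if (rng.nextDouble() < mutationRate) {
                    strategy[i] = rng.nextInt(7);            // any of the seven actions, not just a bit flip
                }
            }
        }

        public static void main(String[] args) {
            int[] strategy = new int[20];                    // a dummy strategy of all 0s
            mutate(strategy, 0.05, new Random());            // 0.05 is an illustrative rate, not the lab's
            System.out.println(Arrays.toString(strategy));
        }
    }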

Evolving a solution

Task 3

  • Once your code is working, run it. This will take some time, very likely between 3000 and 4000 generations. Your best solution should hit 10% somewhere between generations 500 and 700. If this isn’t the case, double-check your code and restart your simulation.
  • When the best solution in a given generation is consistently better than, say, 80–85%, you can halt the code. A perfect score would be 100%, so this GA-derived strategy is getting close to optimal and should be comparable to reasonable human strategies. But why? Are the strategies the same? Plug it into the web app and watch it work.
  • When you reach this point, show your results to the teacher. You must reach this point for full marks in the lab.

As homework

Understanding Robby

The optimal solution (roughly 95%) evolved by the genetic algorithm is sneaky good. It may take many runs to evolve a solution that gets those last few percentage points, but it is worth it because something truly interesting will happen. The goal here is to understand the magic in this optimal solution.

Once your optimal GA solution scores in the mid 90s, submit it to the web app and prepare a short report answering the following questions. Be sure to include your optimal solution in your report!

Task 4 – No cans nearby

  • When there are no cans near Robby, a common human strategy is to assign a random move, but what does the GA suggest?
    • If this situation persists, Robby will repeatedly carry out the same action and should eventually encounter a wall. If there are still no cans nearby, what do you expect him to do? What does he do?
    • If the situation repeats, Robby will again carry out the same action and should eventually encounter another wall. If there are still no cans nearby, what do you expect him to do? What does he do?
    • If the situation repeats yet again, Robby will once more carry out the same action and should eventually encounter yet another wall. If there are still no cans nearby, what do you expect him to do? What does he do?
  • Run DrawStudents.java using Test Case 2 to see how your optimal Robby behaves in a nearly empty room. A pattern should be apparent now as to how Robby searches for cans. Explain Robby’s strategy for searching for cans.

Task 5 – Many cans nearby

  • When there are cans in the E, W and C positions, a common human strategy is to pick up the can at the current position and then move to another site with a can, but what does the GA suggest?
  • Run DrawStudents.java using Test Case 3 to see how Robby behaves in this situation. You can also check Test Case 4. Explain Robby’s strategy when he sees many cans.

Task 6 – Randomness

  • How many 4s and 6s appear in the optimum gene sequence? At the start, they each occur roughly 1/7th of the time. How about at convergence? Why should there be any 4s? How many truly random moves does Robby make and in what situations?
  • Call the analyze function (line 47) in DrawStudents.java to see which genes contain 4s or 6s.

Task 7 – Specificity

  • The primary difference between the human strategy and the GA strategy is the cooperation between genes. Common human strategies are based solely on what Robby currently sees, but the GA strategy is in some sense deeper: it assigns actions based on Robby’s behaviour in other situations, which is evidently more efficient. But it is also worth noting that the GA evolved Robby’s strategy for a particular environment and it will not work nearly as well in other environments. Compare the GA strategy and the human strategy for a room full of cans.
  • Run DrawStudents.java using Test Case 5. Run the code a few times. Which strategy is better?

Open Challenge

Task 8 (Optional)

  • Can you find a strategy that outperforms the GA? Are there any adjustments you might make to the GA solution to eke out a slightly higher score?
  • Do whatever you can to get the absolute highest possible score and submit it to the web app.


Last Modified: October 26, 2020