Reading Notes: The Master Algorithm – Evolutionaries (part 5)

Ronald Fisher’s classic treatise The Genetical Theory of Natural Selection formulated the first mathematical theory of evolution.

The key input to a genetic algorithm is a fitness function.

Evolution as a whole has no known purpose.

Genetic algorithms are a lot like selective breeding. They breed programs instead of living creatures, and a generation is a few seconds of computer time instead of a creature’s lifetime.

Sexual reproduction consists of swapping material between chromosomes from the mother and father, a process called crossing over. A genetic algorithm works by mimicking this process. In each generation, it mates the fittest individuals, producing two offspring from each pair of parents by crossing over their bit strings at a random point. After applying point mutations to the new strings, it lets them loose in its virtual world. Each one returns with a fitness score, and the process repeats. Each generation is fitter than the previous one, and the process terminates when the desired fitness is reached or time runs out.
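
A minimal sketch of that loop in Python (the OneMax fitness function, population size, and mutation rate below are illustrative choices of mine, not anything from the book):

```python
import random

def fitness(bits):
    # Toy "OneMax" fitness: count the 1-bits; stands in for whatever
    # score the virtual world returns for an individual.
    return sum(bits)

def crossover(mom, dad):
    # Cross the parents' bit strings at a random point.
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:], dad[:point] + mom[point:]

def mutate(bits, rate=0.01):
    # Point mutation: flip each bit with small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(pop_size=50, length=32, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:      # desired fitness reached
            break
        parents = pop[:pop_size // 2]      # mate the fittest individuals
        children = []
        while len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            for child in crossover(mom, dad):
                children.append(mutate(child))
        pop = children[:pop_size]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "".join(map(str, best)))
```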

One way in which genetic algorithms routinely cheat is by allowing immortality (too bad we cannot do that in real life).

Backprop vs. genetic algorithms:

  • Backprop entertains a single hypothesis at any given time, and the hypothesis changes gradually until it settles into a local optimum. Backprop learns weights for a predefined network architecture; denser networks are more flexible but also harder to learn.
  • Genetic algorithms
    • are full of random choices: which hypotheses to keep alive and cross over (with fitter hypotheses being more likely candidates), where to cross two strings, which bits to mutate.
    • make no a priori assumptions about the structure they will learn, other than their general form.
    • are much less likely than backprop to get stuck in a local optimum
    • are better able to come up with something truly new
    • are much harder to analyze

One of the most important problems in ML, and in life, is the exploration-exploitation dilemma.

A genetic algorithm is like the ringleader of a group of gamblers, playing slot machines in every casino in town at the same time. Two schemas compete with each other if they include the same bits and differ in at least one of them, like *10 and *11, and n competing schemas are like n slot machines. Every set of competing schemas is a casino, and the genetic algorithm simultaneously figures out the winning machine in every casino, following the optimal strategy of playing the better-seeming machines with exponentially increasing frequency. Pretty smart.
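
A toy illustration of the casino analogy (not the schema theorem itself; the payout probabilities and the proportionate-play rule are assumptions for illustration):

```python
import random

payout = {"*10": 0.3, "*11": 0.6}    # hidden win rate of each machine
wins = {m: 1.0 for m in payout}      # optimistic starting estimates
plays = {m: 1 for m in payout}

for _ in range(1000):
    # Play a machine with probability proportional to its observed win
    # rate, so the better-seeming machine gets played ever more often
    # (exploitation) while the other still gets occasional pulls
    # (exploration).
    machines = list(payout)
    weights = [wins[m] / plays[m] for m in machines]
    choice = random.choices(machines, weights=weights)[0]
    plays[choice] += 1
    wins[choice] += random.random() < payout[choice]

print(plays)    # the bulk of the plays should go to *11
```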

Instead of evolving comparatively simple things like If… Then… rules or gas pipeline controllers, why not evolve full-blown computer programs? And why stick with bit strings as the representation? A program is really a tree of subroutine calls, so it’s better to cross over those subtrees directly than to shoehorn them into bit strings and risk destroying perfectly good subroutines when you cross them over at a random point.

Cross over two program trees by randomly swapping two of their subtrees. Starting with a population of random program trees, genetic programming uses crossover, mutation, and survival to gradually evolve better programs until it’s satisfied.
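
A sketch of subtree crossover on programs represented as nested lists in prefix form (this representation and the helper functions are illustrative assumptions, not a particular genetic-programming library):

```python
import random

def nodes(tree, path=()):
    # Enumerate the path (sequence of child indices) to every subtree.
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def get(tree, path):
    # Fetch the subtree at a given path.
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, subtree):
    # Return a copy of the tree with the subtree at path swapped out.
    if not path:
        return subtree
    copy = list(tree)
    copy[path[0]] = replace(copy[path[0]], path[1:], subtree)
    return copy

def crossover(a, b):
    # Produce two offspring by swapping a random subtree of each parent.
    pa = random.choice(list(nodes(a)))
    pb = random.choice(list(nodes(b)))
    return replace(a, pa, get(b, pb)), replace(b, pb, get(a, pa))

# Programs in prefix form: ['+', 'x', ['*', 'y', 2]] means x + y * 2.
parent1 = ['+', 'x', ['*', 'y', 2]]
parent2 = ['-', ['+', 'x', 'y'], 3]
print(crossover(parent1, parent2))
```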

A more illustrative example of what genetic programming can do is figuring out the sequence of actions a robot needs to perform to achieve some goal. Starting with the robot’s “atomic” behaviors and their allowed combinations, genetic programming can assemble a complex behavior that accomplishes the desired goal.

Evolutionaries vs. connectionists:

  • Common: both design learning algorithms inspired by nature
  • Evolutionaries:
    • Focus on learning structure; fine-tuning an evolved structure by optimizing parameters is of secondary importance.
    • Structure learning
  • Connectionists:
    • Prefer to take a simple, hand-coded structure with lots of connections and let weight learning do all the work
    • Weight learning

Evolution vs. the brain:

  • The brain can learn anything, but it can’t evolve a brain
  • Evolution takes billions of years to learn, and the brain takes a lifetime
  • Evolution is excruciatingly slow.
    • The entire life of an organism yields only one piece of information about its genome: its fitness, reflected in the organism’s number of offspring.
  • Nature vs. nurture debate

In contrast to the connectionists and evolutionaries, symbolists and Bayesians do not believe in emulating nature. Rather, they want to figure out from first principles what learners should do. They need to make optimal decisions, not just decisions that seem good.

 
