# Update

[cross-posted to Notional Slurry]

I realize I’ve turned quiet as far as the blogs are concerned. I’ve been working on translating the draft content for the Answer Factories book into a published manuscript. Markdown is lovely, but talking in detail about the process of software development still requires an awful lot of cutting-and-pasting, it turns out….

I recently updated the published draft; if you’re behind, feel free to go update your copy now. New content includes a description of the iPad game Cargo-bot, and a detailed test-driven re-implementation of the game logic in an emulator we’ll use for GP in forthcoming chapters. I spent a lot of time on the test-driven development, so I’d like some feedback if you’re willing.

# Help me pick the programming language for the second project?

Over at the Google group I’m soliciting advice on what programming language I should use to implement the second project for the book.

Also: Do you prefer the blog or the Google group for this sort of thing?

One of the deepest philosophical qualms I’ve had about the structure of the book is the necessity to show code in some particular programming language, without implying that if an algorithm were written in some other language it would be “better” or “worse” or in any sense functionally different. If I pick Ruby, things not only devolve into “Genetic Programming in Ruby” (which is misleading), but Python and Clojure and J programmers are left unable to follow along.

Traditionally computer scientists have used pseudocode as a solution to this problem, but to be blunt, pseudocode sucks: the acceptability of runnable code should not be driven by the structural features pseudocode enforces (loops, branches, arguments) but by the behavior it exhibits. As far as I’m concerned it’s not only misleading but dangerous to hand out pseudocode, if nothing else because I’ve watched people treat the text as if it were a form to fill out, replacing every line of the pseudocode with an “equivalent” line of code-in-some-language, expecting the result to run as intended.

Of course, for one big chunk of the readership, those familiar with test-driven and behavior-driven development, tests do the trick handily. But for the other big chunk of readers, that sensibility and suite of habits is missing, even alien.

So here’s what I decided: I’ll write code myself, and in the beginning project show all the code I write, but only in a behavior-driven way. The Cucumber tests will be there for readers to use (or kludge along with) in their own language of choice. I’ll be working in Ruby for the first project, because people who are comfortable with Ruby are the first audience I hope to reach.

But there are six projects planned, and so I’d like to gradually set the explicit code aside, and come to rely more on the Cucumber features and stories, or perhaps “more generic” (?!) statements of acceptance tests the reader’s libraries should pass.

So the programming language I use in the second project is up for grabs. Clojure? Python? JavaScript? Objective-C? In any case it must be something with a stable and usable acceptance testing framework on hand; I’m not scheduling any time to write my own testing libraries here.

The second project’s subject matter will either be “evolving geometric algorithms and proofs” or “data-mining and stock-trading and related crap”. So any language I’ve listed will work fine, and perhaps a few others I could learn relatively easily. (But not Java.)

# Protection against surprise

The following is an extract from the imminent Pragmatic GP book. I’m posting it because it’s the first place I’ve managed to tie together something that’s been bugging me for years, about the failures so many Very Smart programmers and computer scientists have experienced, over and over in some cases, when trying to build useful, interesting, robust GP systems that solve complex problems. I suspected that part of the problem was in their programming style, but I realize here that collaboration habits are really the key to success in GP. More on that later; for now I’m going to go publish the rest of this project.

This Cargo-bot project will be the first of a half-dozen we will work through together, so it seems important to say a bit about what I expect.

As I said, my intention for this effort is that you learn by doing, that is, by building your own GP systems, bottom-up, more or less from scratch. But it’s the nature of computer programming that it’s surprisingly difficult to communicate, except by example. There are some abstruse mathematical ways, and some hand-wavey (but over-constrained) indirections like that awful pseudocode crap, but really, in the end, I think the only way for you to purely learn by doing would be to write code with me — in person.

Moreover, there are a lot of interesting and important subjective aspects to successfully building a GP project that don’t come across in code as such. Why is that there? What does this do? How did you make that jump? Indeed, the lack of those subjective aspects in the other writing on GP is what’s sparked me sitting here typing today: they’re crucial; they’re the Language of the Project.

So at least for this first project, I’ve decided to write the code myself. There is still no obligation that you use the same programming language (Ruby) that I will — in fact, I hope you won’t. I hope you’ll re-write the code in some other language I’ve never heard of, and post it on GitHub, and share it with the other folks out there as more examples of how this can be done. But whatever you do, it had better still have all the same acceptance tests I’ll be describing myself, and demonstrate (automatically!) that all those features are present in your code, and the tests pass.

Beyond the fact that it will surface some of the important subjective aspects of the project that are normally elided in descriptions of GP, there’s a second (and more important) reason I’m going through this process of writing the code in plain sight.

People don’t like being surprised. Remember what I’ve said a few times already: GP (when it works) accelerates the process of discovery, which is to say, you being surprised by some unexpected result or pattern.

I’ve found that many well-trained folks, young and old, who have learned and inarguably mastered computer science and programming to the point where they can squint real hard at a problem and smash-key their way into a text editor and in a few hours produce some exotic self-contained crystalline beauty of an algorithm… well, those people often react poorly to being surprised. They are so used to being right, to intuiting what they’re going to do, and then doing just that; they hammer out the code they envision (though maybe hitting the backspace key a lot) without ever having to speak to another person or do much more than rtfm now and then.

In my experience, those people have the most trouble building and using a GP system. It has nothing to do with their coding abilities, or their understanding of system design or syntax or parsers or any of that computery stuff. It has a lot to do with them seeing the thing they build — the GP software process — make the same sort of blue-sky, intuitive, often surprising design decisions that they make. Any well-made GP system will quickly spit out “design decisions” that may or may not make sense (it’s hard to tell sometimes), but which in any case are nothing like what the programmer expected. And in my experience, Very Smart programmer folks are not often equipped to deal with that.

This is a problem of communication, not code-writing. I think it may be associated with a bundle of habits Very Smart programmers develop in our culture, habits which tend to keep them from having to discuss what and why they’re writing each of the individual little incremental bits of code they write. They’re often so smart, they need not have any help visualizing an entire start-to-finish solution in their heads, or do any explicit introspection, or spell out why they have made one decision versus another. But if anybody with some different vision of an equally effective start-to-finish solution starts changing their codebase, or changes the rules or goals in mid-project, or argues with them about their basic toolkit… well, it doesn’t go well. More often than not, in my experience, they freak out. Sometimes there is slamming.

This is not a criticism, but rather a diagnosis: Very Smart programmers are, in my experience, easily thrown off track by having unexpected changes made to the project they’re working on. Or by coping with unexpected or emergent behavior that arises during a project. Or by having their fundamental assumptions about a domain questioned. Or by having to integrate or adapt to an alternative design approach.

Any well-built GP system, working on any interesting project, will most definitely do all those things.

I am not a Very Smart programmer. I’ve had to stumble my way through all the coding I’ve ever done, and I have very poor visualizing-everything-start-to-finish-in-my-head skills. So in the last decade or so, programming in my not-Very-Smart way, I’ve been obliged to speak with other people a lot, and explain what it is I’m doing in each iterative decision I make, and ask them what they meant by what they were doing, and all those other things Very Smart programmers are able to do automatically without consulting other people.

Along the way, I’ve discovered that there is a set of tools for that. Communication and collaboration tools. And I’ve found that those same tools, or at least very similar ones, help to deal with this weird “collaboration” thing one needs to do with a GP system. It’s not collaboration, and it’s not communication; but it is something a lot like the mutual mental modeling we do when we speak to one another (which, you may be amused to discover, is sometimes referred to as Pragmatics).

So as I go along, in my not-Very-Smart way, I am going to be building in some of those safeguards I’ve learned, and using tools that were originally designed to foster collaboration between human beings working together on code. You, since you may be Very Smart, might not have encountered these before; in my experience many programmers who work in the University or other uncollaborative settings may have missed them. Worse, you may have encountered them (for example as practices that form part of the methodology referred to as Agile by some folks), and dismissed them as being unimportant to you in your work-life.

They form a crucial buffering interface, one that will help you avoid being surprised into failure. Do not ignore them.

# The Three Languages of GP

[This is a draft of an introductory chapter of the book; expect some changes as I finish up the first partial release. Also, I note some links and cross-references don’t translate to the blog directly.]

# Three Languages

I like to say that success on a GP project involves working in three languages. Look again at the 3×5 card, and you’ll see hints of them all there. I call them the Language of Answers, the Language of Search, and the Language of the Project.

## The Language of Answers: What?

A GP system itself doesn’t “think”. It’s a system for accelerating the exploration of alternative answers to a formally-stated question. A single project will often shift to explore several different questions: matters of whimsical open-ended curiosity, or earnestly dedicated purposive science. But you should pay attention to only one at a time.

To be able to treat any particular question using GP, and explore the vast number of diverse alternative answers, you will need to write code that embodies your interests and goals.

Setting up a GP project obliges you to do some coding work. Usually the majority of your work will be the design and implementation of a domain-specific language. It doesn’t have to be very complicated, but it will need the flexibility and capacity to describe any interesting answer to your project’s question of the moment — the “smart” answers, and also the “dumb” ones. After all: your problem is interesting because you don’t know which answers are smart and which dumb….

I prefer to call the scripts you write in this domain-specific language — as interpreted in the context of your problem — “Answers”. In the GP literature you’ll see them called “individuals” and “genomes”; those are historically important terms, but they carry a lot of potentially misleading metaphorical baggage. Here we’ll stick with the term Answers, to remind us that they are contingent on the problem you’re considering.

You’ll have seen folks listing some of the potential applications of GP, talking about evolving “programs” and “strategies” and “puzzle solutions” and “molecules” and “controllers” and “robots”… all kinds of complex actual things. As language-using humans, sometimes we mistake our representations of concepts for the concepts themselves. Remember: a script doesn’t do anything until we run it on a particular interpreter or compiler, and even then only with certain variables bound to meaningful values.

A strategy is a meaningless poem until you invoke it in the context in which it was conceived; we cannot meaningfully read a “pure strategy” without knowing what war or game or business it was meant for. A real molecule is not a string of “ACGT”, or even a pretty colored picture of little candy balls on sticks, but nonetheless when we “evolve molecules” we’re evolving balls on sticks or strings of letters… then interpreting those in some molecular simulation. A controller for a robot is only a string until you upload it to a physical robot or a simulation so you can see what might happen when it runs. A plan for trading stocks is meaningless — and risky — without also considering the particular historical context and the specific trade execution system for which it was developed. And so on. An Answer needs both pieces of infrastructure: a statement or script written in a domain-specific language (often one you design), and also a formal setting in which the function embodied in a script can be expressed and explored meaningfully.
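To make those two pieces concrete, here is a minimal sketch in Ruby. Everything here is my own illustration, not code from the book: a made-up postfix script format stands in for the domain-specific language, and an `express` method stands in for the formal setting that gives a script its function.

```ruby
# Illustrative sketch: an Answer couples a script written in a tiny
# DSL with an interpreter that expresses its function in context.
class Answer
  attr_reader :script

  def initialize(script)
    @script = script # postfix tokens, e.g. [:x, :x, :*, 3, :+]
  end

  # Express the Answer's function in a context: a binding of
  # variable names to concrete values.
  def express(bindings)
    stack = []
    @script.each do |token|
      case token
      when :+ then b, a = stack.pop, stack.pop; stack.push(a + b)
      when :* then b, a = stack.pop, stack.pop; stack.push(a * b)
      when Numeric then stack.push(token)
      else stack.push(bindings.fetch(token))
      end
    end
    stack.pop
  end
end

# The script alone does nothing; with x bound, it has a function.
answer = Answer.new([:x, :x, :*, 3, :+]) # i.e. x*x + 3
puts answer.express(x: 4)
```

The same `Answer` object can be expressed in as many contexts as you have bindings for, which is exactly the separation described above.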

If the Answers in your project are simple DNA sequence strings like ACGTCTAGCA..., you’ll also need to obtain (or write) a simulator that translates those strings into proteins, or folds them, or tests them for toxicity, or does whatever a computer needs to do in order to determine the salient aspects of their function. If you want to evolve robot controller scripts, you’ll need a real or a simulated robot that can execute your controller scripts and reveal their function.

This is true even for the simplest and most common application of GP, symbolic regression — fitting mathematical functions to training data. The most common approach is to represent these mathematical equations as S-expressions, a form familiar to many Computer Scientists who learned to program in Lisp. For example, ( + x ( / 2 9 ) ) is an S-expression representing the function $y=x+\frac{2}{9}$.

But notice that the S-expression script ( + x ( / 2 9 ) ) is not in itself the mathematical function (unless you happen to be running a Clojure interpreter in your head or something). Even though it’s very close to runnable code, it’s not fully an Answer until you express it by parsing and evaluating it in an interpreter — one in which $x$ has a number assigned to it.
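The gap between script and function can be sketched in a few lines of Ruby. The `parse_sexp`/`eval_sexp` pair here is a hypothetical illustration of mine, not the book’s code: the S-expression stays inert text until an interpreter evaluates it with `x` bound to a number.

```ruby
# Illustrative sketch: parse the printed S-expression into nested
# arrays, then evaluate it in an interpreter where x is bound.
def parse_sexp(tokens)
  token = tokens.shift
  return token =~ /\A-?\d+\z/ ? token.to_f : token.to_sym unless token == "("
  list = []
  list << parse_sexp(tokens) until tokens.first == ")"
  tokens.shift # discard the ")"
  list
end

def eval_sexp(expr, bindings)
  # A bare symbol is a variable; a number is itself.
  return bindings.fetch(expr, expr) unless expr.is_a?(Array)
  op, *args = expr
  values = args.map { |a| eval_sexp(a, bindings) }
  case op
  when :+ then values.reduce(:+)
  when :- then values.reduce(:-)
  when :* then values.reduce(:*)
  when :/ then values.reduce(:/)
  end
end

expr = parse_sexp("( + x ( / 2 9 ) )".split)
puts eval_sexp(expr, x: 1.0) # x + 2/9, with x assigned 1.0
```

Only the last line turns the script into an Answer; everything before it is just text manipulation.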

Even when there’s a “general-purpose” GP-ready full-featured language available — something like Clojush, or even a human-readable language like Java — you’ll usually need to expand it with libraries or custom code to include domain-specific vocabulary. And for reasons we’ll discover in the first project, sometimes when you use a full-featured language, you’ll also need to trim back its capacity.

Focus for a moment on the phrase “domain-specific” and how it needs to cut both ways: You don’t typically find for...next loops or set-theoretic operations in symbolic regression projects, because people are asking for arithmetic Answers, and those people rarely see for...next loops in arithmetic. You can fit data algorithmically using loops and Boolean operators and bit-shifting — after all, that’s how computers themselves do it. But you won’t find a shift_right operator in most off-the-shelf symbolic regression packages, because the Answers that use it to explore the problem would “feel weird”.

If you’re working on a project where you want to explore string-matching algorithms to classify DNA into genes and introns, your Language of Answers will probably include something about regular expressions. Not a lot about sin() and cos().

If you’re working on a project where you want to explore game-playing algorithms for a text-based dungeon crawl, your Language of Answers will probably include primitives like look and if and fight. And maybe, if you’re fancy, you’ll roll in a library for creating decision trees so your adventurer can learn. But again, not a lot of sin() or cos() happening in the ol’ Crypt of Creatures.

And just to prove I’ve got nothing against trigonometry as such: If you’re working on a project where you want to explore the set of plane geometry diagrams which can be constructed using a straight-edge and compass, you will almost certainly want some sin() and cos() floating around in the mix.

### So GP is “automated” how, exactly?

No escaping it. In almost every GP project, you will need to hand-code this Language of Answers. Both parts: not just the “scripts” but also the contextualizing system used to interpret scripts and express their functions meaningfully.

Does this seem like a lot of effort? It’s not, when you put it in perspective. Realize that when you explore a problem with GP, you should expect to examine millions of alternative Answers. In traditional approaches to problem-solving, you might (if you’re Ever So Smart) be able to consider a few dozen — the ones you can keep in your head and notebooks. Even if you use algorithmic tools like linear programming, realize they are parametric explorations of different constant assignments… within one Answer at a time.

If you want access to the millions instead of the dozens, you need to put in the up-front work to programmatically represent the structure of Answers, and also hook up the mechanisms needed to express them functionally. That’s the investment you make.

## The Language of Search: How?

“Language of Search” is my catch-all for the innumerable tricks of the GP trade. I count anything that changes the subset of Answers you’re considering, including random guessing and assigning them a score based on their performance in context.

There’s all the familiar biologically-inspired stuff like crossover, mutation, selection, and the more fanciful manipulations. And also the idiomatic tools we use to implement learning or evolving or improving: populations, back-propagation, selection, statistical analysis, 1+1 Evolutionsstrategie…. Basically anything and everything that reduces the amount of personal attention you need to pay to all those alternative Answers.
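As one small taste of that toolkit, a bare-bones 1+1 Evolutionsstrategie fits in a few lines. This is a generic illustration of mine (the vector representation and step size are arbitrary choices, not the book’s): keep a single parent, mutate it to get one child, and keep the child only if it scores no worse.

```ruby
# Illustrative sketch: a bare-bones 1+1 Evolutionsstrategie.
# One parent (a vector of numbers), one mutant child per step;
# the child survives only if it scores no worse. Lower is better.
def one_plus_one_es(start, steps:, step_size: 0.1, &score)
  parent = start
  steps.times do
    child = parent.map { |v| v + (rand * 2 - 1) * step_size }
    parent = child if score.call(child) <= score.call(parent)
  end
  parent
end

# Hypothetical use: minimize squared distance from the point (3, -2).
best = one_plus_one_es([0.0, 0.0], steps: 2000) do |(x, y)|
  (x - 3)**2 + (y + 2)**2
end
```

Everything here is plain programming; what makes it “search” is only the loop of guess, score, and keep-or-discard.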

There is no particular “right way” to use or combine these components. They’re really all design patterns, and they are used differently in different geographical regions and schools; they are most like the mythic martial arts styles you see in movies, and the particular moves one school or Master may teach his students. But just as the martial arts share a purpose (if not an attitude), the many parts of the Language of Search address one question: Based on what you have discovered already, how do you identify new Answers that will be more satisfying?

Every GP project uses selection in one form or another, so let’s look at that more closely. Say we’ve built a GP system with a population of 100 Answers, and we want to design a process to pick “parents” in order to breed a new generation. There are literally hundreds of approaches, but here are four. We might:

• …pick two parents with equal probability and remove them from the population; breed them to produce two or more offspring; keep the two best-performing family members (including, possibly, the parents), and place those winning family members back into the population.
• …pick two parents randomly from the population, using a bias towards better-scoring ones; breed those two parents to produce one offspring, and set it aside in a new “generation”; continue (with replacement of parents) until you have as many in the next generation as you did in the last.
• …pick two parents at random from the population, with uniform probability; breed them, and return the parents and the offspring to the population; continue until the population size is doubled; destroy half the population, culling it back down to the size where it started.
• …pick ten different individuals from the population with uniform probability; choose the best one of that tournament to be the first parent; repeat for the second parent; breed, and then… (&c &c)

These are all perfectly reasonable and practical ways of choosing Answers to breed and cull from a population. Three of them have formal names, even. Occasionally one may feel “better” than another for a given project, but none is intrinsically better in all situations.
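For instance, the last recipe above (tournament selection) might be sketched like this in Ruby. The names and the score convention are made up for illustration; all that matters is the pattern of sampling and picking the best.

```ruby
# Illustrative sketch of the fourth recipe (tournament selection),
# assuming each candidate carries a numeric score, lower = better.
Candidate = Struct.new(:script, :score)

# Sample `tournament_size` distinct individuals uniformly, and
# return the best-scoring one as a parent.
def tournament_pick(population, tournament_size: 10)
  population.sample(tournament_size).min_by(&:score)
end

population = Array.new(100) { |i| Candidate.new("script-#{i}", rand) }
first_parent  = tournament_pick(population)
second_parent = tournament_pick(population)
# ...breed the two parents, and then... (&c &c)
```

Note how little of this is exotic: it is ordinary sampling and sorting, wired into a loop.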

My point in listing them is to highlight the obvious fact that they’re all just recipes in a formal language: the language I’m referring to as the Language of Search. The “primitives” in this language are things you can surely see in my verbal descriptions: evaluation, subsetting and sampling, breeding (itself a whole blanket process that usually refers to “mixing up Answer scripts with one another”)… and of course the basic programming infrastructure of iteration and conditional execution and sorting.

All normal computer programming stuff, though maybe a bit more stochastic than you’re used to. But note that the Language of Search isn’t limited to code: There’s an important class of GP systems known as “user-centric” or “interactive”, in which a real live human being makes conscious decisions as part of the algorithm. This is a valuable tool for exploring matters of aesthetics and subjective judgement. (And we’ll build something like that in a later project.)

The Language of Search is huge, but it’s not onerous. While you almost always need to design and implement your project’s Language of Answers, even the most “advanced” tools in the Language of Search toolkit are simple in comparison. Things like “chop up a string and mix up the parts” or “change a token in a script to a random value” or “assign a score to an Answer by running it in context, given specific input conditions”.
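Two of those quoted primitives might be sketched like so. The token set and the array-of-tokens representation are purely illustrative assumptions of mine; real projects vary widely.

```ruby
# Illustrative sketches of two Language-of-Search primitives,
# acting on Answers represented as plain arrays of tokens.
TOKENS = [:x, :+, :*, 1, 2, 3].freeze

# "change a token in a script to a random value"
def point_mutate(script)
  mutant = script.dup
  mutant[rand(mutant.length)] = TOKENS.sample
  mutant
end

# "chop up a string and mix up the parts": one-point crossover
def one_point_crossover(mom, dad)
  cut = rand(1...[mom.length, dad.length].min)
  [mom[0...cut] + dad[cut..], dad[0...cut] + mom[cut..]]
end

child_a, child_b = one_point_crossover([:x, :+, 1], [:x, :*, 2])
```

Each one is a few lines of ordinary list manipulation; the apparent sophistication of a GP system comes from cobbling a handful of these together.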

When I keep saying GP is simple, that’s what I mean: the Language of Search is simple. It’s really just a big catalog of small parts you cobble together, and there’s absolutely no reason you should try to learn all the tools anybody has ever tried, or use more than three or four basics in a given project.

And that would be your cue to ask: Why, then, does GP have a reputation for being so hard?

## The Language of the Project: Why?

Almost all GP writing focuses on the Language of Search, either spelling out new tools and algorithms, or having little benchmarking contests between variations. A bit of the writing — mostly theoretical Computer Science — touches on the Language of Answers under the heading “representation theory”.

As far as I know, very little has been written about this stuff I’m calling the “Language of the Project”. Yet I argue it’s the most important of the three — not least because it’s the deciding factor when it comes to predicting whether a project will succeed or fail.

The Language of the Project is the language we use to talk about ourselves, in our roles as part of the project. It’s the framing we use to express what we want, and why. It’s our expression of the reasons one Answer is more satisfying than another, and our consideration of the possibility that no satisfying Answer exists. It’s the language we use to process the surprises GP inevitably throws our way.

Big chunks of my Language of the Project fall in the realm of well-studied disciplines: “user experience”, “project management” and “domain modeling”. Why do I feel it’s important to concoct a catch-all neologism just to lump together those esteemed fields for this special GP junk? Worse: why is a technical computing book about “artificial intelligence” getting all touchy-feely and psychological?

Simple answer: Because people don’t like being surprised.

That may ring a bell, since when you check you will see that the subtitle of this very book is “The Engineering of Useful Surprises”. And I specifically argued earlier that GP is “a prosthesis for accelerating innovation” — innovation in the sense of surprises.

Yup. And that’s the biggest obstacle in the way of broader adoption of GP, and also the biggest obstacle you personally will have working on your own projects: People don’t like being surprised.

A lot of folks seem to have decided that GP is “automatic”; that it’s used for “automatic search”, or “automatic programming”, or building “invention machines” that spit out inventions that are of “human-competitive” quality. Those folks won’t think my third Language is worth their attention.

To them, GP — and artificial intelligence more generally — is a sort of self-contained box of magic thinking stuff. I wonder if maybe those people have read post hoc reports of successful GP (or AI) projects, without considering all of what happens over the course of an actual project: a lot of non-artificial human thinking, typing, compiling, swearing, whiteboard-scribbling, and conversation… filtered through a series of iterative programming attempts and arguments and writing, until eventually an encouraging result was published. If you don’t count that as part of the project, then of course you shouldn’t think a GP (or AI) system includes the project team rewriting the algorithms, or the planning sketches, or the conversations and reading, or the re-starts with different settings to try to get more consistent results, or the statistical analyses trying to “tune” or “speed up” the thing, or even the story written down in the paper that describes what happened.

And if you’re willing to draw the boundary around the system that way, in a way that leads you to think GP (or AI) is a self-contained magic box of thinking stuff that people stand in front of and pat and hug and eventually coax intelligence out of… well, you ought to get started now, because time’s-a-wastin’.

But while you’re occupied in patting and fostering self-organized creative urges, muse about it my way for a minute.

Recall that the Language of Answers is something you will almost always build from scratch. It’s not just domain-specific, it’s often problem-specific. The only time you can get away with using a pre-cooked Language of Answers is when you’ve unconsciously selected a problem that makes it easier to stomach reuse, or reduced the domain-specific qualities to raw numbers and true/false decisions.

Given that reminder: How do you design the constants, variables and operators to use in your project’s Language of Answers? Which instructions will be more helpful in making interesting Answers? Which will be too weird? How do you ensure every Answer will be syntactically correct, or semantically consistent? Or do you have to? How do you know whether your Language of Answers is capable of representing any satisfying Answer at all, let alone an “optimal” one? How do you tell, and what do you do, when your GP system is ignoring important tools you want to see it use?

Those are questions from the Language of the Project. No matter where you draw your system lines, a person needs to ask and answer these questions. Every time, for every project, for every problem. And a person needs to design and implement the solutions to them, using the other tools at their disposal. None of that is “automatic”.

And you may also recall that the Language of Search is a bulging toolkit, full of literally thousands of design patterns and rules of thumb for manipulating answers in context-dependent useful ways. I can describe sixteen mutation algorithms without breaking a sweat; then you’ve got crossover, and simulated annealing, and steady-state population dynamics, and demes, trivial geography, hill-climbing, initialization biasing, multi-objective sorting, particle swarms, automatically-defined functions, vertical slicing, age-layered populations…. Any riffle through any GP book will give you fifty more.

Given that reminder: How do you pick the mechanisms for search and learning in your project? How do you know which combination may be best, or even useful, for your problem? What do you even watch in order to decide whether a GP search is “working” or not? Should you let your current search run longer, or start it over again? If you start it over, should you change the parameters a bit, or try a different design pattern? What do you do when it gives you an answer that “solves the problem” in a totally stupid way?

A person needs to mindfully adapt the structure of the project to fit the dynamic context of their wants and knowledge, and manage the system into giving them the answers they will find satisfying.

My “Language of the Project” isn’t identical with user experience, or project management, or domain modeling, or even their union. Those disciplines are admirable, but they are designed for unaccelerated human-powered projects.

### “Excuse me: What just happened?”

You write software. I know this, or you wouldn’t bother reading this far. If a project isn’t giving you satisfying answers — whether it involves GP or not — then you (personally) need to check that it’s implemented correctly. And when you’re convinced it is running as intended, you then (personally) need to reflect and decide whether it’s doing what you want it to. And if you decide that it isn’t, then you (personally) need to either change how it’s written, or change what you think it’s for.

In non-GP projects — software development or financial or home improvement or medical research projects — there’s a reasonable sense that one can “re-start”. But of course in the context of human-powered projects, “re-starting” is never misunderstood to mean “from the same initial conditions”. You (personally, with all the other human beings on your team) “re-start” having learned something useful and helpful. You intend to do something differently the second time around, and you don’t have to concentrate very hard on remembering to change stuff.

This difference between you-before-the-first-try and you-after-the-first-try doesn’t get mentioned, because it’s such a fundamental fact of life that it goes without saying. But notice that you (personally) are understood intuitively to be part of the problem-solving system before and after the “re-start”.

Just the other day I was work­ing on the code for a later sec­tion of this book: the part where we will evolve Conway’s Game of Life. I found that the GP sys­tem I started with was hav­ing a lot of trou­ble pro­duc­ing inter­est­ing answers. I worked a few days, try­ing to get it to do what I expected.

And then I real­ized that it had been work­ing the whole time. I mean totally work­ing. It gave me the best pos­si­ble answer, every time.

Only then did I real­ize that the ques­tion I was ask­ing was super bor­ing. There was only one right answer, and the GP sys­tem I built kept giv­ing me that answer. Imme­di­ately.

Now if you are one of the folks who want to think GP is a self-contained box of magic think­ing stuff, this might seem like a good out­come, and not a prob­lem. Who wouldn’t want an “opti­miza­tion algo­rithm” to give them The One Right Answer?

Well, me. And you, I expect.

I would sound like this, if I were on stage at the Amaz­ing Answer Machine Show: “Ladies and Gen­tle­men, I am think­ing of a spe­cial algo­rithm! I have pro­vided this, The Box of Magic Think­ing Stuff, with 512 carefully-chosen exam­ples and a col­lec­tion of use­ful tools, none of which in itself is the algo­rithm. By recom­bin­ing those tools in a very com­pli­cated way while I stand over here, The Box will now guess the func­tion I’m think­ing of in a mat­ter of mere moments.…”

A card trick. Boring.

What did I do then? I revised my notion of the project’s goals. I “re-started”, and in doing so I changed the story I’d been telling myself, the ques­tions I was ask­ing, and I expanded the Lan­guage of Answers accordingly.

The answer my GP sys­tem gave me was a sur­prise. One I wasn’t men­tally pre­pared to under­stand, not least because it hap­pened in a mat­ter of sec­onds where I was expect­ing it to take some time. When I finally parsed what it kept repeat­ing, I had a sec­ond sur­prise: the ques­tion I had asked was boring.

If I had been work­ing in a tra­di­tional unac­cel­er­ated way — with a white­board or a yel­low legal pad, chew­ing on the end of a pen and pac­ing with my hands behind my back like a think-tank car­i­ca­ture — I might have frowned and erased some stuff, or crum­pled up a page or two and made a cup of tea.

I wouldn’t have been surprised.

### Mixed bless­ings

Intro­spec­tion is hard. Most peo­ple, for what­ever rea­son, don’t like to ques­tion their assump­tions. They like cer­tain­ties and prov­able cor­rect­ness, famil­iar mod­els and known best prac­tices, math­e­mat­i­cal rigor pre­sented on a buoy­ant comfort-cushion of assumptions.

That’s what I mean when I say they don’t like to be surprised.

Sur­prises aren’t just pleas­ant eureka moments, they’re also the oh shit moments. GP can be use­ful as an “inno­va­tion pros­the­sis” because it short­ens the time between those eureka surprises.

GP feels com­pli­cated and dif­fi­cult and annoy­ing because it also short­ens the time between oh shit sur­prises. And it can’t tell the difference.

GP projects often fail because novices run into oh shit sur­prises before any eureka ones. They’re cul­tur­ally mal­adapted to cope with this dis­or­der: they’re often Very Smart Com­puter Sci­en­tists or early-adopter domain experts, and they can pick up some infor­ma­tion from the books or the nerds down the street, and they start dab­bling in what I’ve called the Lan­guages of Answers and Search.

But nobody ever tells them about these inevitable oh shits.

I’m going to focus on this cobbled-together “Language of the Project” exactly because of those issues. I’ve watched dozens of Very Smart engineery people dive in and (metaphorically) drown in GP. We need to erase the traditional boundary between what you think of as “the project” and you (personally), the “researcher”.

This is not to advance some Agilist social agenda; it’s a coping mechanism. Your best and most useful habits as a Very Smart Person are based on your experiences thinking very hard and hand-coding solutions to problems one at a time, and considering a few dozen alternatives. Without any thousand-fold enhancement.

I see it often: Smart per­son down­loads some pack­age; writes some code; fol­lows along with a tuto­r­ial and builds a GP sys­tem and—boom—it starts spit­ting out ten thou­sand reasonable-sounding solu­tions every hour. Already they’re way out­side the range of what their habits pre­pare them for. But they’re Very Smart, and so they look at the answers they have so far, and they fid­dle with some things and change some para­me­ters… and—boom—in an hour they have ten thou­sand com­pletely dif­fer­ent answers.

“What just happened?”

When it works, answers emerge from a GP sys­tem, in the sense of emer­gent behav­ior. Good Answers and bad ones. But real­ize they can’t emerge from a GP sys­tem of the sort I’m teach­ing you about — the sort that includes you (per­son­ally) as one of the com­po­nents — until you (per­son­ally) exam­ine those Answers and even­tu­ally decide you’re sat­is­fied. You can’t suc­ceed unless you can cope with the acceleration.

Here’s one of the core ques­tions in GP (and AI) research, a deep and trou­bling one that many man-years of research have been spent con­sid­er­ing: How do you know whether you should (a) keep a GP sys­tem run­ning, on the off chance it will get bet­ter soon and give you new unex­pected answers, or (b) stop it and start over from dif­fer­ent ini­tial conditions?

If you think GP (and AI) is a self-contained magic box of think­ing stuff: You don’t.

If you real­ize you’re a core com­po­nent in the GP sys­tem: Pick the one that is more sat­is­fy­ing to you at the moment, and try the other if that doesn’t work out.

And here is a deep-rooted problem affecting all of search and optimization, not just in AI but in all computational approaches: How do you know a priori which search technique will provide reliably better answers for a given problem?

If you think of the pro­gram as a self-contained box of opti­miza­tion tools (and magic think­ing stuff), the proven4 answer is: You can’t.

GP is sim­ple. Reg­u­lar old human-scale problem-solving is hard enough that peo­ple will tell you you’re a Very Smart Per­son if you demon­strate even occa­sional com­pe­tence. But cop­ing with a thousand-fold accel­er­a­tion will break your model of your­self and what you think you’re doing.

So. Let’s start breaking.

1. So com­mon that the old Wikipedia page for Sym­bolic Regres­sion now redi­rects to the one for Genetic Pro­gram­ming. Am I allowed to put a “facepalm” in a book?

2. I worry there’s a bit too much sub­tlety here: In some projects, an Answer may well be a for­mal func­tion that is not eval­u­ated with vari­able assign­ments — a project involv­ing alge­braic trans­for­ma­tions, for exam­ple. It’s the goal of sym­bolic regres­sion to fit par­tic­u­lar train­ing and test data; assign­ing those par­tic­u­lar val­ues is part of inter­pret­ing an Answer in that con­text.

3. Let me share a symbolic regression result I was given by a system I was testing. I was just putting it through its paces, and so I was looking for functions that fit ten sampled data points from $y=x+6$. It came up with the perfectly reasonable answer that started with $y=(2x - \frac{72x}{32x^2 \div 4x + \dots}$ and went on for four more lines after that. When I simplified it, it meant the same thing as $y=x+6$, although along the way it added seventeen constants together, multiplied them by 166, and divided by a huge number to multiply some extra terms by 0. This was the sort of surprise I mean.

4. This is an impor­tant result, and it pisses peo­ple off because it chal­lenges some of the same mod­els of self and project that I’m call­ing into ques­tion. It’s called the No Free Lunch Prob­lem for Search and Opti­miza­tion. Among other things, it demon­strates that for any per­for­mance cri­te­rion you can develop, the aver­age per­for­mance of any search algo­rithm — over all prob­lems — is no dif­fer­ent from the aver­age per­for­mance of any other algo­rithm.

# Measuring the “error” in stacked colored crates

In rewrit­ing the first sec­tion of the book, I’m work­ing through the code required to evolve Cargo-Bot puz­zle solu­tions with genetic pro­gram­ming. The trick isn’t rep­re­sent­ing these puz­zle solu­tions, since they’re already rep­re­sented in a domain-specific language.

The tricky pro­gram­ming part is eval­u­at­ing how close the final arrange­ment of col­ored crates is to the target.

Now I’ve already poked around the var­i­ous for­mal dis­tance mea­sures you’d expect to find in a string-rewriting set­ting, and they’re all too gen­eral. We’re actu­ally talk­ing about a robot that picks crates up, and sets them down again on one another.

The rep­re­sen­ta­tion of a set of stacked crates, in my Ruby imple­men­ta­tion, is an Array of Arrays, with each crate rep­re­sented by a Sym­bol indi­cat­ing its color. So for instance [[:r, :b], [:b, :r], []] is a set of three loca­tions on which crates can be stacked, with the left­most posi­tion hold­ing a blue crate on a red one, and the mid­dle posi­tion hold­ing a red one on a blue one, and the right posi­tion empty.

To deter­mine whether (for exam­ple) [[:r, :b], [:b, :r], []] is “closer” to [[:b, :r], [:b, :r], []] or [[:r, :b], [:b], [:r]], I think we need to estab­lish a met­ric that takes into account the actual one-at-a-time move­ment of crates sit­ting on stacks. So “replac­ing” a crate from some other stack must, phys­i­cally, involve unstack­ing the wrong crate, and dig­ging out the cor­rect crate, and then replac­ing the bad one with the new good one.

So my sim­pli­fy­ing notion here is that we can deter­mine the cleanup_error for every crate in turn, and add them all up as if each were “wrong” or “right” inde­pen­dently; that is, as if we only had that par­tic­u­lar crate to “fix”.

For each crate in the tar­get arrange­ment (assum­ing all the crates are the same in both setups), if the observed arrangement:

• … has the cor­rect crate in that posi­tion: score 0 error
• … has the wrong color in that posi­tion: score the MINIMUM num­ber of crates needed to dig out the right col­ored crate from any stack, PLUS the num­ber of crates needed to remove the WRONG crate
• … has no crate in that posi­tion: score the MINIMUM num­ber of crates needed to dig out the right col­ored crate from any stack, plus the num­ber of crates needed (if any) to sup­port the miss­ing crate

So for exam­ple, the cleanup_error for the :r crate when the tar­get is [[:r],[]] and the observed state is [[],[:r]] is 2: one point for shift­ing off the :r from the sec­ond stack, and one point for stick­ing it where it should be.

Or if the tar­get is [[:r], [:b, :b]] and the observed arrange­ment [[], [:r, :b, :b]], then the cleanup_error will be 4: three points for shift­ing out the :r from the bot­tom of the sec­ond stack, and another one to place it correctly.

Or if the tar­get is [[:y, :y, :y, :r], [:b, :b]] and the observed arrange­ment is [[], [:y, :y, :y, :r, :b, :b]], (to score the :r only) we’ve got to move three crates to dig it out, then stack three crates under it, then place it on top, for 7 points. To fur­ther sim­plify, let’s just assume we have a “pool” of extra crates to use for fill­ing in gaps under­neath float­ing crates.

And here’s the min­i­mum thing in action: If the tar­get is [[:r], [:r, :b, :b], [:g, :r, :g]], and the observed is [[], [:r, :r, :b, :b], [:g, :r, :g]], then the cleanup_error of the :r in the first stack is 3 (not 4), since it’s two steps to dig an :r out of the third stack, but three steps from the second.

How about this one: If the tar­get is [[:r, :r, :r, :b, :r]] and we have [[:b, :r, :r, :r, :r]], what is the cleanup_error of the :b crate?

Well, we need to remove 5 crates to free it up from the bot­tom of the stack, and place three under it, and then place it on top. So 9.

Agile folks: can you write this code in a sim­ple test-driven way?
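Here’s a minimal Ruby sketch of one possible reading of those rules, test-driven from the worked examples above. All the names (`dig_cost`, `cleanup_error`, `total_cleanup_error`) are mine, hypothetical rather than settled parts of the book’s code, and the handling of the wrong-colored-crate and support cases is my interpretation of the independence assumption; it does reproduce the five examples above.

```ruby
# Stacks are Arrays of Symbols, bottom crate first: [[:r, :b], []].
# Each crate is scored independently, with a pool of spare crates
# available to fill gaps underneath (the simplifying assumptions above).

# Moves to extract the crate at `index`: shift off every crate
# sitting above it, plus one move to lift the crate itself.
def dig_cost(stack, index)
  (stack.length - index - 1) + 1
end

# Error for the crate the target puts at stack `s`, height `h`.
def cleanup_error(target, observed, s, h)
  color = target[s][h]
  return 0 if observed[s][h] == color # right crate already in place

  costs = []
  observed.each_with_index do |stack, j|
    stack.each_index do |i|
      next unless stack[i] == color
      dig = dig_cost(stack, i)
      # Crates still sitting below the target position after the dig;
      # the rest of the gap is filled from the spare-crate pool.
      below = (j == s && i <= h) ? i : [observed[s].length, h].min
      support = h - below
      # A wrong-colored crate at the target spot must itself be shifted
      # off, unless the dig already cleared it out of the way.
      wrong = observed[s][h]
      removal = (wrong && !(j == s && i <= h)) ? dig_cost(observed[s], h) : 0
      costs << dig + support + removal + 1 # final +1 places the crate
    end
  end
  costs.min
end

# Total error: score every crate in the target independently and sum.
def total_cleanup_error(target, observed)
  total = 0
  target.each_index do |s|
    target[s].each_index { |h| total += cleanup_error(target, observed, s, h) }
  end
  total
end
```

Each worked example above then reads as a unit test: `cleanup_error([[:r],[]], [[],[:r]], 0, 0)` gives 2, and the last example, scoring the `:b` crate, gives 9.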

# What is GP?

The fol­low­ing is a draft of the intro­duc­tion from the book.

# What is Genetic Programming?

I’ve noticed that when you look up “genetic pro­gram­ming” at Google and read the top hits, it often sounds as though the writer imag­ines you already know what he means by the phrase. After twenty years, here’s what I think: Nei­ther you nor they know what they mean by the phrase.

But then I’m not even sure I know.

I use the phrase, of course. “Genetic Pro­gram­ming.” “GP.” And I act as though I know what I mean. It’s what I do.

Let’s try some more research. It seems like maybe you have an Inter­net where you are, and your copy of Wikipedia isn’t bro­ken. Go see what they say about genetic pro­gram­ming there.

Come back when you’re done. I’ll be here.

OK, so as I read it — at least as of this writ­ing, and Wikipedia being what it is — “Genetic Pro­gram­ming” is some kind of computer-sciencey thing that does arti­fi­cial intel­li­gence with genes that con­nect ‘+’ signs and stuff in lit­tle trees. If you read closely, there’s some­thing about com­puter pro­grams that write them­selves auto­mat­i­cally. Plus there’s a lot of dif­fer­ent alter­na­tive approaches to it… what­ever it is. And based on the word­ing and the edit his­tory of the Wikipedia page some ways of doing it are clearly bet­ter than oth­ers… at some­thing… even I don’t quite under­stand what.

Also there’s muta­tion and crossover.

Yeah, that sounds tech­ni­cal enough, right? Can we agree to move ahead with that?

Ah, yes.… I thought not. Let me look around for a bit.

How about this? Here is a very good book I rec­om­mend to all my stu­dents: The Field Guide to Genetic Pro­gram­ming by Ric­cardo Poli, William B. Lang­don and Nic McPhee. It’s avail­able elec­tron­i­cally! You can read it now.

No? Not quite done?

All right, let’s bring out the big guns. How about Sean Luke’s Essen­tials of Meta­heuris­tics. I vouch for it whole­heart­edly: it’s full of inspir­ing machine learn­ing things, all explained sim­ply. And also avail­able elec­tron­i­cally. Read that!

Before we go any far­ther, let me tell you how this is going to end:

The stuff we call “genetic pro­gram­ming” is an inco­her­ent suite of tech­ni­cal habits — design pat­terns, mod­els, idioms — most often used to accel­er­ate human inno­va­tion.

## All that; not that

The sen­ti­ment isn’t new. It just doesn’t get repeated often enough.

It’s a cliché when the author of a technical work starts off by saying he’s a “bit of a heretic”, implying that what he’s about to impart will probably get the reader in trouble if repeated in the wrong company.

For one thing it helps promote a sense that the formal discipline is “dynamic” and “lively”. You know, with beardy codgers and plucky upstarts convening in luxurious Victorian auditoria to threaten one another with walking-sticks before racing to the Pole to show those fools what a real dinosaur looks like.

Also a nasty back-handed recruit­ing trick, if you ask me. I’ve been to way too many meet­ings, and they would have all been much bet­ter if we’d had walking-sticks, let alone dinosaurs.

The prover­bial “bit of heresy” can also be help­ful when the author is feel­ing self-conscious about play­ing fast and loose with details, or wants to puff up his own author­ity, or might even be fail­ing to give credit to col­leagues who deserve it. I write these words on the anniver­sary of one par­tic­u­larly noto­ri­ous exam­ple of the lat­ter, so don’t think it doesn’t hap­pen: being an “out­sider” sug­gests to the inno­cent reader that you might have thought all this stuff up on your own.

Telegraphed “heresy” can also be ped­a­gog­i­cally use­ful. If only they keep read­ing, the read­ers might be let in on a juicy bit of gos­sip about you know… that whole Leib­niz – New­ton thing, or… have you heard about how Alexan­der the Great really com­pared as a ruler to his dad? Keeps them from falling asleep or skip­ping to the answers in the back of the book.

But then — and you can’t tell me you didn’t see this com­ing: some­times it’s true.

So this is my hereti­cal ver­sion of What Genetic Pro­gram­ming Actu­ally Is:

I have no damned idea.

It’s all over the place. No, seriously — you have no notion what a burden it can be, trying to write one of these introductory overviews.

First we would have to review some history. I’d point out that seven or eight (or a dozen) independent thinkers invented Genetic Programming over the last fifty years. They each called their variation some different thing1, and the details of implementation were all different, and some of the variations are little-known while others are huge stars. None is everything.

Then to be fair I would have to say not only what all those inven­tors did back then, but also sum up all the impor­tant things the ten thou­sand sub­se­quent peo­ple work­ing with GP did in their papers and books and arti­cles and con­fer­ence posters on the sub­ject. Plus there’s all the domain-specific appli­ca­tion work. Plus the com­mer­cial and pro­pri­etary meth­ods, each one vying for authen­tic­ity and authority.

But that’s just a raw fact-dump. So next I’d need to cover the trends and cul­tural norms, themes and motifs, note­wor­thy genealo­gies and regionally-distinct Schools of Thought.

And then I’d need to fix some of your mis­con­cep­tions because “Genetic Pro­gram­ming” may be the most mis­lead­ing tech­ni­cal name in the whole world. I’d point out that it’s not genetic algo­rithms even though it sounds the same. It’s not really any­thing like math­e­mat­i­cal pro­gram­ming or con­straint pro­gram­ming. It’s not philo­soph­i­cally any­thing like bio­log­i­cal evo­lu­tion, even if you squint. It’s not quite the same as machine learn­ing (or it is, depend­ing on who you ask), not least because say­ing so pisses off the Sta­tis­ti­cians (who know bet­ter). It’s not just evolv­ing LISP trees, it’s evolv­ing all kinds of struc­tures and plans and algo­rithms and ideas and art. It’s not just sym­bolic regres­sion. It’s not a lot of things, apparently.

So what is it?

## No, really

What­ever it isn’t, I can say that Genetic Pro­gram­ming is the cumu­la­tive work of a huge num­ber of very smart peo­ple. Thou­sands of researchers and prac­ti­tion­ers around the world. They have almost all been pas­sion­ate vision­ar­ies, and have all done amaz­ing things to… well, to achieve what­ever Genetic Pro­gram­ming turns out to be for in their diverse indi­vid­ual cases.

I am reminded that the soci­ol­o­gist Andrew Abbott pub­lished a very inter­est­ing and read­able book in 1988, which has helped me quite a bit to under­stand what GP actu­ally is. Abbott’s book is called The Sys­tem of Pro­fes­sions: An Essay on the Divi­sion of Expert Labor.

What? Why shouldn’t I define it with a soci­ol­ogy book? How is it you have paid so lit­tle atten­tion to the rant thus far?!

Any­way, in Sys­tem of Pro­fes­sions, Abbott describes the dynam­ics of pro­fes­sion­al­iza­tion. That is, how tech­ni­cally astute peo­ple with over­lap­ping tech­ni­cal roles come to self-identify and pro­mote their shared inter­ests by cre­at­ing (and even­tu­ally polic­ing) a pro­fes­sion. In Abbott’s model, pre-professional “fields” arise when­ever diverse peo­ple find them­selves explor­ing and exploit­ing par­tic­u­lar new oppor­tu­ni­ties — espe­cially new tech­ni­cal inventions.

His story of the stages of pro­fes­sion­al­iza­tion includes the devel­op­ment of regional and social com­mu­ni­ties of shared inter­est, then com­mu­ni­ties of prac­tice… then at some point they name them­selves. Then the boundary-setting starts, and the self-definition, and the author­i­ta­tive self-regulated train­ing and cre­den­tial­ing sys­tems, and finally — as a pat­tern, not a rule — we find them build­ing legal infra­struc­ture, rang­ing from Asso­ci­a­tions to Unions to state-licensed reg­u­la­tory bod­ies.2

No, this isn’t a digres­sion. You asked. Well, OK, I asked rhetor­i­cally for you: What is Genetic Programming?

And I answer, not at all rhetor­i­cally: Genetic Pro­gram­ming is a “field” emerg­ing from the inter­ests of diverse peo­ple, who find them­selves explor­ing and exploit­ing a par­tic­u­lar new oppor­tu­nity. It is their shared prac­tices and norms, their habits and their goals.

I could define GP as “the search for for­mal algo­rith­mic struc­tures by using meta­heuris­tics inspired by bio­log­i­cal evo­lu­tion”, but it can­not merely be that. Because (as you’ll learn first-hand) you don’t have to use evolution-like things to search.

I could try to uniquely iden­tify GP as “meta­heuris­tic opti­miza­tion of struc­tures, as opposed to tra­di­tional para­met­ric search or analytically-derived opti­miza­tion algo­rithms”. But (as you’ll learn first-hand) we some­times use those other things too. GP can’t just be evolv­ing pro­grams, because some peo­ple evolve anten­nas and bridges and molecules.

GP can’t just be for data min­ing, because some peo­ple evolve com­pletely abstract proofs. It isn’t about the tools or techniques.

It is, in fact and not just metaphor­i­cally, a com­mu­nity of self-identified peo­ple who share a way of try­ing to solve problems.

Ask­ing what GP “is” at this point in its pro­fes­sional his­tory is like ask­ing what “pro­gram­ming” is: Pro­gram­mers use com­put­ers to solve prob­lems for peo­ple. They don’t do it in any par­tic­u­lar way, except that most of them type on a key­board.

But “typ­ing” is not pro­gram­ming. Just as “evolv­ing code” is not GP.

Look at professional computing. You can easily see professional boundaries between the many people who write programs. There are Software Engineers, and Computer Scientists, Programmers and Analysts. And of course there are those who prefer the label Software Developers, so they can differentiate themselves as the ones who actually know how to collaborate and make programs that people can actually use to do stuff.3

I’m quite seri­ous: “Genetic Pro­gram­ming” lives some­where a bit ear­lier in the same pro­fes­sion­al­iza­tion story. As Rick Riolo has said many times: “It’s an art try­ing to become a craft.”

If you ask them, most will say they are doing auto­mated search for abstract struc­tures that solve prob­lems. But the details vary wildly, and every real or the­o­ret­i­cal prob­lem is still a spe­cial case.

So for the time being Genetic Pro­gram­ming is what peo­ple do, who self-identify as “using Genetic Programming.”

## Tozier, that isn’t really very helpful

Yeah, trust me: I am totally on your side.4

But I have written this book, and you are reading it. Rather than thinking you and I are both crazy, look at it this way:

If we play our cards right, we can our­selves define Genetic Programming.

I don’t mean to imply “GP is what you think it is”. I mean the field is so young and malleable that you can learn to do amazing things without ever being told you’re doing it wrong.

In these last twenty years I’ve seen fortunes made, disease treatments invented, patentable inventions piled a thousand deep, philosophical and theoretical problems settled, space probes launched, robots that learn to walk in their dreams…

Peo­ple can use GP to cre­ate things they could oth­er­wise only imag­ine. Here’s my lit­tle True Heresy, stated another way: Those peo­ple are not using GP to “auto­mat­i­cally invent” things. It isn’t a magic inven­tion machine.

It’s an accelerator.

I’ve hung out with a num­ber of these folks, through the years. They’re not smug geniuses… as a rule. Rather, they walk around in a sort of daze, telling one another how sur­prised they were by what they were shown when they started using GP.

A human being invents when she uses GP to consider a million outrageous structures and layouts no sane design engineer could incrementally develop. The “invention” happens when she — a standard-issue human being — notices that one of those million designs is interesting.

That’s the same thing a tra­di­tional design engi­neer does, but faster. The effort is in a dif­fer­ent place.

A human being explains something about the world when he uses GP to consider a thousand novel models of data, in less time than a traditional statistician can evaluate two. The “explanation” happens when he — a standard-issue human being — notices that some of the best models invoke relationships between variables that nobody else had ever mentioned.

That’s the same thing a tra­di­tional sta­tis­ti­cian work­ing with a domain expert does to explain the world, but faster. The effort is in a dif­fer­ent place.

And so on: an artist explores a thou­sand com­po­si­tions; a bio­med­ical researcher exam­ines a dozen or a hun­dred genomes and a mil­lion gene expres­sion pro­files; a trader mon­i­tors a mil­lion port­fo­lio man­age­ment rules.

The same thing they would nor­mally do. But more. The effort is in a dif­fer­ent place: on the think­ing.

Genetic Pro­gram­ming doesn’t auto­mate think­ing or cre­ativ­ity or any of those things. It helps peo­ple notice things.

## GP is a prosthesis

Think about writ­ing — you know, with a pen, on paper. Writ­ing isn’t “auto­mated mem­ory”. Or think about pro­gram­ming com­put­ers. It isn’t “auto­mated arithmetic”.

Writ­ing and pro­gram­ming extend your mind. Writ­ing is a pros­the­sis in the sense that it offloads mem­o­ries to a long-term exter­nal stor­age medium. Pro­gram­ming is a pros­the­sis in the sense that it cal­cu­lates stuff really really fast.

But nei­ther one is “auto­matic”. Harry Pot­ter notwith­stand­ing, there are no self-writing pens, and no self-programming computers.

See what I did right there? There are no self-programming com­put­ers. That includes Genetic Pro­gram­ming, regard­less of what you may have heard from the nerds down the street.

I can’t tell you how many peo­ple I’ve seen come to GP, hav­ing read the hype about auto­mated inven­tion and stuff. Like a per­son who wants to write bet­ter, so she gets a really pow­er­ful pen. The per­son who wants to learn to pro­gram games, so he gets a really pow­er­ful com­puter.

How do you learn to write? How do you learn to pro­gram? Same with GP. Through guided prac­tice. We think a bit, we try some­thing, we learn if we’re lucky, and maybe we solve some problems.

And if we’re very good problem-solvers, we can use GP to help our­selves become use­fully surprised.

1. Evolutionary programming, genetic programming, some German ones I can’t recall the names of at the moment… no doubt many others.

2. Ellen Mazur Thom­son pro­vides a lovely exam­ple of this same pro­fes­sion­al­iza­tion dynamic in her well-written his­tor­i­cal case study of the print­ing and graphic design trades: The Ori­gins of Graphic Design in Amer­ica, 1870 – 1920.

3. Though even they are frag­ment­ing on the basis of method­ol­ogy and domain.…

4. If only we’d had walk­ing sticks at the con­fer­ences these last twenty years, it would have all been so much more effi­cient.…