Wednesday, April 22, 2009

Intelligent Design

As some of you will no doubt realize by now, I'm a big fan of intelligent design.  No, not the one that tries to compete with evolution, but just, you know, good design.  The MacBook, for instance, or TiVo.  Both are designed intelligently.

I had a conversation about design with the kids yesterday at breakfast.  The brandy barrel had broken off Caroline's toy St. Bernard, and as I tried to piece it back together I made a comment about it being a poor design.  Because of how it was attached, there was simply no way that brandy barrel could have remained attached for any length of time under normal usage conditions.  Even Caroline, who is exceptionally careful with her things, managed to snap it off in under 10 minutes.

My comment led inevitably to the sorts of questions one is generally unprepared to answer before 10am, questions about the meaning of the word 'design', and what things in the room had been designed, and by whom.

Among the many classic Looney Tunes episodes shown on TV here is this one, "Water, Water Every Hare".  Go ahead, watch it.  You know you want to.

The kids, who spend a lot of time watching classic Looney Tunes episodes, had recently seen this one, and I was well impressed when Michael asserted that the scientist had designed the robot, or at least I was once I'd puzzled out what the hell he was talking about.

This led to a conversation about the robot's various capabilities, what he would be designed to do, and the sort of moral and intellectual limitations that a large metal automaton designed by an evil scientist might exhibit.

Into this discussion I injected the question of whether it would be better to build a robot that doesn't walk into walls, or one that doesn't fall over when it does.  My belief, based on a fair amount of experience with software and infrastructure architecture, is that an intelligent design assumes that no matter how much effort goes into the wall-avoidance mechanism, unforeseeable conditions will sooner or later cause the robot to walk into a wall.  So while a sensible effort should be made to avoid intersecting with walls unnecessarily, considerably more effort should go into equipping the robot with the mechanisms to gracefully handle the situations that might otherwise cause him to fall over.
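For the programmers in the audience, here's roughly what that philosophy looks like in code.  This is a minimal sketch with an invented Robot class and made-up failure rates, not anything from a real robotics stack; the point is simply that the try/except around move_forward treats a collision as an expected event to be recovered from, rather than an impossibility to be engineered away.

```python
import random


class Collision(Exception):
    """Raised when the robot hits a wall despite its avoidance logic."""


class Robot:
    """Toy model: invest in recovery rather than in perfect avoidance."""

    def __init__(self):
        self.upright = True

    def wall_ahead(self):
        # Best-effort avoidance. Real sensors are noisy, so the design
        # assumes this check will sometimes be wrong.
        return random.random() < 0.5

    def move_forward(self):
        if self.wall_ahead():
            return "steered around a wall"
        if random.random() < 0.1:  # the sensor missed one
            raise Collision
        return "moved forward"

    def recover(self):
        # Most of the engineering effort lives here: stay upright when
        # avoidance inevitably fails, instead of toppling over.
        self.upright = True
        return "bumped a wall, stayed upright"

    def step(self):
        try:
            return self.move_forward()
        except Collision:
            return self.recover()


robot = Robot()
for _ in range(5):
    print(robot.step())
```

The avoidance logic can stay best-effort precisely because the recovery path is there to catch whatever it misses.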

Of course, this view is somewhat abstract, so I'd assumed the kids would think it better to build a robot that doesn't walk into walls, thus affording me the opportunity to lecture them on the necessity of planning for and mitigating failure.  I was therefore really surprised when Caroline said it's better to build the robot "not to fall over, because it might accidentally walk into a wall even if it's not meant to".

Now, you might see this entire post up to this point as me shamelessly constructing for myself a platform from which to brag about how smart my children are, and you'd be right, but only partly.  Because it also provides a useful starting point for raising the subject of human intelligence in general.

The human mind is a subject which has long fascinated me.  How do we think?  How does our environment influence the outcome of our thought processes?  How does our own ego, our view of ourselves, alter our thinking?  

To make this discussion somewhat less abstract, let me give you my own, somewhat embarrassing example.  In 1994 or '95, I visited Penn State and attended a technology fair.  One of the products being demonstrated was a graphical interface for the command-line Internet technologies I'd spent so many late nights in college mastering: FTP, Gopher, RXIRC.  Having bested these beastly applications was a source of pride for me.  How dare someone make them easy to use!  How dare the founders of America Online throw wide the gates of my technological temple to the great unwashed masses!

Of course, you know how this story ends.  Some years later, while elbowing my little Honda through rush hour traffic, I was overtaken by an impossibly large Jaguar.  As it glided lithely past my window, I noticed the number plate: "THX AOL".  'Nuff said.

The point is that I learned from that experience: I changed the way I think.  My initial view was heavily colored by pride and ego, but my subsequent thinking has been shaped by examining that initial view and comparing the outcome it produced to the actual state of the world.  In other words, I've since realized that I'd have been the guy driving the Jag if I hadn't been so pig-headed and territorial.  But the good news is that I wasn't bound irrevocably to my initial thought pattern.  I'm still not driving a car roughly the length of the Lusitania, but at least I know enough now to set aside personal prejudices and think differently when the situation warrants it.

I worry about a lot of things: war, global warming, the economy.  But these fears are based on the assumption that everything continues infinitely along its current trajectory, that nothing changes.  Of course, we all know this to be a patently false assumption.  In the late 1800s, the horsepower to move people around in cities was provided almost exclusively by actual horses.  The problem, of course, is that horses, like politicians, produce enormous quantities of shit, and it all has to end up somewhere.  The trouble at the end of the 19th century was that it ended up in the street.  At least one urban planner predicted that by 1950, every street in London would be buried under nine feet of horse manure.  Most everyone at the time agreed that streets awash in horse crap were at odds with the emerging image of the modern city, yet attempts to solve the problem directly failed.

Of course, the problem was ultimately solved, but not by people attacking it head-on.  Rather, it was solved because innovation responds to incentive, and not necessarily to a specific problem.  Henry Ford didn't develop a mass-produced automobile to solve the horse manure problem; he developed it because he recognized that there would be massive financial rewards accruing to anyone who could provide an affordable, ubiquitous means of transport.  The fact that his work also happened to address a looming future problem was a good, if secondary, outcome.

And this, I think, is the real story.  I didn't teach Caroline that it's better to incorporate failure recovery than to try to avoid all possible failure scenarios; she figured that out on her own.  How, I don't know, but she did, so that's good.  Most species that have existed on this planet are now extinct.  We're not.  Why?  Because as a species, we're undeniably intelligent.  Individually we may do unfathomably stupid things, but collectively we can create, and have created, a world that couldn't have been dreamt of by our ancestors.

The other day, I watched our fox teach her pups how to hunt and kill their prey.  As the pups wrestled each other, she crouched, waiting, until one of them disentangled himself from his brother.  Then she sprang on him, wrestling him to the ground and holding his throat in her powerful jaws.  When she released the pup, he rejoined his brothers and attempted the same maneuver on one of them.  It was fascinating to watch, but it also supports my bullishness on humanity's future.  While foxes are born with a certain amount of instinct, they need to be taught through demonstration.  We can solve problems without having had someone show us how first.  I believe (well, I'm starting to convince myself, anyway) that innovation through the exercise of human intelligence will help us either avoid those catastrophic outcomes or recover from them more quickly.

If we can build a robot that won't fall over, maybe there's hope for us after all.

 
