
Critical Design Flaws Creep In

March 22, 2016

This is somewhat of an ill-informed rant.  I have no connection to the project discussed here other than having read some rather distressing things on a non-technical web site.  However, I think the general theme is a good one and could be instructive to a number of people out there.  If you disagree, please leave a comment.

I read an article discussing Google’s self-driving vehicle program, in which a self-driving car failed to properly recognize a bus and was involved in a collision.  The article, quoting another source, stated that to correct this problem an additional 3,500 tests were being added so that a bus would be properly recognized and dealt with.

I am going to say that, based on this little tidbit of information, which is admittedly probably not very accurate, this sounds like sheer insanity to me.  It also sounds like a scheme cooked up by some folks with little programming experience, led by other people with equally little programming experience and utterly no sense of “history” in the real world.

Why is This Insane?

Let’s just briefly discuss the idea of a self-driving car that has a lot of “tests” for the different sorts of objects that might be encountered.  For the purposes of this discussion, let’s assume we are talking about a car on a limited-access highway, otherwise known as an “Interstate” in the US.  These are roads where the only vehicles that are supposed to be there are cars, trucks and motorcycles.  No people walking, no dogs, no bicycles; literally just cars, trucks and motorcycles.

So how do you set about building a self-driving system?  Well, one way is to start down the road of categorizing objects by subjecting them to various tests to identify them.  This would involve setting down the characteristics of cars, trucks and motorcycles that could be observed using cameras and possibly other sensors.  You could then fairly easily run through a bunch of identification tests to determine what something is and control the behavior based on the result.  Sounds fairly straightforward, right?
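To make the argument concrete, here is a minimal sketch of what that kind of rules-based classifier might look like.  Everything in it (the feature names, the thresholds, the following distances) is invented for illustration; it is not a claim about how Google’s system actually works.

```python
# Hypothetical sketch of a rules-based object classifier. The feature
# names, thresholds and distances are invented for illustration; this
# is not a claim about how any real self-driving system works.

def classify(obj):
    """Guess what a sensed object is from a few crude measurements."""
    if obj["wheel_count"] == 2 and obj["length_m"] < 3.0:
        return "motorcycle"
    if obj["length_m"] > 8.0 and obj["height_m"] > 2.5:
        return "truck"
    if 3.0 <= obj["length_m"] <= 6.0:
        return "car"
    return "unknown"          # everything else falls through to here

def plan_behavior(label):
    """Pick a following distance (metres) based on the label."""
    follow_m = {"truck": 60, "car": 40, "motorcycle": 50}
    # The dangerous part: what should the planner do with "unknown"?
    return follow_m.get(label, 40)
```

The last line of classify is where the trouble in the story above lives: the system has to return something for objects it cannot name, and whatever the planner does with that something is an assumption, whether anyone wrote it down deliberately or not.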

Well, there is a serious problem with this, and they would appear to be on the verge of discovering what that problem is over at Google’s self-driving works.  They had a car that crashed into a bus because the bus (apparently) was not identified as the sort of vehicle the car would have to stay away from.  The article said the car assumed the bus would yield to the car rather than the other way around.

This says two things: first, unlike an experienced human driver, they are assuming that under some circumstances other objects on the road will unconditionally yield to the car; and second, perhaps most importantly, failure to identify an object leads to incorrect assumptions.

I am going to say here, after many years of driving on Interstate roads, that you can and will find far more objects than simply cars, trucks and motorcycles present.  There will be furniture, dead animals, live animals, and various household goods in various states of repair; well, just about anything you can imagine.  There will be people walking and riding bicycles.  There will be animals trying to cross the road.  There will be animals chasing the cars.  I could go on and on, but I won’t; consider the point made.

The problem here is that if one misidentified object required the addition of 3,500 tests to get it right, what are they going to do with the first painted-over raccoon that is encountered?  What do you think the chances are that something like that would be “properly identified?”  A hard-and-fast rules-based system for categorizing objects initially sounds like a great way to do things, especially when you don’t have a strong AI system sitting around waiting for something to do, but it will absolutely be an utter failure.  Not only are there apparently problems with not treating every single object that is encountered as a potential threat, but the need to categorize an object in order to determine the behavior will lead to dramatic failures.
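If there is a partial fix short of real AI, it is probably in the default case: anything that cannot be identified should be treated as a hazard, not assumed away.  Here is a sketch of that fail-safe default, revisiting the hypothetical plan_behavior from the earlier sketch; the labels, distances and speed factor are again invented.

```python
# Hypothetical sketch of a fail-safe default for unrecognized objects.
# The labels, distances and speed factor are invented for illustration.

KNOWN_FOLLOW_M = {"truck": 60, "car": 40, "motorcycle": 50}
CAUTIOUS_FOLLOW_M = 80        # keep well back from anything unrecognized
CAUTIOUS_SPEED_FACTOR = 0.7   # and shed some speed while doing it

def plan_behavior(label, current_speed_mps):
    """Return (following distance in metres, target speed in m/s)."""
    if label in KNOWN_FOLLOW_M:
        return KNOWN_FOLLOW_M[label], current_speed_mps
    # Unidentified object: treat it as a threat rather than assuming
    # it will yield or simply not matter.
    return CAUTIOUS_FOLLOW_M, current_speed_mps * CAUTIOUS_SPEED_FACTOR
```

Even this only papers over the deeper problem: the list of known labels still has to be written, and rewritten, by hand.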

In a car, as many unfortunate teenagers would attest (if they could), dramatic failures cost lives.

How Can This be Fixed?

I am going to say that if they have started out with a rules-based approach for interacting with the real world, they will eventually figure out that this approach requires far too many tests, and every “new” object that is encountered will require more and more of them.  Eventually, regardless of how much computing power you throw at the problem, there simply will not be enough to deal with all the tests.

Can you fix this?  Not really.  A rules-based approach can work when you can limit the universe of things that need to be dealt with.  In the real world, out on a road, you can’t.  What happens when a large bug impacts the cover over the camera lens?  Well, people can be trained to do something about it, but with a rules-based system all you can do is identify the situation and have a programmed response.  Eventually you are going to run out of room to set up these programmed responses, because the “situations” that can be encountered are effectively unlimited.  And this is assuming a limited-access highway, not a busy suburban street where the children, dogs, bicycles and so on are supposed to be there sharing the roadway.
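The same explosion happens with “situations” as with objects.  A programmed-response system is, at bottom, a lookup table from recognized situations to canned actions, and every row in the table has to be anticipated and written in advance.  A sketch, with entirely invented situation names and placeholder actions:

```python
# Hypothetical sketch of "programmed responses": a lookup table from
# recognized situations to canned actions. Every name here is invented;
# the point is only that the table must be written in advance.

def wipe_camera_cover():
    pass  # placeholder: imagine an actuator that cleans the lens

def slow_down_and_pull_over():
    pass  # placeholder: imagine a minimal-risk manoeuvre

RESPONSES = {
    "bug_on_camera_cover": wipe_camera_cover,
    "sensor_dropout": slow_down_and_pull_over,
    # ...one entry per situation somebody managed to think of in advance
}

def handle(situation):
    action = RESPONSES.get(situation)
    if action is None:
        # The open-ended real world lands here constantly, and no amount
        # of computing power lets you enumerate every possible key.
        raise RuntimeError(f"no programmed response for {situation!r}")
    action()
```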

So I’d say eventually they will rediscover the wheel (we have been through all this before in different ways) and figure out that a rules-based approach isn’t going to work at all, ever.  Should such a thing ever be released on the roadways of the US, I would strongly recommend avoiding potential encounters, taking whatever steps are required (such as moving to rural Scotland) before an encounter happens in which people die.

What is the “Right” Way?

Assuming there is one, right?  Well, there is.  And it is something that we have pretty much known since the 1970s.  The idea that open-ended questions like “What is this object?” can be answered by rule-based computations has been largely rejected, at least partly because it has been recognized that a human, or even lots of humans, is not going to be able to come up with all the rules in advance.  So whatever you build cannot really be a static system.  In general, a more learning-oriented approach has been thought necessary, to eliminate the requirement that a static system “know” everything about its environment in advance.
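To show the shape of that idea, here is a toy example-based classifier: new kinds of objects become new labelled examples rather than new hand-written rules, and anything too far from every example stays “unknown” and should feed the fail-safe behavior rather than an assumption.  Nearest-neighbour matching is used only because it fits in a few lines; it is not a suggestion about what a production system should use, and all the features and labels are invented.

```python
# Toy sketch of a learning-oriented classifier. New objects are added
# as labelled examples instead of hand-written rules. The features
# (length and height in metres) and labels are invented for illustration.
import math

class ExampleBasedClassifier:
    def __init__(self):
        self.examples = []                      # (feature_tuple, label)

    def learn(self, features, label):
        self.examples.append((features, label))

    def classify(self, features, max_distance=2.0):
        if not self.examples:
            return "unknown"
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        nearest_features, nearest_label = min(
            self.examples, key=lambda ex: dist(ex[0], features))
        # Anything too far from every known example stays "unknown",
        # which should trigger cautious behavior, not an assumption.
        if dist(nearest_features, features) > max_distance:
            return "unknown"
        return nearest_label

clf = ExampleBasedClassifier()
clf.learn((12.0, 3.2), "bus")
clf.learn((4.5, 1.5), "car")
print(clf.classify((11.5, 3.0)))   # "bus"
print(clf.classify((0.6, 0.3)))    # "unknown" (the flattened raccoon)
```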

It would seem that Google has the foundation for both machine learning and utilizing the Internet to facilitate “crowdsourcing” of machine learning.  So why did they apparently design a static rule-based system?  Who knows?  My opinion is that this is the result of hiring people with little experience and little knowledge of the history of machine learning, artificial intelligence and the problems with static systems in dynamic environments.

Something about this that dates back even earlier than the 1970s, and which I would assume is required knowledge for anyone even skating around the edges of this field of computer science and programming, is the early writing of Isaac Asimov.  His positronic robots were built with three “rules” and everything else was apparently learned.  There are a number of references to this in his works over the years.  We should be far more concerned about a self-driving car being built by someone with no knowledge of the “Three Laws” than about a simple industrial robot.

Wider Considerations

I think it can be assumed that Google is exercising “common business sense” with their self-driving car program.  They want to produce tangible results as soon as possible with a minimum of investment.  We have been working on the sort of “strong AI” that is eventually going to be required for a self-driving car since at least the 1970s, and what, exactly, do we have to show for it?  Well, we aren’t talking to our computers, even though natural-language speech recognition is one of the first applications of “strong AI.”

So, I guess Google can be forgiven for trying to create a self-driving car and side-stepping building strong AI as part of the project.

However, a lesson for all software projects is that you need to understand the environment your creation is expected to function in.  Failure to do so results in an unusable system, which in some cases (not many) could actually be dangerous.  If you don’t understand the environment, you are going to make mistakes that might later be seen as really obvious.

Also, never, ever underestimate the importance of history.  No, I am not talking about knowing the complete list of English monarchs from Edward the Confessor forward.  I am referring to the idea that since the beginning of computing in the 1940s there have been a lot of people involved with everything from file systems and error recovery to artificial intelligence.  Assuming these people had nothing to contribute and that all of their work is “obsolete” is a huge mistake, and one that will come back to bite you.  I have seen this happen over and over in software development, and it can really hurt.
