I have to admit, I’m kind of taken with the idea of self-driving cars.
You have to understand that I’m the person who got his driver’s license at the age of 20. Why? First, driving scared the hell out of me. Second, driving with my mom scared me even more.

Given my tenuous start with driving, I have a cautious fascination with self-driving technology. The ability to get from point A to point B without getting behind the wheel, or paying a random stranger to drive me, is appealing.

Perhaps I just dream of getting work done on my commute so I can watch cat videos at the office.
Then I see statistics like this: self-driving cars average about one issue per mile, a moment when a human being needs to take over to avoid an accident.

If you were driving down the highway at 60 miles an hour, you’d have to intervene every single minute. No one would get any work done with that kind of interruption, especially the kind that can kill you.

What’s worse, the future of self-driving technology has been called into question as companies like Uber have scaled back or completely canceled their autonomous research projects.

So, is this concept dead?

The Limitations of Self-Driving Cars Today

The biggest problem with self-driving technology is simple: there’s no human. I know that might seem contrary to the very idea of “self-driving,” but it’s actually an important point.

Your brain, once it has learned how to operate a car and deal with all the confusing things it encounters on the road, is equipped to handle all the information thrown at it, assuming it’s not distracted by a cell phone. The average person drives 19,000 miles a year and gets into an accident once every 18 years. That works out to one accident every 342,000 miles.

342,000 miles is a lot bigger than 1 mile.
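That gap can be checked with back-of-the-envelope arithmetic. Here’s a minimal sketch using only the figures quoted above (the mileage, accident, and intervention numbers are the article’s round estimates, not measured data):

```python
# Back-of-the-envelope comparison using the figures cited above.
miles_per_year = 19_000          # average miles driven per person per year
years_per_accident = 18          # average years between accidents

# Human driver: miles driven per accident.
miles_per_human_accident = miles_per_year * years_per_accident
print(miles_per_human_accident)  # 342000

# Self-driving car: roughly one intervention per mile, so at highway speed:
highway_mph = 60
miles_per_intervention = 1
interventions_per_hour = highway_mph / miles_per_intervention
print(interventions_per_hour)    # 60.0, about one interruption per minute
```

By this rough measure, the human driver goes several hundred thousand times farther between incidents than the self-driving figure cited here.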

More importantly, your brain does this automatically. It knows how to learn and adapt to new situations over time.

Computers suck at this. Right now, we are still trying to teach them how to learn for themselves. This means we’re spoon feeding them.

In theory, these cars will be able to drive more safely and efficiently than humans. In practice, many of them can’t even tell the difference between the side of a truck and the sky.
One major problem is that we’re still teaching them how to use their senses. No one ever had to teach you how to use your eyes and ears, except maybe your parents when they told you to “listen more.”

Programmers and scientists have to teach these cars how to use the radar, video cameras, and other sensors that provide a view of the world outside.
More importantly, these machines need to be taught something some of us take in college for an “easy A”: philosophy.

When you’re driving, you might be presented with certain situations that have no obvious or logical solution. If, for example, you had to choose between running over a mother and her baby or a group of 15 nuns, how would you decide?

If you struggle with a choice like that, that’s good. It shows that you’re human.

Right now, computers lack the ability to even begin to process a decision like that. Sure, you could program one to always prefer the mother and child, or the nuns, but that would limit its ability to actually make important decisions.

These cars need the ability to think and learn in the same way that we do. They need to become self-sufficient with the mental processing involved with driving.

The Future of Self Driving Cars

Thankfully, all is not lost. The next necessary step is to create autonomous vehicles that can truly think and make decisions in the same way we do.

Many companies are beginning to understand this necessity.
IBM, for example, recently received a patent for a “Cognitive System to Manage Self-Driving Cars.” That might sound complex and fancy, but it really just means they have patented something that will model human driving behaviors… the good ones.

They want to create something that can recognize potential driving hazards and determine for itself what to do. It’s the same thing humans do, but potentially much, much faster, and therefore, safer.
This kind of digital cognitive system is nothing new. IBM’s better-known, Jeopardy-winning, Urban Dictionary-learning computer, Watson, is an example of a cognitive system.
If computer scientists and programmers can get over this decision-making hurdle, self-driving cars will go to the next level. They will no longer be confused by construction zones or troubled by bicycle lanes.
They will think for themselves in a way that shouldn’t lead to a robot uprising.

Looking Towards the Future

All this is obviously a long way off. More research is required, which is complicated by the risks of testing an autonomous vehicle on the open road.

It certainly won’t be easy, but it is possible. Despite the recent setbacks autonomous cars have experienced, their wheels are already in motion.

Maybe one day I can look forward to a time when I can get all my work done in the car without fearing for my life. Until then, I guess I’ll just have to make do with my friendly neighborhood Uber driver.



