All it takes is a traffic stop sign! Any stop sign. And some inexpensive hacking ingenuity. That is all it took for a group of researchers from the University of Washington to fool a self-driving car. And while automotive industry giants like Mercedes, BMW, and Tesla, as well as technology giants like Google and Apple, are investing large sums of money in the development of such vehicles, they seem to have a serious cyber headache to resolve. With estimates that some 10 million self-driving cars will be on the road by 2020, it is becoming increasingly obvious that serious security questions remain open.

The hacking problem has been there from the beginning

Self-driving, fully autonomous cars seem such a convenient proposition for everybody. They can make it easy for practically anybody to move around, whether they are young, old, disabled, in a hurry, or just cruising. Some estimates also say that their use could dramatically reduce the number of annual deaths from traffic accidents. Of course, they also present an opportunity for developers to come up with new beneficial technologies, but, it seems, also for those with bad intentions to come up with malicious ones.

The possibility of hacking self-driving vehicles has been known for a while. Last year, French researcher Jonathan Petit was able to confuse the complicated spinning lidar sensors these cars rely on with a simple laser pointer.

Petit's experiment showed that the problem is even more serious when you consider the economics: the sensors cost more than $10,000 each, while the setup he used to hack them cost a measly $43.
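
To make the lidar-spoofing idea concrete, here is a toy Python simulation, with every number invented for illustration: injecting a handful of short fake echoes into one revolution of a spinning lidar is enough to make a naive planner see a phantom obstacle and brake. This sketches the effect Petit demonstrated, not his actual equipment or method.

```python
import numpy as np

# Toy model of one revolution of a spinning lidar: 360 range readings
# (meters), one per degree of bearing. All values are illustrative.
rng = np.random.default_rng(0)
scan = rng.uniform(40.0, 80.0, size=360)   # open road: everything far away

def spoof(scan, bearing_deg, fake_range_m=2.0, width_deg=5):
    """Overwrite a few bearings with short fake echoes, mimicking a
    timed laser pulse that arrives before the genuine return."""
    out = scan.copy()
    for d in range(bearing_deg, bearing_deg + width_deg):
        out[d % 360] = fake_range_m
    return out

def must_brake(scan, danger_m=5.0):
    # Naive planner: brake if anything dead ahead (350°-10°) is too close.
    ahead = np.concatenate([scan[350:], scan[:11]])
    return bool((ahead < danger_m).any())

print(must_brake(scan))             # False: the road is clear
print(must_brake(spoof(scan, 0)))   # True: phantom obstacle at 2 m
```

A real attack is harder, since the pulses must be timed against the sensor's rotation, but the principle is the same: the planner trusts whatever the sensor reports.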

But the recent experiment conducted by the researchers at the University of Washington was even cheaper! All it took to confuse a self-driving car was placing a simple sticker on a stop sign. These stickers can be easily printed at home, yet they could cause serious accidents.

The researchers' hack involved covering the sign with a full-size printed overlay or placing smaller stickers over just part of it. In their research paper, "Robust Physical-World Attacks on Machine Learning Models," they list a number of ways such vehicles can be disrupted when reading and interpreting traffic signs. In one example, adding 'love' and 'hate' graphics to a stop sign made the cars read it as a Speed Limit 45 sign!
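
For the technically curious, the sketch below shows the general mechanism in Python: a targeted adversarial perturbation that nudges an image until a classifier prefers the wrong class. This is a generic projected-gradient illustration, not the UW team's actual code; the model, class indices, and step sizes are all hypothetical, and the real attack additionally confines the change to a printable sticker region and optimizes across many viewing angles and distances.

```python
import torch
import torch.nn.functional as F

# Hypothetical class indices for a traffic-sign classifier; the real
# label layout depends on the dataset the model was trained on.
STOP, SPEED_LIMIT_45 = 14, 7

def targeted_perturbation(model, image, target=SPEED_LIMIT_45,
                          epsilon=0.05, step=0.01, steps=40):
    """Nudge pixels of `image` (a 1x3xHxW tensor) until `model`
    prefers the `target` class: a plain projected-gradient sketch."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta),
                               torch.tensor([target]))
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # descend toward target
            delta.clamp_(-epsilon, epsilon)    # keep the change subtle
            delta.grad.zero_()
    return (image + delta).detach()
```

The unsettling point is how little machinery this requires: a few dozen gradient steps against an off-the-shelf classifier can yield a perturbation that, printed as a sticker, survives the trip into the physical world.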

More needs to be done

Both experiments, the one conducted by Petit and the one done by the University of Washington's team, show that much more needs to be done to ensure the safety of autonomous vehicles. More sophisticated hacking possibilities have also been discovered, but it is the simplicity of these two that is worrisome. If the developers of self-driving cars can build equipment as expensive and sophisticated as spinning lidar sensors, they can surely invest adequate sums to better protect the people who will use such vehicles.