Average reading time: 7 minutes
With the recent progress in computational power, connected technologies and artificial intelligence, the vision of self-driving cars is finally coming true. Fully autonomous cars are expected within the next two years. But the potential of this technology goes far beyond improving vehicle safety. It could be just the thing to solve many of our 21st-century problems.
While our cities get more and more congested and traffic gets out of control, we have to admit: our current model of transportation is not only inefficient but also dangerous. According to the WHO, 1.25 million people die in traffic each and every year. Why is that? And can self-driving cars be a solution? I want to argue that self-driving cars are not just a cool new technology – but the real completion of the concept of individual motorized mobility. One day, the era of human-driven cars might be regarded as nothing more than a temporary transition period.
The mess of human-driven traffic
The abilities of an entity are usually defined by its purpose. Let’s regard the human body and mind for a moment from a strictly scientific, purpose-oriented point of view. When we say that the process of evolution has shaped our bodies – with what purpose were they shaped? Survival in a natural environment. Thus our primary abilities are based on (though not limited to) surviving. Our ability to react, our ability to assess situations, our ability to sense danger – all of these are grounded in two million years of human evolution. When we run on our feet and stumble over an obstacle, we can react fast enough to break the fall with our hands. We can sense the danger when leaning too far over a bridge’s balustrade. Our abilities have been formed to estimate the heights, distances and dangers we encounter in our natural environment.
Cars, on the other hand, are a new concept. We have gotten so used to shaping our world around automobile mobility that we often forget we are dealing with a technology that has only been around for a little more than a hundred years. This is, needless to say, far too short for any human evolution to happen. To control cars, we have to rely on abilities that were shaped for very different challenges.
But we do not have the ability to sense the difference between 100 km/h and 120 km/h without even feeling any wind in our face. We can barely judge whether 20 meters is a sufficient safety distance at a certain speed or whether 30 would be better. We are hard pressed to observe the behavior of dozens of cars, bicycles and pedestrians around us while other thoughts occupy our mind. Can we brake fast enough in case something unexpected happens – and will the people driving behind us grasp the danger we are reacting to? When driving a car, we are pretending to have abilities which we simply cannot have.
Ultimately, a steering wheel is an interface: an interface between a machine – the car – and a control unit – us. This is the root of the problem: the control unit does not have the abilities needed to control a car in all possible circumstances, because those abilities were shaped for entirely different challenges in a completely different context. Wouldn’t it then be great if we could avoid this incompatibility altogether and make the machine an integrated system – with a control unit that is actually built for the purpose of controlling a multi-ton, super-fast vehicle?
The formal rules… and the informal
Self-driving cars are not a new concept. We have been using cars for 130 years. We have been using computers for many decades. Why did it take so long to build a self-driving car? Because it is vastly different from building any other machine.
Most machines we build today operate in a controlled environment, following pre-defined formal rules for their actions. The purpose of a self-driving algorithm, however, is to control a car in the real world. We certainly have extensive regulations on how to behave in traffic, purporting to organize the relationships of all actors on the streets, but let’s be honest: traffic regulations are only an illusion of control – and just one part of the abilities needed to drive a car.
Many modern cars already master limited self-driving capabilities such as holding a lane. This is effectively a machine operating in a (fake) controlled environment: the driver tells the machine when they are on the highway, traffic is easy, and the streets and lanes are in good condition. For a car to operate truly autonomously in the real world, however, the environment does not stop at the lane borders (if there are any). Unforeseen things can and will happen. New actors can suddenly enter your environment at any time. Other road users will improvise or simply make mistakes. Construction sites may spontaneously change the flow of traffic. There may even be places with no fixed regulation at all, such as the progressive shared-space concept, increasingly being put to use in urban environments.
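To see why lane holding counts as a controlled environment, consider how simple the core control loop can be once the environment is guaranteed. The following is a deliberately minimal sketch (not any vendor’s actual system; names and the gain value are illustrative assumptions): a proportional controller that only has an answer as long as the lane markings are visible.

```python
# Toy sketch of lane keeping as a control loop that works only inside
# a well-defined environment (illustrative, not a real vendor system).

def lane_keep_steering(lateral_offset_m: float, gain: float = 0.5) -> float:
    """Return a steering command (positive = steer left) that pushes the
    car back toward the lane center. lateral_offset_m is the distance
    from the lane center in meters (positive = drifted right)."""
    return -gain * lateral_offset_m

def steering_or_handover(lane_visible: bool, lateral_offset_m: float):
    """Outside the controlled environment the system has no answer:
    if no lane is detected, hand control back to the human driver."""
    if not lane_visible:
        return None  # environment assumption violated -> hand over
    return lane_keep_steering(lateral_offset_m)

print(steering_or_handover(True, 0.4))   # small corrective steer: -0.2
print(steering_or_handover(False, 0.4))  # no lane markings -> None
```

The entire intelligence of such a system lives in the assumption that the world stays inside the lane markings; the moment that assumption breaks, it can only give up.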
In the real world, traffic is governed by complex rules and relationships in which formal traffic rules play only a small part. In a traffic system dominated by human-controlled cars, a self-driving car limited to knowledge of formal traffic regulations is an alien.
Imagine a bus stopping in a crowded street. On the opposite sidewalk, a group of people starts to run. A human driver may conclude that the pedestrians will most likely try to cross the street in an incautious manner in order to reach the bus. As a result, they will lower their speed and increase their attention. This conclusion requires not only registering all potential actors around the car, but also interpreting their behavioral context.
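The bus scene can be caricatured in a few lines of code. This is a hypothetical sketch of the inference itself, not of a real perception stack; the function name, inputs and caution levels are all illustrative assumptions:

```python
# Hypothetical sketch: inferring a caution level from behavioral
# context rather than from any formal traffic rule. Inputs stand in
# for the output of a (much harder) perception system.

def assess_scene(bus_stopped: bool, pedestrians_running: bool) -> str:
    """Combine two observations into a behavioral-context judgment."""
    if bus_stopped and pedestrians_running:
        # Running pedestrians opposite a stopped bus will likely cross
        # the street carelessly to catch it: slow down, pay attention.
        return "slow_down_high_attention"
    if bus_stopped or pedestrians_running:
        return "increased_attention"
    return "normal"

print(assess_scene(bus_stopped=True, pedestrians_running=True))
```

The hard part, of course, is not this final rule but producing its inputs reliably and knowing which of countless such context patterns applies, which is exactly where fixed rules stop helping.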
The conclusion is that self-driving cars need both the ability to follow regulations and the ability to recognize human interactions, as fixed regulations are spontaneously altered. How can you even predict human behavior? Sometimes the conditions for certain actions differ. Or the actions taken by actors change. For example: as a traffic light turns from green to yellow, some drivers will hit the brakes – while others will accelerate to pass before the light turns red, sometimes well above the speed limit. To make matters worse, different actors often influence each other, creating a complex system. Drivers exceeding the speed limit will lead other drivers to do the same. Drivers ignoring certain impractical rules will incentivize others to follow. A group of pedestrians crossing a street will incentivize others to follow, even if a car is approaching. So how is a machine supposed to interpret human behavior and act upon dynamic rules that are being established on the fly?
Building a real self-driving car is only possible through the use of artificial intelligence – a technique that has only recently achieved real success, which explains why we could not build such cars in the past. A self-driving algorithm needs to learn from its experiences and incorporate a kind of adaptivity that is not possible for regular algorithms.
Training the A.I.
All approaches to true artificial intelligence incorporate a learning process. So what do we need to learn real-world traffic rules? Experience in the real world.
Last month, Tesla announced that it had 780 million miles of semi-autonomous driving data, adding another million every 10 hours. Since 2014, Tesla has equipped its cars with the hardware and software for its Autopilot. This is the incremental approach: Tesla rolls out cutting-edge technology to its users and receives data when it is used. This way, the company has created an unmatched wealth of real-world experience for its Autopilot algorithm. Every experience helps make the algorithm better. As updates are delivered wirelessly to the cars, customers profit directly from advancements. Tesla’s CEO Elon Musk predicted that the first fully self-driving cars will be ready in just about two years. On the other hand, of course, putting in-development software on the streets requires a great deal of responsibility from the driver, who has to judge what is possible with the system and what is not, and also a lot of self-discipline, so as not to get careless while being driven around. (UPDATE in July: A Tesla driver died in an accident while Autopilot was enabled. It appears to be both an error on the driver’s side, who was no longer paying attention, and on the software’s side, which did not recognize a large white trailer on the highway against the bright sky. Tesla emphasizes that each driver has to acknowledge that Autopilot is an “assist feature” and does not provide full autonomy. For more info see the Tesla Blog.)
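A quick back-of-the-envelope calculation, using only the figures quoted above, gives a feel for the scale of this fleet-learning advantage:

```python
# Back-of-the-envelope math on Tesla's quoted fleet-learning figures.
fleet_miles = 780e6       # miles of semi-autonomous driving data so far
miles_per_10_hours = 1e6  # stated accumulation rate

miles_per_day = miles_per_10_hours * 24 / 10
days_to_double = fleet_miles / miles_per_day

print(f"{miles_per_day:,.0f} miles of new data per day")  # 2,400,000
print(f"{days_to_double:.0f} days to double the dataset") # 325
```

At 2.4 million miles per day, the fleet collects in a single day more data than a lifetime human driver ever will – and, at the current rate, doubles the entire existing dataset in under a year.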
Google uses a different, more direct approach. Its currently 58 self-driving cars – not even equipped with traditional steering wheels – drive through four selected U.S. cities using professional test “drivers” (or rather: observers), and have gathered 1.6 million miles of data since 2009. While the Google approach obviously yields much less data than Tesla’s, it also allows for a more progressive and safer strategy. Not having to worry about practical customer considerations, Google can aim directly for full self-driving capability. As Google needs more and more experience data, the test program is being expanded; positions as a self-driving car test driver are currently open. (Update in July: Google’s cars have now learned to predict cyclist behavior and hand signals using an A.I.-driven learning approach – very impressive! Source)
While both strategies have their advantages, both companies can be expected to hold leading positions in self-driving algorithms for the foreseeable future. Traditional closed-environment, small-scale testbed strategies can only advance to a certain point (mostly revolving around learning formal rules); for fully autonomous driving and interactions with human-driven vehicles, extensive real-world data is necessary. For example, Google is currently teaching its cars when to honk – useful in situations of urgency and warning in the real world that are not defined by formal traffic rules: “Our goal is to teach our cars to honk like a patient, seasoned driver. As we become more experienced honkers, we hope our cars will also be able to predict how other drivers respond to a beep in different situations.” (Source) Real-world data is key for the learning processes behind self-driving cars.
The next step
In a world of ever-expanding cities, how do we prevent a total traffic collapse? At a certain urban density, there is simply no more room to increase traffic capacity. The solution could in fact be self-driving cars. If we regard traffic capacity as a limited good, traffic lights are a very inefficient way of distributing that good: intersections become bottlenecks. By using self-driving cars and thus removing the need for traffic lights, the capacity and efficiency of streets could be dramatically increased – possibly doubled, as Italian researchers have recently shown.
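A toy model makes the inefficiency of traffic lights concrete. The numbers below are illustrative assumptions, not figures from the cited study: with a light, each approach can discharge vehicles only during its green share of the cycle, minus clearance time between phases, whereas an idealized slot-based intersection interleaves crossing vehicles continuously.

```python
# Toy model of intersection throughput (illustrative numbers only,
# not taken from the study cited above).

def signalized_capacity(saturation_flow: float, green_share: float,
                        lost_time_share: float) -> float:
    """Vehicles/hour one approach can serve under a traffic light:
    only the green share of the cycle is usable, minus clearance."""
    return saturation_flow * (green_share - lost_time_share)

def slot_based_capacity(saturation_flow: float) -> float:
    """Idealized slot-based intersection for self-driving cars:
    crossing vehicles are interleaved continuously, so nearly the
    full saturation flow is usable."""
    return saturation_flow

sat = 1800.0  # veh/h a single lane can discharge while moving (assumed)
light = signalized_capacity(sat, green_share=0.5, lost_time_share=0.05)
slots = slot_based_capacity(sat)
print(f"traffic light: {light:.0f} veh/h, slot-based: {slots:.0f} veh/h")
print(f"capacity ratio: {slots / light:.1f}x")
```

Even this crude model lands in the vicinity of a doubling: whenever an approach sits at a red light, its share of the intersection’s capacity is simply thrown away.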
Moreover, by combining ubiquitous connectedness with artificial intelligence, we could use intelligent traffic management to organize and spread daily commuting routes and thus make the most of a city’s existing infrastructure at peak times. Combined with the revolution towards electric vehicles, which is just getting started, and the expansion of renewable energy sources, pollution could be drastically reduced and quality of life and health improved. Hopefully, self-driving cars are just the first promise to come true on the way towards a more sensible model of mobility.
It is about time.