Solving our traffic problems with self-driving cars

Average reading time: 7 minutes

With recent progress in computational power, connected technologies and artificial intelligence, the vision of self-driving cars is finally coming true. Fully autonomous cars are expected within the next two years. But the potential of this technology goes far beyond improving vehicle safety. It could be just the thing to solve many of our 21st-century problems.

For the first time in history, more people live in urban areas than in rural ones. The urban population has grown from 746 million in 1950 to 3.9 billion in 2014 and is predicted by the U.N. to reach 6 billion in 2045. The number of mega-cities with more than 10 million inhabitants will double. As a consequence, cities will become more and more congested. Yet the demand for mobility keeps rising. How can we manage millions of cars in dense areas without traffic collapsing? Self-driving cars could be exactly the solution we need right now.


Much more potential than just cool looks.

I want to argue that self-driving cars are not just a cool new technology – they are the true completion of the concept of individual motorized mobility. In hindsight, the era of human-driven cars may be regarded as nothing more than a transitional period. Ultimately, we have to admit that humans and automobiles can be a difficult combination. Driving is dangerous. According to the WHO, we suffer 1.25 million traffic deaths each and every year. Why is that?

The mess of human-driven traffic

The abilities of an entity are usually defined by its purpose. Let’s consider the human body and mind for a moment from a strictly scientific, purpose-oriented point of view. When we say that the process of evolution has shaped our body – for what purpose was it shaped? Survival in a natural environment. Thus our primary abilities are grounded in (though not limited to) surviving. Our ability to react, to assess situations, to sense danger – all of these are rooted in two million years of human evolution. When we run and stumble over an obstacle, we can react fast enough to break the fall with our hands. We sense the danger when leaning too far over a bridge’s balustrade. Our abilities have been formed to estimate the heights, distances and dangers we encounter in our natural environment.

Cars, on the other hand, are a new concept. We have gotten so used to shaping our world around automobile mobility that we often forget we are dealing with a technology that has been around for little more than a hundred years. This is, needless to say, far too short for any human evolution to take place. To control cars, we have to rely on abilities that were shaped for very different challenges.


Today’s car traffic: A dangerous mess.

Yet we do not have the ability to tell the difference between 100 km/h and 120 km/h without even feeling any wind on our face. We can barely judge whether 20 meters is a sufficient safety distance at a certain speed or whether 30 would be better. We are hard pressed to observe the behavior of dozens of cars, bicycles and pedestrians around us while other thoughts occupy our mind. Can we brake fast enough if something unexpected happens – and will the people driving behind us grasp the danger we are reacting to? When driving a car, we pretend to have abilities we simply cannot have.

Ultimately, a steering wheel is an interface: an interface between a machine – the car – and a control unit – us. This is the root of the problem: the control unit does not have the abilities needed to control a car in all possible circumstances, because those abilities were shaped for entirely different challenges in a completely different context. Wouldn’t it then be great if we could eliminate this incompatibility altogether and make the machine an integrated system – with a control unit actually built for the purpose of controlling a multi-ton, super-fast vehicle?

The formal rules… and the informal

Self-driving cars are not a new concept. We have been using cars for 130 years. We have been using computers for many decades. Why did it take so long to build a self-driving car? Because it is vastly different from building any other machine.

Most machines we build today operate in a controlled environment, following pre-defined formal rules. The purpose of a self-driving algorithm, however, is to control a car in the real world. We certainly have extensive regulations on how to behave in traffic, purporting to organize the relationships of all actors on the streets, but let’s be honest: traffic regulations are only an illusion of control – and just one part of the abilities needed to drive a car.


Formal traffic rules are the defining parameters of the traffic system… are they?

Many modern cars already master limited self-driving capabilities such as holding a lane. This is effectively a machine operating in a (simulated) controlled environment: the driver tells the machine when they are on the highway, the traffic is light and the streets and lanes are in good condition. For a car to operate truly autonomously in the real world, however, the environment does not stop at the lane markings (if there are any). Unforeseen things can and will happen. New actors can suddenly enter the environment at any time. Other road users will improvise or simply make mistakes. Construction sites may spontaneously change the flow of traffic. There may even be places with no fixed regulation at all, such as the progressive shared space concept increasingly being put to use in urban environments.

In the real world, traffic is governed by complex rules and relationships in which formal traffic rules play only a small part. In a traffic system dominated by human-controlled cars, a self-driving car limited to the knowledge of formal traffic regulations is an alien.


Imagine a bus stopping in a crowded street. On the opposite sidewalk, a group of people starts to run. A human driver may conclude that the pedestrians will most likely try to cross the street incautiously in order to catch the bus. As a result, he will reduce his speed and heighten his attention. This conclusion requires not only registering all potential actors around the car, but also interpreting their behavioral context.


Making context-sensitive predictions with limited knowledge.

The conclusion is that self-driving cars need both the ability to follow regulations and, at the same time, the ability to recognize human interactions, as fixed regulations are spontaneously altered. How can you even predict human behavior? Sometimes the conditions for certain actions differ. Or the actions taken by actors change. For example: as a traffic light turns from green to yellow, some drivers will hit the brakes – while others will accelerate to pass before the light turns red, sometimes well above the speed limit. To make matters worse, different actors often influence each other, creating a complex system. Drivers exceeding the speed limit will lead others to do the same. Drivers ignoring certain impractical rules will incentivize others to follow. A group of pedestrians crossing a street will incentivize others to follow, even if a car is approaching. So how is a machine supposed to interpret human behavior and act upon dynamic rules that are established on the fly?

Building a real self-driving car is only possible with artificial intelligence – a technique that has risen to real success only recently, which explains why we could not build such cars in the past. A self-driving algorithm needs to learn from its experiences and incorporate a kind of adaptivity that is not possible for conventional algorithms.
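To illustrate what experience-based adaptivity means in the simplest possible terms, here is a purely hypothetical sketch in Python (the context labels and numbers are invented for illustration and have no relation to any real self-driving stack): an agent keeps per-context frequency estimates of pedestrian behavior and refines them with every new observation, rather than consulting a fixed rulebook.

```python
# A minimal sketch of experience-based learning: the car keeps
# per-context estimates of how likely pedestrians are to step into
# the road, and updates them with every observation.

from collections import defaultdict

class CrossingPredictor:
    def __init__(self):
        # context -> [times pedestrians crossed, times observed]
        self.stats = defaultdict(lambda: [0, 0])

    def observe(self, context, crossed):
        """Record one real-world experience for the given context."""
        crossed_count, total = self.stats[context]
        self.stats[context] = [crossed_count + int(crossed), total + 1]

    def crossing_probability(self, context, prior=0.5):
        """Laplace-smoothed estimate; falls back to the prior when unseen."""
        crossed, total = self.stats[context]
        return (crossed + prior) / (total + 1)

p = CrossingPredictor()
for _ in range(8):
    p.observe("bus_stopping", True)    # people running for the bus crossed
for _ in range(8):
    p.observe("empty_street", False)   # nobody crossed

print(p.crossing_probability("bus_stopping"))   # high (≈ 0.94)
print(p.crossing_probability("empty_street"))   # low  (≈ 0.06)
```

The point of the sketch is only the shape of the loop: the estimates are never final, and every mile driven shifts them – which is exactly what a rule table cannot do.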

Training the A.I.

All approaches to true artificial intelligence incorporate a learning process. So what do we need to learn real-world traffic rules? Experience in the real world.

Last month, Tesla announced that it had 780 million miles of semi-autonomous driving data, adding another million every 10 hours. Since 2014, Tesla has equipped its cars with the hardware and software for its Autopilot. This is the incremental approach: Tesla rolls out cutting-edge technology to its users and receives data whenever it is used. In this way, the company has built an unmatched wealth of real-world experience for its Autopilot algorithm. Every experience helps make the algorithm better. As updates are delivered to the cars over wireless connections, customers profit directly from advancements. Tesla’s CEO Elon Musk has predicted the first fully self-driving cars to be ready in about two years. On the other hand, of course, putting in-development software on the streets requires a great deal of responsibility from the driver – to judge what the system can and cannot do – and also a lot of self-discipline, so as not to get careless while being driven around. (UPDATE in July: A Tesla driver died in an accident while Autopilot was enabled. It appears to have been both a driver error, no longer paying attention, and a software error, failing to recognize a large white trailer on the highway against the bright sky. Tesla emphasizes that each driver has to acknowledge that Autopilot is an “assist feature” and does not provide full autonomy. For more information, see the Tesla Blog.)


Tesla’s autopilot with the ability to change highway lanes. © Tesla Motors, Press Kit

Google uses a different, more direct approach. Its current fleet of 58 self-driving cars – not even equipped with traditional steering wheels – drives through four selected U.S. cities using professional test “drivers” (or rather: observers) and has gathered data from 1.6 million miles since 2009. While Google’s approach obviously generates far less data than Tesla’s, it also allows for a more ambitious and safer strategy: freed from practical customer considerations, Google can aim directly for full self-driving capability. As Google needs ever more experience data, the test program is being expanded; positions as a self-driving car test driver are currently open. (Update in July: Google’s cars have now learned to predict cyclist behavior and hand signals using an A.I.-driven learning approach – very impressive! Source.)

While both strategies have their advantages, both companies can be expected to hold leading positions in self-driving algorithms for the foreseeable future. Traditional closed-environment, small-scale testbed strategies can only advance to a certain point (mostly revolving around learning formal rules); for fully autonomous driving and interactions with human-driven vehicles, extensive real-world data is necessary. For example, Google is currently teaching its cars when to honk – useful in real-world situations of urgency and warning that are not defined by formal traffic rules: “Our goal is to teach our cars to honk like a patient, seasoned driver. As we become more experienced honkers, we hope our cars will also be able to predict how other drivers respond to a beep in different situations.” (Source) Real-world data is key for the learning processes behind self-driving cars.

The next step

In a world of ever-expanding cities, how do we prevent a total traffic collapse? At a certain urban density, there is simply no more room to increase traffic capacity. The solution could in fact be self-driving cars. If we treat traffic capacity as a limited good, traffic lights are a very inefficient way of distributing it: intersections become bottlenecks. By using self-driving cars and thus removing the need for traffic lights, the capacity and efficiency of streets could be dramatically increased – possibly doubled, as Italian researchers have recently shown.
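A back-of-the-envelope calculation makes the bottleneck argument concrete. The numbers below are illustrative assumptions of my own (not taken from the cited study): an intersection approach that can only discharge vehicles during its green share of a signal cycle, versus coordinated self-driving cars crossing continuously in fixed time slots.

```python
# Toy capacity comparison for one intersection approach:
# fixed-cycle traffic light vs. continuous slot-based crossing.
# All numbers are illustrative assumptions, not measured values.

SATURATION_FLOW = 1800   # veh/hour the approach can discharge while moving
CYCLE = 60               # seconds per signal cycle
GREEN = 30               # effective green seconds per cycle (rest is red/lost)

def signalised_capacity():
    """Veh/hour when the approach only moves during its green share."""
    return SATURATION_FLOW * GREEN / CYCLE

def slot_based_capacity(headway=2.0):
    """Veh/hour when coordinated cars cross in fixed slots with no
    red phase: one vehicle every `headway` seconds."""
    return 3600 / headway

light = signalised_capacity()   # 900 veh/h
slots = slot_based_capacity()   # 1800 veh/h
print(f"traffic light: {light:.0f} veh/h, "
      f"slot-based: {slots:.0f} veh/h, gain: {slots / light:.1f}x")
```

Under these assumptions the slot-based scheme doubles throughput – which is the intuition behind the researchers’ result: the red phase, not the road, is the scarce resource a traffic light wastes.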

Moreover, by combining ubiquitous connectedness with artificial intelligence, we could use intelligent traffic management to organize and spread daily commuting routes, making the most of a city’s existing infrastructure at peak times. Combined with the revolution towards electric vehicles, which is just getting started, and the expansion of renewable energy sources, pollution could be drastically reduced while quality of life and health increase. Hopefully, self-driving cars are just the first promise to come true on the way to a more sensible approach to mobility.

It is about time.


How do we ensure Quality of Life in the urban age? More blogs to come to answer this… there’s a subscribe button at the bottom.



8 thoughts on “Solving our traffic problems with self-driving cars”

  1. great read! Some of the problems you mention self-driving cars will have to face are things I hadn’t even thought about. Especially the one about how they have to be able to predict human behavior.


  2. Nice write-up of all the challenges and possibilities of AI-driven traffic. At some point, it will get easier as more and more cars will be (semi-)autonomous and therefore a little more predictable than your average human driver. Also worth noting is that you can also work on the road-side of things, e.g. better and direct communication between traffic lights, road signs and vehicles to remove ambiguity in interpretation.
    As a side-note: autonomous track-bound vehicles, a.k.a. automatic trains or metros, have been around for a while and do a very good job but within a very controlled environment. Next year, DB will start experimenting with a driverless train utilizing normal tracks and mixing in with human-operated trains. That will also be interesting to watch.


  3. Maybe the focus should be on coordinated driving instead of autonomous driving. This also highlights another point: If coordination is (even worse: partially) automated in a complex environment, what kinds of vulnerabilities for the system and its elements (e.g. humans) result and how can they be managed? E.g. what is the algorithmic basis for the estimation of human movement? Is it risk-based? Experience-based? This is a general question when it comes to critical decision-making by machines.
    Other points are of course the soundness of critical infrastructure like coordination devices which might be internet-based. When the system of traffic lights in a city collapses nowadays, traffic can still continue since drivers can act according to an underlying set of basic rules. Of course, accidents might increase in such a period, but in general the system is resilient and able to cope with serious disruptions. In contrast, I can very well imagine that self-driving cars will have massive problems when e.g. internet is down (e.g. concerning the system you described in the other article as the progression of Uber).
    To sum it up, how can we reach the level of resilience with self-coordinating traffic which is necessary to keep traffic systems reliable?


    • Hey Niko 🙂 Thanks for the read and comment! You point out a lot of interesting/critical points.

The main problem with a coordinated system, i.e. an algorithm directing all traffic from “above”, is of course that not all traffic elements can be integrated. Even with only self-driving cars on the road, you still have pedestrians, bicycles etc. that cannot be coordinated that way. In a free society, they will make their own moves; they might not want to be observed, and they will also react irrationally, act against rules, etc. So basically, it’s very hard to take a top-down approach when you cannot control all elements.
A solution might be decentralized coordinated decision-making and information-sharing, with all self-driving vehicles acting like a swarm intelligence. This also has the benefit of offering resilience and avoiding a single party having control over everything (much like the Internet). However, this would obviously require a lot of standardization and most likely a sophisticated mixed public/private legal framework. So that’s a long shot, but it might be a working model for the future.

Another critical point you made concerns the paradigm by which to advance the A.I. algorithms. Machine learning is mostly experience-based. However, this creates a problem: an algorithm will react correctly in 99.99% of situations. But is that sufficient when human lives are at stake? The thing with experience-based learning is that it will steadily approach 100% but never quite reach it. Then again, human drivers won’t react correctly in 100% of situations either. This leads to risk-perception issues (i.e. one self-driving death is all over the media for weeks, while thousands of human-driven deaths worldwide are a broadly accepted fact of life and not considered newsworthy).


      • Thanks for the reply, Julian. I guess you mentioned a very important point, stating that risk-acceptance is crucial for further development. Even though I am critical in face of the opportunity to get killed by a self-driving car, I think that getting killed by some texting or drunk driver is just as bad at the end of the day. I guess we will accept it over time as we get used to it and our attention will fade in comparison to “historic” events like the first person getting killed by [place innovation here].
        Nonetheless, another aspect of it might lead to more complicated debates: accountability. When a drunk driver rolls over your legs, this person will get punished (hopefully). However, when an individual gets killed by an autonomous car provided by a giant transnational corporation, who will be punished? Will there be retaliation restoring the belief, that similar incidents might happen less in the future or will there be only some compensation payment which corporations can include in their risk profile? From the perspective of common and even well-informed knowledge, effective sanctioning of economically important transnational companies will just not happen.


  4. Yeah, that’s another crucial point. In Germany, for example, under Dobrindt’s new law planned for next year, the manufacturers of fully self-driving cars will in fact be liable in the case of accidents – not the driver anymore (which is a revolution in itself).

    However, we haven’t arrived at Level 4 fully autonomous cars yet. We are still only at Level 2 assistance systems. And for partially self-driving mechanisms it’s really tough, as the companies will always argue that the system was not used properly or was used in the wrong circumstances, while the driver will argue the contrary.

    So the borderline between corporate and private liability needs to be made clearer from a legal point of view, and it would also help the companies’ credibility if they were more transparent about these questions, because issues like these will arise by the thousands in the coming years. I think maximum transparency in this area – from both a legal and a corporate point of view – is the key to greater acceptance during the transition to (partially or fully) self-driving cars.

