Self-driving cars
Self-driving cars were prophesied to conquer the world 20 years ago.
When will it finally come true?
The first autonomous drive across the USA happened in 1995, and back then many believed that these cars would soon take over the roads. 25 years later we're still waiting, and now many companies, including Russian ones, are taking part in the race. In this article you'll find out why many companies fail to achieve their goals, and who has a good chance of succeeding in the production of autonomous vehicles.
At first glance the advantages of self-driving cars may seem obvious. Statistics show that 54 million people are injured and 1.4 million die in car accidents annually. Cars have killed more people than WWII, and the material damage they cause exceeds 870 billion US dollars per year.

Thus, there have been attempts to design a driverless car ever since the 1920s, but they became at all viable only with the invention of computers. In 1995 a team of engineers from Carnegie Mellon University (USA) converted a Pontiac Trans Sport minivan, which then covered 4,585 kilometres from California to Pennsylvania. Remarkably, the car drove autonomously the whole way except for 80 kilometres, where the engineers had to take control because of road works. Only a little seemed left to finish: the system needed some alterations to remove the need for the engineers' interference. Yet it's 2020 and there's still no system that is fully independent on the roads. Even Waymo's taxis in Phoenix, Arizona, still run with a safety engineer who takes the wheel in difficult situations.

Dean Pomerleau and Todd Jochem, the developers of the Navlab 5 driverless car, which drove from California to Pennsylvania in 1995.
Map-based driving approach
There are two main types of autonomous systems. The first and the most widespread one is based on maps. To make this possible, a car needs a digital map of its surroundings and sensors that help it locate itself on the map. In 1995 Navlab 5 used cameras for this, but they were of little help because they couldn't measure the distance to objects properly.

Having located itself on the map, the car "understands" which road signs regulate traffic and what behaviour is expected of it. If distances are unknown, the car can't project its position and move. Radar, on the other hand, measures distance accurately, but it can't detect living things such as pedestrians. GPS can help track the car's position, but it lacks accuracy and the signal gets lost in tunnels.

For those reasons almost all autonomous cars nowadays rely on lidar. It emits laser light, which is reflected by nearby objects, and the return time of the pulse gives the distance with extreme accuracy. Lidar operates in the infrared spectrum, which means it can "see" both inanimate objects and living beings. The technology is so valuable that, despite its high price (thousands of US dollars), Google, Waymo and Yandex use it in their cars. Incidentally, Yandex's cars demonstrated good results in promo videos shot in wintry Moscow:
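The return-time calculation behind lidar ranging is simple time-of-flight arithmetic. A minimal sketch (the function name is mine, not from any real sensor driver):

```python
# Time-of-flight ranging, as used by lidar: the laser pulse travels
# to the object and back, so the one-way distance is c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_return_time(return_time_s: float) -> float:
    """Distance in metres to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * return_time_s / 2.0

# A pulse returning after 100 nanoseconds puts the object ~15 m away:
print(round(distance_from_return_time(100e-9), 2))  # 14.99
```

The nanosecond timescales involved are why lidar can resolve distances to within centimetres, and also why the receiving electronics are expensive.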
However, there are reasons why Waymo's cars can't be made fully autonomous despite years of ongoing development. The main one is that map-based robotic cars don't drive the way humans do: in the real world they run into problems that people are adapted to.

A real-life example of such problems is the accident that happened on 18 March 2018 in Tempe, Arizona. Late at night a driverless Uber equipped with lidar and cameras was travelling at only 64 km/h when a woman crossed the road pushing a bicycle with white supermarket plastic bags on it. Both the lidar and the cameras detected her, but the system classified her as a false signal, since no such object was registered in its database. Filtering out false signals is critically important, though: otherwise any sun glare would be perceived as an obstacle.

This story makes the main problem with map-based cars evident. These cars stick to the rules: if they detect traffic lights, they identify their position, classify them as "traffic lights" and select a predefined program of actions.
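The rule-following logic described above can be caricatured in a few lines of Python. All class and action names here are illustrative, not taken from any real system:

```python
# Toy sketch of map-based, "stick to the rules" driving: each
# recognised object class maps to a predefined action program.

RULES = {
    "traffic_light_red": "stop",
    "traffic_light_green": "proceed",
    "stop_sign": "stop",
    "pedestrian": "yield",
}

def choose_action(detected_class: str) -> str:
    # Anything absent from the rule table falls through to "ignore".
    # That is the false-signal filtering that proved fatal in Tempe:
    # an unfamiliar combination of objects simply isn't acted upon.
    return RULES.get(detected_class, "ignore")

print(choose_action("traffic_light_red"))            # stop
print(choose_action("woman_with_bicycle_and_bags"))  # ignore
```

The table can be made arbitrarily long, but the failure mode stays the same: whatever isn't in it doesn't exist for the car.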

But in real life pedestrians and other drivers often don't follow the rules. Someone may cross the street without paying attention to the traffic lights while carrying something hard to identify. If you try to follow the rules in a world where they are constantly broken, you increase the probability of an accident. That's why Waymo and Yandex will keep safety engineers in their self-driving cars for quite a while.

On top of that, lidars have technical issues of their own.
Firstly, to operate they must rotate constantly, just like good old radars, so the rotating components fail quite regularly. In general, a lidar doesn't stay in good condition for long, even though it costs 10,000 US dollars or more.

Secondly, infrared detectors don't work effectively in foggy weather, as the rays are absorbed by water droplets. For now, while engineers tackle the problem, trials take place in arid areas, but for mass production this drawback will have to be eliminated.

Thirdly, regardless of the weather, lidar doesn't see colours, which means it can't perceive road signs and markings. Usually the digital map helps out, since the locations of traffic lights are recorded on it, but in reality everything keeps changing. Once a self-driving Uber ran a red light because a new traffic light hadn't yet been marked on the digital map. Situations like this happen daily, so lidar in its current state is not the key to success.
Self-driving Lyft with lidar on the roof
Human-like driving approach


But what if we take humans as the example? People use their eyes, which can measure distance at least approximately and identify surroundings, so a person can drive in a place they have never been to. If we see a woman with a bicycle and plastic bags, we won't dismiss her as a false signal, because our brain understands that the objects are separate but together can become quite a nuisance. Why not create software that is not map-based but human-like?

This approach has several advantages.
Firstly, there's no need for a lidar, an expensive, frequently failing device that can't read signs or work in foggy weather.

Secondly, a lidar consumes too much electricity, far more than a regular internal combustion engine's electrical system can supply. That's why lidar-equipped driverless cars are usually hybrids or electric cars, but the extra load drains their batteries faster, which shortens the range.
To drive in a human-like way, a computer must be taught to identify every object and react exactly as a human driver would. It sounds easy, but it isn't. People identify objects with their brains, which analyse the data received from the eyes. Algorithmic computers, in turn, can only recognise objects that were uploaded to a database beforehand.

Any internet user who has encountered a CAPTCHA knows that Google uses pictures of road signs and traffic lights to detect bots. Real people understand what exactly they see, while software doesn't. In fair weather the system easily recognises signs and acts accordingly. But if a sign is rusty or covered in snow, it differs from its image in the computer's memory and the machine can't read it.

Tesla set out to solve this problem through machine learning with a neural network. For instance, the network receives an image of a traffic light but can't classify it, so the image is stored as an unknown object. Later a person assigns a label to it, and the neural network learns to recognise objects from different angles and in various conditions, including sun glare, snow and mud. The human-like driving approach thus outdoes the map-based one, but it requires a much bigger training set. And although Elon Musk claimed that in 2020 Tesla's cars would be driving as autonomous taxis, we shouldn't count on it. So far Tesla has produced only 600,000 vehicles with data-collecting computers, and most of them have been on the roads for less than two years.

What about Russia?
Firstly, an autopilot computer consumes a lot of energy, especially if there are lidars. That's why it has to be based on a hybrid or an electric car, but those aren't yet widespread in Russia.

Secondly, most tech companies stick to the map-based driving approach. And judging by the experience of Waymo, which has been working on its system for a decade and investing heavily in it, the approach still has many drawbacks.

Thirdly, Tesla's approach is inaccessible to companies here: no Russian carmaker aims to produce thousands of electric cars or hybrids with onboard computers, so a comparable training set can't be collected.
Given the points above about the two approaches, it's clear that Russian tech companies face considerable obstacles.
So Russian companies need to choose a strategy that works around these issues. For example, Megwatt sells autonomous vehicles for enclosed zones such as factories and warehouses. It's much easier to program a neural network when the environment doesn't change and the transport is isolated from public roads.
"Matryoshka" was intended to be used as low-speed transportation for university campuses, such as Skolkovo.
Other companies have given up their attempts to tackle the problem. There was a concept called "Matryoshka", an autonomous platform, which failed to take off because the required investment proved unaffordable. "Matryoshka" did, however, appear on the roads during the celebrations of Moscow's city anniversary in 2018.
A third group of engineers is trying to improve cars built on the map-based approach, but not always successfully. For example, on December 10, 2019, the final of "Winter City", a technology competition with a prize of 175 million rubles, was held at the NAMI training ground. Yandex didn't participate, and only small teams took part: Nizhny Novgorod State Technical University, as well as the startups StarLine (St. Petersburg), Avto-RTK (Taganrog and Kursk), Winter City MADI (Moscow) and BaseTrack (Moscow).

When passing the intersection, one of the vehicles "malfunctioned", seeing an obstacle that didn't exist, and the remaining cars couldn't get around it correctly and also stopped at the intersection. The organizers gave the teams a second chance by restarting the race, but again no one could drive past the ill-fated intersection (it was even nicknamed the Crossroads of Fate). After the third restart, the finalist car drove into a concrete block at low speed, slightly damaging its bumper. In the end, no one won the main prize.

The competition had no winners not because the teams were weak: many similar competitions abroad end with similar results. Large companies tend to avoid them so as not to risk their reputation over a minor software error.

Still, it shows that the problems facing Russian developers of this new type of vehicle are considerable. Their efforts are impressive, especially in Yandex's case. But we shouldn't expect a full-fledged Russian autopilot, capable of working without an engineer in the car, in either 2020 or 2021.

The race to develop the safest and most effective autonomous driving system is tightening. Perhaps in a few years Tesla will succeed with its human-like approach to software design, and many other companies, including Russian ones, will follow its lead.