The Exciting (Near) Future of Autonomous Cars
The use of artificial intelligence (AI) in automotive applications is not open for debate. The only discussion right now centers on how quickly industries can advance this technology and truly disrupt all we have known about how both the car and the larger automotive world operate. This centers on autonomous cars but is much broader than that. Hints of AI and its impact are already appearing, but consumers should brace themselves for a much bigger change in the near future (as in the next generation of cars).
Jensen Huang, CEO of NVIDIA, sets the stage: “For as long as we have been designing computers, AI has been the final frontier. Building intelligent machines that can perceive the world as we do, understand our language, and learn from examples has been the life’s work of computer scientists for over five decades.”
Going after that frontier has engaged not only tech companies such as Google, Facebook, and Apple but also traditional automakers like Toyota and Ford. Toyota threw down the gauntlet near the end of 2015, announcing that it would start a new company, Toyota Research Institute, and invest $1 billion over five years with the goal of accelerating R&D and bridging the gap between research and products. Toyota sees the implications of this technology in safety, enhanced vehicle access, and extended mobility for seniors.
Ford likewise dropped $1 billion into Argo AI in a bid to speed up its autonomous vehicle program. It hopes to create a “virtual driver system,” a software “brain” that could potentially be licensed to other companies.
Step Back for a Moment
Consider this: cars used to be just appliances, created for one particular task. You got in and told them what to do but also steered them, stopped them, and were in control.
If you haven’t noticed, modern technology is chipping away at those time-honored practices, and that’s a good thing. A growing number of cars in all price ranges will now keep you in your lane if you drift, stop your car more quickly than you can, and alert you to hazards. Of course, features like adaptive cruise control that can take over both the accelerator and the brake are getting to be old news.
The next wave of self-driving cars will still have a human behind the wheel (we’ve got a couple of levels to go until we reach full autonomy). We’re at Level 2 on the SAE’s zero-to-five scale of driving automation. The technology now found in cars, such as adaptive (radar) cruise control and lane-keeping assistance, lets you take your hands off the wheel and your foot off the pedals.
Many automakers are looking to skip Level 3 (a mixed bag of autonomy and driver assistance) and move right to the fully autonomous Level 4. Meanwhile, Level 5 is the Holy Grail, achieved so far only by DARPA Challenge vehicles picking their own routes through the desert. It is an arduous challenge, because at that level the car must think like a human driver. In fact, it needs to think better than a human driver if it’s going to replace one.
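The levels discussed above come from the SAE’s driving-automation taxonomy. As a rough sketch (descriptions paraphrased, and the helper function is purely illustrative, not an official classification), the scale can be laid out like this:

```python
# Paraphrased sketch of the SAE J3016 driving-automation levels.
# Descriptions are informal summaries, not the official wording.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: steering OR speed is assisted, not both",
    2: "Partial automation: steering AND speed handled; driver must supervise",
    3: "Conditional automation: car drives itself; driver must stand by to intervene",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: no driver needed anywhere a human could drive",
}

def requires_constant_supervision(level: int) -> bool:
    """Hypothetical helper: Levels 0-2 need the human watching at all times."""
    return level <= 2

print(SAE_LEVELS[2])
print(requires_constant_supervision(2))  # True: today's systems
print(requires_constant_supervision(4))  # False: the target many automakers want to jump to
```

This makes the leapfrog strategy concrete: Level 3 is awkward precisely because it sits on the boundary, with the car driving but the human still on the hook.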
Deep Learning & Machine Learning
Designing fast computers that can process tons of data is a vital step towards creating completely autonomous vehicles. The human brain processes thousands of bits of information instantaneously when driving. Nimble computers that could replicate this have only begun to appear, relying on cloud computing and other sophisticated techniques based on high-speed wired and wireless connections.
But, as noted, the autonomous car has to do more than just track the road, maintain speed, and stop as needed. The car also has to react to unexpected interference: a cyclist swerving into the road, a jaywalker, a car drifting out of its lane, and any other unforeseen instance of human error. Humans react based on their experience and attentiveness. AI machines, according to NVIDIA Vice President Rob Csongor, “improve over time with additional training data and testing.” That’s with AI systems that Csongor says “have levels of perception and performance far beyond humans, and importantly, do not get distracted, fatigued, or impaired.”
The keys here are deep learning and machine learning, two related concepts dealing with a computer’s ability to grasp concepts it hasn’t been explicitly programmed to understand, hence the term “artificial intelligence.” Csongor’s company, NVIDIA, makes the graphics processing units (GPUs) that serve as the brains of AI machines, making them faster and smarter. NVIDIA currently works with 225 automotive companies worldwide, enabling the move to smarter “brains” for cars.
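The idea of a system that “improves over time with additional training data” rather than following explicit rules can be shown in miniature. The sketch below is a toy, not anything NVIDIA ships: a single-parameter model learns a braking-distance relationship from example data via gradient descent, instead of being handed the formula.

```python
# Toy illustration of machine learning: the model is never told the formula
# distance = 0.05 * speed^2; it learns the coefficient from examples.
# Data values are hypothetical, chosen to fit that relationship exactly.
data = [(10, 5.0), (20, 20.0), (30, 45.0), (40, 80.0)]  # (speed mph, stopping feet)

w = 0.0    # the single learned parameter, starting from ignorance
lr = 1e-7  # learning rate

def predict(w, speed):
    return w * speed * speed

# Stochastic gradient descent on squared error: each example nudges w
# toward a better fit, so the model improves with more training passes.
for _ in range(20000):
    for speed, dist in data:
        err = predict(w, speed) - dist
        w -= lr * 2 * err * speed * speed

print(round(w, 3))             # converges to ~0.05
print(round(predict(w, 50)))   # generalizes to unseen speed: ~125 feet
```

Real driving systems learn millions of parameters from camera and sensor data rather than one coefficient from four points, but the mechanism, fitting parameters to examples and improving with more of them, is the same.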
The Brains of the Machine
What allows the human brain to think? It’s not something we tend to spend much time contemplating, but it is a central focus for AI scientists. They look at the neural networks processing information from a person’s eyes, nose, mouth, fingers, and skin. Of course, the human brain adds a layer of past experience and perhaps some inferred knowledge based on something read or heard. Sometimes a person even reacts non-linearly, suddenly taking a different course because one road looks like it has lighter traffic or better scenery.
Both tech and automotive industries see an inherent limitation if these smart cars are just cast out into the automotive landscape on their own. Yes, they may be safer and smarter than a human driver, but there will be plenty of human drivers on the road for some time to come. Because of that, a parallel push along with self-driving technology is for more vehicle-to-vehicle (V2V) connectivity and more vehicle-to-infrastructure (V2I) communications (more data for that computer brain to absorb). At that point the machines will take over, ideally ushering in an era of increased safety and mobility.
All those connections raise issues beyond how the car will operate in the real world. Connected cars are vulnerable to hacking, as has been shown multiple times during the past several years. Here’s where AI may play another role. Eli David, an expert in computational intelligence and CTO of Deep Instinct, a security firm with roots in Israel’s defense industry, wants to use AI to even the odds against cyberhackers. In his interview with Michael Copeland, David explained that it “takes…about 24 hours in one day to train [the] artificial brain. Had we not run it on GPUs, it would have taken three months.” He hopes this training will give the security industry the upper hand against hackers as, according to David, one million new pieces of malware are created every day.
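David’s figures imply a substantial speedup from training on GPUs. Taking “three months” as roughly 90 days, the back-of-the-envelope arithmetic works out to:

```python
# Rough speedup implied by David's figures: ~24 hours on GPUs versus
# ~3 months without them ("three months" assumed here to mean ~90 days).
gpu_hours = 24
cpu_hours = 90 * 24          # 90 days expressed in hours

speedup = cpu_hours / gpu_hours
print(speedup)               # roughly 90x faster on GPUs
```

That order-of-magnitude gap is why daily retraining against new malware samples is feasible on GPUs and impractical without them.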
Big and Small Companies Working Together
All of this movement is taking place at the intersection of visual processing, high-performance computing, and artificial intelligence. The goal is to avoid collisions, and to make sure that happens, someone has to take charge. An example of the worst-case scenario came when California proposed new regulations for autonomous cars a few years ago. The proposal would have required a company seeking to test a self-driving car on public roads to get permission from every jurisdiction along its test route. Google led the pushback against the Department of Motor Vehicles, since it would have had to deal with more than a dozen jurisdictions just to run down the freeway from its Mountain View headquarters to nearby San Francisco. The regulation was changed, but the episode highlights the problems a patchwork of rules poses for this industry.
On the other hand, local jurisdictions are the ones that will be on the front lines when autonomous vehicles hit the road. Local police and parking enforcement personnel will have to deal with any issues that arise when the driver isn’t in control of the vehicle. It’s a prelude to what to expect when the driver may exit the vehicle completely, as was demonstrated by Mercedes-Benz and Bosch in a self-parking garage in Germany. In that demonstration, Bosch software and Daimler (Mercedes) vehicle technology were combined in a smartphone app that would send the car to its designated parking space on its own and return it when summoned.
The AI/autonomous world is further complicated by the alliances and affiliations that mark this new area. While tech companies have long been involved in the car business, the kind of arrangements now being forged are at a different level. For instance, Waymo and Fiat Chrysler Automobiles plan to put 500 autonomous minivans on the road in Phoenix. Uber has formed alliances with Volvo and Ford to deploy autonomous cars in Pittsburgh and San Francisco. Dozens of smaller tech companies are also finding encouragement, funding, and support from both auto companies and tech corporations as self-driving cars and technology march forward.
Who Will Get There First and Does it Matter?
The list of auto companies (and more than a few tech companies) pushing toward fully automated vehicles is long. Some have said they’ll be there in 2020, some in 2021, and others later in the decade. Tesla, not surprisingly, has said it already has the hardware installed in its cars to deliver full autonomy, although the enabling software is still under development. But Tesla has hinted it could roll out the software as early as next year. Cadillac will debut its Super Cruise system on the 2018 CT6 in fall 2017. GM describes Super Cruise as a program “where the car is doing the driving, and the human is overseeing it. It’s an explicitly hands-off system.” Audi’s just-previewed 2018 A8 includes what it is calling “AI Traffic Jam Pilot,” another hands-off system designed to operate under 37 mph on divided highways. The company is also including automated parking that can park the car from a phone app without the driver inside.
All this new technology may spur a competitive race, particularly among the luxury makes, that pushes full autonomy and its foundational AI technology into the mainstream sooner than expected. Regardless of when this technology launches, we should be anticipating major shakeups in the automotive industry’s future.