How AI is taking over our gadgets

If you think of AI as something futuristic and abstract, start thinking differently.

We are now seeing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles. While it’s fair to say that the AI that lives on the “edge,” where you and I are, is still far less powerful than its data-center-based counterpart, it is potentially far more meaningful to our daily lives.

A key example: This fall, Apple’s Siri assistant will begin processing voice directly on iPhones. Right now, even your request to set a timer is sent as an audio recording to the cloud, where it is processed, triggering a response that is sent back to the phone. By processing voice on the phone itself, Apple says, Siri will respond faster. This will work only on the iPhone XS and newer models, which have a built-in AI-capable processor that Apple calls a “neural engine.” People can also feel more secure knowing that their voice recordings are not being sent to unseen computers in faraway places.

Google actually led the way with on-phone processing: in 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to make its own phones was that the company saw potential in building custom hardware to run AI, says Brian Rakowski, Pixel group product manager at Google.

Photo: A Google Pixel 4a smartphone. (Phil Barker/Future Publishing/Getty Images)

These so-called edge devices can be just about anything with a microchip and some memory, but they tend to be the newest and most sophisticated smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of AI’s long-delayed promises, like more responsive smart assistants, better car safety systems, new types of robots, and even autonomous military machines.

The challenges of making AI work at the edge, that is, making it reliable enough to do its job and then justifying the complexity and added expense of building it into our devices, are monumental. Existing AI can be inflexible, easily fooled, unreliable, and biased. In the cloud, it can be trained on the fly to improve; think of how Alexa gets better over time. On a device, it must arrive pre-trained and be updated only periodically. Yet improvements in chip technology in recent years have enabled real breakthroughs in the way we experience AI, and commercial demand for this kind of functionality is high.

From swords to plowshares

Shield AI, a Department of Defense subcontractor, has packed a lot of AI into quadcopter drones that have already flown, and continue to fly, real-world combat missions. One of those missions is helping soldiers search for enemy fighters in buildings that need to be cleared. The DoD has been eager to use the company’s drones, says Shield AI co-founder Brandon Tseng, because even when they fall short, they can help reduce the loss of life.

“In 2016 and early 2017, we had first prototypes with something like 75% reliability, something that you would never put on the market, and the DoD was saying, ‘We’re going to take it overseas and use it in combat now,’” Mr. Tseng says. When he protested that the system wasn’t ready, the military’s response was that anything was better than soldiers walking through a door and getting shot.

In a combat zone, you can’t rely on a fast, robust wireless connection to the cloud, especially now that enemies often jam wireless communications and GPS signals. During a mission, image processing and recognition must happen on the company’s drones themselves.

Photo: FarmWise’s driverless tractor, which uses AI to determine whether a plant is a food crop or a weed. (FarmWise)

Shield AI uses a small, power-efficient computer made by Nvidia and designed to run AI on devices to build a camera-equipped quadcopter drone no larger than a typical consumer model. The Nova 2 can fly long enough to enter a building and use its AI to recognize and examine dozens of hallways, stairwells, and rooms, cataloging the objects and people it sees along the way.

Meanwhile, in the city of Salinas, Calif., the birthplace of “The Grapes of Wrath” author John Steinbeck and an agricultural hub to this day, an SUV-sized robot is spending this year’s growing season raking the earth with its 12 robotic arms. Manufactured by FarmWise Labs Inc., the robot moves along celery fields like any other tractor. Under its metal shroud, it uses computer vision and an artificial-intelligence system to decide, in less than a second, whether a plant is a food crop or a weed, and directs its plow-shaped claws to spare or eradicate it accordingly.

FarmWise’s massive diesel-powered robo-weeder generates its own electricity, allowing it to carry the processing power of a veritable supercomputer: four GPUs and 16 CPUs that together consume 500 watts.

In our day-to-day lives, things like voice transcription that works regardless of whether we have a connection, or how good that connection is, could change the way we prefer to interact with our mobile devices. Getting always-available voice transcription to work on Google’s Pixel phones “took a lot of breakthroughs to work on the phone as well as on a remote server,” says Mr. Rakowski.

Google has almost limitless resources to experiment with AI in the cloud, but getting those same algorithms, for everything from voice transcription and power management to real-time translation and image processing, to work on phones required the introduction of custom microprocessors like the Pixel Neural Core, Mr. Rakowski adds.

Turning cats into pure math

What almost all edge AI systems have in common is that, as pre-trained AI, they perform only “inference,” says Dennis Laudick, vice president of marketing for AI and machine learning at Arm Holdings, which licenses chip designs to companies such as Apple, Samsung, Qualcomm, Nvidia, and others.

Broadly speaking, machine-learning AI consists of four phases, sketched in code after the list below:

  • Data is captured or collected: say, millions of images of cats.
  • Humans label the data: yes, these are pictures of cats.
  • The AI is trained on the labeled data: this process picks out the patterns that identify cats.
  • The resulting model is turned into an algorithm and deployed in software: here is a camera app for cat lovers!

(Note: If it doesn’t already exist, consider it your million-dollar idea of the day.)
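To make those four phases concrete, here is a minimal sketch in Python, assuming scikit-learn is available and using stand-in numeric feature vectors rather than actual cat photos; the structure of the pipeline, not the data, is the point.

```python
# A minimal sketch of the four phases, using scikit-learn and stand-in
# numeric "image features" instead of actual cat photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Data is captured or collected (here: fake 8-number feature vectors).
images = rng.normal(size=(1000, 8))

# 2. Humans label the data: 1 = cat, 0 = not a cat (a stand-in rule here).
labels = (images[:, 0] + images[:, 3] > 0).astype(int)

# 3. The AI is trained on the labeled data, picking out the patterns
#    that separate cats from non-cats.
model = LogisticRegression().fit(images, labels)

# 4. The trained model is deployed in software; from now on it only
#    performs inference on new, unseen inputs.
new_photo = rng.normal(size=(1, 8))
print("cat!" if model.predict(new_photo)[0] == 1 else "not a cat")
```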

The last step in the process, something like this cat-identification software, is the inference phase. The software in many smart surveillance cameras, for example, performs inference, says Eric Goodness, vice president of research at technology consultancy Gartner.

These systems can already identify how many customers are in a restaurant, whether any of them are behaving in unwanted ways, or whether the fries have been left in the deep fryer too long.

These are just mathematical functions, ones so complicated that it would take a monumental effort for humans to write them down by hand, but that machine-learning systems can create when trained on enough data.
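For a sense of what “just a mathematical function” means here, below is a toy forward pass. The weights are invented for illustration, not taken from any real model; a trained network would have millions of learned numbers, but the arithmetic, matrix multiplication plus a simple nonlinearity, is the same.

```python
import numpy as np

# Invented weights for illustration; a real model learns millions of these.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8],
               [-0.3,  0.4]])       # layer 1 weights (3x2)
b1 = np.array([0.1, -0.1, 0.05])    # layer 1 biases
W2 = np.array([[1.2, -0.7, 0.3]])   # layer 2 weights (1x3)
b2 = np.array([0.2])                # layer 2 bias

def cat_score(features):
    """The whole "model" is just arithmetic: multiply, add, squash."""
    hidden = np.maximum(0, W1 @ features + b1)  # ReLU nonlinearity
    logit = W2 @ hidden + b2
    return float(1 / (1 + np.exp(-logit)))      # sigmoid -> score in (0, 1)

print(cat_score(np.array([0.9, 0.4])))  # two stand-in image-derived features
```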

Robot traps

While all of this technology holds great promise, getting AI to work on individual devices, whether or not they can connect to the cloud, comes with a daunting set of challenges, says Elisa Bertino, professor of computer science at Purdue University.

Modern AI, which is primarily used to recognize patterns, can have difficulty handling input that falls outside the data it was trained on. Operating in the real world only makes things harder: consider the classic example of a Tesla braking when it sees a stop sign on a billboard.

Photo: A Shield AI drone uses AI to examine hallways, stairwells, and rooms while cataloging objects and people it sees in its path. (Johnny Tarabola/Shield AI)

To make edge AI systems more capable, an edge device can collect data and then partner with another, more powerful device that can integrate data from multiple sensors, says Dr. Bertino. If you wear a smartwatch with a heart-rate monitor, you’ve already witnessed this: the watch’s on-board AI pre-processes your weak heart-rate signal, then transmits that data to your smartphone, which can analyze it further, whether or not it is connected to the internet.
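A rough sketch of that division of labor, with hypothetical function names and thresholds chosen purely for illustration: the watch does cheap smoothing on-device, and the phone runs the heavier analysis.

```python
from statistics import mean

def watch_preprocess(raw_samples, window=5):
    """On-watch step: smooth the weak, noisy heart-rate signal with a
    moving average before transmitting it to the phone."""
    return [mean(raw_samples[i:i + window])
            for i in range(len(raw_samples) - window + 1)]

def phone_analyze(smoothed_samples):
    """On-phone step: heavier analysis of the pre-processed data, which
    works whether or not the phone is connected to the internet."""
    return "elevated" if mean(smoothed_samples) > 100 else "normal"

raw = [72, 180, 75, 74, 73, 76, 71, 74, 75, 73]  # noisy readings (bpm)
print(phone_analyze(watch_preprocess(raw)))      # -> "normal"
```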

The overwhelming majority of AI algorithms are still trained in the cloud. They can also be retrained there with more or newer data, allowing them to improve continuously. Eventually, says Mr. Goodness, edge AI systems will begin to learn on their own; that is, they will become powerful enough to move beyond inference, collecting data and using it to train their own algorithms.

AI that can learn on its own, without a connection to a superintelligence in the cloud, could raise legal and ethical challenges. How can a company certify an algorithm that keeps evolving in the real world for years after its initial release, asks Dr. Bertino. And in future wars, who will be willing to let their robots decide when to pull the trigger? Whoever does might end up with an advantage, but also with all the collateral damage that occurs when AI inevitably makes mistakes.


Write to Christopher Mims at [email protected]

Copyright © 2020 Dow Jones & Company, Inc. All rights reserved.

