AI Is Forever Changing How We Interact with Our Devices

The world is abuzz with excitement — plus a healthy dose of skepticism and anxiety — over the potential for artificial intelligence (AI) to reshape just about everything, including how we interact with our phones and other electronic devices.

"We're entering the age of generative AI, and on-device generative AI has the potential to profoundly impact how we interact with our devices," said Cristiano Amon, president and CEO of Qualcomm, the chipmaker known for its Snapdragon line of processors found in today's most popular smartphones, and owner of the aptX Bluetooth codec and related technologies such as aptX HD, aptX Adaptive, and aptX Lossless.

"Running AI pervasively and continually on the device will transform our user experience, making it more natural, intuitive, relevant, and personal, with increased immediacy, privacy, and security."

The comments were made in a Consumer Technology Association (CTA) news release announcing Amon as a keynote speaker at the upcoming CES 2024 and reiterated at the recent Snapdragon Summit 2023 where Qualcomm showcased its next-generation technology, including super-responsive chat assistants for Windows 11 laptops and advanced AI-powered noise cancellation for wireless earbuds and other products.

“On-device generative AI will play a critical role in delivering powerful, fast, personal, efficient, secure and highly optimized experiences,” he said, adding, “You will see generative AI going virtually everywhere that Snapdragon goes.”

During an interview at the Bloomberg Technology Summit in June, Amon was more specific in addressing AI-powered applications that go well beyond the phone.

“We have a lot of AI in the car today for assisted driving and autonomy. Just to give an example, GM’s Super Cruise and Ultra Cruise [systems] are running on a Qualcomm AI processor. The car needs to make decisions that are very context-related in real time. The sensors in the car for assisted driving (AD) see an image and need to make a [split-second] decision. That computation needs to happen locally [not in the cloud].

“The reason Qualcomm became successful in AD is because you can’t put a server in the trunk of the car, especially an EV, because it will take away from its driving range. But now that computing power can be used for a large language model in the car. This model can be as big as the model in the cloud and you now have real contextual information.”

Recalling the 1980s TV series Knight Rider he watched as a kid, Amon said we’ll talk to our cars in the not-too-distant future like Michael Knight did with KITT, his heavily modified, computer-controlled 1982 Trans Am.

“You will give a very complex instruction to the car: ‘I want to go home and, on the way home, I want to stop here and I want to order this and pick it up.’ Those instructions are going to be closely related to the information you have. It doesn't mean you're separating the car from the cloud — they’re all combined — but AI in the car is going to make a huge difference for everything that is contextually rich for that moment.”

Asked when we might see such AI-assisted interaction in our cars, Amon said, “It’s very difficult to make a prediction but [broadly speaking] I’m an optimist so I think we’re going to start to see phone applications in 2024 that are a lot richer in their ability to use AI for photography, how people share information and photos, and create content. Next year, you’re going to see a lot of productivity.”

To illustrate his point, Amon showed a demo of how in seconds a generative AI model on a phone created a unique and striking image of canals in Venice from the simple input prompt: “Masterpiece, Venice canals, 4K, sunset, clear image, award winning, hyper realistic.”

What does all this mean for home entertainment?

AI has the potential to supercharge interactions with all kinds of audio/video devices, from making music and TV content searches more relevant — something already happening — to “conversing” with our AV gear, using natural language to issue strings of commands that set up perfect movie-watching or music-listening scenarios without having to push any buttons. The possibilities are, indeed, endless.
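To make the idea concrete, here is a minimal sketch of how a single spoken request might expand into a sequence of device commands. In a real product an on-device language model would do the interpretation; simple keyword matching stands in for it here, and every device name and command string is invented for illustration.

```python
# Hypothetical sketch: a natural-language request fans out into an ordered
# list of AV device commands. Keyword matching is a stand-in for the
# on-device language model an actual system would use; all device and
# command names below are made up.

def interpret(utterance: str) -> list[str]:
    """Map a natural-language request to an ordered list of device commands."""
    text = utterance.lower()
    commands: list[str] = []
    if "movie" in text:
        # One phrase triggers the whole movie-night scene.
        commands += ["tv.power_on", "avr.input=hdmi1", "lights.dim=20%"]
    if "music" in text:
        commands += ["streamer.power_on", "avr.input=network"]
    if "quiet" in text or "late" in text:
        commands.append("avr.volume_limit=-30dB")
    return commands

print(interpret("Set up a movie night, but keep it quiet, it's late"))
# One sentence expands into a full command sequence:
# ['tv.power_on', 'avr.input=hdmi1', 'lights.dim=20%', 'avr.volume_limit=-30dB']
```

The point of the sketch is the shape of the interaction — one conversational sentence replacing a string of button presses — not the matching logic, which a generative model would handle far more flexibly.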

COMMENTS
barfle's picture

What I can see AI doing is generating music based on my mood at the time. It seems like overkill to use it for noise-cancelling, but maybe it could have applications in an adaptive setting.

But, man, right now it takes a TON of computing power, and it seems wasteful for stuff like that, at least to me.
