What’s After Generative AI? Interactive AI, According to DeepMind Co-Founder

There’s no doubt that AI has started to move very, very quickly. But it was a slow build: decades of research brought about the first wave of artificial intelligence, with deep learning systems that could classify information.

That led to the current iteration of the technology – generative AI, with text applications such as ChatGPT, Claude and Bard, as well as video and image applications such as Runway, DALL-E and Midjourney.

But, what’s next?

DeepMind’s Mustafa Suleyman on Interactive AI

If you ask Mustafa Suleyman, co-founder of Google DeepMind, the next wave is all about interaction.

In a new interview with MIT, Suleyman says: “The third wave will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI. And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs.”
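
To make that concrete, here is a minimal, hypothetical sketch of the pattern Suleyman is describing: a user states a high-level goal in conversation, and an agent loop picks from a set of tools to act on it. None of the names or tools below come from Inflection or DeepMind; the planner and tools are stubs standing in for a language model and real services.

```python
# Hypothetical sketch of the "interactive" pattern described above:
# the user gives a high-level goal, and an agent loop chooses tools to act on it.
# The planner and tools are stubs; a real system would call an LLM and real services.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


def search_web(query: str) -> str:
    # Stub: a real agent would call a search API here.
    return f"(stub) top results for '{query}'"


def send_message(text: str) -> str:
    # Stub: a real agent might message a person or another AI here.
    return f"(stub) sent: '{text}'"


TOOLS = {
    "search": Tool("search", "Look up information", search_web),
    "message": Tool("message", "Talk to people or other AIs", send_message),
}


def plan_next_step(goal: str, history: List[str]) -> Optional[Tuple[str, str]]:
    # Stub planner: a real system would ask a language model to pick the next
    # tool and its input, given the goal and everything done so far.
    if not history:
        return ("search", goal)
    if len(history) == 1:
        return ("message", f"Here's what I found about: {goal}")
    return None  # Goal considered complete.


def run_agent(goal: str) -> List[str]:
    """Plan a step, run the chosen tool, record the result, repeat until done."""
    history: List[str] = []
    step = plan_next_step(goal, history)
    while step is not None:
        tool_name, tool_input = step
        history.append(f"{tool_name}: {TOOLS[tool_name].run(tool_input)}")
        step = plan_next_step(goal, history)
    return history


if __name__ == "__main__":
    for line in run_agent("plan a weekend trip to Lisbon"):
        print(line)
```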

And Suleyman’s new company is gearing up for that future. His startup, Inflection, has launched with AI talent from DeepMind and OpenAI, and “one of the biggest stockpiles of specialized AI hardware in the world.”

Automated, interactive artificial intelligence is the kind of AI that fuels anti-AI sentiment.

Suleyman counters that, saying the key is “setting boundaries, limits that an AI can’t cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs—or with humans—to the motivations and incentives of the companies creating the technology.”

You can read the full interview with Suleyman over at MIT’s Technology Review.
