
What’s next for AI in 2025


How did we do? The four hot trends to watch in 2024 included so-called customized chatbots – interactive assistant apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved as fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within days of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large foundation models continue to trickle down to other parts of the tech industry, and robotics is top of the list). We also said that AI-generated election disinformation would be everywhere, but here – happily – we were wrong. There was plenty to worry about this year, but political deepfakes were thin on the ground.

So what’s coming in 2025? We’re going to ignore the obvious here: you can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative virtual playgrounds

If 2023 was the year of generative images and 2024 was the year of generative video, what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round. We got a glimpse of this technology in February, when Google DeepMind announced a generative model called Genie that can take a still image and turn it into a side-scrolling 2D platform game that players can interact with. In December, the company announced Genie 2, a model that can turn a starting image into an entire virtual world. Other companies are building similar technology. In October, the AI startups Decart and Etched announced an unofficial Minecraft hack in which every frame of the game is generated on the fly as you play.
And World Labs, a startup cofounded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs. One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games. But it could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. Robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and putting virtual robots into them to learn by trial and error could help make up for that.
