The different future of AI

Does the future belong to a few all-purpose AI agents that roam the digital world on our behalf, successors to the ChatGPTs, Claudes and Groks of today that try to handle whatever task is thrown at them? Or will it be populated by many specialised digital assistants, each taking on a narrow task and called upon only when needed?

Some mix of the two seems likely, but the sheer speed of change has led even the field's leaders to admit they have little idea how things will look in a year or two.

There have been plenty of encouraging developments for backers of the idea of "one AI to rule them all". OpenAI, for example, added a shopping feature to ChatGPT this week that points to how personalised AI agents could reorder the economics of ecommerce. Using a chatbot to research products and get purchase recommendations from a single query could disrupt the entire "funnel" that brands rely on to turn browsers into buyers, putting OpenAI squarely in the centre.

Advances like these may attract the most attention, but behind the scenes a new generation of more specialised agents is starting to take shape. These promise to be narrowly targeted and, a key consideration, much cheaper to build and run.

Meta's LlamaCon developer event this week offered a glimpse of the state of play. The social networking company has staked its position on the adaptability of its "open weight" Llama models, a limited form of open-sourcing that lets others use and adapt the models even if they cannot see exactly how they were trained.

One sign of the traction Meta has gained in the broader tech world is the number of times its "open" Llama models have been downloaded in their first two years. The great majority of these downloads involve versions of Llama that other developers have adapted for particular purposes and that anyone can then download.

The techniques for turning these open weight models into useful tools are evolving rapidly. Distillation, for example, teaching smaller models some of the intelligence of much larger ones, has become a common technique. Companies with "closed" models, such as OpenAI, reserve the right to decide how, and by whom, their models can be distilled. In an open weight world, by contrast, developers can adapt the models as they see fit.
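
For readers who want to see the mechanics, here is a minimal sketch of distillation, assuming PyTorch; the tiny model sizes, the batch of random token ids and the temperature value are illustrative stand-ins, not anything a real lab would ship.

```python
# Minimal distillation sketch (assumes PyTorch). A small "student" is
# trained to match the output distribution of a larger, frozen "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, TEACHER_DIM, STUDENT_DIM = 1000, 512, 128  # illustrative sizes
teacher = nn.Sequential(nn.Embedding(VOCAB, TEACHER_DIM), nn.Linear(TEACHER_DIM, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, STUDENT_DIM), nn.Linear(STUDENT_DIM, VOCAB))
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

tokens = torch.randint(0, VOCAB, (8, 32))  # stand-in batch of token ids

with torch.no_grad():                      # the teacher is never updated
    teacher_logits = teacher(tokens)
student_logits = student(tokens)

T = 2.0  # temperature: softening exposes more of the teacher's distribution
optimizer.zero_grad()
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)  # standard scaling so gradient magnitudes stay comparable across T
loss.backward()
optimizer.step()
```

The point of the sketch is the asymmetry: the student only needs the teacher's output probabilities, which is why providers of closed models can restrict distillation simply by restricting access to those outputs, and why open weight models cannot.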

In recent months, interest in building more specialised models has grown as the focus of AI development has shifted away from the data-intensive (and very expensive) pre-training that gives the largest models their initial capabilities. Instead, much of the special sauce in the latest models is added in later stages: in "post-training", which often uses a technique called reinforcement learning to shape the results, and in the so-called test-time phase, when reasoning models apply extra computation to work through a problem.

Databricks chief executive Ali Ghodsi said one powerful form of post-training involves using a company's proprietary data to shape a model during its reinforcement learning phase, making it more reliable for business uses. Speaking at Meta's event, he said this was only possible with open models.
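
As a rough idea of what that reinforcement learning phase involves, the sketch below (again assuming PyTorch) scores a toy model's sampled outputs with a hand-written reward function standing in for a company's proprietary feedback data, then applies a REINFORCE-style policy-gradient update; every name, size and reward rule here is hypothetical.

```python
# Sketch of reinforcement-learning-style post-training (assumes PyTorch).
# A hand-written reward stands in for signals derived from proprietary data.
import torch
import torch.nn as nn

VOCAB = 100
policy = nn.Linear(16, VOCAB)      # stand-in for a language model's output head
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-3)

def reward(token_ids: torch.Tensor) -> torch.Tensor:
    # Hypothetical reward rule; in practice the signal might come from a
    # company's own records, e.g. whether an answer matched a verified fact.
    return (token_ids % 2 == 0).float()

state = torch.randn(32, 16)        # stand-in batch of contexts
logits = policy(state)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()            # the model's sampled "responses"

# Policy gradient: raise the log-probability of high-reward responses.
optimizer.zero_grad()
loss = -(dist.log_prob(actions) * reward(actions)).mean()
loss.backward()
optimizer.step()
```

Because the update needs only the model's weights and a reward signal, a business can run it on its own data, which is the flexibility Ghodsi argues closed models cannot offer.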

Another favourite new trick is to combine the best parts of different open models. After DeepSeek shocked the AI world with the success of its low-cost R1 reasoning model, for example, other developers quickly worked out how to copy its reasoning "traces", the step-by-step thought patterns that show how it works through a problem, and transfer them to other models, Meta's Llama among them.
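
To show how a trace becomes training data, here is a deliberately tiny sketch, assuming PyTorch: a single hard-coded string stands in for an R1-style step-by-step trace, and a toy character-level model learns to reproduce it by next-token prediction. Real pipelines would use far larger models and far more traces, but the principle is the same.

```python
# Sketch of fine-tuning on a reasoning "trace" (assumes PyTorch).
# The hard-coded string stands in for a trace generated by another model.
import torch
import torch.nn as nn
import torch.nn.functional as F

trace = "Question: 2+3? Think: 2 plus 3 is 5. Answer: 5"
vocab = sorted(set(trace))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in trace])

model = nn.Sequential(nn.Embedding(len(vocab), 64), nn.Linear(64, len(vocab)))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Next-token prediction over the whole trace: the student is trained to
# reproduce the step-by-step reasoning, not just the final answer.
inputs, targets = ids[:-1], ids[1:]
optimizer.zero_grad()
loss = F.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
```

Training on the intermediate steps rather than only the answers is what lets one model inherit another's way of working through problems.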

These and other techniques are expected to bring a wave of specialised agents that run on cheaper hardware and consume less power.

For the model builders, meanwhile, this raises the risk of commoditisation, as cheaper alternatives undercut their most expensive, state-of-the-art models.

But as the costs of AI cascade downwards, the biggest winners of all may be the users: the companies that design and embed specialised agents in their own businesses.

Richard.waters@ft.com