
AMI Labs Launches With $1.03 Billion Seed Round to Build World Models

Yann LeCun's AMI Labs launched and unveiled a $1.03 billion seed round to pursue world-model AI beyond text-only LLMs.

AMI Labs officially launched on March 10, 2026 and said it raised a $1.03 billion seed round at a $3.5 billion pre-money valuation to build world models, a class of AI systems aimed at learning abstract representations of the physical world rather than scaling text-only next-token prediction. AMI’s launch update makes clear why this matters for AI engineers: this is a billion-dollar bet that the next frontier after large language models is sensor-grounded, action-conditioned reasoning.

AMI is led by Yann LeCun as co-founder and chairman, with Alexandre LeBrun as CEO. The company launched with operations across Paris, New York, Montreal, and Singapore, and it is explicitly positioning itself as a long-horizon research company rather than a near-term API business.

Funding Scale

The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, with NVIDIA, Toyota Ventures, Temasek, Samsung, Sea, Mark Cuban, Eric Schmidt, Xavier Niel, Bpifrance Digital Venture, and Publicis Groupe also participating. The mix combines venture capital, hyperscale-adjacent compute interests, industrial capital, and enterprise distribution. This looks like a capital stack designed for expensive research, multimodal data partnerships, and eventual deployment in regulated or operational domains.

Technical Direction

AMI’s public technical argument is straightforward. Real-world intelligence requires models that can operate on continuous, noisy, high-dimensional sensor data, build abstract latent representations, and predict the effects of actions in that representation space. AMI’s company page describes a pipeline centered on learning from real-world sensor streams, discarding unpredictable details, and planning with action-conditioned world models. This aligns with the JEPA, or Joint Embedding Predictive Architecture, direction long associated with LeCun. For developers building robotics, industrial automation, healthcare decision support, or multimodal agents, that research direction is directly relevant to architecture choices.
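The pipeline described above, encode noisy observations into a compact latent, predict the effect of actions in that latent space, and plan by search over predicted rollouts, can be sketched in a few lines. This is a hypothetical toy, not AMI code: the class, the random linear maps standing in for learned networks, and the goal-distance planner are all illustrative assumptions meant only to show where the abstraction and the action-conditioning live.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyWorldModel:
    """Illustrative action-conditioned world model (not AMI's architecture)."""

    def __init__(self, obs_dim=16, latent_dim=4, act_dim=2):
        # Random linear maps stand in for learned encoder/dynamics networks.
        self.enc = rng.standard_normal((latent_dim, obs_dim)) * 0.1
        self.dyn_z = rng.standard_normal((latent_dim, latent_dim)) * 0.1
        self.dyn_a = rng.standard_normal((latent_dim, act_dim)) * 0.1

    def encode(self, obs):
        # Abstraction step: high-dimensional, noisy observation -> compact
        # latent; unpredictable detail is discarded by the projection.
        return self.enc @ obs

    def predict(self, z, action):
        # Action-conditioned latent dynamics: z_next = f(z, a).
        # Prediction happens in representation space, not raw sensor space.
        return np.tanh(self.dyn_z @ z + self.dyn_a @ action)

    def plan(self, obs, candidate_seqs, goal_z, horizon=3):
        # Planning by search: roll each candidate action sequence forward
        # in latent space and keep the one ending closest to the goal.
        z0 = self.encode(obs)
        best_seq, best_cost = None, np.inf
        for seq in candidate_seqs:
            z = z0
            for a in seq[:horizon]:
                z = self.predict(z, a)
            cost = np.linalg.norm(z - goal_z)
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        return best_seq

model = ToyWorldModel()
obs = rng.standard_normal(16)
goal_z = model.encode(rng.standard_normal(16))
candidates = [[rng.standard_normal(2) for _ in range(3)] for _ in range(8)]
chosen = model.plan(obs, candidates, goal_z)
```

The point of the sketch is structural: the planner never touches raw observations after encoding, which is exactly the contrast with next-token models that predict in the input space itself.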

No Product Yet

AMI did not launch a model, API, benchmark suite, or pricing page. This is a funding and research-direction announcement. As of launch, there were no public model names, context windows, benchmark scores, or API contracts. The significance is that one of the most prominent critics of LLM-only roadmaps now has enough capital to pursue an alternative at frontier scale.

Team and Research Credibility

Alongside LeCun and LeBrun, AMI named Laurent Solly as COO, Saining Xie as Chief Science Officer, Pascale Fung as Chief Research and Innovation Officer, and Michael Rabbat as VP leading world models. This is a concentrated research bench with deep ties to Meta’s FAIR ecosystem, NYU, and DeepMind. AMI has committed to publishing open research and releasing open-source code.

Competitive Positioning

AMI enters a small but increasingly well-funded category. Fei-Fei Li’s World Labs reportedly raised $1 billion, and SpAItial raised $13 million. Capital is flowing into post-LLM architectural bets tied to physical environments, multimodal perception, and planning. If you are building agents that must interact with tools, state, sensors, and the physical world, the center of gravity may shift toward architectures that resemble world models more than chat interfaces. See What Are AI Agents and How Do They Work? and Context Engineering for the patterns.

Enterprise Focus

AMI’s first publicly named partner is Nabla, the digital health company previously led by LeBrun. Healthcare, manufacturing, robotics, and industrial control share a common requirement: the model must reason over messy, stateful, real-world inputs and operate under constraints. That requirement splits engineering work into two tracks. One is the LLM application stack, built around prompting, RAG, and structured outputs. The other is the agent and embodied stack, built around world state, control loops, memory, and evaluation against outcomes. AMI launched as a global company headquartered in Europe, with Paris as a core base and a four-city footprint. For engineers hiring or deciding where to build, the map expands.

There is no SDK to test today, so the immediate impact is architectural rather than operational. If you build LLM applications, AMI does not change your stack this quarter. If you build systems that need persistent state, multimodal grounding, planning, or reliable control, track this launch closely. The practical move is to separate language generation from world interaction in your architecture. Keep your LLM layer for interfaces and abstraction. Build your state, tools, memory, and control logic so they can absorb stronger world-model components if those become usable. That design will age better than assuming next-token models are the final substrate for every agentic system.
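The separation described above can be made concrete with an interface boundary. This is a minimal sketch under stated assumptions: `WorldBackend`, `RuleBasedBackend`, and `Agent` are invented names, and the rule-based backend stands in for whatever holds world state today. The design goal is that a stronger world-model backend could later satisfy the same protocol without touching the language or prompting layer.

```python
from dataclasses import dataclass, field
from typing import Protocol

class WorldBackend(Protocol):
    """Abstract world interface; the LLM layer never sees past this."""
    def observe(self) -> dict: ...
    def act(self, action: str) -> dict: ...

@dataclass
class RuleBasedBackend:
    # Today's stand-in: explicit state plus hand-written transitions.
    # A future world-model component would implement the same protocol.
    state: dict = field(default_factory=lambda: {"position": 0})

    def observe(self) -> dict:
        return dict(self.state)

    def act(self, action: str) -> dict:
        if action == "move_right":
            self.state["position"] += 1
        elif action == "move_left":
            self.state["position"] -= 1
        return dict(self.state)

@dataclass
class Agent:
    world: WorldBackend
    memory: list = field(default_factory=list)

    def step(self, action: str) -> dict:
        # The control loop owns state, tools, and memory; an LLM layer
        # (not shown) would only be responsible for choosing `action`.
        result = self.world.act(action)
        self.memory.append((action, result))
        return result

agent = Agent(world=RuleBasedBackend())
agent.step("move_right")
agent.step("move_right")
print(agent.world.observe())  # {'position': 2}
```

Because the agent depends only on the protocol, swapping the backend is a constructor change, which is the property that lets a stack absorb world-model components if and when they become usable.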
