Summary: Valeo and NATIX Network have announced a partnership to develop one of the world’s largest open-source, multi-camera World Foundation Models (WFMs) for physical AI. By combining Valeo’s expertise in generative world modeling for autonomous driving with NATIX’s decentralized global network of 360° real-world camera data, the collaboration aims to create models that can learn, predict, and reason about real-world motion and interactions. The initiative will release models, datasets, and training tools under an open-source framework to support the wider research and developer community.
Key engineering takeaway: Moving from single front-camera perception to multi-camera, time-aware world models is a major step toward true 4D (space + time) understanding. The combination of Valeo’s VaViM and VaVAM frameworks with NATIX’s large-scale, continuously updated multi-camera dataset enables predictive, generative models that go beyond perception to anticipation and action in real-world environments.
Why it matters: World Foundation Models are emerging as a foundational technology for autonomous driving, robotics, and other “physical AI” systems, much like large language models were for text. An open, multi-camera approach trained on diverse, real-world edge cases could accelerate safer deployment of autonomy while reducing reliance on closed, OEM-only datasets. For the industry, this signals a shift toward shared, open foundations for next-generation mobility intelligence rather than siloed, proprietary AI stacks.
Valeo, a global leader in automotive technology, and NATIX Network, a global camera DePIN (decentralized physical infrastructure network), have announced a partnership to build one of the largest open-source multi-camera World Foundation Models (WFMs).
The rapid advancement of autonomous driving and robotics is creating new possibilities, driven by the growing demand for diverse, high-quality real-world data. By combining Valeo’s world-modeling expertise with NATIX’s decentralized 360° real-world data network, the partners will build an open-source world model capable of learning, predicting, and reasoning about real-world motion and interaction.
“Since our creation in 2018, Valeo’s AI research center has been at the forefront of AI research in the automotive industry, especially in the fields of assisted and autonomous driving. Our goal has always been to advance mobility intelligence safely and responsibly,” said Marc Vrecko, Chief Executive Officer of Valeo’s Brain Division. “By combining Valeo’s generative world modeling research expertise with NATIX’s global multi-camera data, we are accelerating both the quality and the accessibility of next-generation end-to-end AI models, enabling the research community to build upon strong open models.”
“WFMs are a once-in-a-generation opportunity — similar to the rise of LLMs in 2017–2020,” said Alireza Ghods, CEO and co-founder of NATIX. “The teams that build the first scalable world models will define the foundation of the next AI wave: Physical AIs. With our distributed multi-camera network, NATIX has a clear advantage of being able to move faster than large OEMs.”
A New Foundation for Open Access Real-World Modeling
To build autonomous systems that can function in the physical world, machines must learn to understand a four-dimensional (4D) environment: three dimensions of space plus time. World Foundation Models (WFMs) push the boundaries of generative AI beyond text to the real world, enabling systems to reason, predict future states, and act in physical environments.
Unlike existing perception-only models, multi-camera world models anticipate what will happen next, not just what is happening now. Grounded in continuously captured, real-world multi-camera data, the Valeo–NATIX approach enables AI to learn from true edge cases and accelerates the safe deployment of autonomous systems.
Developed under an open-source framework, the Valeo–NATIX approach will release models, datasets, and training tools openly, enabling developers to fine-tune world models and benchmark Physical AI across regions and driving conditions. This collaboration builds on Valeo’s VaViM (Video Autoregressive Model) and VaVAM (Video-Action Model), two open-source frameworks trained mainly on front-camera video, drawn in part from large-scale online datasets. NATIX complements this with its multi-camera network, which has collected over 100K hours of multi-camera driving data (600K hours of video) in seven months, and continuously captures such data from real vehicles across the US, Europe, and Asia. Extending world models from front-view to multi-camera inputs gives AI the same complete spatial perception that autonomous vehicles and robots use in practice.
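To make the front-view-to-multi-camera extension concrete, here is a minimal illustrative sketch (not Valeo or NATIX code; camera count, resolutions, and variable names are assumptions) of how surround-view driving video can be arranged as a space-plus-time tensor, and how an autoregressive world model would split it into context frames and the future frames it learns to anticipate:

```python
import numpy as np

# Illustrative sketch only: shows the shape of multi-camera, time-aware
# training data for an autoregressive world model. A 6-camera surround rig
# and 64x64 downsampled frames are assumptions for the example.
num_cameras = 6          # assumed surround-view rig (front camera would be 1)
timesteps = 8            # short clip of consecutive frames
height, width, channels = 64, 64, 3

# One training sample: cameras x time x H x W x C (space + time = "4D" input)
clip = np.random.rand(num_cameras, timesteps, height, width, channels)

# An autoregressive world model consumes frames up to time t across all
# cameras and predicts the next frame, i.e. "what happens next":
context = clip[:, :-1]   # frames 0..t-1 from every camera
target = clip[:, 1:]     # frames 1..t the model learns to anticipate

print(context.shape)     # (6, 7, 64, 64, 3)
print(target.shape)      # (6, 7, 64, 64, 3)
```

Compared with a front-camera-only setup (`num_cameras = 1`), the multi-camera layout gives the model the full spatial field of view the article describes, while the context/target split captures the shift from perception ("what is happening now") to anticipation ("what will happen next").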