Cisco Backs World Labs, Fei-Fei Li’s Spatial Intelligence Startup — Signaling AI’s Shift Beyond Language
- Niv Nissenson
- Nov 24
- 2 min read
Cisco (Nasdaq: CSCO) has invested in World Labs Technologies, the spatial-intelligence startup founded by Dr. Fei-Fei Li, an influential figure in computer vision, according to a press release from Cisco. The investment marks World Labs’ largest strategic financing to date and highlights an accelerating trend in the AI industry: the move from text-based AI to full 3D world understanding.
World Labs is developing Large World Models (LWMs) — multimodal systems designed to perceive, reason, and act within 3D environments. In other words, the goal is to move from AI that understands words to AI that understands worlds.
Cisco views this technology as the next major platform shift in AI. As we’ve covered before, physical AI is probably the most complicated category of AI to build.
“The next great platform evolution in AI will be built around spatial intelligence,” said Jeetu Patel, Cisco’s Chief Product Officer.
A16z’s Martin Casado echoed the sentiment, calling the transition from “linguistic intelligence to spatial intelligence” the next frontier of AI.
For World Labs, Cisco brings what Dr. Li describes as “secure, scalable infrastructure”—a critical component for deploying physical AI safely in enterprise and industrial environments.
Why Spatial Intelligence Matters
Spatial AI enables machines to:
- Understand 3D environments
- Navigate physical space
- Manipulate objects
- Simulate real-world dynamics
- Reason about geometry, physics, and long-horizon tasks
This is essential for robotics, autonomous operations, AR/VR, manufacturing, logistics, and agentic systems that need to interact with the real world rather than just generate language. In many ways, spatial AI is to the physical world what LLMs are to text.
Cisco’s stock has performed well this year (up 32%) and has reached a market cap of roughly $300Bn, so the company is in a strong position to invest.
Dr. Fei-Fei Li is a leading AI and computer vision pioneer best known for creating ImageNet, the dataset that helped ignite the deep-learning revolution. A Stanford professor and co-director of Stanford’s Institute for Human-Centered AI, she previously served as Chief Scientist of AI/ML at Google Cloud. Li has published extensively across top venues, advanced work in spatial intelligence and AI in healthcare, and co-founded AI4ALL to expand diversity in AI. Her contributions have earned her major honors, including election to the NAE, NAM, AAAS, and the Queen Elizabeth Prize for Engineering (source: Wikipedia).
TheMarketAI.com Take
We may be watching the early stages of AI’s next major advance: moving from language-first models to spatial and multimodal intelligence. LLMs unlocked enormous value, but they remain grounded in text. Spatial and multimodal AI unlock the ability for systems to understand and operate within physical environments.
If the last decade was about teaching AI to read and write, the next decade may be about teaching AI to see, navigate, and operate.




