Thinking Machines Labs Reportedly Seeks $50B Valuation — But Does It Add Up?
- Niv Nissenson
- Nov 17
- 4 min read

Thinking Machines Labs, the AI startup founded by former OpenAI CTO Mira Murati, is reportedly in talks to raise a new funding round at a valuation of around $50 billion, according to Bloomberg News; MSN has also carried the report.
The company was last valued at $12 billion in July, following a $2 billion raise, meaning this new target would represent a more than fourfold jump in valuation in just a few months. Some sources told Bloomberg the number could reach as high as $55–60 billion, though discussions remain early and terms could change.
A Company With Buzz
The funding talks come amid a period of visible churn at the company. Just weeks ago, co-founder Andrew Tulloch left to join Meta, following what we previously reported as a months-long recruitment effort by Mark Zuckerberg. Earlier this year it was reported that Tulloch's offer from Meta may have exceeded a billion dollars.
The company also launched its first product, Tinker, in October: a platform designed to help organizations fine-tune language models and adjust model behavior through a visual interface. While some viewed it as technically impressive, Tinker's release drew mixed reactions from analysts and developers, many of whom described it as "useful but incremental" rather than a breakthrough.
A Pattern Emerging
In previous coverage, TheMarketAI.com noted that Thinking Machines' published research focuses on LLM nondeterminism, which some (ourselves included) took as a signal that the company is tackling one of the most fundamental problems with LLMs. The recent sequence of events, from a high-profile co-founder exit to an arguably modest first product, reinforces the sense that this remains a research-first company.
Yet, despite limited commercial traction, the company now appears to be chasing a valuation normally reserved for market-proven leaders with established customer bases.
TheMarketAI.com Take
The rumored $50 billion valuation for Thinking Machines Labs raises questions about how the market is pricing AI ambition versus actual adoption. Initially we could wrap our heads around the $12 billion valuation based on the pedigree of the team Mira Murati assembled (see below). But while Murati's reputation and the team's research depth are undeniable, the products we've seen so far, like Tinker, don't yet justify a price tag on par with companies that have scaled real enterprise AI revenue.
It’s possible investors are betting not on what Thinking Machines is, but on what it might become: an alternative foundation model provider or agent platform to rival OpenAI and Anthropic.
Thinking Machines founding team (source: https://thinkingmachines.ai, July 2025):
Name | Bio |
--- | --- |
Alex Gartrell | Former Leader of Server Operating Systems at Meta, expert in Linux kernel, networking, and containerization. |
Alexander Kirillov | Co-creator of Advanced Voice Mode at OpenAI and Segment Anything Model (SAM) at Meta AI, previously multimodal post-training lead at OpenAI. |
Andrew Gu | Previously working on PyTorch and Llama pretraining efficiency. |
Andrew Tulloch (Chief Architect) - Now at Meta | ML systems research and engineering, previously at OpenAI and Meta. |
Barret Zoph (CTO) | Formerly VP of Research (post-training) at OpenAI. Co-creator of ChatGPT. |
Brydon Eastman | Formerly post-training research at OpenAI, specializing in human and synthetic data, model alignment and RL. |
Chih-Kuan Yeh | Previously building data for Google Gemini and Mistral AI. |
Christian Gibson | Formerly infrastructure engineer at OpenAI, focused on supercomputers used in training frontier models. |
Devendra Chaplot | Founding team member & Head of Multimodal Research at Mistral AI, co-creator of Mixtral and Pixtral. Expert in VLMs, RL, & Robotics. |
Horace He | Interested in making both researchers and GPUs happy, formerly working on PyTorch compilers at Meta, co-creator of FlexAttention/gpt-fast/torch.compile. |
Ian O'Connell | Infrastructure engineering, previously OpenAI, Netflix, Stripe. |
Jacob Menick | ML researcher, led GPT-4o-mini at OpenAI, previously contributed to the creation of ChatGPT and deep generative models at DeepMind. |
Joel Parish | Security generalist, helped ship and scale the first versions of ChatGPT at OpenAI. |
John Schulman (Chief Scientist) | Pioneer of deep reinforcement learning and creator of PPO, cofounder of OpenAI, co-led ChatGPT and OpenAI post-training team. |
Jonathan Lachman | Operations executive, former head of special projects at OpenAI and White House national security budget director. |
Joshua Gross | Built product and research infrastructure at OpenAI, shaping ChatGPT's learning systems and GPU fleet; previously on product infra at Meta. |
Kevin Button | Security engineer focused on infrastructure and data security, formerly at OpenAI. |
Kurt Shuster | Reasoning at Google DeepMind, full-stack pre-training and inference at Character.AI, and fundamental dialogue research at Meta AI. |
Kyle Luther | ML researcher, previously at OpenAI. |
Lia Guy | Previously at OpenAI and DeepMind, working on model architecture research. |
Lilian Weng | Formerly VP of Research (safety) at OpenAI. Author of Lil'Log. |
Luke Carlson | Former ML Engineer in Apple's Machine Learning Research group, designed ML frameworks for model orchestration, speech generation, private federated learning, and image diffusion. |
Luke Metz | Research scientist and engineer, previously at OpenAI and Google Brain. Co-creator of ChatGPT. |
Mario Saltarelli | Former IT and Security leader at OpenAI. |
Mark Jen | Generalist, most recently infra @ Meta. |
Mianna Chen | Previously at OpenAI and Google DeepMind. Led advanced voice mode, 4o, 4o-mini, o1-preview, and o1-mini launches. |
Mira Murati (CEO) | Former CTO of OpenAI, led OpenAI's research, product and safety. |
Myle Ott | AI researcher, founding team at Character.AI, early LLM lead at Meta, creator of FSDP and fairseq. |
Naman Goyal | Previously distributed training and scaling at FAIR and GenAI @Meta, most recently Llama pretraining. |
Nikki Sommer | Formerly VP HRBP at OpenAI and Director, HRBP at Twitter. |
Noah Shpak | ML Engineer, loves making data go vroom while GPUs go Brrr. |
Pia Santos | Executive Operations Leader, previously at OpenAI. |
Randall Lin | Previously babysitting ChatGPT at OpenAI and co-tech leading 'the Twitter algorithm' at X. |
Rowan Zellers | Formerly at OpenAI, working on realtime multimodal posttraining. |
Ruiqi Zhong | Passionate about human+AI collaboration, previously PhD at UC Berkeley, working on scalable oversight and explainability. |
Sam Schoenholz | Led the reliable scaling team and GPT-4o optimization at OpenAI. Previously worked at the intersection between Statistical Physics & ML at Google Brain. |
Sam Shleifer | Research engineer specializing in inference, previously at Character.AI, Google DeepMind, FAIR, HuggingFace. |
Saurabh Garg | Researcher, formerly working on all things multimodal at Mistral AI. Deep into the magic of pretraining data and loving every byte of it! |
Shaojie Bai | Avid ML researcher to make audio-visual models better and faster, previously at Meta. |
Stephen Roller | Previously full-stack pre-training at DeepMind, Character.AI, and Meta AI. |
Yifu Wang | Passionate about novel ways of overlapping/fusing GPU compute and communication. Formerly PyTorch @ Meta. |
Yinghai Lu | ML system engineer, formerly led various inference efforts at OpenAI and Meta. |


