
Meta tried to acquire Thinking Machines Lab

  • Writer: Niv Nissenson
  • Aug 6
  • 4 min read

The tech world was abuzz last week with reports that Mark Zuckerberg had offered a staggering $1.5 billion job package to Andrew Tulloch, a top AI researcher currently at Thinking Machines Lab. The Wall Street Journal reports that this wasn't an isolated move, but part of a broader effort by Meta to acquire Thinking Machines outright.


When Thinking Machines Lab CEO Mira Murati declined Zuckerberg's acquisition attempt, Meta allegedly pivoted to poaching, attempting to lure Andrew Tulloch and likely other talent from the company. As the list below shows, many of Thinking Machines Lab's staff are ex-OpenAI engineers and represent some of the most sought-after talent in frontier AI.


Meta, for its part, has pushed back. Spokesman Andy Stone called the description of the offer “inaccurate and ridiculous,” noting that any compensation package would depend on stock performance. He also stated that Meta is not interested in acquiring Thinking Machines.


Thinking Machines Lab, which launched only earlier this year and raised a record-breaking $2 billion seed round, has yet to release a product. But in today's AI landscape, that hardly matters. It's the people building the tech, not just the tech itself, who hold the power.


Right now, AI talent is the most valuable resource in software. Full stop.

Foundation model builders, compiler engineers, and distributed-training wizards are commanding pay packages normally reserved for unicorn-startup valuations. Meta is playing hardball: allegedly offering billions, orchestrating talent raids, and treating elite AI developers as the key to long-term dominance.


While AI is often framed as a threat to jobs, it may have just triggered the largest job offer in history. Call it irony — or maybe AIrony.



Thinking Machines Lab's founding team, or Meta's "poach" list (source: https://thinkingmachines.ai):

• Alex Gartrell: Former leader of Server Operating Systems at Meta; expert in the Linux kernel, networking, and containerization.

• Alexander Kirillov: Co-creator of Advanced Voice Mode at OpenAI and the Segment Anything Model (SAM) at Meta AI; previously multimodal post-training lead at OpenAI.

• Andrew Gu: Previously worked on PyTorch and Llama pretraining efficiency.

• Andrew Tulloch (Chief Architect): ML systems research and engineering, previously at OpenAI and Meta.

• Barret Zoph (CTO): Formerly VP of Research (post-training) at OpenAI; co-creator of ChatGPT.

• Brydon Eastman: Formerly post-training research at OpenAI, specializing in human and synthetic data, model alignment, and RL.

• Chih-Kuan Yeh: Previously built data for Google Gemini and Mistral AI.

• Christian Gibson: Formerly infrastructure engineer at OpenAI, focused on the supercomputers used to train frontier models.

• Devendra Chaplot: Founding team member and Head of Multimodal Research at Mistral AI, co-creator of Mixtral and Pixtral; expert in VLMs, RL, and robotics.

• Horace He: Interested in making both researchers and GPUs happy; formerly worked on PyTorch compilers at Meta; co-creator of FlexAttention, gpt-fast, and torch.compile.

• Ian O'Connell: Infrastructure engineering, previously at OpenAI, Netflix, and Stripe.

• Jacob Menick: ML researcher who led GPT-4o mini at OpenAI; previously contributed to the creation of ChatGPT and to deep generative models at DeepMind.

• Joel Parish: Security generalist who helped ship and scale the first versions of ChatGPT at OpenAI.

• John Schulman (Chief Scientist): Pioneer of deep reinforcement learning and creator of PPO; cofounder of OpenAI; co-led ChatGPT and the OpenAI post-training team.

• Jonathan Lachman: Operations executive; former head of special projects at OpenAI and White House national security budget director.

• Joshua Gross: Built product and research infrastructure at OpenAI, shaping ChatGPT's learning systems and GPU fleet; previously on product infrastructure at Meta.

• Kevin Button: Security engineer focused on infrastructure and data security, formerly at OpenAI.

• Kurt Shuster: Reasoning at Google DeepMind, full-stack pre-training and inference at Character.AI, and fundamental dialogue research at Meta AI.

• Kyle Luther: ML researcher, previously at OpenAI.

• Lia Guy: Previously at OpenAI and DeepMind, working on model architecture research.

• Lilian Weng: Formerly VP of Research (safety) at OpenAI; author of Lil'Log.

• Luke Carlson: Former ML engineer in Apple's Machine Learning Research group; designed ML frameworks for model orchestration, speech generation, private federated learning, and image diffusion.

• Luke Metz: Research scientist and engineer, previously at OpenAI and Google Brain; co-creator of ChatGPT.

• Mario Saltarelli: Former IT and security leader at OpenAI.

• Mark Jen: Generalist, most recently infrastructure at Meta.

• Mianna Chen: Previously at OpenAI and Google DeepMind; led the Advanced Voice Mode, 4o, 4o-mini, o1-preview, and o1-mini launches.

• Mira Murati (CEO): Former CTO of OpenAI, where she led research, product, and safety.

• Myle Ott: AI researcher; founding team at Character.AI; early LLM lead at Meta; creator of FSDP and fairseq.

• Naman Goyal: Previously distributed training and scaling at FAIR and GenAI at Meta, most recently Llama pretraining.

• Nikki Sommer: Formerly VP, HRBP at OpenAI and Director, HRBP at Twitter.

• Noah Shpak: ML engineer who loves making data go vroom while GPUs go brrr.

• Pia Santos: Executive operations leader, previously at OpenAI.

• Randall Lin: Previously "babysitting ChatGPT" at OpenAI and co-tech-leading "the Twitter algorithm" at X.

• Rowan Zellers: Formerly at OpenAI, working on real-time multimodal post-training.

• Ruiqi Zhong: Passionate about human+AI collaboration; previously a PhD at UC Berkeley, working on scalable oversight and explainability.

• Sam Schoenholz: Led the reliable scaling team and GPT-4o optimization at OpenAI; previously worked at the intersection of statistical physics and ML at Google Brain.

• Sam Shleifer: Research engineer specializing in inference; previously at Character.AI, Google DeepMind, FAIR, and Hugging Face.

• Saurabh Garg: Researcher, formerly working on all things multimodal at Mistral AI; deep into the magic of pretraining data and loving every byte of it.

• Shaojie Bai: ML researcher working to make audio-visual models better and faster, previously at Meta.

• Stephen Roller: Previously full-stack pre-training at DeepMind, Character.AI, and Meta AI.

• Yifu Wang: Passionate about novel ways of overlapping and fusing GPU compute and communication; formerly PyTorch at Meta.

• Yinghai Lu: ML systems engineer, formerly led various inference efforts at OpenAI and Meta.



