AI Hallucinations


Thinking Machines Labs Takes Aim at AI’s “Nondeterminism Problem”
A new article from Thinking Machines Labs may hint at where the record-setting startup is heading. The piece explores why LLMs remain nondeterministic even with temperature set to zero, tracing the issue to floating-point math across multiple cores. With reports that 95% of AI pilots fail, Thinking Machines seems to be signaling that unpredictability is more than a quirk; it's a core obstacle to scaling AI reliably.
Sep 15, 2025 · 2 min read
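The floating-point issue the article describes can be seen in a few lines of Python (an illustration of the general phenomenon, not code from the piece): floating-point addition is not associative, so summing the same values in a different grouping, as happens when a reduction is split across cores, can change the low-order bits of the result.

```python
# Floating-point addition is not associative: regrouping the same
# operands can change the result at the bit level.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another reduction order

print(left == right)  # False: the grouping changed the result
```

When a GPU kernel distributes a sum across cores, the reduction order can vary from run to run, which is why identical prompts at temperature zero can still yield different outputs.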


We Tried to Recreate Claude’s Hedge Fund Demo. Here’s What Actually Happened.
We tried to recreate Claude’s viral hedge fund demo using IONQ—and the results didn’t hold up. While it summarized the earnings call well, it failed to pull basic financials and ran a flawed DCF with shaky assumptions. The AI analyst hype sounds great, but real-world results still lag.
Aug 5, 2025 · 4 min read


We Tried the New ChatGPT Agent to Calculate Our AI Index — Here's What Happened
We tested OpenAI’s new GPT Agent on a real task: calculating our AI SC 15 Index. The result? Fast, accurate, and low-friction. As tools like this improve, startups built on similar workflows may struggle to keep up — a trend we explored in our recent piece on AI’s rapid pace.
Jul 26, 2025 · 2 min read


AI Hallucination Hangups - Did the Chat Just Gaslight Me?
What happens when AI systems generate plausible-sounding but completely false information? Welcome to AI hallucinations—fabricated content that's not grounded in context or world knowledge. As AI becomes more sophisticated, distinguishing accurate responses from confident fiction becomes crucial for anyone using these tools.
Jul 3, 2025 · 2 min read