Hyping “rising AI compute costs” is easy. Measuring AI compute efficiency is harder. This week, we break down what the latest news really tells us about AI profitability, scan the satellite internet market, dissect BitGo’s income statement, and round up new funding across the AI stack.
Is AI compute getting cheaper or not?
This week, both OpenAI and Anthropic shared new data on compute costs, and it immediately made waves. Influencers riding the “AI is a bubble” narrative rushed to declare: “See, AI compute is not getting cheaper.”
So what actually happened?
OpenAI’s CFO, Sarah Friar, published a post saying:
“Compute grew 3× year over year, or ~9.5× from 2023 to 2025… while revenue followed the same curve, growing ~3× year over year, or 10× from 2023 to 2025: $2B ARR in 2023, $6B in 2024, and $20B+ in 2025.”
At the same time, Anthropic updated its profit margin projections, according to The Information:
“Anthropic projected it would generate a 40% gross profit margin from selling AI to businesses and application developers in 2025… The lower-than-expected margin resulted from inference costs that were 23% higher than anticipated.”
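The two figures in that quote are enough for a quick sanity check: if you assume inference is the dominant cost of goods sold and revenue came in roughly on plan (both simplifying assumptions, not claims from the report), you can back out what the planned margin would have been before the 23% cost overrun.

```python
# Back-of-envelope check on the reported margin math. Assumes
# inference is the only COGS and revenue matched plan -- both
# simplifications, used only to illustrate the relationship.

actual_margin = 0.40   # reported gross margin
cost_overrun = 1.23    # inference costs vs. plan

# Gross margin = 1 - cost_ratio, so actual costs ate 60% of revenue.
# Dividing out the overrun recovers the planned cost ratio.
actual_cost_ratio = 1 - actual_margin               # 0.60
planned_cost_ratio = actual_cost_ratio / cost_overrun
planned_margin = 1 - planned_cost_ratio

print(f"implied planned gross margin: {planned_margin:.0%}")  # ~51%
```

In other words, under these assumptions a 23% inference overrun is enough to drag a roughly 50% planned margin down to the reported 40% — the margin miss and the cost miss are consistent with each other.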
Yes — overall costs for AI labs are rising. But none of this means that AI compute cost per useful output (usable tokens at target quality, decisions completed, workflows automated) is getting higher.
The mistake critics keep making is collapsing everything into the “cost of running a GPU per hour.” That is far from the right metric for AI efficiency. ChatGPT summed it up with a cute analogy: “pricing AI by GPU-hours is like pricing airlines by engine-hours flown instead of passenger-miles.”
Cost per token is closer, but still not enough, because token economics are task-specific. The question we actually want answered is: what is the cost per completed task at target quality?
Once you frame it that way, it becomes obvious why AI software companies end up with very different gross margins. OpenEvidence recently shared it reached ~90% gross margins, which tells you something about how narrow, high-value use cases monetize compute very differently.
For multiproduct AI labs, this means more products, more monetization models. And that’s exactly what we saw this week.
OpenAI has started offering chatbot ads to dozens of advertisers. This product will be optimized for user engagement and high-volume, low-value queries — tasks people won’t pay for directly, like search. One analysis by Signull suggests that if OpenAI monetized users at Meta’s average ARPU, that alone could imply ~$57B in annual revenue.
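The $57B figure can be reverse-engineered into an implied per-user number. The user count below is an assumption for illustration (OpenAI has publicly cited weekly active user counts in this range); only the division is doing any work here.

```python
# Reverse-engineering the ~$57B ad-revenue claim into an implied
# ARPU. The user count is an assumption, not an OpenAI disclosure.

implied_revenue = 57e9   # Signull's estimate, annual
assumed_users = 800e6    # hypothetical weekly active users

implied_arpu = implied_revenue / assumed_users
print(f"implied annual ARPU: ${implied_arpu:.0f} per user")  # ~$71
```

An ARPU in the low tens of dollars per year is well within the range Meta earns in its richer markets, which is what makes the estimate directionally plausible rather than obviously aggressive.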
At the other end of the task-complexity spectrum, Sarah Friar is talking about “value sharing.” In drug discovery, she suggested OpenAI could take a license or revenue share on drugs discovered using its technology. Similar arrangements were mentioned for energy and finance.
Anthropic, meanwhile, released its new constitution for Claude. It reads as careful and expensive by design. The January 2026 Claude Constitution explicitly pushes the model toward precision, wisdom, context-sensitivity, and ethical maturity — not fast, shallow, engagement-optimized answers. That choice likely means more tokens per interaction: longer explanations, more caveats, more context gathering — and higher inference costs.
As Anthropic puts it:
“Think about what it means to have access to a brilliant friend who happens to have the knowledge of a doctor, lawyer, financial advisor… a friend who speaks frankly, engages with our problem, and knows when to refer us elsewhere.”
Claude is being optimized for precision over accessibility and engagement. That feels like a deliberate path toward higher-cost, higher-value professional and enterprise subscriptions.
OpenAI in talks with Middle Eastern sovereign wealth funds
OpenAI is in talks with sovereign wealth funds in the Middle East about a new multibillion-dollar funding round. The round is expected to be around $50 billion, though terms are still fluid and no term sheets have been signed. It’s currently expected to close in Q1.
If OpenAI does raise the full $50 billion, it would give the company a multi-year war chest, even with projected cash burn of roughly $14 billion this year. But…
The $110B question hanging over OpenAI
Elon Musk is seeking up to $110 billion from OpenAI if he prevails in his breach-of-charitable-trust lawsuit, according to a court filing submitted last Friday.
Musk’s lawyers argue that OpenAI should be required to disgorge between $65.5 billion and $109.43 billion. They also claim that Microsoft, which Musk alleges aided and abetted OpenAI’s breach of fiduciary duty, could be liable for an additional $13.3 billion to $25.06 billion.
The filing raises the stakes of the dispute significantly, turning what began as a governance fight into a potential $100-billion-plus financial overhang for the companies involved.
Sequoia breaks tradition with Anthropic bet
Anthropic is raising a new funding round at a reported $350 billion valuation, with total proceeds expected to reach $25 billion or more.
The round is notable for Sequoia Capital, which is set to make a significant first-time investment in Anthropic. If completed, the deal would make Sequoia a backer of all three frontier AI labs, a clear break from its long-standing tradition of avoiding overlapping bets in the same category.
Blue Origin targets enterprise satellite internet
Jeff Bezos’ space company Blue Origin is planning to build an ultra-fast satellite internet service aimed at large companies, data centers, and governments.
The service, called TeraWave, would put Blue Origin in competition with SpaceX’s Starlink and Amazon’s low-Earth-orbit satellite network. Blue Origin plans to launch the first TeraWave satellites in the fourth quarter of 2027.
Unlike Starlink and Amazon's LEO network, which target both individual consumers and enterprise customers, TeraWave is aimed exclusively at large customers that require especially fast and reliable internet connections.
Today, SpaceX’s Starlink is by far the leader in satellite internet, with more than 9 million customers and billions of dollars in revenue. Amazon has launched around 180 satellites and began an enterprise preview in November, but its service has yet to become widely available.
Execution risk surfaces at Thinking Machines
An exodus of staffers from Thinking Machines Lab has rattled some investors. During an all-hands meeting earlier this week, two senior researchers, Luke Metz and Sam Schoenholz, posted their resignations in the company Slack, catching employees off guard. At the same meeting, Mira Murati announced that she had fired co-founder and CTO Barret Zoph.
In total, five Thinking Machines employees left last week, four of whom joined OpenAI.
The talent raid could complicate Thinking Machines’ efforts to close an ambitious funding round that aims to value the one-year-old company at $50 billion.
Funding round-up across the AI stack
ClickHouse, a database management startup competing with Snowflake and Databricks, raised $400 million in a round led by Dragoneer Investment Group, valuing the company at $15 billion including the new capital — more than double its valuation from May.
AI medical tool OpenEvidence doubled its valuation to $12 billion. The company last reported around $100 million in revenue in 2025, with indications it has since crossed $150 million. OpenEvidence monetizes via advertising on a free product offered exclusively to verified healthcare providers.
Replit raised at a $9 billion valuation, continuing the momentum behind developer-facing AI platforms.
Zipline raised $600 million at a $7.6 billion valuation, while Temporal is reportedly eyeing a $5 billion valuation as workflow orchestration infrastructure gains renewed attention.
Baseten, which helps companies including Cursor and Notion run AI models in production, raised $300 million at a $5 billion valuation, underscoring investor appetite for inference and deployment infrastructure.
Earlier-stage rounds were just as striking. Humans&, developing so-called “human-centric” AI tools, raised $480 million in a seed round at a $4.48 billion valuation. Lightning AI, which offers tools for customizing AI models, merged with data center operator Voltage Park into a single company valued at over $2.5 billion.
On the hardware side, FuriosaAI, an AI chip designer, is raising $300–$500 million in a Series D round.
And in real-time voice AI, Deepgram raised $130 million, pushing it into unicorn territory as demand accelerates for human-to-machine voice interfaces.
