Chronicles of 2025

It’s a relatively new thing, you know.

I was in D.C. in I believe the winter of 1987 and the only picture I remember – because I took it with my Yashica 35mm film camera loaded with slide film – is a homeless person sitting on a steaming manhole cover on the sidewalk with the White House in the background.
I focused on the homeless man, but the White House is clearly recognizable in the bokeh.

1 Like

I was having coffee at a nearby bakery this morning and realized just how silly this trade imbalance story is. That bakery runs a UUGE trade deficit with me. Not counting digital services (of course!) these “freeloaders” don’t consume ANYTHING that I produce. We used to call it specialization, and it used to be praised as a good thing happening to humanity. Has it stopped simply because the world is now dealing with a crazy man holding a gun, or do there actually exist sane arguments for that worldview?

P.S. I copied the above to Gemini and it actually produced some decent arguments for protectionism.

6 Likes

The two most significant arguments are

  • the orange leader knows tariffs are a great thing
  • tariffs bring in the federal revenues required to justify tax cuts for the rich

… and the beautiful thing is “they are funded by foreigners”. You’d better not argue with those superior insights, regardless of whether you work for the government or a corporation:

Hold on, you said ‘sane’…

6 Likes

Since tariffs are no longer a thing in the news, let’s go back to the market.

The stock market this year:

(stolen here)

5 Likes

Given ELV and UNH were boosted, I bought CI instead.

I also bought a bunch of Swiss stocks just as a fair warning to you all :wink:

Speaking of AI … this is a fun Twitter thread in the flavor of “Ooh, is this a hornets’ nest? Let me stir it with a stick”:

Why AI is a house of cards

TL;DR (I think):
The only thing propping up AI is venture capital (VC) funding, and eventually the VC money will dry up.
An interesting follow-on conclusion is that there are still a few companies you can go long, like $NVDA, $MSFT, $GOOG, $AMZN.

1 Like

I do think the AI industry is changing the assumptions from this Twitter thread though:

  1. Various IDEs are increasing prices to match their costs

  2. I do think models are increasingly profitable, or at least near break-even, for model hosters to sell.

So things are decreasingly dependent on the VC funding.

Now of course you still have the mismatch between opex and capex for the HW, so if the hype stops it could be painful for some HW vendors/DC builders, but they’re not the entire AI industry. Similarly, the near-infinite money going into new model development depends on the hype/growth estimates. But pure serving is increasingly survivable, I think.

Furthermore, there is a bunch of more enterprise-focused stuff like support agents. Whether companies will keep spending there, or this is just experimentation that will stop, I don’t know, but it isn’t dependent on VC spending.

Then again, who knows, this is all hard to predict. I do think there is value in AI, the real question is how much of an industry it can support.

2 Likes

The entire thread is predicated on AI having bad unit economics, but that’s mostly not the case.

  • The hardware manufacturers have great margins.
  • The public clouds appear to have great margins.
  • The frontier labs have amazing margins on the products they sell. The only reason they’re making a loss is because they’re giving out tons of service for free. Most of the free users will not convert to subscribers, but the unit economics are so good that they can be fabulously profitable with ad-based monetization instead.

And the people with horrible margins are:

  • Thin LLM wrappers with all-you-can-eat subscriptions. These companies are indeed total trash, though some of them might still do well out of being acquired or acquihired.

Now, the theory here seems to be that those horrible wrapper businesses are transitively propping up the margins of the other layers. But I don’t think that can be true; they’re just not large enough businesses, in terms of revenue or capital, to account for most of the frontier labs’ revenue. Yes, there’s some value being transferred from VCs to the frontier labs, but it’s small relative to the frontier labs’ entire businesses, let alone the public clouds’ entire compute businesses.

(Of the stocks you quoted, I am overweight on NVDA, GOOG, and MSFT, but not on AMZN. I didn’t like that stock even before AI, and even less after AWS dropped the ball with AI. If I could invest in OpenAI and Anthropic at their currently rumored valuations, I absolutely would put a six-figure sum in each. Microsoft is a poor proxy for OpenAI.)

1 Like

And model training. I bet they make money on inference, but training models has huge costs, and because there’s no monopoly they have to keep spending more and more for marginal gains.

(I think they hope that at some point it becomes winner-take-all and they no longer have to constantly retrain, or that they find a different approach that makes training less costly.)

1 Like

That is a common narrative, but I believe it’s incorrect when you look closely at the numbers.

OpenAI will be making $10–15 billion in revenue this year (their current run rate is reportedly $12B).

That revenue is being generated with models that were mostly trained last year, when OpenAI reportedly spent $4B on training. That’s not just the final successful training run, but all the research and failed experiments as well. For the few top labs, the training is getting amortized over absurd amounts of inference, so even taking into account the training costs wouldn’t move the inference margins that much.

OpenAI’s biggest cost is the revenue they’re not making by choosing not to monetize their 700M free users. The users are highly engaged, generate vast amounts of data that would be useful for ad targeting, and are often making commercially relevant queries. They’re forgoing tens of billions / year in revenue by not showing any ads. (Yet. They will, and every other frontier lab will follow.)

That seems low; I’m curious, do you have a link? (Also, the latest models are what matter: if they stop releasing new ones, they might lose the race.)

(Though ChatGPT is a bit different, since that’s the household name for LLMs.)

OpenAI is a huge property. They have the potential to monetize their user base in the future.

Just affiliate/referrals/adverts would be worth keeping the freeloaders on board.

Plus they should be able to cut costs. With the scale they have, and power-law distributions, they should be able to cache responses and serve them for a substantial class of queries.
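That power-law point can be sketched with a toy model (all numbers invented here, not OpenAI’s actual query distribution): if query popularity is roughly Zipf-distributed, caching only a small fraction of distinct queries covers a large share of total traffic.

```python
# Toy illustration (invented numbers): query popularity follows a
# Zipf-like distribution over 1M distinct queries, exponent ~1.1.
n_distinct = 1_000_000
weights = [1.0 / r**1.1 for r in range(1, n_distinct + 1)]
total = sum(weights)

# Cache only the top 1% most popular queries, then measure the share
# of total traffic those cached entries would serve.
cached = n_distinct // 100
hit_rate = sum(weights[:cached]) / total
print(f"caching top 1% of queries serves {hit_rate:.0%} of traffic")
```

With these assumed parameters the top 1% of queries covers roughly four fifths of traffic, which is the intuition behind “cache the head of the distribution”.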

Leaked OpenAI financials were widely reported on last year. Those were projections, but the training budget is the easiest part of their financials to predict. The source was The Information, which is paywalled, but has the following graph:

Their actual training compute number is $3B, but I added the research compute amortization on top as a worst case.

The number appears credible if you look at the final training-run costs for e.g. DeepSeek V1/R1 or, more recently, OpenAI’s gpt-oss-120b model (both about $5M).

Yes, but the point of comparing 2024 training costs to 2025 revenue (and inference costs) is to account for the time lag between investing and the model actually being in production use and generating money. It does not matter in a steady state, but does matter for a market growing at 3x / year for revenue and 50x / year for unit volume.
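As a back-of-the-envelope check of that lag argument (using the reported figures from this thread, which are leaked/estimated, not official):

```python
# Numbers from the thread: ~$4B 2024 training + research compute,
# ~$12B 2025 revenue run rate, revenue growing ~3x per year.
training_2024 = 4e9
revenue_2025 = 12e9

# Match training spend to the revenue of the models it produced
# (one-year lag), rather than to same-year revenue.
lagged_ratio = training_2024 / revenue_2025
print(f"2024 training as a share of 2025 revenue: {lagged_ratio:.0%}")  # 33%

# If training budgets scale with the market, the naive same-year
# comparison (2025 training vs 2025 revenue) overstates the burden
# by roughly the growth factor.
growth = 3.0
same_year_ratio = training_2024 * growth / revenue_2025
print(f"naive same-year comparison: {same_year_ratio:.0%}")  # 100%
```

In a steady state the two ratios coincide; in a market growing 3x/year, picking the wrong one changes the apparent training burden from a third of revenue to all of it.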

They are indeed the household name for consumers, but Anthropic and Google have comparable scale, just from different segments. But yeah, most other labs will be struggling to reach sufficient scale to justify expanding the frontier, they’re just hoping to keep pace at six months behind.

Of course none of this is really actionable for any of us[0], since we can’t invest in OpenAI or Anthropic. Sorry for the thread hijack. The gap between the narrative of supposedly unsustainable AI spending and the reality of lucrative margins and rocket-ship growth is just something that really bugs me, so I fall for this every time :stuck_out_tongue:

[0] I assume. Both of them do have a tiny Zurich office, so it’s not totally impossible there’s some lurkers.

4 Likes

My understanding is that frontier models have ballooning costs (each generation costs 10x more than the previous one for a 10% improvement). So I’m not sure those are comparable.

I’m curious whether there have been leaks for 2025, because at least part of the $350B spend by Meta/Microsoft/Amazon/Google this year ($400B planned next year) should be attributed to training. And both xAI and OpenAI have a $100B data center project as well.

OpenAI had multiple semi-open seed rounds, last one with Moonshot in Switzerland (RIP).

2 Likes

Uneducated take on AI, I hope someone (or more!) has ideas about it they want to discuss here.

I read that tech firms are firing loads of people. At first I heard this was due to cutting the over-recruitment from COVID, but that FEELS like a long time ago now to still be the case. I also wonder: if AI replaces a lot of jobs, especially anything to do with junior-level knowledge workers, where is the consumer spending going to come from, and will that eventually hit everyone hard, since in the end valuations (and our investments) need earnings, and earnings need spending?

If I discuss this with my left-wing friends they say “of course it’s a plan: money to the rich, shit for the non-rich. It’s what we’ve been telling you for the last 150 years; they will crash the market to buy everything on the cheap.” I feel conflicted, because I’m not conspiratorial, but I can’t readily counter the point that it looks plausible it could play out that way, either by design or serendipitously.

Another point: I feel social media algorithms are really not that smart. This is not hubris; I am aware they have some of the smartest people on earth, and huge computing power, working on them, but they are ham-handed, almost juvenile. Say I interact with a post from a friend on FB that I haven’t had any contact with for 5+ years: the algo starts bombarding me with their updates, old posts, hallucinated “memories”. Same with, say, hovering over a photo of the Alps for 0.5 seconds longer than the median; it then starts dropping Swiss rail “deals”, Alpine photo groups (are these even real people?), friend suggestions, hiking groups, and hiking gear. Whoah buddy, chill, I just looked at a picture of a nice mountain! Happy to be called out for my anecdotal n = 1 “data”!

I mean, I get it: it’s trying to learn so it can become better at influencing me, but is that really the best it can do? In discussions with friends about this, we feel that our generation, over 40 but under 55, is a bad target group: we learnt human interaction before the net, but not long before, and we followed the net’s evolution relatively closely, while people under 30 and over 60 broadly either grew up with it or learnt it fairly late. Then again, 40-55 is prime earning potential, so we should be the target group for sales, with the under-30s being the grooming group. Sounds disgusting, doesn’t it?

One of my brothers, who could be a smidge smarter than me, says “anything anyone has ever done, ever, anywhere, has been influenced by a marketer, somehow”.

2 Likes

This would pose another problem (and my layman understanding is that it is already happening): if you don’t have a large base of junior workers, you’ll lack good senior workers later on.

Junior-level knowledge workers don’t strike me as the core of the consumption base of our society? Unless we have different ideas of “knowledge workers”. Engineers aren’t going anywhere anytime soon. Coders and translators might (but that’ll give increased workload to the seniors among them, who’ll have to fix what AI breaks).

To your point, I feel the corporate world in the US is strangely (to me) not fighting much to preserve the rule of law, which I’d consider core to an ability to do business, so that intrigues me.

I tend to stay away from absolute statements that are also vague enough to make them unverifiable (“influenced”, “somehow”). :wink: Though, yes, probably. Depends how you define “marketer”. Also non actionable.

1 Like

Just saw this chart on Animal Spirits:

Guess what happened around the end of May.

College was over.
4 Likes

Yeah, I saw that going around; note that it’s not actually ChatGPT usage.

It’s usage from third-party apps that use OpenRouter as their API, which would be a tiny fraction of ChatGPT’s traffic (or even of Claude’s); it could also just be one dev switching their app to a different model.

edit: and since it’s 4o-mini, I doubt it’s students doing homework; it’s most likely an app developer that switched models or optimized (or no longer uses OpenRouter and calls a provider directly).

And if you use Anthropic’s stats you could reach the opposite conclusion and say that usage increases during holidays :slight_smile: (but anyway, I think OpenRouter is a fairly small fraction of overall traffic; it’s good for seeing which models are popular, but not overall usage)

3 Likes

Thanks for explaining (I had just seen the picture on that show, and a comment from one of the hosts that the drop-off was due to college ending).

Anecdote: my son used ChatGPT for his school homework ever since it came out. He even bought a subscription for a month or two for the final project he had to deliver to get his degree.
After getting the degree, he dropped the subscription immediately.