Tether buys everything (incl crypto)
Do they also buy Tether?
I'll see myself out …
That's crazy man, why aren't you? The stock already doubled this year, even though it's not paying divvies.
It is paying divvies.
For the rest of your comment, I'm not sure if you're just teasing me.
You ought not to watch a stock you once sold.
So basically, Tether is just a vehicle for unregulated ETC products.
Conceptually they probably mimic something closer to bank deposits (with self-regulation, though).
They hold some risky assets but they also have some equity to absorb losses. (Don't ask me if it would pass a more stringent regulation like Basel III.)
Enough crypto already! Back to AI …
I for one am fascinated by the current narrative battles around Nvidia and AI (and Google).
Michael Burry (who cried wolf lots of times, admittedly) apparently posted his views on Nvidia on Substack a couple of days ago (Cassandra Unchained, paywalled). OK, fine. Summary according to Gemini in points 1 and 2 in the footnote.[1]
Nvidia apparently feels like they have to respond by sending a private memo to Wall Street sell-side analysts … wait, what? Why respond at all? Anyway, their rebuttal is summarized in point 3 in the footnote.[1]
Michael Burry then responds again via tweet.
Another nice account of the narrative battles, summarized by Gemini:
The author argues that despite being the world's most valuable company, NVIDIA had to issue a rebuttal to short-sellers like Michael Burry because the sustainability of its 75% gross margins is fragile.
The reasons include:
- Customer Dependency: NVIDIA's largest customers are unprofitable and rely on fickle investor capital to fund their massive hardware purchases.
- Uncertain Value Capture: It is unclear who will capture the massive value spread (e.g., a 99.9% cost reduction in an MRI scan) created by AI: the chip maker, the model maker, or the end application.
- The Real Rival: NVIDIA's true competitor is Google, which controls the entire AI stack (TPUs, Gemini, data, and distribution) and is strategically focused on driving down costs, directly conflicting with NVIDIA's strategy of maintaining high margins.
The text concludes that the current unit economics suggest AI may evolve into a commodity business where low cost ultimately wins, intensifying the conflict between the two giants' strategies.
[1] Gemini's summary:
Michael Burry's recent post on his Substack newsletter, "Cassandra Unchained," is a powerful critique of the current AI boom, arguing that NVIDIA is the "Cisco" of this cycle and that the rally is being inflated by aggressive accounting practices, particularly among NVIDIA's major cloud customers.
Here is a summary of the key points from his post:
1. NVIDIA is the New Cisco
Burry draws a direct parallel between NVIDIA's position today and Cisco's position at the peak of the dot-com bubble (1999–2000). Cisco sold the hardware ("picks and shovels") for the internet infrastructure buildout, which turned into a catastrophic supply-side glut that far outpaced real demand. He suggests the massive investment promises in AI infrastructure today ($3 trillion over the next few years) mirror that same overbuilding.
2. The Core Accounting Trick: Depreciation
Burry's central argument is not about NVIDIA's own financial fraud, but rather about the accounting methods of its customers (the "hyperscalers" like Microsoft, Meta, and Oracle).
- The Allegation: These cloud giants are systematically extending the useful life of AI chips and servers for depreciation purposes from the traditional 3 years to 5 or 6 years.
- The Impact: Since depreciation is a cost spread over time, extending the timeline artificially boosts current reported earnings for these customers, masking the true, massive cost of the AI hardware. Burry warned that companies could be overstating earnings by 20–27% if these depreciation timelines are out of sync with the rapid pace of chip cycles.
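To make the depreciation mechanics concrete, here is a toy straight-line calculation. The $30B capex figure is a made-up illustration, not a number from Burry's post or any company's filings:

```python
# Toy illustration of the depreciation argument (all figures hypothetical).
# Straight-line depreciation spreads the purchase cost evenly over the
# assumed useful life, so a longer life means a smaller annual expense.

capex = 30_000_000_000   # hypothetical AI hardware spend: $30B

dep_3yr = capex / 3      # traditional 3-year schedule -> $10B/year expense
dep_6yr = capex / 6      # extended 6-year schedule    -> $5B/year expense

# Extending the schedule halves this year's depreciation expense,
# so reported pre-tax earnings rise by the difference.
earnings_boost = dep_3yr - dep_6yr

print(f"3-year annual depreciation:    ${dep_3yr / 1e9:.0f}B")
print(f"6-year annual depreciation:    ${dep_6yr / 1e9:.0f}B")
print(f"Earnings boost from extension: ${earnings_boost / 1e9:.0f}B")
```

The hardware cost doesn't change; only when it hits the income statement does, which is exactly the mismatch Burry flags if chips actually become obsolete on the shorter cycle.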
3. Rebuttal and Counter-Attack
NVIDIA reportedly sent a memo to Wall Street analysts to push back on Burry's public criticisms. In his Substack, Burry dismissed NVIDIA's memo as "disingenuous" and attacking "straw men," arguing that NVIDIA was deflecting criticism away from his actual concerns about customer depreciation.
4. Continued Bearish Stance
Burry reaffirmed that he stands by his analysis and disclosed that he still holds put options (a bearish bet) on both NVIDIA and Palantir. He is now using his paid newsletter to argue that this AI cycle is propped up by financial and accounting scaffolding that will not age well.
IMO, it is obvious that Nvidia's margins are unsustainable.
However, they will still have a near monopoly position for a few more years where they can continue to enjoy monopoly margins. Maybe more years if intelligent robotics really comes soon.
OK. I just crashed the market: I bought NVDA. Brace! Brace!
I am no expert, but I heard that NVDA's moat is actually the quality of its compiler tools even more than its silicon design. That may, ironically, be eroded fast by AI itself.
Why? Building a competing ecosystem is a massive effort; part of it is also the network effect. How would AI help? (Genuinely curious.)
In a world where AI-written software is 10x cheaper than human-crafted software, the value of existing software drops by 10x, both in terms of the pure monetary/opportunity cost of writing a competitor and in terms of network effects.
What do I care if there's a hyper-optimized kernel for some computation on CUDA if AI can produce a competitive one for my platform anyway?
Obviously there's a pretty big qualifier there. What if we don't get to a world of 10x productivity in software development? Well, then it doesn't really matter which hardware ecosystem wins, because AI clearly won't be capturing most of the world's economic activity, let alone the value from that activity.
Huh? What's that number? For the record, AI has been doing wonders and picking up speed in diagnostics, particularly imaging triage and interpretation, but it's been around for years now, since before LLMs.
Was very busy yesterday, looked at my tickers in the evening, all very green. Thanksgiving+Santa?
It's buried in the full tweet.
- More broadly, Nvidia is unsure where value will land in the AI business. Anyone using the latest models knows how they can lower the cost of legal advice or reading an MRI. Our own work finds that reading an MRI costs $150 for a human doctor compared to $0.15 for an AI model. This 99.9% spread is the biggest we have ever seen in business. But who gets it? The chip maker? The model maker? An application maker? The doctor? The patient?
And yes, I agree, the interpretation of medical images has been a thing for years already.
I think the author wanted to make the point that whatever cost reduction future LLM applications may bring, it's unclear who reaps the profits.
I'm pretty confident that it will not be the patient.
NVDA: You need a filter, a FOMOFI …
I was just about to say "the patients"; misdiagnoses and delays are still a serious problem. But more broadly, it still looks like the only use cases we have to date are at the "a good start" and "shave some time off" level. Legal is another key one, and it was figured out within weeks of ChatGPT launching. I remember a comparison of LLMs vs junior associates; the LLMs wiped the floor with the associates for both speed and accuracy. Clearly, large, structured data sets are the LLMs' playground.
On point: cost reduction can equal higher margins, right, I get that, but is it enough? Higher productivity is not a given. Talking to my friend mentioned below, who is a lawyer, his point is that junior associates do the legwork but also get on-the-job training while at it, similar to analysts in consulting/banking, and that regardless of how fast they go through a pile of paper … it's still one pile of paper. Having more associates free won't create more piles of paper (and hence billable hours)! In the medical field, for example with digital health apps, there's a perception among people who pay doctors (and I have talked to many of them at the healthcare-system level) that, e.g., giving a doctor 15 more minutes in their day won't mean they will see more patients in that time, or do a better job at it, but rather that they will look at their phone, write on mustachianpost, etc.
/random Mirage writings
Funnily enough, among the first professions called "safe from AI" were psychology and psychotherapy. A few months back, a friend of mine who's had therapy for many years told me that the paid ChatGPT subscription he has is saving him money on therapy. I trust (because I know him) that he's sharp enough to guide the LLM to not just give him constant positive reinforcement. In a trial I did, asking ChatGPT to roast an idea I had, it needed 3 reminders to stop giving me cuddles and encouragement and really roast the scheisse out of my idea. It did a mediocre job; I have friends who do a much better job whenever I want a jolt of tough love.
Just listened to a podcast where Michael Burry did an interview with Michael Lewis:
I have to say, after listening to this I'm a little infatuated with Michael Burry, M.D. I love this guy.
Very grounded.
But to get back to your quote: financial media seems to have misreported the short position, and they probably misinterpreted Burry's 13-F filing (skip to minute 14 to learn the details). His short position (according to him) is 10 million dollars …[$]
I haven't fact-checked this myself (apparently the 13-F reporting for options is messed up), but I can totally see how a headline like "Michael Burry (of The Big Short) has a billion-dollar short position on Palantir and Nvidia" will generate more clicks than "Well, looks like Michael Burry bet $10M on Palantir and Nvidia going down over the next two years".
$ For those interested in the details:
- Palantir is trading at $200.
- Burry buys 50'000 puts on Palantir expiring in two years at strike $50. He thinks they're going to $30.
- Since each option contract is for 100 shares, the press reports this as a $200 x 50'000 x 100 = $1B short bet.
- In reality, the bet cost Burry 50'000 x $2 (the option premium) x 100 = $10M.
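The notional-vs-premium arithmetic above can be checked in a few lines. Note the $2 premium is the assumed figure from the bullets; real option quotes for a two-year deep out-of-the-money put will differ:

```python
# Reported "short bet" vs. actual capital at risk for the puts described
# above (numbers from the thread; the $2 premium is an assumption).

contracts = 50_000
shares_per_contract = 100   # standard US equity option multiplier
spot = 200.0                # Palantir share price
strike = 50.0               # put strike, two years out
premium = 2.0               # assumed price paid per option

# What the press reports: notional exposure at the current share price.
notional = contracts * shares_per_contract * spot

# What was actually paid (and the maximum possible loss on long puts).
cost = contracts * shares_per_contract * premium

print(f"Headline 'short position': ${notional / 1e9:.1f}B")
print(f"Actual premium paid:       ${cost / 1e6:.0f}M")

# If PLTR really fell to $30, each put would be worth strike - 30 = $20:
payoff = contracts * shares_per_contract * max(strike - 30.0, 0.0)
print(f"Payoff if PLTR hits $30:   ${payoff / 1e6:.0f}M")
```

So the headline number is 100x the capital actually at risk, which is exactly why the "$1B short" framing is misleading for a long put position.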
