NVDA: You need a filter, a FOMOFI…
I was just about to say “the patients”; misdiagnoses and delays are still a serious problem. But more broadly, it still looks like the only use cases we have to date are of the “a good start” and “shave some time off” variety. Legal is another key one, and it was figured out within weeks of ChatGPT launching. I remember a comparison of LLMs vs junior associates where the LLMs wiped the floor with the associates on both speed and accuracy; clearly large, structured data sets are the LLMs’ playground.
On point: cost reduction can equal higher margins, right, I get that, but is it enough? Higher productivity is not a given. Talking to my friend mentioned below, who is a lawyer, his point is that junior associates do the legwork but also get on-the-job training while at it, similar to analysts in consulting/banking, and that regardless of who goes through a pile of paper…it’s still one pile of paper. Having more associates free won’t create more piles of paper (and hence billable hours)! In the medical field, for example with digital health apps, there’s a perception among the people who pay doctors (I have talked to many of them at the healthcare-system level) that giving a doctor 15 more minutes in their day won’t mean they see more patients in that time, or do a better job at it; rather, they’ll look at their phone, write on mustachianpost, etc.
/random Mirage writings
Funnily enough, among the first professions called “safe from AI” were psychology and psychotherapy. A few months back a friend of mine who’s had therapy for many years told me that the paid ChatGPT subscription he has is saving him money on therapy. I trust, because I know him, that he’s sharp enough to guide the LLM to not just give him constant positive reinforcement. In a trial I did, asking ChatGPT to roast an idea I had, it needed three reminders to stop giving me cuddles and encouragement and really roast the Scheisse out of my idea. It did a mediocre job; I have friends who do a much better job whenever I want a jolt of tough love.
Just listened to a podcast where Michael Burry did an interview with Michael Lewis:
I have to say, after listening to this I’m a little infatuated with M.D. Michael Burry. I love this guy.
Very grounded.
But to get back to your quote: the financial media seem to have misreported the short position, probably by misinterpreting Burry’s 13-F filing (skip to minute 14 for the details). His short position (according to him) is 10 million dollars …[$]
I haven’t fact checked this myself (apparently the 13-F reporting for options is messed up), but I can totally see how a headline like “Michael Burry (of The Big Short) has a billion short position on Palantir and Nvidia” will generate more clicks than “Well, looks like Michael Burry bet $10M on Palantir and Nvidia going down over the next two years”.
$ For those interested in the details:
- Palantir is trading at $200.
- Burry buys 50’000 Puts on Palantir expiring in two years at strike $50. He thinks they’re going to $30.
- Since each option contract is for 100 shares, the press reports this as a $200 x 50’000 x 100 = $1B short bet.
- In reality, the bet cost Burry 50’000 x $2 x 100 = $10M.
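If you want to sanity-check those numbers, here’s a quick back-of-the-envelope sketch in Python. The strike, premium, contract count and spot price are just the figures quoted in this thread (not verified against the actual filing), and the two scenario prices are the ones mentioned in the replies below:

```python
# Sanity check of the reported Burry put position.
# All inputs are figures quoted in this thread, not from the 13-F itself.
contracts = 50_000          # reported number of put contracts
shares_per_contract = 100   # standard US equity option multiplier
strike = 50                 # $ per share
premium = 2                 # $ per share, reported cost of the puts
spot = 200                  # Palantir price at the time

notional = spot * contracts * shares_per_contract    # what the press calls the "short bet"
cost = premium * contracts * shares_per_contract     # what was actually paid

def payoff(price_at_expiry):
    """Gross value of the puts if Palantir trades at this price at expiry."""
    return max(strike - price_at_expiry, 0) * contracts * shares_per_contract

print(f"Notional reported by the press: ${notional / 1e6:,.0f}M")    # ~$1,000M
print(f"Actual premium paid:            ${cost / 1e6:,.0f}M")        # ~$10M
print(f"Payoff if PLTR hits $30:        ${payoff(30) / 1e6:,.0f}M")  # ~$100M
print(f"Payoff if PLTR goes to zero:    ${payoff(0) / 1e6:,.0f}M")   # ~$250M
```

Gross payoff, that is; subtract the ~$10M premium for the net result.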
Well, there clearly is room for improvement here. I asked the free version of ChatGPT questions about private enterprise valuation for tax purposes (Kreisschreiben 28, for the geekier amongst us) and it kept hallucinating about what the circular says, coming up with quotes that I welcomed as they would have supported what I wanted (to lower my tax bill 😉), except they were nowhere to be found in the actual document or its interpretations. When confronted, it first kept making excuses about having paraphrased, and only when pushed did it admit it had made things up (“thank you for keeping a sharp eye”)…
That’s really funny, isn’t it? Clearly it’s made to behave like a human would.
The real cash-flow-generating use case will tolerate this margin of error.
(neuralink porn).
For a $100m payoff if things go his way.
Or $250m if Palantir goes to zero …
Thanks for sharing.
Burry is absolutely right when he says bitcoin is worthless. And that people don’t get what they’re buying. People may be stupid.
But this is the point. Markets and the world are driven by human beings, most of them less intelligent than this beautiful mind Burry is. If they feel something stupid is right, then you get your bitcoin at $1M.
Being right does not always make you rich.
I feel sniffing out bullshit coming from the LLM is a skill, like Google-fu. One pattern I saw in the current ChatGPT is that it throws a lot of tangentially related information at you if it can’t find an answer. That’s a good time to call it out and force it to base its interpretation on verbatim quotes from sources.
The LLMs (and ChatGPT in particular) are just so much quicker at finding you relevant sources and the relevant part within them. You just need to guide it the right way.
“The largest cumulative losses before young/start-up companies turned a profit compared with cash burn for OpenAI and Anthropic based on media reports”
(Source)
You know it is bad when the chart cries for a log scale!
Given the Anthropic/OpenAI difference I’ve recently been wondering if AI has a problem or just OpenAI has a problem.
That said, when was the last time a company reached 800M users and became a household name in 3 years?
Is this chart for real, i.e. has anybody ever done this using their own analysis?
This correlation looks uncannily good. Hard to argue for stock investments nowadays looking at this.
When you have something with better risk-adjusted returns in the long term please come and tell us.



