Will the stock market in 2025 continue the run from 2023 and 2024?

Do you see AI as a threat to GOOG? I’m the same, I used to use GOOG a lot, but now AI has displaced a LOT of Google searching. And only a small part of the AI I am using is with GOOG. It is split between OpenAI, Gemini, Local, Deepseek and Perplexity. As well as third party providers such as Groq, SambaNova etc.

I don’t get the trend of using LLMs rather than search engines for searches. Google search has become crappier and now requires JavaScript on the client side to run :nauseated_face: but a good search engine allows me to, in the blink of an eye:

  • choose which site I’m visiting, allowing me to select a source I trust from the onset.
  • quickly browse a text to find the exact information I am searching for.
  • immediately get some context if I want to expand my search.

Whereas LLMs give me a short text that may contain hallucinations; checking its sources takes extra steps, and even more steps if I am not happy with the sources used and want to find other ones.

Am I doing this wrong?

3 Likes

Some queries are hard to do with traditional search engines. For example, yesterday I asked: “what Japanese cartoon in the 80s had irreverent humor and had a mad inventor?” and the LLM returned the answer immediately. It also dealt with follow-up questions such as “wasn’t there some funny anecdote about the singer of the Cantonese version of the theme song?”.

LLMs can also save you time by letting you ask questions like “what is the edge length of a cube of gold weighing 1 kg?” or “what is the wavelength of a human being?”
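For what it’s worth, the first of those questions is also easy to sanity-check yourself. A quick sketch, assuming the standard density of gold (about 19.3 g/cm³):

```python
# Edge length of a cube of gold weighing 1 kg.
# Assumes gold's density is ~19.3 g/cm^3 (standard handbook value).
density = 19.3            # g/cm^3
mass = 1000.0             # g (1 kg)
volume = mass / density   # ~51.8 cm^3
edge = volume ** (1 / 3)  # cube root gives the edge length
print(f"edge ≈ {edge:.2f} cm")  # roughly 3.7 cm
```

So the LLM’s answer to that one should land near 3.7 cm either way.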

1 Like

I always ask myself the question: Can I find the answer with one Google search? If not, any AI is probably the better choice (but also because Google has become worse thanks to SEO spam).

5 Likes

Nobody knows. I would say stick to your plan and consider realizing some profits when the market is overbought.

You shouldn’t think of LLMs as a standalone thing: the “stored” knowledge is fairly limited, and the model can easily hallucinate.

Imagine your LLM has access to 1. Google (or another search engine) and 2. a “browser tool”.

Can’t the LLM issue the query, fetch the page, and do the summarization itself? IMO the result in that case is close to you doing it yourself (except the LLM can do it more efficiently; technically you could shove an entire PDF or wiki page into the context window).

It might still hallucinate, but usually with the “factual” data being available in the context the risk is really low (and LLMs are fairly good at summarization).

(This is the approach taken by SearchGPT, Kagi, Perplexity, Google AI Overviews, etc., but technically you could just hook up your own API with e.g. MCP and Claude and have your DIY LLM-powered knowledge retrieval.)
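The search-then-summarize loop described above is simple enough to sketch. Everything here is a stand-in: `search_web`, `fetch_page`, and `llm_complete` are hypothetical placeholders for whatever search API, HTTP fetcher, and LLM client you actually wire up; only the shape of the loop is the point.

```python
# Minimal sketch of the "search engine + browser tool" pattern.
# All three helpers are hypothetical stubs, not real APIs.

def search_web(query):
    # stand-in: a real version would call a search engine API
    return ["https://example.org/article"]

def fetch_page(url):
    # stand-in: a real version would fetch and strip the page to plain text
    return f"(full text of {url})"

def llm_complete(prompt):
    # stand-in: a real version would call your LLM of choice
    return "(summary grounded in the fetched text)"

def answer(query):
    urls = search_web(query)
    # Shove the fetched pages into the context window so the model
    # summarizes real text instead of relying on stored knowledge.
    context = "\n\n".join(fetch_page(u) for u in urls)
    prompt = f"Using only the sources below, answer: {query}\n\n{context}"
    return llm_complete(prompt)

print(answer("what japanese cartoon in the 80s had a mad inventor?"))
```

Because the factual text sits in the prompt, the hallucination risk drops to “did it summarize the sources faithfully”, which is a much easier problem.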

1 Like

My understanding is that an LLM that I had personally trained would be an amazing assistant, as it would search things in the pattern I want, use the specific databases I am feeding it, and display the results in the format that is fully useful to me. It’s hard to come by such a solution at low cost (though DeepSeek might provide a way to get there with its open-source code; I haven’t dug into it enough).

Absent that, I think it might help someone who’s not used to doing searches or for specific contextual things like PhilMongoose’s examples but I’m dubious that it really helps someone who knows what they want to find and what kind of sources they’ll trust.

If anybody here knows of a way that won’t break the bank to actually choose what a personal AI bot has access to (a meaningful amount of data, that is; feeding them a few documents isn’t an issue), I’d be very interested, as that’s where their real added value lies, in my opinion (and it’s a huge added value at that point).

I think NotebookLM kinda does that (the rest of the product, not the gimmicky podcast thing).

for example: I've been using NotebookLM powered by Gemini 2.0 for three projects and it is _r... | Hacker News

1 Like

You can create your own system which does the same. You basically index all the data you want and then set up the LLM so that when you make a query, it does a fuzzy search on the data and injects it into the context window so it has all the background info to hand.

The search term you want is RAG (retrieval-augmented generation). A lot of front ends such as Open WebUI have some form of this built in, and you can also build out your own using libraries such as txtai.
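The index-retrieve-inject loop described above can be sketched in a few lines. This is a toy: the scoring here is naive word overlap, whereas real setups (Open WebUI, txtai, etc.) use embeddings, but the overall shape is the same.

```python
# Toy RAG loop: index documents, retrieve the best match for a query,
# and inject it into the prompt as background context.

documents = [
    "The 2019 budget meeting notes: travel costs rose 12%.",
    "Recipe archive: sourdough needs a 24-hour cold proof.",
]

def score(query, doc):
    # crude "fuzzy search": count shared lowercase words
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=1):
    # rank all documents by overlap score, keep the top k
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how long is the sourdough cold proof", documents))
```

The resulting prompt goes to the LLM, which now has the relevant background text in its context window instead of relying on stored knowledge.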

3 Likes

I understand that the younger generations are big into “manifesting™”, applied enlightenment and other advanced radical visions. From my very limited understanding, the principle is that when you really believe you want and deserve something, and you act like you already have it or soon will, it will happen.

I clearly wasted my life :expressionless:

Well maybe the younger generation is overdoing it a bit, but I actually do believe that people who are convinced they can do something and have an optimistic outlook are more likely to achieve that goal compared to people who constantly doubt themselves, think they don’t deserve success etc.

1 Like

My experience is that when you look like you know what you are doing and you just do it, very few people will actually try to stop you. If you ask for permission beforehand, though, many of those same people will say no.

There is such a thing as overstepping one’s level of expertise/competency, looking like a fool, and incurring real reputational damage as a result, though. Young people can chalk it up to inexperience, so maybe they can hope to get away with it.

1 Like