Great article. I suppose it's worth mentioning that you seem to be implying that RAG's only weakness is pulling the wrong stuff off the internet.
LLMs will happily make stuff up even when "grounded" in retrieved context.
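To make those two failure modes concrete, here's a minimal sketch of the RAG loop being discussed. It assumes nothing about Google's or Perplexity's actual pipelines, and `generate` is a hypothetical stand-in for a real LLM call: bad content can enter at retrieval, and nothing at generation forces the model to actually use what was retrieved.

```python
# Minimal sketch of the RAG pattern under discussion, not any vendor's actual
# pipeline. `generate` is a hypothetical stand-in for a real LLM API call.
from collections import Counter
import math

DOCS = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Mount Everest is the tallest mountain above sea level.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str) -> str:
    # Failure mode 1: if the corpus (or the live web) is wrong, the model
    # gets "grounded" in garbage and faithfully repeats it.
    return max(DOCS, key=lambda doc: cosine(embed(query), embed(doc)))

def generate(prompt: str) -> str:
    # Hypothetical LLM call. Failure mode 2 lives here: nothing in the
    # architecture forces the model to use the retrieved context; it can
    # still answer from parametric memory and hallucinate.
    raise NotImplementedError("stand-in for a real LLM API call")

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```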
Good point!
Seems like Google thought they could just copy Perplexity and then realized it's a lot harder to pull off than just a UX on an LLM.
Agreed! I didn't mention Perplexity because the details of how it works aren't very clear to me. It seems like it's mostly the same things under the hood, but they're pretty cagey with the details.
Agreed. I would assume the way Perplexity fine-tunes their model, or the way they use it to create embedding vectors, is different from (and better than) what Google does.
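On the "create vectors" point, here's a toy sketch (again, not Perplexity's or Google's actual stack) showing that the choice of embedding function alone changes what gets retrieved. Both embedders are deliberately crude stand-ins for learned models, and the documents and query are made up for illustration.

```python
# Toy demonstration that the embedding function alone decides retrieval
# quality; neither embedder here resembles a production model.
from collections import Counter
import math

DOCS = [
    "a guide to fine-tuning your model",
    "new models of vintage cars",
]

def words(text: str) -> Counter:
    # Whole-word features: "finetuning" and "fine-tuning" share nothing.
    return Counter(text.lower().split())

def trigrams(text: str) -> Counter:
    # Character trigrams: "finetuning" and "fine-tuning" overlap heavily.
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

query = "finetuning models"
for embed in (words, trigrams):
    best = max(DOCS, key=lambda doc: cosine(embed(query), embed(doc)))
    print(f"{embed.__name__}: {best!r}")
# Prints:
#   words: 'new models of vintage cars'   (keyword overlap on "models" only)
#   trigrams: 'a guide to fine-tuning your model'
```

Same corpus, same query, same similarity metric; only the vectors differ, and one retriever surfaces the relevant document while the other grounds the LLM in cars.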
Great piece, loved the detailed breakdown of what hallucinations are and how they occur. I agree that combining Search and LLMs, especially when you're Google (the first and most trusted source of truth on the internet), is a particularly bad idea.
Thanks so much! Yeah I'm not sure what they were thinking. Don't break your flagship product!
A WHAT IF I’ve been playing with:
Maybe LLMs are faulty search engines. Or maybe they’re actual search engines, offering a society that has forced itself to think linearly, under the time constraints of our artificial, machine-centric reality, one last chance to expand its cognition in the way the Enlightenment (the reason behind the revolution) originally intended.
LLMs give us the superpower of time dilation: the ability to traverse and converge the web at the speed of demand. There’s no more excuse to stunt human cognition in the name of productivity.
If we open our minds to the liminal space we’re in, to the notion that this new tech isn’t for us, and that we are the pre-industrial farmers meant to usher in a new era without the luxury of having been shaped by it yet, then we’ll step aside and recognize that machine-age finding was prehistoric search. What if it’s more important that the next generation learns to navigate than that progressing technology gives us a better binary answer engine?