The most important takeaways from Google’s paper "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities"
"A lot of people make claims that lower hallucinations and increasing context windows mean that we no longer need RAG. Just plug things into long context LLMs and live life.
These people are dumb dumbs. Stay away from them, b/c their intense stupidity is transmissible."
Magic context windows would be nice. But wishing something were true does not make it true.
It's hard to overemphasize how important context window management is - regardless of context window size - and how much RAG can help address those limitations.
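Concretely, the retrieval step looks something like this - just a toy sketch, with TF-IDF standing in for a real embedding model and all the names being illustrative. The point is that only the retrieved chunks end up in the prompt, so prompt size stays bounded no matter how large the document set grows:

```python
# Minimal RAG-style retrieval sketch: score chunks against the question and
# pass only the top few into the prompt, instead of stuffing in everything.
# TF-IDF is a stand-in for a real embedding model; names are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question (stand-in for vector search)."""
    matrix = TfidfVectorizer().fit_transform(chunks + [question])
    chunk_vectors, question_vector = matrix[:-1], matrix[-1]
    scores = cosine_similarity(question_vector, chunk_vectors).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

# Only the retrieved chunks go into the prompt, so its size is bounded
# even as the underlying corpus keeps growing.
corpus = [
    "The warranty covers parts and labor for 24 months.",
    "Quarterly revenue grew 12% year over year.",
    "Support tickets are answered within two business days.",
]
question = "How long does the warranty last?"
context = "\n\n".join(retrieve_top_chunks(question, corpus, k=2))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```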
The challenge is hiding all that complexity from non-technical users without pissing them off. And when I say "pissing them off," I mean it literally - I've gotten that exact comment from multiple clients about ChatGPT's forgetfulness and Claude hitting a wall (two different consequences of alternative approaches to context window management).
So, this is definitely an area where there are opportunities to make an impact on the market. And although I have some ideas for potential mitigations in general-purpose chatbots, I wonder if moving to an agent delegation model might be the end game: let each discrete task/prompt run with just the context it needs, so context dilution never becomes a problem.
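Roughly what I have in mind by delegation - again just a sketch, assuming a generic `call_llm(prompt)` helper (hypothetical, not any specific vendor API). An orchestrator decomposes the request, hands each sub-task only the slice of context it needs, and synthesizes the per-task answers afterwards:

```python
# Agent-delegation sketch: each sub-task gets a fresh, narrow prompt instead
# of dragging the full conversation history along (the context-dilution risk).
from dataclasses import dataclass

@dataclass
class Task:
    instruction: str
    context: str  # only the material this particular task needs

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for whatever model API the orchestrator uses."""
    raise NotImplementedError

def run_delegated(tasks: list[Task]) -> list[str]:
    results = []
    for task in tasks:
        # Build a self-contained prompt per task; nothing else leaks in.
        prompt = f"{task.instruction}\n\nRelevant context:\n{task.context}"
        results.append(call_llm(prompt))
    return results

# An orchestrating agent would produce tasks like these from the user's
# request, then merge the per-task results into one response.
tasks = [
    Task("Summarize the warranty terms.", "The warranty covers parts and labor for 24 months."),
    Task("List the support response commitments.", "Support tickets are answered within two business days."),
]
```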
LOL
"A lot of people make claims that lower hallucinations and increasing context windows mean that we no longer need RAG. Just plug things into long context LLMs and live life.
These people are dumb dumbs. Stay away from them, b/c their intense stupidity is transmissible."
Magic context windows would be nice. But wishing something were true does not make it true.
It's hard to over-emphasize how important context window management is - regardless of context window size - and how much RAG can help address some of the limitations.
The challenge is hiding all that complexity from non-technical users without pissing them off. And when I say "pissing them off," I mean it literally - I've gotten that exact comment from multiple clients about ChatGPT forgetfulness and Claude hitting a wall (two different consequences of alternate context window management approaches).
So, this is definitely an area where there are opportunities to make an impact on the market. And although I have some ideas for potential mitigations in general-purpose chatbots, I wonder if moving to an agent delegation model might be the end game to ensure that AI can perform discrete tasks/prompts with just the context it needs, to avoid context dilution problems.
Spot on!
Looks familiar.
Interesting. How is the project going so far? Any interesting outcomes?