We could have AI models in a couple years that hold the entire internet in their context window.
Also not sure how that would be helpful. If every prompt has to churn through all those tokens before it can predict a response, it’ll be painfully slow. Even now with llama.cpp, it’s annoying when it pauses to shift the context window and re-process the prompt.
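Rough back-of-envelope for why that would be slow: prefill work scales at least linearly with prompt length (a common estimate is ~2 × parameters FLOPs per token processed, before even counting attention's quadratic term). The model size and token counts below are illustrative assumptions, not measurements.

```python
def prefill_flops(params: float, prompt_tokens: float) -> float:
    """Common estimate: ~2 * params FLOPs per prompt token processed."""
    return 2.0 * params * prompt_tokens

params = 70e9                                  # assumed 70B-parameter model
typical = prefill_flops(params, 8_000)         # a long but normal prompt
internet = prefill_flops(params, 10e12)        # "entire internet" scale, assumed ~10T tokens

# Work grows in direct proportion to prompt length, so the huge
# context costs ~a billion times more compute before the first token.
print(f"slowdown factor: {internet / typical:.2e}")
```

So even ignoring memory bandwidth and attention's quadratic cost, an internet-sized context would multiply time-to-first-token by roughly the ratio of the prompt sizes.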
That’s a really bold claim.
Yeah, long-term memory where the AI can pull in only what it needs is the way to go.
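That "fetch only what you need" idea can be sketched as retrieval over stored conversation chunks. A minimal toy version, assuming a crude bag-of-words overlap score (a real system would use embeddings; all names here are made up for illustration):

```python
from collections import Counter

class ToyMemory:
    """Stores past conversation chunks; recalls only the most relevant."""

    def __init__(self):
        self.chunks: list[str] = []

    def add(self, text: str) -> None:
        self.chunks.append(text)

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Score each chunk by word overlap with the query, return top-k.
        q = Counter(query.lower().split())
        def score(chunk: str) -> int:
            return sum((q & Counter(chunk.lower().split())).values())
        return sorted(self.chunks, key=score, reverse=True)[:k]

mem = ToyMemory()
mem.add("user prefers concise answers")
mem.add("we discussed llama.cpp context shifting earlier")
mem.add("user favorite color is green")

# Only the relevant memory gets prepended to the prompt, not everything.
results = mem.recall("what did we say about llama.cpp context shifting")
print(results[0])
```

The point is that the prompt only ever carries the few retrieved chunks, so prefill cost stays constant no matter how much history accumulates.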
For now, I’d be happy with an AI that had access to and remembered the beginning of our conversation.