I think what you are calling "delusional intelligence" is not that at all - it's more of a reflection that our brains learn in ways that we don't understand ourselves and can't model accurately because we don't "know how we know". Kathy Sierra calls this "perceptual knowledge" in her marvelous book "Badass: Making Users Awesome", and that's probably as good an explanation as any.
Will look into this. Thanks
Badass is such a criminally underrated book. It's so good, as everything Kathy writes can only be.
Brilliant essay, keep it up! So much to learn and ponder.
Thank you
Hi Devansh, excellent post!
It is not about data at all - it is about experiencing the world physically, via a body, and learning directly, interactively, continuously. Core intelligence needs no explicit computation using symbols, which means it doesn't need math, language or other symbols.
In my piece on why AGI is never coming, I said something similar: Intelligence creates Data, not the other way round.
Nice! Exactly - I joke that DATA is DOA :)
Here's an interesting observation a friend just made, very much in the middle of all this: an LLM will read the entire internet (or as much of it as can be included in the pre-training data), and from what I understand, there is little or no weighting given to the importance of stuff like references (EG, Wikipedia) vs just telling the thing to decide on its own how to interpret the importance of the sources.
Initially, I saw this as a major flaw in the system - EG, why would a random YouTube comment have the same weight as a Wikipedia entry?
But now, I'm not so sure.
This is an idea called hierarchical embeddings, which I talked about before: teaching your AI to discount certain inputs and prioritize others.
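A minimal sketch of what that source weighting could look like. The trust weights, the `SOURCE_WEIGHTS` table, and the `weighted_nll` helper are all illustrative assumptions, not from any real training pipeline; the point is just that a per-source multiplier on the loss lets a Wikipedia entry count more than a random YouTube comment:

```python
import math

# Hypothetical per-source trust weights (illustrative values only):
# reference material gets boosted, anonymous comments get discounted.
SOURCE_WEIGHTS = {
    "wikipedia": 2.0,
    "news": 1.0,
    "youtube_comment": 0.2,
}

def weighted_nll(examples):
    """Average negative log-likelihood, with each example scaled by the
    trust weight of the source it was drawn from."""
    total, norm = 0.0, 0.0
    for prob_of_correct_token, source in examples:
        w = SOURCE_WEIGHTS.get(source, 1.0)  # unknown sources get weight 1.0
        total += w * -math.log(prob_of_correct_token)
        norm += w
    return total / norm

batch = [
    (0.9, "wikipedia"),        # model is confident and right; high-trust source
    (0.5, "youtube_comment"),  # model is unsure; low-trust source
]
print(round(weighted_nll(batch), 4))  # → 0.1588
```

Note that current LLM pre-training mostly does not do this per-example weighting; at best, datasets are filtered or up-sampled at the corpus level, which is part of why the observation above is interesting.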
I remember two biases from recent literature: data collection bias and representation bias.
We train our systems on these biases, which amplifies them.
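A toy illustration of that amplification, with entirely made-up data: the collected sample over-represents group "A" (representation bias), and even the simplest possible "model", a majority-vote classifier, turns a 70/30 skew in the data into a 100/0 skew in its predictions:

```python
from collections import Counter

# Hypothetical populations (illustrative only):
true_population = ["A"] * 50 + ["B"] * 50   # reality: an even 50/50 split
collected_sample = ["A"] * 70 + ["B"] * 30  # biased collection: 70/30

def train_majority_model(data):
    """The simplest possible classifier: always predict the most common label."""
    return Counter(data).most_common(1)[0][0]

model = train_majority_model(collected_sample)
predictions = [model for _ in true_population]
print(Counter(predictions))  # every prediction is "A": 70% in the data became 100% in the output
```

Real models are more subtle than a majority vote, but the mechanism is the same: a skew in what gets collected becomes a stronger skew in what gets predicted.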
True
Whoa, I'm definitely not anxious to follow this thread. The idea that objective measures of productivity shouldn't be valued in the workplace seems bonkers.
That's not what's being said at all. Just because data has limitations doesn't mean you don't try to use it. It's about acknowledging the limitations and understanding when something is appropriate and when it's not.
You're playing games with data. Who's claiming data has no limitations?