16 Comments
Feb 10 (edited May 22) · Liked by Devansh

I think what you are calling "delusional intelligence" is not that at all - it's more of a reflection that our brains learn in ways that we don't understand ourselves and can't model accurately because we don't "know how we know". Kathy Sierra calls this "perceptual knowledge" in her marvelous book "Badass: Making Users Awesome", and that's probably as good an explanation as any.

author

Will look into this. Thanks


Badass is such a criminally underrated book. It's so good, as everything Kathy writes can only be.

Feb 15 · Liked by Devansh

Brilliant essay, keep it up!!! So much to learn and ponder.

author

Thank you

Feb 10 (edited Feb 10) · Liked by Devansh

Hi Devansh, excellent post!

It is not about data at all - it is about experiencing the world physically, via a body, and learning directly, interactively, continuously. Core intelligence needs no explicit computation using symbols, which means it doesn't need math, language or other symbols.

author

In my piece on why AGI is never coming, I said something similar: Intelligence creates Data, not the other way around.


Nice! Exactly - I joke that DATA is DOA :)


Here's an interesting observation a friend just made, very much in the middle of all this: an LLM will read the entire internet (or as much of it as can be included in the pre-training data), and from what I understand, little or no extra weight is given to authoritative references (e.g., Wikipedia) versus just letting the model decide on its own how to interpret the importance of the sources.

Initially, I saw this as a major flaw in the system: why would a random YouTube comment have the same weight as a Wikipedia entry?

But now, I'm not so sure.

author

This is an idea called hierarchical embeddings, which I talked about before: teaching your AI to discount certain inputs and prioritize others.
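To make the idea concrete, here is a minimal sketch of discounting some inputs and prioritizing others. The source names, trust weights, and function names are all illustrative assumptions of mine, not from the article or this thread:

```python
# Hypothetical source-weighted embedding scheme: each snippet's
# embedding is scaled by a trust weight for its source, so
# low-trust inputs contribute less to any downstream aggregate.
# Weights and source names below are made up for illustration.

SOURCE_WEIGHTS = {
    "encyclopedia": 1.0,    # heavily curated reference material
    "news": 0.7,
    "forum_comment": 0.2,   # discounted, but not ignored entirely
}

def weight_embedding(embedding, source):
    """Scale an embedding vector by its source's trust weight."""
    w = SOURCE_WEIGHTS.get(source, 0.5)  # default for unknown sources
    return [w * x for x in embedding]

def pooled_embedding(snippets):
    """Average the weighted embeddings of (vector, source) pairs."""
    vecs = [weight_embedding(e, s) for e, s in snippets]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

With this scheme, a forum comment and an encyclopedia entry both reach the model, but the pooled representation leans toward the higher-trust source.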


I remember two biases, data collection bias and representation bias, from recent literature.

We train our systems on data carrying these biases, which amplifies them.
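As a toy illustration of the amplification point (my own sketch, not from the thread): if a model's outputs influence which data gets collected next, an initial skew can compound round after round. The boost factor and shares below are made-up parameters:

```python
# Toy feedback loop: the over-represented class is collected
# `boost` times more often each round (because the deployed model
# favors it), then shares are renormalized. The initial skew
# compounds rather than washing out.

def amplification_loop(positive_share, boost=1.5, rounds=3):
    """Return the over-represented class's share after each round."""
    shares = [positive_share]
    for _ in range(rounds):
        boosted = shares[-1] * boost
        shares.append(boosted / (boosted + (1 - shares[-1])))
    return shares
```

Starting from a 60/40 split, the majority class's share grows every round, which is the amplification effect in miniature.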

author

True


Whoa, I definitely am not anxious to follow this thread. The idea that objective measures of productivity shouldn't be valued in the workplace seems bonkers.

author

That's not what was said at all. Just because data has limitations doesn't mean you don't try to use it. It's about acknowledging the limitations and understanding when something is appropriate and when it's not.


You're playing games with data. Who's claiming data has no limitations?
