19 Comments

I think what you are calling "delusional intelligence" is not that at all - it's more of a reflection that our brains learn in ways that we don't understand ourselves and can't model accurately because we don't "know how we know". Kathy Sierra calls this "perceptual knowledge" in her marvelous book "Badass: Making Users Awesome", and that's probably as good an explanation as any.

Will look into this. Thanks

Badass is such a criminally underrated book. It's so good, as everything Kathy writes can only be.

Glad you resurfaced this—I’ve had quite a few cases where founders have pitched “all we have to do is get this data” for complex problems and I’m like 🤨

It’s not to say this approach can’t make incremental progress, but the idea that we’ll go from capturing effectively none of these types of implicit knowledge to “it works” is naive at best, and otherwise just a pretty big gap in their mental models of how things work

I suppose me making this comment reflects what I think for those I send/share this to, but oh well, lol

There will be a part 2 resurface, which will cover the limitations of mathematical reasoning too

Brilliant essay, keep it up!!! So much to learn and ponder

Thank you

Hi Devansh, excellent post!

It is not about data at all - it is about experiencing the world physically, via a body, and learning directly, interactively, continuously. Core intelligence needs no explicit computation using symbols, which means it doesn't need math, language or other symbols.

In my piece on why AGI is never coming, I said something similar: Intelligence creates Data, not the other way round

Nice! Exactly - I joke that DATA is DOA :)

Here's an interesting observation a friend just made, very much in the middle of all this: an LLM will read the entire internet (or as much of it as can be included in the pre-training data), and from what I understand, little or no extra weight is given to authoritative sources like references (e.g., Wikipedia); the model is simply left to decide on its own how to interpret the importance of the sources.

Initially, I saw this as a major flaw in the system - e.g., why would a random YouTube comment have the same weight as a Wikipedia entry?

But now, I'm not so sure.

This is an idea called hierarchical embeddings, which I talked about before: teaching your AI to discount certain inputs and prioritize others.
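To make the idea concrete, here is a toy sketch of source-weighted training data, where higher-trust sources contribute more to the loss than low-trust ones. The source names and weights are purely illustrative assumptions of mine, not from any real system or from the article.

```python
# Hypothetical illustration: scale each training example's loss by a
# trust weight for its source, so reference material counts for more
# than, say, a random comment. All names and numbers are made up.

SOURCE_WEIGHTS = {
    "encyclopedia": 1.0,   # high-trust reference material
    "news": 0.7,
    "forum_comment": 0.2,  # heavily discounted
}

def weighted_loss(examples):
    """Weighted average of per-example losses, scaled by source trust."""
    total, weight_sum = 0.0, 0.0
    for ex in examples:
        w = SOURCE_WEIGHTS.get(ex["source"], 0.5)  # default mid weight
        total += w * ex["loss"]
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

batch = [
    {"source": "encyclopedia", "loss": 0.2},
    {"source": "forum_comment", "loss": 0.8},
]
print(round(weighted_loss(batch), 3))  # → 0.3
```

With equal weights the average loss of this batch would be 0.5; discounting the forum comment pulls the aggregate toward the encyclopedia example's loss.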

I remember two biases from the recent literature: data collection bias and representation bias.

We train our systems on these biases, which amplifies them.
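A toy numerical sketch of the amplification point above (my own construction, not from the article or the literature the comment cites): suppose a population is 30% class B, collection bias samples B at half the rate of class A, and each generation of "data" is the previous generation's biased output.

```python
# Illustrative only: biased collection shrinks class B's apparent share,
# and re-training on each round's output compounds the shrinkage.

def collect(population_b_rate, b_sampling_rate=0.5):
    """Apparent rate of class B after biased collection."""
    b = population_b_rate * b_sampling_rate   # B sampled at half rate
    a = (1 - population_b_rate) * 1.0         # A sampled fully
    return b / (a + b)

rate = 0.30
for _ in range(3):
    rate = collect(rate)  # each round trains on the previous round's data
print(round(rate, 3))  # → 0.051
```

After three rounds the minority class's apparent share has collapsed from 30% to about 5%, which is the feedback-loop amplification the comment describes.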

True

Thank you for this insightful article on the limitations of data in AI. Your points about Cultural, Delusional, and Subjective Intelligence are particularly thought-provoking. I've recently written an article that explores potential solutions to some of these data representation challenges, particularly addressing the issue of true comprehension in AI systems.

In my piece, "The Semiotic Web: A New Vision for Meaning-Centric AI," I discuss a framework called the Tokum Framework, which aims to bridge the gap between data and meaning. This approach could potentially address the limitations you've outlined:

Cultural Intelligence: The Semiotic Web proposes a system that can unify concepts across languages and cultures, potentially capturing cultural nuances more effectively.

Delusional Intelligence: By implementing a meaning-centric architecture, we might better replicate the human ability to "fill in the blanks" and create coherent models from incomplete data.

Subjective Intelligence: The framework includes mechanisms for preserving context and individual perspectives, which could help address the issue of minority viewpoints being overlooked.

If you're interested in exploring this topic further, you can find my article here: https://medium.com/@eric_54205/the-semiotic-web-a-new-vision-for-meaning-centric-ai-040dcbce0b37

I'd love to hear your thoughts on how this approach might complement or extend the ideas you've presented.

Whoa, I'm definitely not anxious to follow this thread. The idea that objective measures of productivity shouldn't be valued in the workplace seems bonkers.

That's not said at all. Just because data has limitations doesn't mean you don't try to use it. It's about acknowledging the limitations and understanding when something is appropriate as opposed to when it's not.

You're playing games with data. Who's claiming data has no limitations?
