Discussion about this post

Filippo Marino

This is an exceptionally thought-provoking article that finally breaks free from the repetitive tropes of the AI debate. The hysteria surrounding LLM hallucinations – and the poorly framed X-risk scenarios they inspire – stems largely from uninformed assumptions about human cognition.

Whether or not a generative AI can actually ‘reason’ or is capable of original insight is an interesting question – even more so because it ironically suggests we assume those are dominant traits in human behavior. The bulk of our discourse, knowledge, and choices are far from being rational or original, let alone capable of passing any of the tests being devised to determine the reasoning competencies of LLMs. https://youtu.be/yVpsTa_dnSc?si=pBiFJ5h6ZTwZ78Y-

The same formula that prompts AI fearmongering (natural language + generative mechanisms) offers incredible potential for augmenting human capacity – and that's why the partnership model holds such power.

Consider risk judgment and decision-making, a domain increasingly relevant since COVID-19 and now the AI risk debate. Humans often flock to 'rationales' that are actually creative writing masquerading as critical thought – lacking symbolic logic or formal decision-support frameworks. (Here's a perfect example: https://unchartedterritories.tomaspueyo.com/p/openai-and-the-biggest-threat-in)

This is where an AI partner excels. It offers superior statistical and probabilistic skills (a.k.a. risk numeracy) and can model and learn from specialized domain expertise. This creates a powerful force multiplier.

Well done!

Logan Thorneloe

You're killing it with finding great guests for posts, Devansh. Keep 'em coming.

