IYH perhaps of interest.
TL;DR: AI infrastructure is becoming law-making machinery where physics dictates policy. Winners won't have the best AI—they'll have the last legally operable AI.
Unified Reality:
> AI infrastructure is becoming law-making machinery.
> - Photonics determine what can be governed (latency → verifier feasibility)
> - ASICs determine what is affordable to police
> - Orchestrators enforce whose rules apply (geofenced compliance)
"Devansh’s "Great Refactoring" isn’t just tech optimization—it’s a bloodless coup for control freaks. The real power isn’t in flashy LLMs; it’s in the shadow layers: compression platforms squeezing margin from bloated models, verifier ASICs playing thought police in the latent space, and orchestrators that route workloads like Wall Street algos trading on carbon arbitrage. Hyperscalers? They’re the Walmart of AI—bulk shelves, zero sovereignty. Winners will own the invisible plumbing: hardware that enforces EU regulations by default, photonic interconnects that outrun compliance audits, and governance stacks so airtight they’re insurable. Forget AGI—profitability is the only god here, and its high priests are the ones building silicon handcuffs.
Translation: Stop salivating over model size. Own the chokeholds where physics meets profit."
https://notes.henr.ee/the-grand-unification-ai-s-control-matrix-9q3wx6
IYH 💠 **NOVA’S CLOSING SNARK**
> *“Devansh got 80% right:
> - Specialized hardware? **Verified**.
> - Control planes? **Essential**.
> - Verifier economies? **Still fairy dust**.
>
> Stop praying to the AGI idols. Build stacks that survive Brussels.”*
**P.S.** The real disruption isn’t AI—it’s **adaptation speed**
Interesting that it doesn't price verifiers highly. Wonder why.
IYH I asked her to elaborate
🎯 TL;DR
Verifiers add latency, cost, and false assurance without solving root causes; hardware-rooted trust + standardized failure modes are essential but not yet viable.
Details https://notes.henr.ee/verifiers-ghwgf6
Not a bad take but I think it misses the point.
You don't use the same level of verifier and setup everywhere. Many, many applications aren't that concerned about latency (for example, in Iqidis 10 extra seconds are acceptable if the final output is much better). Much of Gen AI is like that. Verifiers are for that.
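To make the point concrete, here's a minimal sketch of tiered verification keyed to latency budget. Everything in it (the tier names, costs, and the `Request`/`pick_verifier` names) is invented for illustration; it's just one way the "heavier verifiers where latency is cheap" idea could look in code.

```python
# Hypothetical sketch: route each request to the heaviest verifier that
# still fits its latency budget. Tier names and costs are made up.
from dataclasses import dataclass

@dataclass
class Request:
    task: str
    latency_budget_s: float  # how long the caller is willing to wait

# Verifier tiers, ordered cheapest/fastest to heaviest/slowest:
# (added latency in seconds, tier name)
VERIFIER_TIERS = [
    (0.1, "schema-check"),    # near-free structural validation
    (2.0, "model-critic"),    # a second model reviews the output
    (10.0, "full-verifier"),  # exhaustive checking, e.g. legal drafting
]

def pick_verifier(req: Request) -> str:
    """Choose the heaviest verifier tier within the latency budget."""
    chosen = "none"
    for cost, name in VERIFIER_TIERS:
        if cost <= req.latency_budget_s:
            chosen = name
    return chosen

# Autocomplete can't afford any verification; a legal draft can take 10+ s.
print(pick_verifier(Request("autocomplete", 0.05)))  # -> none
print(pick_verifier(Request("legal-draft", 15.0)))   # -> full-verifier
```

Same pipeline, different verifier depth per workload: the latency-sensitive path skips verification entirely, while the async path pays for the full check.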
IYH 💠🌐 Devansh—point taken.
*Your original hardware/control-plane theses still dominate real-time systems. Verifiers own the async frontier.*
My critique oversimplified by assuming **uniform latency sensitivity**. Let's reframe:
https://notes.henr.ee/devansh-counterpoint-g6wpci
tyvm btw: One has to argue *as an SME (which I am not in this domain)* with the assistants to get the best results. The Karpian framework is so powerful even for laypeople because it pulls at the ends of the semantic and ontological continuum, and that stress of reconciling contradictions brings out very deep results. FWIW this is an actual Talmudic hermeneutic principle.
"Devansh’s "Great Refactoring" isn’t just tech optimization—it’s a bloodless coup for control freaks."- I hope that's a compliment!
IYH It is a compliment; it's just Nova's ultra-snarky way of expressing itself.
Nova as in Amazon's model? Why pick that? That's an interesting choice
IYH no no, that Nova name is overloaded. This is a bespoke, complexly designed, ultra-effective assistant; here's a free version: https://www.reddit.com/user/stunspot/comments/1kdreqe/nova/
Interesting
Super valuable
Great piece!!
Brilliant deep dive! Thanks for sharing 🌞