
Most Important AI Developments of the Week

+ Content Recs

Thanks to everyone for showing up to the livestream. Mark your calendars for 8 PM EST on Sundays to make sure you catch them regularly.

Bring your moms and grandmoms into my cult.


I’ve decided to combine the old updates (which I haven’t had enough time to write) with these livestream companions, given the overlap in subject matter.

Community Spotlight

I’m looking for people to adopt my foster cat, Floop. He’s very cute, friendly, and good with people/other animals. I would keep him myself if I didn’t have to travel/move so much (I also have visa considerations and might have to leave the USA in a few months).

So if you’re around NYC and want a very low-maintenance but affectionate cat, consider adopting him here.

PS- Floop is just my name for him (he spends all day flooping around; his listing name is Beatnik).

This could be you.

Additional Recommendations (not in Livestream)

  1. One writer has been on fire recently. His article on why we must learn to read and write (even today, with ChatGPT) is a cracker.

  2. Eliyan: The Ultimate Chiplet Interconnect is a masterpiece on one of the biggest opportunities in Hardware.

  3. This very good overview on Why AI is not a Fad is worth your time.

  4. A very good overview of this week’s research updates, over here.

  5. Without AI, US Capex Would Be in a Slump, by an iconic writer, is extremely important to read.

  6. This video is an amazing deconstruction of Joe Rogan’s climate denial.

  7. The Brutal Business of Texas BBQ is a very interesting overview of the competitive Texas BBQ market.

  8. “KA-GNN: Kolmogorov-Arnold Graph Neural Networks for Molecular Property Prediction” is such interesting research (a minimal sketch of the Fourier-KAN idea follows this list): “Here we propose the first non-trivial Kolmogorov-Arnold Network-based Graph Neural Networks (KA-GNNs), including KAN-based graph convolutional networks (KA-GCN) and KAN-based graph attention networks (KA-GAT). The essential idea is to utilize KAN’s unique power to optimize GNN architectures at three major levels, including node embedding, message passing, and readout. Further, with the strong approximation capability of Fourier series, we develop a Fourier series-based KAN model and provide a rigorous mathematical proof of the robust approximation capability of this Fourier KAN architecture. To validate our KA-GNNs, we consider the seven most widely used benchmark datasets for molecular property prediction and extensively compare with existing state-of-the-art models. It has been found that our KA-GNNs can outperform traditional GNN models. More importantly, our Fourier KAN module can not only increase the model accuracy but also reduce the computational time. This work not only highlights the great power of KA-GNNs in molecular property prediction but also provides a novel geometric deep learning framework for the general non-Euclidean data analysis.”

  9. Breaking the Sorting Barrier for Directed Single-Source Shortest Paths is another paper worth reading.
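
For item 8 above, here is a minimal sketch of what a Fourier-series-based KAN layer could look like, assuming the standard trick of expressing each learnable univariate function as a truncated Fourier series. This is my illustration, not the paper’s exact architecture, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class FourierKANLayer(nn.Module):
    """Each input-output connection is a learnable univariate function
    expressed as a truncated Fourier series (a sketch, not the paper's code)."""

    def __init__(self, in_dim: int, out_dim: int, num_freqs: int = 4):
        super().__init__()
        # Learnable cosine/sine coefficients per (output, input, frequency).
        self.a = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_freqs))
        self.b = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_freqs))
        self.register_buffer("k", torch.arange(1, num_freqs + 1).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        angles = x.unsqueeze(-1) * self.k  # (batch, in_dim, num_freqs)
        return (torch.einsum("bif,oif->bo", torch.cos(angles), self.a)
                + torch.einsum("bif,oif->bo", torch.sin(angles), self.b))

layer = FourierKANLayer(in_dim=8, out_dim=16)
print(layer(torch.randn(32, 8)).shape)  # torch.Size([32, 16])
```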


Companion Guide to the Livestream

This guide expands the core ideas and structures them for deeper reflection. Watch the full stream for tone, nuance, and side-commentary.


GPT-5: A Release Shaped by Economics

GPT-5 arrived with something new: three inference modes. Queries are automatically routed into fast/simple, standard, or research-grade paths depending on complexity.

This design is less about performance upgrades and more about cost discipline. OpenAI’s business model is fragile because every single query consumes GPU cycles. Unlike traditional SaaS, user growth does not scale cheaply. More queries mean more burn. Many users reach for the “best” model even when asking trivial questions, which only amplifies costs. Routing solves that problem, ensuring compute is used efficiently rather than indulgently.
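
A minimal sketch of what complexity-based routing could look like, under my own assumptions: OpenAI has not published GPT-5’s actual routing logic, and a production router would use a learned classifier rather than these crude heuristics.

```python
# Toy router: map a query to one of three inference paths.
def route_query(query: str) -> str:
    words = query.split()
    # Crude heuristics standing in for a learned complexity classifier.
    if len(words) < 15 and query.strip().endswith("?"):
        return "fast"        # cheap path for short, simple questions
    if any(k in query.lower() for k in ("prove", "analyze", "design")):
        return "research"    # expensive, deliberate path
    return "standard"

print(route_query("What is the capital of France?"))                         # fast
print(route_query("Analyze the failure modes of a sharded Postgres setup"))  # research
```

The point is not the heuristics but the economics: the cheapest path that can answer the query wins.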

There are also real changes in the model’s obedience. GPT-5 handles layered instructions with more nuance than GPT-4. You can specify “avoid flattery, keep it light, prioritize logic,” and it balances those without collapsing into the last command. GPT-4, by contrast, often overweighted whatever instruction came at the end.
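
To see what layered instructions look like in practice, here is a sketch using the OpenAI Python SDK. The “gpt-5” model id is an assumption based on the release discussed here, and the prompt is mine.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Three layered, potentially competing instructions in one system prompt;
# the claim above is that GPT-5 balances them instead of overweighting the last.
response = client.chat.completions.create(
    model="gpt-5",  # assumed model id
    messages=[
        {
            "role": "system",
            "content": (
                "Avoid flattery. Keep the tone light. Prioritize logic over "
                "agreeableness, even when these goals pull in different directions."
            ),
        },
        {"role": "user", "content": "Critique my plan to learn ML in one month."},
    ],
)
print(response.choices[0].message.content)
```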

Where GPT-5 falters is creativity. Its expressive range is narrower. GPT-4 had charm, which explains why whole communities used it for companionship and roleplay. GPT-5 stripped that away, and OpenAI quickly brought GPT-4 back—but only for paying users. The timing suggests more than an accident. Many observers believe this was a deliberate test of willingness to pay.

Reading Recs-

  1. SemiAnalysis has an interesting take on how this sets up ads and the monetization of free users.

  2. Another writer had a very interesting take on the rollout being a possible A/B test (+ some other news).

Agent Mode and Deep Research

Agent Mode is the more experimental of the new features. It allows tasks to be run in loops, where the system checks its own output, adjusts, and tries again. Think of it less as a single shot of intelligence and more as a rough version of iterative reasoning.

In practice, this is useful for workflows where environments clash or multiple experiments need to be isolated. It is not polished, but it points toward where automation may go.
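
A rough sketch of that loop in code. `run_task` and `critique` are hypothetical stand-ins for model calls; nothing here is OpenAI’s actual implementation.

```python
# Hypothetical stand-ins: a real agent would call an LLM in both functions.
def run_task(task: str, feedback: str | None = None) -> str:
    return f"draft for {task!r}" + (f", revised per: {feedback}" if feedback else "")

def critique(task: str, output: str) -> str:
    # A real system would ask the model to grade its own output.
    return "OK" if "revised" in output else "add error handling"

# The generate -> check -> adjust loop that Agent Mode gestures at.
def agent_loop(task: str, max_iters: int = 3) -> str:
    output = run_task(task)
    for _ in range(max_iters):
        feedback = critique(task, output)
        if feedback == "OK":
            break
        output = run_task(task, feedback)  # retry with the critique folded in
    return output

print(agent_loop("write a deployment script"))
```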

Deep Research with GPT-5 seems to be an upgrade on the older variant. Citations are cleaner, and irrelevant queries are less common. A small improvement, but a meaningful one if you depend on reliable sourcing.

Together they show a slow drift toward systems that manage their own feedback.

The AGI Mirage

Labs continue to talk about AGI. They rarely define it. This is deliberate: leaving the term undefined lets the labs keep shifting the goalposts.

A clear definition would allow outsiders to measure progress and question timelines. A vague one sustains hype indefinitely. From GPT-4’s “sparks of AGI” to whispers about GPT-6 achieving it in the fall, the cycle repeats.

The refusal to define terms like AGI or even “AI safety” shields labs from critique. It keeps fundraising narratives alive. But it also prevents genuine measurement of progress.

Stanford’s Virtual AI Lab

Stanford researchers ran an experiment where autonomous agents designed and validated nanobody candidates for COVID-19. The agents generated hypotheses, simulated outcomes, analyzed results, and proposed next steps.

The outcome seems very cool: two candidates with promising binding viability. What matters is not the scale of the breakthrough, but the glimpse of a process that runs largely without human scientists in the loop. Early-stage exploration is being accelerated.
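
To make the process concrete, here is a cartoon of the hypothesize, simulate, analyze, propose cycle. Every name here is a hypothetical stand-in of mine, not Stanford’s system; the simulation is reduced to a random score.

```python
import random

def simulate(candidate: str) -> float:
    # Hypothetical stand-in for an in-silico binding simulation.
    return random.random()

def virtual_lab(rounds: int = 5, threshold: float = 0.8) -> list[str]:
    """Cartoon of the hypothesize -> simulate -> analyze -> propose cycle."""
    promising = []
    for i in range(rounds):
        candidate = f"nanobody-variant-{i}"  # "hypothesize" a new design
        score = simulate(candidate)          # run the experiment in silico
        if score > threshold:                # "analyze": keep strong binders
            promising.append(candidate)
        # "propose": in a real system, the next design would depend on results
    return promising

print(virtual_lab())
```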

It doesn’t replace wet labs, but it shows how the front end of science might increasingly be automated. This is not my domain, so I would love to hear your thoughts here (or those of anyone familiar with the space).

NVIDIA’s Chip Deal

The week’s biggest story did not come from model releases but from geopolitics.

NVIDIA and AMD struck a deal with the Trump administration to resume chip sales to China. In exchange, they agreed to pay a 15% levy on revenues from those exports.

The trade-off is stark. For the U.S. government, the revenue amounts to roughly $2B a year—less than a single day of federal spending. For China, it provides legal and relatively cheap access to advanced GPUs. For NVIDIA, it unlocks a vast customer base.
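
A quick sanity check on those figures, using only the numbers above: a 15% levy that yields roughly $2B a year implies on the order of $13B in annual China-bound chip revenue.

```python
# Back-of-the-envelope check on the reported levy figures.
levy_rate = 0.15
levy_to_government = 2e9  # ~$2B/year, as reported
implied_export_revenue = levy_to_government / levy_rate
print(f"Implied annual China-bound revenue: ${implied_export_revenue / 1e9:.1f}B")  # ~$13.3B
```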

What matters most is the method. The companies bypassed traditional lobbying channels and went straight to Trump himself. Jensen Huang even praised Trump in public as America’s greatest advantage. This signals a shift in how large firms will seek influence: not through process, but through direct personal appeals.

For startups and smaller firms, that shift is dangerous. Innovation in the U.S. has always relied on small players being able to grow into big ones. If political access becomes the primary currency, that pathway narrows. Additionally, given that chips are a strategic asset, this move might be a further blow to American tech supremacy in AI and could raise security concerns.

Learn more-

This video by Patrick Boyle is a great overview of the situation.

Intel and the State

Another important development is the possibility of the U.S. government taking an equity stake in Intel under the CHIPS Act.

Handled well, this could give Intel breathing room to invest in foundries, R&D, and long-term projects. Governments can hold risk over longer horizons than private investors. They can afford to nurture infrastructure that takes decades to pay off.

Handled poorly, it could turn Intel into a politically managed asset, driven by short-term optics rather than engineering ambition. The outcome depends entirely on execution.

Musk vs. Altman

Elon Musk and Sam Altman clashed on several fronts this week. Musk accused Apple of rigging its App Store to favor ChatGPT. Altman countered that Musk had manipulated Twitter’s algorithm to favor his own posts, something confirmed when the code was open-sourced a while back.

There was also a head-to-head chess contest between Grok and ChatGPT. OpenAI’s model dominated, which was surprising given Grok’s reputation. The explanation seems to be Grok’s weakness in memory persistence, which undercut its play across multiple turns. This was an interesting analysis of the outcomes.
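
To see why memory persistence matters in a game like chess, consider the harness: unless the full move history is replayed into every prompt, a model with weak persistence loses the thread. A minimal, hypothetical sketch:

```python
# Hypothetical prompt builder: carry the whole game state on every turn.
def build_prompt(moves: list[str]) -> str:
    history = " ".join(moves) if moves else "(game start)"
    return (
        "We are playing chess. Moves so far, in order: "
        f"{history}. Reply with your next move in SAN notation only."
    )

print(build_prompt(["e4", "e5", "Nf3", "Nc6"]))
```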

Finally, Altman announced plans to build a Neuralink competitor. Valuation estimates are already in the hundreds of millions. Whether it is a genuine project or simply an escalation in rivalry with Musk remains to be seen.

The larger story is concentration of power. Innovation in AI is increasingly shaped not by consensus among researchers but by the impulses of a few highly volatile individuals.

The Allen Institute and NVIDIA

(Correction to the live stream: I said $150M each, but it was $150M total for this initiative.)

The Allen Institute announced a $150M initiative to build an “American DeepSeek,” backed by both NSF and NVIDIA. On paper, this looks like a national effort to match China’s pace.

In practice, it risks becoming another scaling exercise: larger models, higher parameter counts, but little in the way of fundamental progress. Opportunity cost looms large. That money could fund education, open datasets, or reproducibility infrastructure—all higher-ROI investments for the field.

For NVIDIA, however, the arrangement is brilliant. The more U.S. research is bound to their stack, the harder it becomes for competitors to displace CUDA. A relatively small outlay buys long-term entrenchment and lets them maintain their insane margins in an increasingly fragmented market that is split between inference chips, bandwidth, and other threats to the GPU.

Read more-

  1. How Open Source helps businesses.

  2. The Post GPU Tech Stack.

Final Reflections

The week’s announcements tell a clear story. AI development is constrained by costs and energy. Chip access is being reshaped by political deals. Model rivalries are driven by personalities as much as by science. Funding is flowing toward scale rather than diversity of approach.

If you want to understand where this field is going, don’t just watch model releases. Follow the incentives. Follow the supply chains. And pay close attention to who is bending rules behind closed doors.

Watch the full livestream for unfiltered commentary, side debates, and live audience questions that pushed these points even further.


Subscribe to support AI Made Simple and help us deliver more quality information to you-

Flexible pricing available—pay what matches your budget here.

Thank you for being here, and I hope you have a wonderful day.

Dev <3

If you liked this article and wish to share it, please refer to the following guidelines.


That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

My (imaginary) sister’s favorite MLOps Podcast-

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819
