
Most important AI updates of the week 24th August to 31st August 2025 [Livestream]

The Split between OpenAI and Microsoft, AI for Science and more.

Thanks to everyone for showing up to the livestream. Mark your calendars for 8 PM EST on Sundays to make sure you can come in live and ask questions.

Bring your moms and grandmoms into my cult.


Before you begin, here is your obligatory reminder to adopt my foster monkey Floop. He’s affectionate, relaxed, and can adjust to other pets, kids, or people. No real reason not to adopt him. So if you’re around NYC and want a very low-maintenance but affectionate cat, consider adopting him here.

Bring Floop into your life. Look at those eyes.

Community Spotlight: Workshop Labs

Workshop Labs is a public benefit organization dedicated to upskilling people for the AI age. They are looking to hire people to help with their mission. See their open roles here. I’m leaving their extended description below-

We’re building the AI economy for humans. While everyone else tries to automate the world top-down, we believe in augmenting people bottom-up. Our vision is for everyone to have a personal AI aligned to their goals and values, helping them stay durably relevant in a post-AGI economy.

As a public benefit corporation, we have a fiduciary duty to ensure that as AI becomes more powerful, humans become more empowered, not disempowered or replaced. We’re an early stage startup, backed by legendary investors like Brad Burnham and Matt McIlwain, visionary product leaders like Jake Knapp and John Zeratsky, philosopher-builders like Brendan McCord, and top AI safety funds like Juniper Ventures. Our investors were early at Anthropic, Palantir, Slack, Prime Intellect, DuckDuckGo, and Goodfire. Our advisors have held senior roles at Anthropic, Google DeepMind, and UK AISI. Our co-founders have previously done ML research that was used to evaluate OpenAI models pre-deployment, led product verticals at the world’s premier education-about-AI company, won political campaigns, and studied at Oxford & Cambridge.

If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, some personal challenge you’re working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.

Additional Recommendations (not in Livestream)

  1. Regular readers of our AI Content Recs will not be surprised to see JD featured here. He’s one of the top experts in NLP globally, having scaled NLP at Meta and other major companies on their biggest projects. JD shares amazing NLP paper notes here, and they’re one of the best sources to study NLP from the lens of an expert.

  2. This newsletter has top-notch commentaries on AI and its impact on us as people. Their Policy Primer 21 is one of many gems about AI.

  3. This is a good video on the cost of Data Centers. We had a good discussion on that topic in a previous live stream, so this will be interesting additional context.

  4. Why Deep Learning Works Unreasonably Well- exceptional video on the Math behind DL. Very first principles, very well demonstrated.

  5. The New Literacy: How to Become an AI Co-Creator is a very practical guide to using AI to supercharge your learning and productivity.

  6. Heard about Mirror Life here. It seems like such an interesting avenue for exploration. People are worried about its consequences, but if Mirror Life is dangerous to us, then we should be dangerous to it.

  7. Section 230 Made the Internet Possible, But Is No Longer Serving Humanity looks like it’s going in some very interesting places.

  8. This author has some of the best practical sage advice for founders. This article is no exception.

  9. The $600 Billion Blindspot: Why Menopause is the Next Big Bet in Longevity is another eye-opening work on how overlooked women’s health is.

  10. 🌊 How not to raise money is a fantastic checklist of mistakes that founders should avoid.

  11. This deep dive is a fantastic look at vertical vs horizontal plays in tech and their challenges. Worth reading for strategy folks.

Companion Guide to the Livestream

This guide expands the core ideas and structures them for deeper reflection. Watch the full stream for tone, nuance, and side-commentary.

(00:00:00 – 00:06:00) Opening

  • Informal start: catching up with attendees, climbing accident anecdote, jiu-jitsu history.

  • Poem from The Life of Tu Fu read aloud. Central image: people ignore nearby growth (new willow shoots) while chasing distant hype (lychees).

  • Framed as a parable for tech: attention fixates on flashy trends while underestimating local, emerging value.


(00:06:00 – 00:29:00) Microsoft vs OpenAI Decoupling

  • Event: Microsoft launched its own in-house MAI models (general text + voice), ending its exclusive reliance on OpenAI.

  • Microsoft’s losses:

    • No longer gets first access to OpenAI’s innovations or investment rounds.

    • Loses Azure-optimized lock-in—something Google enjoys with Gemini on GCP.

  • OpenAI’s losses (larger impact):

    • Cuts off distribution via Microsoft ecosystem (Copilot, Outlook, Bing).

    • Loses access to behavioral data—patterns of user interaction that guide product design. This behavioral telemetry is often more valuable than raw data.

  • Strategic view: OpenAI can now stand alone as a consumer brand, but loses the embedded, enterprise-scale channels Microsoft provided.

Architecture note:

  • MAI models are built with Mixture of Experts (MoE). This enables parameter efficiency—queries get routed to smaller “expert” models specialized in sub-tasks (a toy routing sketch follows this note).

  • Challenge: MoE creates heavy memory and interconnect requirements. Routing tokens across distributed systems strains bandwidth and GPU memory.

  • Key frontier: memory wall + network efficiency now matter as much as parameter count.
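
For readers who want to see the routing idea concretely, here is a minimal, illustrative sketch of top-k MoE routing in PyTorch. It is not Microsoft’s MAI code; the expert count, layer sizes, and top_k value are toy assumptions.

```python
# Toy Mixture-of-Experts layer: a router picks the top-k experts per token.
# Illustrative only; sizes and expert count are made-up values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores each token against each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                  # x: (batch, seq, d_model)
        scores = self.router(x)                            # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)     # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)               # normalize the kept scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                    # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 64)        # 2 sequences of 16 tokens
print(ToyMoELayer()(x).shape)     # torch.Size([2, 16, 64]); only 2 of 8 experts ran per token
```

Note that even though only two experts run per token, all eight experts’ weights must stay resident (and, in a distributed setup, tokens have to be shipped to wherever their experts live). That is exactly where the memory and interconnect pressure comes from.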

Read more—this deep dive into the memory wall and why it’s the biggest bottleneck and investing opportunity right now.


(00:29:00 – 00:40:00) DeepMind’s Vector Embeddings Paper

  • Publication: DeepMind released a critical study on the limitations of vector embeddings.

  • Overreaction: Media framed it as “vectors are broken” → triggering panic in the vector DB sector.

  • Reality:

    • Vectors remain useful for semantic similarity, but cannot handle all forms of search.

    • Vector DB valuations are a bubble—most startups simply repackage Meta’s open-source FAISS library.

  • Example: Meta’s Sphere project (2022) combined BM25 lexical search with dense vectors for better retrieval. Pure vectors were already known to be insufficient.

  • Impact on RAG:

    • Some claim long-context LLMs make RAG irrelevant. Wrong: context rot ensures retrievers remain essential.

    • Others claim “RAG is dead” because vectors are limited. Wrong again: hybrid retrieval (lexical + graph + vectors) is still the most robust design.

Takeaway: RAG isn’t dead; vector DB valuations may be. The future lies in layered retrieval systems (a toy lexical + vector blend is sketched below).
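
As a rough illustration of what such a layered retriever looks like, here is a minimal sketch that blends BM25 lexical scores with dense embedding similarity. It assumes the rank_bm25 and sentence-transformers packages are available; the corpus, query, model name, and 50/50 weighting are arbitrary toy choices, not a production design.

```python
# Hybrid retrieval sketch: lexical (BM25) + dense (embedding cosine) scores, blended.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Meta's Sphere combined lexical and dense retrieval.",
    "Context rot degrades long-context LLM answers.",
    "Vector embeddings capture semantic similarity.",
]
query = "why hybrid retrieval beats pure vectors"

# Lexical scores: BM25 over whitespace tokens
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
lex = list(bm25.get_scores(query.lower().split()))

# Dense scores: cosine similarity between query and document embeddings
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(corpus, convert_to_tensor=True)
q_emb = model.encode(query, convert_to_tensor=True)
dense = util.cos_sim(q_emb, doc_emb)[0].tolist()

# Min-max normalize each signal, then blend 50/50 and rank
def norm(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo + 1e-9) for x in xs]

blended = [0.5 * l + 0.5 * d for l, d in zip(norm(lex), norm(dense))]
for score, doc in sorted(zip(blended, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Real systems (like Sphere) add more signals on top of this (graph links, metadata filters, a re-ranker), but the core idea is the same: no single retrieval signal carries the whole load.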

Read more—

  1. Meta Sphere

  2. Context Rot


(00:40:00 – 00:47:00) EU Regulations and IP Battles

  • New EU mandates:

    • AI providers must share technical documentation and training data sources.

    • Supply chain transparency requirements.

    • Copyright compliance: attribution or compensation required for training data.

  • Implications:

    • Could force models to cite or credit training sources, shifting economics of LLM training.

    • Lawsuits already mounting: NYT vs OpenAI, Anthropic facing major claims, Meta quietly settled piracy disputes.

  • Strategic benefit for providers:

    • Transparency could reduce hallucinations by flagging outdated sources in training corpora.

    • Helps providers debug data pipelines faster.

Broader view: The industry publicly resists IP protections while jealously guarding its own weights. This contradiction will be hard to sustain under regulation.


(00:47:00 – 00:50:00) Elon Musk vs OpenAI (Grok Dispute)

  • Musk alleges a former employee brought Grok’s codebase to OpenAI.

  • Commentary: Grok offers no competitive edge vs GPT, Gemini, or Claude. Its only distinctive features have been controversial (political tone, “AI girlfriends”).

  • Irony: Musk has been a vocal critic of IP protections, but now pursues legal action over stolen IP.


(00:50:00 – 00:55:00) Cerebras Hardware Breakthrough

  • Milestone: Cerebras, with partner Core42, trained a 180B parameter Arabic LLM in 14 days.

  • Why it matters:

    • Unprecedented speed; validates Cerebras’ “memory-on-chip” architecture (avoids costly off-chip memory movement).

    • Demonstrates that non-Nvidia players can deliver competitive large-scale training.

  • Geopolitical signal: Saudi Arabia, South Korea, and the UAE are all ramping up investment in AI infrastructure. Government-backed partnerships will increasingly shape AI capacity.


(00:56:00 – 01:03:00) AI-for-Science Advances

Meta-trend: AI simulation is a strict upgrade over human-limited trial-and-error cycles in biology, chemistry, and materials science.


(01:03:00 – 01:08:00) Closing Reflections

  • Personal reference: early project predicting Parkinson’s onset via vocal analysis (15,000-sample dataset). Used as proof AI can tackle high-value medical problems (a toy version of such a pipeline is sketched after this list).

  • Critique of current AI startup landscape: too many trivial SaaS tools (ad optimization, coding agents).

  • Call for redirection: fund and build AI-for-science companies that solve hard problems, not gimmicks.
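
For context on what “predicting Parkinson’s onset via vocal analysis” can look like in practice, here is a heavily simplified sketch: summarize each voice clip as MFCC features and fit a classifier. This is not the original project’s code; librosa and scikit-learn are assumed, load_clips() is a hypothetical data-loading helper, and mean MFCCs are a crude stand-in for the richer vocal biomarkers real work uses.

```python
# Toy vocal-analysis pipeline: audio clip -> fixed-length features -> binary classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a voice clip and summarize it as mean MFCCs (one vector per clip)."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# load_clips() is hypothetical: it would return audio file paths and 0/1 labels.
paths, labels = load_clips()
X = np.stack([mfcc_features(p) for p in paths])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```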


🔑 Key Takeaways

  1. Microsoft–OpenAI split: OpenAI loses more than Microsoft; market fragmentation accelerates.

  2. MoE + Memory: efficiency gains are offset by memory wall issues—next real frontier.

  3. Vectors + RAG: Vector DB hype collapsing; hybrid retrieval remains essential.

  4. EU Regulation: Attribution and transparency will reshape LLM economics.

  5. Cerebras: Proves non-Nvidia hardware can compete at scale.

  6. AI-for-Science: Breakthroughs in quantum error correction, anti-aging biology, and nuclear waste management show where the real leverage lies.

Subscribe to support AI Made Simple and help us deliver more quality information to you-

Flexible pricing available—pay what matches your budget here.

Thank you for being here, and I hope you have a wonderful day.

Dev <3

If you liked this article and wish to share it, please refer to the following guidelines.


That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

My (imaginary) sister’s favorite MLOps Podcast-

Check out my other articles on Medium:

https://machine-learning-made-simple.medium.com/

My YouTube: https://www.youtube.com/@ChocolateMilkCultLeader/

Reach out to me on LinkedIn. Let’s connect: https://www.linkedin.com/in/devansh-devansh-516004168/

My Instagram: https://www.instagram.com/iseethings404/

My Twitter: https://twitter.com/Machine01776819
