17 Comments
Hugo Rauch:

👏👏

Devansh:

Thank you

John Michael Thomas:

Excellent article, thanks for doing the deep dive.

One nit: having seen a few computing-architecture shifts in my lifetime, I suspect your timeline is a bit optimistic. My experience has been that while computing power follows Moore's law, new computing paradigms seem to take about the same amount of time to win significant market share, let alone reach market dominance. I expect it will take at least a decade, probably more.

Devansh:

A few things:

I thought about that, but IMO the timelines might be more compressed now, for a few reasons:

1. The internet, open source, and AI all shorten innovation cycles. Something like this might have taken me months to do pre-internet; now I was able to do it much faster. Adoption is sped up by the same forces.

2. Private markets are struggling for exits, which means more money and more people willing to take risks in this space.

3. After Nvidia proved out GPUs, I think more investors are open to exploring alternative hardware paradigms (not to mention I know some behind-the-scenes stuff at major hardware companies that makes this seem encouraging; case in point, the Qualcomm acquisition).

Market dominance here doesn't mean replacing GPUs for everything; it means replacing them in these specialized domains, which I think is a much easier task when people already don't love GPUs for these kinds of workloads.

That said, I could be overly optimistic; organizational inertia is a bitch.

John Michael Thomas:

I've been out of the semiconductor space for 11 years, so I may be a little fuzzy on a few things. But I think a key consideration is whether any new designs will be able to use existing semiconductor manufacturing processes, or will need some additional tweaks. If all it takes is a new mask to create chips with the new architecture, it could move very quickly, since it wouldn't require a change to supply chains or manufacturing equipment/processes to scale.

I'm just not sure we know that yet. It sounds like we don't even know for sure yet how these new architectures will work, let alone how they'll be manufactured and scaled.

But of course, all we can do is speculate. I'd just apply more weight to the time it takes to roll out new semiconductor processes, even when there's already agreement on exactly what needs to be done. Just reducing the manufacturing node size (from 45nm to 32nm, for example) typically takes 4-6 years from inception to dominance, and that's an increment of a very well-known architecture driven by the three largest players, rather than something new and potentially niche that hasn't even been fully defined yet.

Devansh:

Great points. Yes, that's definitely a factor worth considering. How would you shift the timelines accordingly?

John Michael Thomas:

As with most communications, I think it probably depends on the audience :)

If the audience is sophisticated enough, I'd offer two different timelines, to highlight how it's likely to play out differently depending on whether the new architecture can be manufactured on existing processes or not. For less sophisticated audiences, I'd probably add wider ranges to the timeline.

Devansh:

That's such a great piece of advice. I really appreciate it, and appreciate you taking the time to give it. I'll have to study the manufacturing angle in more detail to know how to factor it in, alongside a few other things that I didn't mention in the article but that were important in my behind-the-scenes research, but I will do so and publish a retraction/update in due time.

If in the meantime I can get you to come share your thoughts in a guest post, that would be great.

David Pace:

Ah, I just made a comment about this above.

It's not necessarily going to be in the chips. It's about doing things differently.

John Michael Thomas:

And on that note: if there are competing architectures, any of them that are compatible with existing semiconductor processes (i.e., they can be manufactured on existing lines) will have a huge leg up on the alternatives. This will be true even if those compatible architectures are technically inferior to architectures that require new processes or tooling. It could end up being a key factor for investors to consider.

David Pace:

@John Michael Thomas Just a hunch, but it might be conservative, because everyone is a "specialist" now in whatever topic they choose. And considering that the majority of people cannot afford the massive compute necessary to run their own private models, there's a bunch of us trying to figure it out. (I'm going to publish something soon that will change your mind.)

Edward Metzler:

This is a great article. I’m retired, but like to keep up with high technology.

Devansh:

Glad you liked it. Please do share this online if it helped you; I rely on word of mouth to grow my platform.

Michael Spencer:

The US will need to innovate in energy if it wants to remain the leader for the next decade. New architectures could surprise too.

Vaibhav:

Database indexing might be the immediate use case for this today. B-tree lookups lean heavily on if-else branching.
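
Roughly, the per-node search is a chain of data-dependent comparisons. A minimal sketch (the node layout and names here are illustrative, not from any real database engine):

```c
#include <stdio.h>

#define ORDER 4  /* hypothetical fan-out, for illustration only */

/* A stripped-down B-tree node: sorted keys plus a count. */
struct BTreeNode {
    int keys[ORDER - 1];
    int nkeys;
};

/* Each loop iteration is an if-else whose outcome depends on the data,
   so the branch pattern is unpredictable: fine on CPUs, poor on GPUs. */
static int find_child(const struct BTreeNode *node, int key) {
    int i = 0;
    while (i < node->nkeys && key > node->keys[i])
        i++;
    return i; /* index of the child subtree to descend into */
}

int main(void) {
    struct BTreeNode node = { {10, 20, 30}, 3 };
    printf("descend into child %d\n", find_child(&node, 25)); /* prints 2 */
    return 0;
}
```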

Besides, current languages and compilers are optimized for contiguous memory. For example, a struct in C is laid out in contiguous memory, and the compiler adds padding bytes between fields so each field's offset is divisible by its alignment requirement (say, 4). This lets the compiler access fields with fixed offsets.

That's terrible for data that is sparse and scattered all over the place.
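
To make the padding concrete, here's a minimal sketch (the struct is hypothetical, and exact offsets depend on the ABI; the numbers in the comments assume a typical one with 4-byte-aligned ints):

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical struct: on most ABIs the compiler pads `tag` with 3 bytes
   so `value` lands at an offset divisible by 4, and pads the tail so
   arrays of the struct stay aligned. */
struct Record {
    char tag;   /* offset 0, typically followed by 3 padding bytes */
    int  value; /* offset 4 */
    char flag;  /* offset 8, typically followed by 3 trailing padding bytes */
};

int main(void) {
    printf("offsetof tag:   %zu\n", offsetof(struct Record, tag));
    printf("offsetof value: %zu\n", offsetof(struct Record, value));
    printf("offsetof flag:  %zu\n", offsetof(struct Record, flag));
    printf("sizeof(struct Record) = %zu (6 bytes of data, typically 12 total)\n",
           sizeof(struct Record));
    return 0;
}
```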

Sutha Kamal:

Feels like a pendulum swing, not dissimilar from RISC -> CISC -> RISC? As we exhaust the limits of the new compute paradigm, we redesign things in ways that flip back the other way. Fantastic writeup.

Luis Llorens:

great one!!
