8 Comments
🎲 Monetization Product Manager

Superb article. 👏

Probably the wrong framing, but I’m getting a Cisco-in-the-’90s vibe from this opportunity

Ade Shemsu

+1 with Devansh; would love to get your perspective on that vibe check.

Devansh

Wdym?

Ade Shemsu

Just echoing your 'Tell me more', but I see they just responded!

Devansh

Tell me more

🎲 Monetization Product Manager

My simplistic view is that value moves from capacity to coordination to control over chain-link systems

But I’ll let ChatGPT give a more complete answer 🤣

The AI (Cisco) architectural inflection point:

1. New Core Infrastructure Layer Being Built

* 1990s: Cisco built the backbone of the internet — routers/switches connecting local networks into global systems.

* Now: AI interconnects (e.g. NVLink, InfiniBand, Ethernet variants) are building the backbone of AI compute fabrics — connecting GPUs into large-scale clusters and distributed training systems.

2. Hardware Abstraction & Orchestration as Control Plane

* Then: Cisco provided not just hardware, but IOS (software) for managing routing, traffic, and protocols.

* Now: Orchestration layers like NVIDIA’s NVLink/NVSwitch, AMD’s ROCm, or emerging AI cluster schedulers (e.g. Run:ai, Mosaic, etc.) are becoming the control plane for distributed AI workloads.

3. Ecosystem Lock-in & Standards Race

* Then: TCP/IP vs proprietary protocols — a standards war.

* Now: NVLink vs Ethernet vs InfiniBand; CUDA vs ROCm; vendor-specific interconnects competing for dominance.

4. Bottleneck Moved from Compute to Interconnect

* Then: CPUs were fast, but networks were slow — networking was the bottleneck.

* Now: GPUs are fast, but multi-GPU training suffers without fast interconnects — again, networking is the bottleneck.

5. Monopolistic Moats Around Infrastructure Control

* Cisco became indispensable because nothing worked without them.

* Now NVIDIA is trying the same: dominating the stack from silicon to software to interconnect — NVIDIA is the new Cisco.

6. Orchestration & QoS as a Business Differentiator

* Just like QoS and packet shaping became productized in networking, expect:

* Priority scheduling of training jobs

* Dataflow routing across GPU clusters

* Bandwidth shaping across AI pipelines

These will create enterprise-grade offerings in AI infrastructure.
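As a toy illustration of what “priority scheduling of training jobs” could look like, here is a minimal sketch of admitting jobs to a fixed GPU pool in priority order — job names, priorities, and GPU counts are all invented for the example:

```python
import heapq

def schedule(jobs, total_gpus):
    """Admit jobs in priority order (lower number = higher priority)
    until the GPU pool is exhausted; returns admitted job names."""
    heap = [(prio, name, gpus) for name, prio, gpus in jobs]
    heapq.heapify(heap)
    admitted, free = [], total_gpus
    while heap:
        prio, name, gpus = heapq.heappop(heap)
        if gpus <= free:  # job fits in the remaining pool
            admitted.append(name)
            free -= gpus
    return admitted

# Hypothetical jobs: (name, priority, GPUs requested), 64-GPU pool.
jobs = [("batch-eval", 3, 16), ("prod-finetune", 1, 32), ("research-run", 2, 24)]
print(schedule(jobs, 64))  # → ['prod-finetune', 'research-run']
```

The production fine-tune and the research run (56 GPUs combined) are admitted first; the low-priority eval job needs 16 GPUs but only 8 remain, so it waits — exactly the kind of QoS-style arbitration that packet shaping did for networks.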

7. Huge CAPEX-driven Buildout Cycle

* In the 90s: Telcos and ISPs spent billions on routers/switches.

* Now: Hyperscalers and AI startups are spending billions on GPU clusters and interconnects.

Whoever owns the next control points can tax the entire AI stack.

Amit Agarwal

Fantastic article. For interconnects, don't we have multiple categories, like on-chip, chip-to-chip (Astera Labs comes to mind), server-to-server, across data centers of the same hyperscaler, and across hyperscalers? Aren't there multiple companies playing in that field?

For orchestration, you say Kubernetes (I assume Kubeflow) ain't prepared for it, so what is? Are we talking Airflow/MLflow etc.?
