When a company bets two billion dollars on a laser company and two billion more on a fiber optics firm in the same move, it is worth pausing to ask what they know about physics that the rest of the industry is only beginning to feel.
NVIDIA's reported $2B stake in Coherent Corp and a further $2B in Lumentum are not portfolio diversification. Both companies are already integrated into NVIDIA's Spectrum-X networking platform, which means this is operational infrastructure strategy dressed as investment. NVIDIA is quietly declaring that copper — the material that has carried electrical signals through data centres for decades — is reaching the end of its useful life as an interconnect medium for AI-scale computation. And they are right.
The Original Carousel — Six Slides
Original LinkedIn carousel · 6 slides. Key takeaways: electrical I/O is eating the data centre, and heat management — not the computation — becomes the bottleneck; copper runs hot and slow, losing signal over metres and needing power-hungry repeaters, while light travels across data halls cool and fast with no repeaters needed; one fiber carries 64 lanes of data where copper manages one channel that degrades after just a few metres; co-packaged optics removes the copper bottleneck at the source, delivering lower power, higher bandwidth, and robustness at AI scale; for PhDs and postdocs, laser and WDM skills are now in demand; for everyone, hardware determines AI's future.
Carousel originally published on LinkedIn · Boris Louis · March 2026
The Physics of the Power Wall
The problem is not exotic — it follows directly from Ohm's law and the properties of conductors at high frequencies. As link speeds push past 800 gigabits per second, the copper traces carrying electrical signals inside a data centre generate substantial resistive heat. At those frequencies, the skin effect concentrates current near the surface of the conductor, increasing its effective resistance. Signal integrity degrades rapidly, requiring repeater chips every few metres, and each repeater burns more power and produces more heat. The cost of simply moving bits from one chip to another begins to rival — and in some configurations, exceed — the cost of actually computing with those bits.
This is what the industry calls the power wall, and it is not a problem you can solve by making better copper cables. It is a fundamental physical constraint of the medium.
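The severity of the skin effect is easy to estimate from the classical formula. A minimal sketch; the 28 GHz signal frequency (roughly the Nyquist frequency of a 56 GBd electrical lane) and the copper resistivity are illustrative assumptions, not figures from the article:

```python
import math

def skin_depth_m(freq_hz: float,
                 resistivity_ohm_m: float = 1.68e-8,  # copper at room temperature
                 mu_r: float = 1.0) -> float:
    """Classical skin depth: delta = sqrt(2 * rho / (omega * mu))."""
    mu = mu_r * 4e-7 * math.pi      # permeability (H/m)
    omega = 2 * math.pi * freq_hz   # angular frequency (rad/s)
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu))

# Illustrative: at 28 GHz, current flows in a sub-micron shell of the conductor.
delta = skin_depth_m(28e9)
print(f"skin depth at 28 GHz: {delta * 1e6:.2f} um")  # ~0.39 um
```

At that depth, the vast bulk of the conductor's cross-section carries no current, which is why effective resistance — and resistive heat — climbs with data rate.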
Light carries no charge. It generates no resistive heat. A single optical fiber, through wavelength-division multiplexing, can carry dozens of independent data channels simultaneously — something copper cannot approach at comparable power budgets.
Electrons vs Photons at Scale
The comparison between electrical and optical interconnects reads like an undergraduate optics lecture, but its consequences at data centre scale are genuinely transformative. Photons do not interact with the medium the way electrons do. An optical signal can travel across a data hall — tens of metres — with negligible loss and without requiring intermediate amplification. Wavelength-division multiplexing then multiplies that already-superior bandwidth by stacking independent data channels on different colours of light, all propagating down the same fiber simultaneously.
The slide from the carousel puts it memorably: one fiber, 64 lanes. A copper cable carries one lane and degrades badly after a few metres. The disparity in bandwidth density is not incremental — it is orders of magnitude once reach is factored in, and it compounds as the number of chips in a cluster grows.
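The arithmetic behind that slide is easy to reproduce. A sketch; the 64-channel count comes from the carousel, while the 100 Gb/s per-wavelength rate is an illustrative assumption:

```python
# Aggregate bandwidth of one WDM fiber vs one copper cable, per the carousel's framing.
wdm_channels = 64            # independent wavelengths on one fiber (from the carousel)
rate_per_channel_gbps = 100  # illustrative per-wavelength data rate (assumed)

fiber_gbps = wdm_channels * rate_per_channel_gbps
copper_gbps = rate_per_channel_gbps  # copper: one channel per cable at the same lane rate

print(f"one fiber:  {fiber_gbps / 1000:.1f} Tb/s")   # 6.4 Tb/s
print(f"one copper: {copper_gbps / 1000:.1f} Tb/s")  # 0.1 Tb/s
print(f"per-cable ratio: {fiber_gbps // copper_gbps}x")  # 64x, before counting reach
```

The 64x figure is per cable at equal lane rates; the practical gap widens further because the copper link also loses reach and burns repeater power.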
Co-Packaged Optics: Why Integration Is the Hard Part
The concept behind co-packaged optics (CPO) is straightforward: move the optical transceiver — the component that converts electrical signals to light and back — from the edge of a line card into the same physical package as the compute or switching chip. By eliminating the copper traces that previously ran from the die to the pluggable optics module, you remove the highest-loss, highest-power segment of the electrical signal path.
The engineering, however, is formidable. Integrating photonic components at chip-package scale requires precise alignment tolerances, thermal management strategies that account for the very different operating conditions of silicon photonics and CMOS logic, and manufacturing processes that are still maturing. Coherent Corp and Lumentum are among the few companies with the process depth and volume capacity to supply this technology at the scale NVIDIA requires. NVIDIA's investment is not just about access to components — it is about co-developing the supply chain for a technology that does not yet exist in the quantities needed.
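To see why collapsing the electrical path matters at fleet scale, a back-of-envelope energy-per-bit comparison helps. The pJ/bit values, cluster size, and per-GPU bandwidth below are assumptions chosen for illustration, not NVIDIA or supplier figures:

```python
# Back-of-envelope interconnect power at cluster scale.
# Energy-per-bit figures are assumed, representative values, not vendor data.
PLUGGABLE_PJ_PER_BIT = 15.0  # pluggable optics plus long electrical traces (assumed)
CPO_PJ_PER_BIT = 5.0         # co-packaged optics, short electrical path (assumed)

def interconnect_power_mw(gpus: int, gbps_per_gpu: float, pj_per_bit: float) -> float:
    """Total interconnect power in megawatts for a hypothetical cluster."""
    bits_per_s = gpus * gbps_per_gpu * 1e9
    return bits_per_s * pj_per_bit * 1e-12 / 1e6

# Hypothetical cluster: 100k GPUs, 3.2 Tb/s of I/O each.
for label, pj in [("pluggable", PLUGGABLE_PJ_PER_BIT), ("CPO", CPO_PJ_PER_BIT)]:
    print(f"{label}: {interconnect_power_mw(100_000, 3200, pj):.1f} MW")
# pluggable: 4.8 MW, CPO: 1.6 MW
```

Under these assumptions the saving is megawatts per cluster from interconnect alone — which is why the integration effort, hard as it is, is worth co-funding a supply chain for.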
What This Means If You Work With Lasers
I find this development genuinely exciting, partly because the physics involved is not distant from what I work with daily, even if the application context is completely different. The light sources and modulators used in optical interconnects — distributed feedback laser diodes, vertical-cavity surface-emitting lasers, electro-absorption modulators — share the same foundational physics as the light sources in advanced microscopy systems. The challenges of coupling light into single-mode fibers, managing wavelength stability, and maintaining spatial coherence over a photonic integrated circuit are structurally similar to challenges I encounter in building precision optical instruments, just miniaturised to a chip and run at a very different power regime.
For researchers in laser physics, fiber optics, WDM systems, or photonic integration: the practical demand for these skills is moving rapidly out of telecommunications and into the semiconductor and AI infrastructure sector. The knowledge transfer is more direct than it might appear from the outside. If your PhD touched coherent sources, optical coupling, or wavelength-selective components, it is worth paying attention to this space.
The Broader Signal
NVIDIA's move is a signal about where the constraints in AI hardware actually sit. The dominant narrative of AI infrastructure has been about compute — more GPU clusters, more memory bandwidth, faster matrix operations. That narrative is correct, but it is incomplete. At sufficiently large scale, the cost and power of simply moving data between chips — the interconnect problem — becomes the binding constraint. Solving it requires a different kind of expertise: photonics, not silicon logic.
Hardware determines the ceiling of what software can do. Optics is about to determine the ceiling of what AI hardware can do.
Frequently Asked Questions
Why is NVIDIA investing in photonics and optical interconnects?
As AI data centres scale to 800G link speeds and beyond, the power spent moving data electrically begins to rival — and in some configurations exceed — the power spent on compute itself. Photons carry no charge, generate no resistive heat, and can carry far more bandwidth per fiber via wavelength-division multiplexing. NVIDIA's reported investments in Coherent Corp and Lumentum signal that optical interconnects are becoming essential AI infrastructure rather than a niche research area.
What is Co-Packaged Optics (CPO) and why does it matter?
Co-Packaged Optics integrates optical transceivers directly inside the chip package, eliminating the copper traces that carry signals from the die to pluggable optics at the board edge. By moving the electrical-to-optical conversion as close as possible to the compute die, CPO removes the highest-loss and highest-power segment of the signal path — the copper bottleneck at its source.
What is wavelength-division multiplexing (WDM) and why is it relevant to AI data centres?
WDM encodes multiple independent data streams onto different wavelengths of light and transmits them simultaneously through a single fiber. A single WDM fiber can carry 64 or more parallel data lanes, far exceeding the bandwidth density of copper cables, which degrade significantly beyond a few metres and support only one channel per conductor.
Are skills from academic photonics labs transferable to AI hardware careers?
Yes — more directly than most researchers realise. The physics of laser sources, single-mode fiber coupling, WDM channel management, and photonic integration is identical whether you are building a microscope or a data centre interconnect. The scale differs, but the optical engineering expertise transfers. Researchers with hands-on experience in coherent light sources, fiber optics, and wavelength-selective systems are increasingly sought after in the semiconductor and AI infrastructure industry.