In my previous article (Macom (MTSI US) – A Hidden Nvidia GB200 Play), I introduced an ACC company to readers: Macom. Today, let me introduce an AEC company: Credo.
First, let's briefly explain the key differences among the types of copper cables. High-speed copper cables used in data centers generally fall into three categories: DAC, ACC, and AEC. DAC is a passive copper cable, meaning there is no chip inside; ACC and AEC are both active copper cables: ACC adds redriver chips to the cable, while AEC adds retimer or DSP chips. In terms of signal-conditioning performance, DSP > retimer > redriver, hence AEC > ACC > DAC. Better performance means the copper cable can be made thinner and can reach a longer transmission distance at the same data rate. Of course, in terms of price, AEC > ACC > DAC. Therefore, in practical usage, CSP customers choose the appropriate type of copper cable based on the reach and data rate they need.
As explained in my previous article about Macom, Nvidia's GB200 system uses 1.6T ACC to connect adjacent NVL36 racks. However, besides purchasing Nvidia's GPGPUs, each hyperscaler also deploys servers built around its own in-house ASICs in part of its data centers. According to my supply chain research, among the top four US hyperscalers, all except Meta (i.e., Google, AWS, and MSFT) use Credo's AEC to connect their ASIC server clusters.
Let's first take a look at the architecture of Google's TPU server (see the figure below):
As shown in the figure above, a standard Google TPU server contains 4 TPUs; 32 TPU servers form one rack, and 70 racks form one cluster, for a total of 8,960 TPUs. Each TPU server is equipped with 8 OSFP ports and uses CRDO's 800G AEC to connect to 8 ToR switches, so one rack has 4 x 32 = 128 TPUs and 8 x 32 = 256 AECs, a TPU : AEC ratio of 1:2. According to my supply chain research, Google plans to adopt AEC starting with the V6P TPU server in the second half of this year. Lifetime shipments of the V6P TPU are estimated at approximately 0.5 million units, and one 800G AEC is priced at approximately $300. Therefore, this Google TPU server project could potentially bring CRDO 0.5M x 2 x $300 = $300 million in revenue.
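To make the rack math transparent, here is a minimal sketch of the estimate; the unit volume, cable ratio, and ASP are the supply-chain estimates quoted above, not confirmed figures.

```python
# Google TPU V6P / 800G AEC revenue estimate (all inputs are estimates cited in the text)
tpus_per_server = 4
servers_per_rack = 32
aecs_per_server = 8                # 8 OSFP ports connecting to 8 ToR switches

tpus_per_rack = tpus_per_server * servers_per_rack    # 128 TPUs per rack
aecs_per_rack = aecs_per_server * servers_per_rack    # 256 AECs per rack
aec_per_tpu = aecs_per_rack / tpus_per_rack           # 2.0 -> TPU : AEC = 1:2

v6p_lifetime_tpus = 0.5e6          # ~0.5M lifetime V6P TPU shipments (estimate)
aec_asp = 300                      # ~$300 per 800G AEC (estimate)

revenue = v6p_lifetime_tpus * aec_per_tpu * aec_asp
print(f"Estimated Google TPU AEC revenue: ${revenue / 1e6:.0f}M")   # ~$300M
```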
Moreover, as mentioned in my previous article (Marvell (MRVL US) vs. Alchip (3661 TT) – Who is the AWS ASIC Winner?), MRVL helped Amazon design its Trainium 2 chip, which will officially enter mass production in 3Q24, and AWS's Trainium 2 servers will also use CRDO's 400G AEC to connect the servers to the ToR switches.
According to supply chain research, one AWS ASIC server contains 4 Trainium 2 chips, and one rack holds 8 servers, for 32 Trainium 2 chips per rack. Lifetime Trainium 2 deployments are estimated at approximately 15k racks, corresponding to 15k x 32 = 480k Trainium 2 chips. Assuming a Trainium 2 : AEC ratio of 1:1 and a price of approximately $200 for one 400G AEC, the AWS ASIC server project could potentially bring CRDO 0.48M x $200 = $96 million in revenue.
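The same arithmetic applies to the AWS project; again, rack count, cable ratio, and ASP below are the estimates stated above rather than disclosed numbers.

```python
# AWS Trainium 2 / 400G AEC revenue estimate (all inputs are estimates cited in the text)
chips_per_server = 4
servers_per_rack = 8
chips_per_rack = chips_per_server * servers_per_rack    # 32 Trainium 2 chips per rack

lifetime_racks = 15_000
lifetime_chips = lifetime_racks * chips_per_rack        # 480k chips over the lifecycle

aec_per_chip = 1                   # assumed Trainium 2 : AEC ratio of 1:1
aec_asp = 200                      # ~$200 per 400G AEC (estimate)

revenue = lifetime_chips * aec_per_chip * aec_asp
print(f"Estimated AWS Trainium 2 AEC revenue: ${revenue / 1e6:.0f}M")   # ~$96M
```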
Lastly, let's take a look at the MSFT Maia 2 server. This is Microsoft's second-generation in-house ASIC (Maia 1 shipped in very small volume), planned to enter mass production at the end of 2025. Lifetime Maia 2 deployments are estimated at approximately 8k racks, with 48 Maia 2 chips per rack and 10 racks per cluster. What is special about Microsoft's Maia 2 cluster is that it uses CRDO's 800G AEC not only to connect servers to ToR switches but also for spine-leaf connections, so the Maia 2 : AEC ratio is 1:2. As mentioned earlier, one 800G AEC is priced at approximately $300, so the MSFT Maia 2 server project could potentially bring CRDO 8k x 48 x 2 x $300 = $230 million in revenue.
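For completeness, the Maia 2 estimate works out as follows; the rack count, chips per rack, and ASP are the estimates quoted above, and the 1:2 ratio reflects the extra spine-leaf links described in the text.

```python
# MSFT Maia 2 / 800G AEC revenue estimate (all inputs are estimates cited in the text)
lifetime_racks = 8_000
chips_per_rack = 48
aec_per_chip = 2                   # server-to-ToR plus spine-leaf links -> 1:2 ratio
aec_asp = 300                      # ~$300 per 800G AEC (estimate)

revenue = lifetime_racks * chips_per_rack * aec_per_chip * aec_asp
print(f"Estimated MSFT Maia 2 AEC revenue: ${revenue / 1e6:.0f}M")   # ~$230M
```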
Therefore, Credo can be regarded as a hidden play on hyperscalers' ASIC servers. Although investors are aware that Credo is one of the leading AEC suppliers, the AEC products the company has supplied to MSFT and AWS so far have mainly gone into non-AI servers. However, from 2H24 through 2026, the company will gradually ramp three major AI ASIC server projects: Google TPU, AWS Trainium 2, and MSFT Maia 2, transforming itself into a pure AI ASIC play.