Last month, I posted a note on Substack asking readers whether they would prefer me to write more about technical details or about investment targets in my newsletter. I had originally assumed that most readers would prefer investment-related content, but unexpectedly, more than half of the responses asked for technology-related technical details. I am really happy to find so many like-minded friends on Substack! In the articles ahead, I will continue the writing style of my previous newsletters and introduce more technical details from across the technology industry. At the same time, I will also try to add investment implications where appropriate, in response to those readers who asked for them.
Since the beginning of the year, I have written two articles explaining the technical details of NVIDIA's AI server racks (NVIDIA (NVDA US) GB300, Vera Rubin & Beyond – An Update on Future PCB/CCL and Power Design Change) and its CPO switches (NVIDIA (NVDA US) 2025 GTC Preview -- An Update on its Latest CPO Switch and NVL288 Design). Today, let's change things up a bit and take a look at an ASIC server rack. Since there are already many articles online discussing AWS's Trainium server architecture and Google's TPU server architecture, I will take a different approach and introduce Meta's Minerva server architecture, which is still rarely discussed, along with its JDM partner on the Minerva project: Celestica.
Minerva:
This article is fairly long and is organized into the following sections:
- Minerva system overview & rack design details
- MTIA blade design details
- Network blade design details
- CMM (chassis management module) design details
- Cable backplane cartridge design details
- Thermal system design details
- Power delivery design details
- Meta, Google, and OpenAI project contributions to Celestica's financials
Now, let's dive into the details: