Semiconductor world - CPU/GPU Wars

AMD Analyst Day financials: again, we focus on the data center (reasons explained in the first few posts of this thread).

1) Data center revenue share bigger than ever.

2) Datacenter revenues translate to FCF

3) 2022 guidance

4) Future - 3-4 years:
Total addressable market - impact of the Xilinx acquisition.

Future revenue prediction for the next 3-4 years.

>50% revenue expected from data center - Data center is what we care about

Data center TAM in 2022 vs 2020 with the acquisition of Xilinx

Intel:
Last week an Intel PR mentioned they are looking at a “decreasing window of leadership”. What this means is that as soon as Intel beats AMD on performance, the next AMD chip is ready. Sapphire Rapids was competing with Milan… but Milan-X (3D V-Cache) came in about 2 months later with better performance.

Example: Intel Sapphire Rapids is delayed - ramping up and validation in Q4 2022, with a launch most likely in Q1 2023. Sapphire Rapids was supposed to compete with Milan/Milan-X, not with Genoa, which launches in H2 2022 and is coming in volume around Q1 2023. So Intel’s upcoming products will compete with AMD products they were never meant to compete with. Intel is putting up good competition, but these delays are hurting them.

Since Intel is in the rear-view mirror for AMD, AMD is positioning itself as a competitor to NVDA. NVDA’s next workstation uses Sapphire Rapids. My opinion is that NVDA already sees AMD as a competitor and hence switched to Intel; otherwise I see no reason not to pick the leadership CPU. Another reason could be that Intel is only a stopgap until they fully move to Grace (NVDA’s ARM CPU).

Some intro into what is make-or-break for competing with NVDA (my opinion; others who know this industry can comment too):
NVDA is the king of the AI/HPC space when it comes to accelerators.
They have two things:

  1. Great hardware - This is a no-brainer. Without hardware, you cannot compete here.
  2. Great software platform - This is the real winner. It is called CUDA, a platform for parallel computing. CUDA performance is tightly coupled with their hardware (think of this strategy as what Apple does with its software). They also put a lot of effort into libraries for scientific computing/AI/ML, and those libraries (as expected) always perform best on their own hardware. Their GPUs command a premium because of this. A minimal sketch of what CUDA code looks like follows below.
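
To make the “platform” point concrete, here is a minimal sketch of what CUDA code looks like - a generic vector-add example of my own, not anything from the analyst day. The programmer writes a kernel, and the CUDA runtime moves data to the GPU and launches thousands of threads over it:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one element of the two input arrays.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs over (CUDA runtime API).
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[0] = %.1f\n", hc[0]);  // expected 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

The moat is less this core language and more the libraries built on top of it (cuBLAS, cuDNN and friends), which are tuned only for NVDA hardware.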

I have myself programmed a bit in OpenCL, and I browsed through the CUDA programmer’s manual some 9 years ago. Even in those days I felt CUDA was more approachable, with better manuals and controls. The way I see it, AMD screwed up on the software side all along. Even though they had decent hardware, I feel they never had the focus on software that NVDA had. NVDA’s hardware is part of the CUDA platform, not the other way around.

AMD’s software-stack situation now:
Even now, I read in forums that clients buy one NVDA accelerator to develop on the CUDA platform and then try to migrate to AMD ROCm. Some complain that CUDA just works, unlike ROCm. Sample: What Is AMD ROCm? | Hacker News
In light of that, I am happy to see the following slides from Victor Peng at the analyst day.



They are unifying their stack so that the same code is easily reusable across multiple verticals. The data center AI fight will not be as easy for AMD as the CPU fight with Intel was, but it is still an opportunity. AMD only has to take existing market share, while NVDA and Intel, as the leaders, have to create the market.

My posts are going to focus increasingly on how AMD is faring in the AI compute space. MI300 will be keenly watched, along with their software progress. Otherwise, NVDA remains the clear winner.

Disclosure (does it matter with non-Indian companies?): I already hold AMD RSUs from my days of employment at AMD, and I am accumulating during this fall. I think the market is wrong in binning AMD together with Intel and NVDA: AMD is beating Intel, and it is more diversified than NVDA (especially with the acquisition of Xilinx). Irrespective of how it goes against NVDA in accelerators, I expect them to continue to do well in CPUs. As of now, this is purely a data center CPU share-gain bet.


The underlying software is powered by AMD’s ROCm software stack, which stands in contrast to NVIDIA’s CUDA thanks to its open-source approach. It also features a translation mechanism that can adapt CUDA-based code to AMD’s software stack with minimal software engineering efforts.

The highly-anticipated Intel server chips, codenamed “Sapphire Rapids,” may not ship until the second quarter of 2023, as opposed to a consensus expectation for a launch in the second half of 2022, noted tech analyst Ming-Chi Kuo said on Thursday.

Premised on a second-half launch, Morgan Stanley analyst Joseph Moore said in a December 2021 note that he expects the chip to help Intel narrow its server market share gap with rival Advanced Micro Devices, Inc.'s (NASDAQ: AMD) Milan server processor.

To make matters worse, AMD is expected to begin shipping its Genoa server chip, based on the next-gen, 5-nm processor node technology, in late 2022.

Here you go - first MI300 news. El Capitan was announced by AMD in 2020, with installation in late 2023. A bit of system information has now been revealed.

El Capitan will be powered by AMD’s forthcoming MI300 APUs.
Theoretical peak is two double-precision exaflops, [and we’ll] keep it under 40 megawatts—same reason as Oak Ridge, the operating cost.
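
Back-of-the-envelope: a theoretical peak of 2 exaflops inside a 40 MW envelope works out to roughly 50 double-precision gigaflops per watt at the system level (2×10^18 FLOPS ÷ 4×10^7 W).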

Minor Intel Sapphire Rapids delay news.

Argonne National Laboratory is awaiting completion of the Aurora supercomputer, a 2-exaflops HPE-Intel machine that has undergone several reconceptualizations. Aurora’s execution is also potentially beset by additional delays pertaining to its Sapphire Rapids CPU, so the exact timeline is fuzzy — but, reportedly, installation is underway.

We miss some important events in history in our search for ‘where is the money’. Having an APU in a server is a world first, and doing it at exascale is quite an achievement.

Being APUs, El Capitan will benefit from what’s likely to be the densest performance profile ever achieved in the world of supercomputing. Make no mistake: El Capitan will represent the pinnacle of semiconductor performance, design, and integration. It’s not hyperbolic to say that it’s likely to be one of humanity’s most technologically complex endeavors.