AMD analyst day financials: again, we focus on the data center (reasons explained in the first few posts in this thread).
1) Data center revenue share bigger than ever.
2) Data center revenues translate to FCF.
3) 2022 guidance.
4) The future, 3-4 years out:
- Total addressable market (TAM) and the impact of the Xilinx acquisition.
- Revenue prediction for the next 3-4 years.
- >50% of revenue expected from the data center; data center is what we care about.
- Data center TAM in 2022 vs 2020, with the acquisition of Xilinx.
Last week an Intel PR mentioned they are looking at a “decreasing window of leadership”. What this means is that as soon as Intel beats AMD in performance, AMD's next chip is already ready. Sapphire Rapids was competing with Milan… but Milan-X (3D V-Cache) arrived about two months later with better performance.
Example: Intel's Sapphire Rapids is delayed: ramp-up and validation in Q4 2022, with a launch most likely in Q1 2023. Sapphire Rapids was supposed to compete with Milan/Milan-X, not with Genoa, which is coming in Q1 2023 (launching in H2 2022). So Intel's upcoming products will compete with AMD products they were never meant to compete with. Intel is putting up good competition, but these delays are hurting them.
Since Intel is in the rear-view mirror for AMD, AMD is now positioning itself as a competitor to NVDA. NVDA's next workstation uses Sapphire Rapids. My opinion is that NVDA already sees AMD as a competitor and hence switched to Intel; otherwise I do not see any other reason not to pick the leadership CPU. Another reason could be that Intel is only a stopgap until they fully move to the Grace CPU (NVDA's ARM CPU).
Some intro into what is make-or-break for competing with NVDA (my opinion; others who know this industry can comment too):
NVDA is the king of the AI/HPC space when it comes to accelerators.
They have two things:
- Great hardware - This is a no-brainer. Without hardware, you cannot compete here.
- Great software platform - This is the real winner. It is called CUDA, a platform for parallel computing whose performance is tightly coupled with their hardware → think of this strategy as what Apple does with its software. They also put a lot of effort into libraries for scientific computing/AI/ML. Their libraries (as expected) always perform better on their own hardware. Their GPUs command a premium because of this.
I have programmed a bit in OpenCL myself, and I browsed through the CUDA programmer's manual some 9 years ago. Even in those days I felt CUDA was more approachable, with better manuals and controls. The way I see it, AMD screwed up on the software side all along. Even though they had decent hardware, I feel they never had the focus on software that NVDA had. NVDA's hardware is part of the CUDA platform, not the other way around.
AMD's situation in the software stack now:
Even now, I read on forums that clients buy an NVDA accelerator for development on the CUDA platform, then try to migrate to AMD's ROCm. Some complain that CUDA “just works”, unlike ROCm. Sample: What Is AMD ROCm? | Hacker News
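To make the “CUDA just works” vs ROCm complaint concrete, here is a toy vector-add kernel (my own illustration, not from the analyst day). The point: ROCm's HIP API mirrors CUDA almost call-for-call (cudaMalloc → hipMalloc, cudaMemcpy → hipMemcpy, the same <<<...>>> launch syntax; AMD even ships "hipify" translation tools), so the porting pain people report is usually about library and driver maturity, not kernel syntax:

```cuda
// Minimal CUDA vector-add sketch (illustrative only).
// The HIP twin of this file differs almost solely in the cuda* -> hip*
// prefixes on the runtime calls; the kernel body is unchanged.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the example short.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);  // HIP uses the same launch syntax
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // 1.0f + 2.0f = 3.0f
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with nvcc this runs on NVDA hardware; the hipified version compiles with hipcc for AMD. That near-parity is exactly why the ecosystem (libraries, tooling, drivers), not syntax, is NVDA's real moat, and why AMD's software push matters.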
In light of that, I am happy to see the following slides from Victor Peng at the analyst day.
They are unifying their stack so that the same code is easily reusable across multiple verticals. The data center AI fight will not be as easy for AMD as the CPU fight with Intel was. But it is still an opportunity: AMD only has to steal existing market share, while NVDA and Intel have to make the market (they are the leaders).
My posts are going to focus increasingly on how AMD is faring in the AI compute space. MI300 will be keenly watched, along with their software progress. Otherwise, NVDA remains the clear winner.
Disclosure (does it matter with non-Indian companies?): I already had AMD RSUs from my employment days at AMD, and I am accumulating during this fall. I think the market is wrong to bin AMD with Intel and NVDA: AMD is beating Intel, and it is more diversified than NVDA (especially with the acquisition of Xilinx). Irrespective of how it goes against NVDA in accelerators, I expect them to continue to do well in CPUs. As of now, this is purely a data-center CPU share-gain bet.
Here you go, the first MI300 news. El Capitan was announced by AMD in 2020, with installation due in late 2023. A bit of system information has been revealed now.
El Capitan will be powered by AMD’s forthcoming MI300 APUs.
Theoretical peak is two double-precision exaflops, [and we’ll] keep it under 40 megawatts—same reason as Oak Ridge, the operating cost.
Minor Intel Sapphire Rapids delay news.
Argonne National Laboratory is awaiting completion of the Aurora supercomputer, a 2-exaflops HPE-Intel machine that has undergone several reconceptualizations. Aurora’s execution is also potentially beset by additional delays pertaining to its Sapphire Rapids CPU, so the exact timeline is fuzzy — but, reportedly, installation is underway.
We miss some important events in history while chasing ‘where is the money’. Having an APU in a server is a world first, and doing it at exascale is quite an achievement.
Being APUs, El Capitan will benefit from what’s likely to be the densest performance profile ever achieved in the world of supercomputing. Make no mistake: El Capitan will represent the pinnacle of semiconductor performance, design, and integration. It’s not hyperbolic to say that it’s likely to be one of humanity’s most technologically complex endeavors.