Semiconductor world - CPU/GPU Wars

2024 Outlook
Let me first get the non-DC market out of the way, since I want to focus on DC (as has been the case with this thread from the start).
Laptop/notebook

  1. The laptop/notebook market is expected to turn the corner in 2024. Intel's earnings indicate the laptop market has bottomed out.

  2. Next year, a lot of hope is pinned on AI to help the laptop market. A Windows 12 release with AI features might push users to upgrade their laptops - Windows 12: New features, AI experiences, expected rollout, and everything we know so far | Windows Central. The only caveat is that the features need to be convincing enough.

  3. Qualcomm is entering the laptop market with its ARM processors - Qualcomm Snapdragon X Elite Performance Preview: A First Look at What's to Come. Nvidia is also joining the party - https://www.reuters.com/technology/nvidia-make-arm-based-pc-chips-major-new-challenge-intel-2023-10-23/.

  4. More entrants are expected to join because Microsoft's Qualcomm exclusivity deal for Windows on Arm is reportedly ending soon - The Verge.

  5. Bottom line - the laptop/notebook market has bottomed out. There will be market share dilution due to new entrants. Intel is definitely affected… How much is AMD affected? Honestly, anything better than the current situation is good. Intel has some special sauce with OEMs - when Intel asks them to jump, they leap (IMHO). The long-term handholding really shows.

DC CPU
AMD

  1. I expect them to reach 40% revenue share. The current revenue share is 30% according to Forrest Norrod, and that is mostly CSPs. Expect enterprise to open up this year, following in the footsteps of Oracle. AMD Lays Out Almost The Entire Data Center Strategy At UBS Event
  2. Zen 5 is expected to launch mid-year and will most likely be in volume production by 2025.

Intel

  1. Their "5 nodes in 4 years" plan appears to be a tad late, because I recall 2022 was supposed to be the year for Intel 4. Anyway, the end of 2024 is when they promised all 5 nodes would be ready for production - https://www.xda-developers.com/intel-roadmap-2025-explainer/. And as of now, it appears to be on track. https://www.youtube.com/watch?v=SOY0Yh8y-5Q - Ann Kelleher interview; she heads Intel's fabs.
  2. Ready for production is one thing - getting the big whales (the QCs/Apples/Amazons/AMDs/NVDAs) to sign up is another. Not to mention Intel's CEO keeps throwing shade at the competition (NVDA and AMD) while also saying he will get them as customers. We know how that went with the Tower acquisition approval from China: the CEO kept talking about the Chinese threat to the USA to push for local fabs while also asking China to approve the Tower acquisition.
  3. A single slip and they are gone. The fixed costs are going to eat them up.
  4. Even if they get big customers, processors on these nodes are expected only in 2025, because it takes time from supplying PDKs to customers to when those customers finally complete their designs.
  5. As far as 2024 is concerned, we are looking out for Granite Rapids. I am expecting delays considering Intel's track record. Sierra Forest goes against Bergamo. But then, Zen 5 is already sampling to customers, and I expect Intel's parts to not be competitive w.r.t. Turin (the Zen 5 server line). Let us see.
  6. ARM is emerging as a major player among CSPs. MSFT has its own CPU now; every major CSP except Meta has its own CPU now.

DC GPU - The real deal from here on
What does a gold rush look like?
A company is using NVDA GPUs as collateral for a 2.3B$ loan - In Silicon Valley, GPUs are now as good as gold - The Verge

Hardware

  1. NVDA has reportedly pulled in its schedule to bring Blackwell B100 in 2024, most likely H2 2024. This, it appears, is because MI300 is competitive thanks to its large HBM capacity. Good GPU perf is one thing, but being able to feed data to it is another, and MI300 does really well here with its stacked HBM (128GB on MI300A, 192GB on MI300X). B100 comes with HBM3e. This is also NVDA's first chiplet architecture - wondering how NVDA does chiplets while navigating the patent landmine laid out by AMD. Example: AMD's new chiplet GPU patent could finally do for graphics cards what Ryzen did for its CPUs | PC Gamer
  2. B100 could give NVDA a one-two punch: H100/H200 at reduced cost and B100 as the latest and greatest. But does NVDA have fab and packaging capacity to supply all that volume? It appears not. The rumour is:

"AMD's AI chip shipments are expected to significantly reach 30% or more of Nvidia's (CoWoS-based) in 2025…"

  1. NVDA's order backlog gives AMD a window, with its MI300X GPUs starting to ship in Q1 2024. More on projected AMD sales later.
  2. Currently, training is the main driver for GPU sales. This is expected to shift to inference, with a long tail, at some point.
  3. Intel - Gaudi - The majority of sales is being driven by the Chinese market. We need to see how much Intel is able to sell around the US restrictions. Gaudi 3 is coming next year.
  4. AMD - MI300X's theoretical performance is better than H100, but what matters is how well the software extracts this perf. Some colour here - AMD MI300 Performance - Faster Than H100, But How Much?. Not to mention CUDA reduces precision where it can to gain a perf advantage, and similar optimizations. One area where AMD lags is uniformity of performance as the system scales: their interconnect outside the package is ~480GBps, compared to Nvidia's ~900GBps NVLink. Some color here. The bigger 192GB memory in MI300X helps mask this. They are also trying to overcome it by giving 3rd parties access to Infinity Fabric, and we can expect Broadcom to come up with a switch that takes care of the scaling. Time will tell. MI400 will have a more AI-targeted architecture compared to MI300 (which was designed for HPC), so there is a good roadmap too. Not to mention their chiplet architecture lets them get more processors per wafer than Nvidia.

Software

  1. Intel - Unsure where exactly they are, because I do not see any real customers coming in and promising adoption. The published comparisons looked fine. I don't know. They need to work here considering Gaudi 3 is coming next year.
  2. NVDA - everyone knows the CUDA story. NVDA has even gone ahead and started an investment fund to back CSP startups like CoreWeave, not to mention the Nvidia partner program. Full stack… from software platform to hardware. Like Apple.
  3. AMD - ROCm has seen heightened effort from AMD, and it is showing in order wins. One thought I missed earlier when comparing ROCm to CUDA is that ROCm, or any competing software, does not have to do everything that CUDA does. It only has to focus on what CSPs use most in the AI world right now, and optimize for those cases. So achieving a competitive position against CUDA in the current large AI market may not be as difficult as I thought. AMD completely shunned MLPerf benchmarks here. Nvidia responded with this. Then AMD responded back with some newer numbers with this. So while NVDA software will make its way into every nook and corner of AI applications, others only have to target the currently in-demand areas in the medium term.

My investment is in AMD, so I will try to guess AMD's price next year based on current expectations of MI300X deliveries. I did this once earlier here. But we now have more rumours about the actual supply AMD has. That will be the next post.


A small correction on this: Intel's Foveros packaging is done in Malaysia, not Taiwan.

Intel’s ongoing expansion in Penang will soon add the ability to do Foveros packaging in Malaysia.

Intel MTL (Meteor Lake) appears underwhelming at first sight, but it can be Intel's own Zen 1 moment with chiplets (tiles).

On a technical level, it’s hard to say why Meteor Lake has regressed in this test, but the CPU’s performance characteristics elsewhere imply that Intel simply might not have cared as much about IPC. Meteor Lake is primarily designed to excel in AI applications and comes with the company’s most powerful integrated graphics yet. It also features Foveros technology and multiple tiles manufactured on different processes. So while Intel doesn’t beat AMD or Apple with Meteor Lake in IPC measurements, there’s a lot more going on under the hood.

Yes. What is going on under the hood is a chiplet foundation for future CPUs. Not to mention Intel is not going to depend on TSMC for packaging (CoWoS/SoIC etc.); it has its own Foveros packaging tech. Note that packaging is the real bottleneck in high-end GPUs/CPUs right now.

On the current business side, Intel is finding unique ways to reduce opex and get lean. Better than layoffs.

On the other side, AMD has a free hand in 2024 too. While laptops will see other ARM entrants, the datacenter is wide open and AMD comes out on top.

OK, here is my promised valuation exercise for AMD. Willing to learn. None of the following are GAAP numbers, so the PE you see on websites will not match what is here.

Q4 2023 (Oct-Dec)

There are 4 segments in which AMD reports its earnings - Data Center/Client/Gaming/Embedded

Data Center (DC)

  1. 2023 Q1 and Q2 were 1.3B$ each. This was during the lean period of the semiconductor world, when companies were looking for a footing; AI entered the show and CPU investments took a beating. Q3 showed some recovery at 1.6B$, on the backdrop of the Meta deal for Bergamo. Intel also commented in Q3 that their margins are down due to competitive pressure. For Q4, AMD expects a 400M$ revenue addition from the shipment of MI300A to the El Capitan supercomputer; we can treat this as one-time. In addition, MI300X starts to ship end of December/January 2024. Expecting Oracle moving to AMD CPUs (the first real enterprise win) and the current amazing Genoa and Bergamo CPUs to contribute another 400M$. So I am taking 1.6B$ (Q3) + 400M$ of growth = 2.0B$ as the baseline, plus the 400M$ El Capitan as added revenue. That puts my Q4 DC estimate at 2.4B$ without adding any MI300X revenue.

Gaming and Client
I am keeping gaming + client flat. Gaming should benefit from festival sales, but MSFT saw a slump in Xbox while PS5 is doing well - they are expecting a 19% rise from last quarter. PlayStation 5 shipped 4.9m units during Q2 2023 | GamesIndustry.biz. I am really unsure where AMD is in client, since we did not see a lot of Phoenix laptops in time for back-to-school, so I will keep it flat to be safe - 1.5B$ + 1.3B$ = 2.8B$.

Embedded
Embedded is expected to see a downturn, which started in Q3. So Q4 Embedded sales at 1B$.

This makes a total of 6.2B$ for Q4. For 2023 (5.3 + 5.3 + 5.8 + 6.2), that would be ~22.6B$.
Segment wise split for Jan-Dec 2023.
DC 6.6B$
Client 4.7B$
Gaming 6.2B$
Embedded 5.2B$

At an operating margin of 25% (the average of the last couple of years), that is ~5.5B$ in earnings - 3.45$/share.
At the current price of 148$/share - a PE of 43.
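
For anyone who wants to poke at these numbers, here is a minimal Python sketch of the arithmetic above. The ~1.6B diluted share count is my own assumption (roughly what the ~5.5B$ earnings and 3.45$/share figures imply); everything else just restates the segment estimates from this post, so the output lands within rounding of the EPS and PE quoted above (the small gap comes from rounding earnings down to ~5.5B$).

```python
# Rough non-GAAP back-of-envelope for Q4 2023 and FY2023, restating the estimates above.
# Assumption: ~1.6B diluted shares (implied by the ~5.5B$ earnings / 3.45$ per share figures).

q4_segments_b = {
    "Data Center": 2.4,      # 2.0B$ baseline + 0.4B$ El Capitan (MI300A), no MI300X yet
    "Client + Gaming": 2.8,  # kept roughly flat vs Q3
    "Embedded": 1.0,         # downturn that started in Q3 continues
}
q4_revenue_b = sum(q4_segments_b.values())         # ~6.2B$

fy2023_revenue_b = 5.3 + 5.3 + 5.8 + q4_revenue_b  # ~22.6B$

operating_margin = 0.25   # rough average of the last couple of years
diluted_shares_b = 1.6    # assumption, in billions of shares
price_per_share = 148.0

earnings_b = fy2023_revenue_b * operating_margin   # ~5.6B$ (rounded to ~5.5B$ above)
eps = earnings_b / diluted_shares_b                # ~3.5$/share
pe = price_per_share / eps                         # ~42-43 at 148$/share

print(f"Q4 revenue ~{q4_revenue_b:.1f}B$, FY2023 ~{fy2023_revenue_b:.1f}B$")
print(f"Earnings ~{earnings_b:.2f}B$, EPS ~{eps:.2f}$, PE ~{pe:.0f}")
```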


Now, coming to 2024 (Jan-Dec).

  1. Client - I will give it 1.3B$ * 4 = 5.2B$. Client is expected to make a comeback since we are reaching a refresh cycle, which is usually ~3.5 years; the last big purchase wave was in 2021 (when everyone was buying laptops). We can safely say it will not go below 1B$ a quarter (the client macro was doomed in 2023 Q1/Q2, but even then it made 0.7B$ and 1B$; Q3 was 1.5B$). Intel controls this space, but we can expect it will not be as bad as 2023. Expecting the new Intel Meteor Lake processors to be used by OEMs for the enterprise refresh (as is always the case) - Intel will do everything to keep its last real bastion.
  2. Gaming - Not sure where this is going. Lisa Su says the peak is usually the 3rd year after a product launch. I will keep it flat - 6.2B$.
  3. Embedded - expected to see a slump and possibly revive in H2 2024. Say flat or down; call it 10% down - 4.6B$.
  4. Data Center - All else being equal, and removing the 400M$ El Capitan bump from 2023 Q4, we should see 6.2B$ before adding MI300X sales, which I will come to now. MI300X is expected to contribute 2B$ in sales in 2024 as per Lisa Su. From what we see in the news reports, it appears she is being very, very conservative. How?
    Various leakers/estimates show that AMD is going to ship 200,000-400,000 units of MI300X. AMD MI300 Ramp, GPT-4 Performance, ASP & Volumes & https://link.medium.com/FlWlPnROMDb. I arrived at roughly 15000$ per unit for MI300X (a ballpark between the H100, which sells for ~30000$ per unit, and the MI250X, which goes for around 8000$ per unit). Selling 200,000-400,000 units at 15000$ gives 3B$-6B$ in sales. Let us say 4B$, a bit under the midpoint, since by 2024 Q4 the B100 (Nvidia Blackwell) makes its entry and the 400,000 may not materialize (though it could, since Nvidia is still supply constrained).
  • CSPs 100% want an alternative - this works in AMD's favour.

  • Nvidia lost its Chinese market due to restrictions - that can free up supply elsewhere, so that works against AMD.

  • Nvidia pulled in its cadence and B100 is coming at the end of 2024 - that means AMD has to get moving on MI400X.

  • Lisa Su is conservative. In 2021, she guided low at the start and kept raising the guide as earnings came in.

Overall, we can assume they hit 250,000-300,000 units, so ~4B$ in sales (at 15000$/unit) for MI300X alone makes sense (a small arithmetic sketch follows the 2024 totals below).
So overall, for Data Center, we can expect 6.2B$ + 4B$ = 10.2B$ in sales.
Segment wise split for Jan-Dec 2024.
DC 10.2B$
Client 5.2B$
Gaming 6.2B$
Embedded 4.6B$

Total 26.2B$ Revenue in 2024.
At an operating margin of 25% (the average of the last couple of years), that is ~6.5B$ in earnings - 4$/share.
At the current price of 148$/share - a PE of 37.
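
As promised above, here is a minimal sketch of the 2024 model, including the MI300X unit economics. The 15000$ ASP, the 250k-300k unit range, and the ~1.6B share count are my rough assumptions (discussed in the list above), not company-disclosed numbers; the output lands within about a point of the 4$/share and PE 37 quoted above.

```python
# 2024 back-of-envelope (non-GAAP), restating the assumptions above.

# MI300X unit economics (assumptions: ~15000$ ASP, 250k-300k units shipped in 2024)
asp_usd = 15_000
units_low, units_high = 250_000, 300_000
mi300x_low_b = units_low * asp_usd / 1e9    # ~3.75B$
mi300x_high_b = units_high * asp_usd / 1e9  # ~4.5B$
mi300x_mid_b = 4.0                          # call it ~4B$

segments_2024_b = {
    "Data Center": 6.2 + mi300x_mid_b,  # CPU base (2023 DC minus El Capitan bump) + MI300X
    "Client": 5.2,                      # 1.3B$ x 4 quarters
    "Gaming": 6.2,                      # held flat
    "Embedded": 4.6,                    # ~10% down
}
revenue_2024_b = sum(segments_2024_b.values())  # ~26.2B$

operating_margin = 0.25
diluted_shares_b = 1.6                          # assumption, in billions of shares
price_per_share = 148.0

eps = revenue_2024_b * operating_margin / diluted_shares_b  # ~4$/share
pe = price_per_share / eps                                  # ~36-37

print(f"MI300X sales range ~{mi300x_low_b:.2f}-{mi300x_high_b:.2f}B$ (using ~{mi300x_mid_b}B$)")
print(f"2024 revenue ~{revenue_2024_b:.1f}B$, EPS ~{eps:.2f}$, PE ~{pe:.0f}")
```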

Looking beyond

  1. Expecting MI400X to be even better than MI300X, which was never originally designed for AI. MI400X will have IP blocks made specifically for AI.
  2. Not to mention even better software parity.
  3. The only open-source alternative to NVDA's walled garden.
  4. No one knows where Intel Gaudi 3 and its mysterious customer stand (some rumours said they are shipping in volume to a customer). Who is going to buy it knowing that it is a dead-end roadmap, since it will get merged into Falcon Shores - a different platform altogether?
  5. This leaker again says AMD's AI shipments will be 30% or more of NVDA's in 2025, so supply is not going to be a problem. Considering that AMD's chiplets give better yield per die than Nvidia's large monolithic dies, AMD can bring prices down to stay competitive against the CUDA moat while still maintaining margins. Is Nvidia Blackwell a chiplet design or a multi-chip module (which would mean reticle limits and poor yields again)?
  6. AMD is making strategic partnerships with others like Broadcom and giving 3rd parties access to Infinity Fabric design IP. This will have a long-term strategic impact and will definitely help it reach the uniform scaling that NVLink gives.
  7. The perception is that Nvidia is competing with CSPs while AMD is working with them.
  8. Current DC CPU revenue share is 30% according to Forrest Norrod. AMD is heading to 40% before Intel can respond to Genoa and Bergamo.

@kondal_investor Requesting your valuable feedback since I am weak at financials and valuation. Others too - please chime in so that I can learn a thing or two.
@harsh.beria93 Looking forward to your inputs on how to look at AMD from a valuation standpoint. Thanks.


I did not give any kind of forward value in my earlier post :slight_smile: Let me do that now.

As per Lisa Su, by 2027 we are looking at a 400B$ AI GPU market. But this seems a little too big w.r.t. other predictions - The Tidal Wave Of Rising GPU TAM Raises All Boats.

If AMD does 4B$ of AI GPU sales this year, say the growth is 25% per year for AI GPUs (being very conservative), and we let the other businesses grow at 10%.

Here is the forward PE for each year at the current price (with all other markets holding stable, since nothing matches the 2023 doom and we can expect Embedded to revive by 2025).
Forward PE at 148$/share:
2024 - PE 37 - @ 4B$ AI GPU sales + 22B$ others @ 25% operating margin
2025 - PE 32 - @ 5B$ AI GPU sales + 24.2B$ others @ 25% operating margin
2026 - PE 28.3 - @ 6.25B$ AI GPU sales + 26.6B$ others @ 25% operating margin
2027 - PE 25 - @ 7.8B$ AI GPU sales + 29.2B$ others @ 25% operating margin
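
And a small sketch of how the forward PE rows above fall out of the growth assumptions (25% AI GPU growth, 10% for everything else, 25% operating margin, 148$/share). The ~1.6B diluted share count is again my own assumption; with it, the computed PEs land within about a point of the table above.

```python
# Forward PE sketch at 148$/share, using the growth assumptions above.
# Assumption: ~1.6B diluted shares and a constant 25% operating margin.

price_per_share = 148.0
diluted_shares_b = 1.6
operating_margin = 0.25

ai_gpu_b = 4.0    # 2024 AI GPU sales, in B$
others_b = 22.0   # 2024 revenue from everything else, in B$

for year in range(2024, 2028):
    revenue_b = ai_gpu_b + others_b
    eps = revenue_b * operating_margin / diluted_shares_b
    print(f"{year}: AI GPU {ai_gpu_b:.2f}B$ + others {others_b:.1f}B$ "
          f"-> EPS ~{eps:.2f}$, forward PE ~{price_per_share / eps:.1f}")
    ai_gpu_b *= 1.25   # AI GPU sales grow 25% per year
    others_b *= 1.10   # other segments grow 10% per year
```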

Year 2027 predictions on forward PE

On average, I am looking at a 150-200$ price range till 2027 at the 2025 forward PE, without including:

  • AMD gaining DC market share from intel
  • AMD going beyond 10% AI GPU market share - One leaker says 30% of NVDA shipments in 2025.
  • GAAP financials - which include amortisation of Xilinx acquisition costs. This should get better with time, though I do not know precisely how it ramps down.
  • What else am I missing?

@harsh.beria93 @kondal_investor Can you please share your valuable inputs on these calculations? Unsure what I might be missing.

Q4 2023
Revenue $6.2B increased 10% YoY and 6% QoQ
Gross Margin 47% flat on QoQ basis
Data Center revenue $2.3B up by 43% QoQ
Gaming and Embedded revenue still suffering, down 9% and 15% respectively on a QoQ basis

FY 2023
Revenue $22.7B down by 4%
Data Center Segment Revenue Up 7% y/y,
Embedded Up 17% y/y
Gross Margin 46% Up by 1%

Outlook for Q1 FY24
AMD expects revenue to be approximately $5.4 billion, plus or minus $300 million. Sequentially, AMD expects Data Center segment revenue to be flat, with a seasonal decline in server sales offset by a strong Data Center GPU ramp. Client, Embedded and Gaming segment sales are expected to decline sequentially, with semi-custom revenue expected to decline by a significant double-digit percentage. Non-GAAP gross margin is expected to be approximately 52%.

Overall it was a good result on the DC side (43% up QoQ); however, Mr. Market was expecting better guidance. Lisa Su said MI300X exceeded expectations based on strong customer demand, qualifications, and the manufacturing ramp.


The chip designer’s CPU share grew a half point to 31.1 percent in 2023 while Intel’s decreased as much to 68.9 percent when counting total CPU shipments between both companies last year, according to CPU-tracking firm Mercury Research.

@kenshin I came across this thread while trying to understand Nvidia and why its shares have gone "Tesla" crazy. You started this conversation in 2020 and seemed to have believed in AMD back then. Four years later, what are your thoughts on Nvidia and the future of this industry? Personally, I believe the company that is closest to the consumer is going to make the most money, not the chip manufacturers. The question is, who is going to be able to monetize the power of AI and come up with an application convincing enough that we want to pay for the service?


who is going to be able to monetize the power of AI and come up with an application convincing enough that we want to pay for the service

Read the following report and make up your mind.

MSFT comes first. Google is missing out a little, but with some retraining the bias may be fixed, I am guessing.
There are many companies coming up as you go up the software layers. Do read the above report to understand the path of the revenue.

FWIW - Nvidia is trying to be a cloud service provider; it is not just a chip designer (they do not manufacture, FYI). They sell a software platform, and their hardware is part of this platform. If you want any kind of long-term investment in the AI theme, I would say Nvidia is a no-brainer. You can SIP too.

AMD has been on a tear - up ~70% in the last 3 months.

Man, companies like AMD need too much attention. I am getting tired of this, actually. The swings in stock price need some mental fortitude. I am getting old for this.

I came for the datacenter CPU biz but am now staying for their AI compute play. The pace of this has caught me off guard (in a good way, though).

I was in AMD all along because they were the only company that had a decent CPU and GPU play.

But now look at Nvidia: they have ARM CPUs in their H100/H200 systems, and AMD has the competitive MI300X.
MI300X is the fastest ramp I have ever seen (in my 17 years in this industry) for such a complex design. I have never seen a product like MI300X out of the door at such a pace by any company, from announcement to ramp to customer adoption. I put it at roughly 8 months.

I think AMD has a distinct cost advantage vs Nvidia as the game drags on: CHIPLETS. The flexibility that chiplets give AMD is such that they could remove the CPU dies from MI300A (designed for the El Capitan exascale supercomputer), plonk in some extra GPU dies, and deliver a product for a new market, all in an 8-month time frame. Unbelievable.

The recent Dell earnings call confirms that MI300X is indeed shipping and has an order backlog.

https://investors.delltechnologies.com/static-files/dcbb932e-8e25-49a9-a508-61e454f45ce5

Also, the market is warming up to the fact that inference will see larger deployment than training. Inference is where MI300X, with its large on-package memory, shines.


It is happening already

The way I think about this is cost. Currently the game is among rich players that can spend on AI. But as production capacity ramps up (billions are being poured into silicon fabs), supply is going to spike and things like OpenAI Sora are going to get cheap. So the entire media industry is the target if you look at OpenAI Sora as a service.

One way to explore the future: pick any industry and just look at Nvidia's presence. Example: google "nvidia healthcare" -
Solutions for Healthcare and Life Science Industries | NVIDIA - after that you can explore companies in this space, their adoption rates, their AI use reports, etc. I have no idea beyond that. The reason I suggest an "nvidia + domain" search is that they have their hand in almost all science domains. They were recently invited to a healthcare conference (yes… a chip company was invited to a healthcare conference).


I was just watching a video about AI and how its evolution/development needs to be controlled.

If AI is going to play such a big role in the future of humanity, then the tech industry as a whole is going to become even more valuable/important. It's also a safer bet for the average Joe than taking individual calls. Like you said, just look at the Dell story: the stock was going nowhere for the last few years, they release a hot story, and suddenly the market thinks they are way more valuable.


I do not have a subscription… so I cannot read the full article.

A very nice presentation summing up the semiconductor supply chain.


Which Tata Group Company Will Manufacture Semiconductors?

Sasken has made a new acquisition of a small startup called "Anups Silicon" for 38 crore. The management interview is full of hot, buzzworthy words like "chips, foundry, silicon design, semiconductors".

https://www.youtube.com/watch?v=TC39QZjKtXE

As far as services are concerned:
There is an extreme shortage of experienced professionals in this space. The few big companies have not really contributed as much as they could in this area. Why? Large MNCs have processes that abstract away the engineering know-how needed to do the job to a great extent. So we are seeing professionals with 10-15 years of experience doing silicon validation in top companies with close to nil understanding of electronics/electrical fundamentals.

I am also seeing services companies trying to level up their silicon game. I have seen one of the well-known companies fail to deliver on its commitments too. This is not an easy area to enter; it requires engineering knowledge/skills. The acquisition makes sense (in a way… looking at it from afar).


Just a word of caution: I long ago stopped following what this YouTube channel (Moore's Law Is Dead) posts. A lot of his predictions do not check out later, and of late he has also been taken in by false information from trolls.


One of India's security advisors has commented that new wafer-sized chip technology is available now and that it will be the end of GPUs. Here is the article:
