Semiconductor world - CPU/GPU Wars

TSMC reportedly intends to expand its CoWoS capacity from 8,000 wafers per month today to 11,000 wafers per month by the end of the year, and then to around 20,000 by the end of 2024. But it looks like even then Nvidia will use around half of the capacity that TSMC will have, DigiTimes claims, citing sources familiar with the matter. Meanwhile, AMD is also trying to book additional CoWoS capacity for next year.

Megatrends like 5G, artificial intelligence (AI), and high-performance computing (HPC) are driving adoption of highly complex multi-chiplet designs like AMD’s Instinct MI300 or Nvidia’s H100. It is widely considered that Nvidia is the main beneficiary of the thriving demand for AI-bound compute GPUs, and that it controls over 90% of compute GPU shipments for new deployments. As a result, TSMC is struggling to meet demand for its CoWoS advanced packaging solutions.

TSMC currently has the capacity to process roughly 8,000 CoWoS wafers every month. Between them, Nvidia and AMD utilize about 70% to 80% of this capacity, making them the dominant users of this technology. Broadcom is the third-largest user, accounting for about 10% of the available CoWoS wafer processing capacity. The remaining capacity is distributed among some 20 other fabless chip designers.
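A rough split of the wafer numbers implied by those shares (a sketch only; the per-company figures are derived from the percentages quoted above, not reported directly):

```python
# Apportion the reported ~8,000 CoWoS wafers/month using the shares in the text.
capacity = 8_000
low, high = round(0.70 * capacity), round(0.80 * capacity)  # Nvidia + AMD combined
broadcom = round(0.10 * capacity)                           # ~10% share
others = capacity - high - broadcom                         # remainder at the high end
print(f"Nvidia+AMD: {low}-{high}, Broadcom: {broadcom}, ~20 others share: {others}")
```

At the high end, that leaves roughly 800/20 ≈ 40 wafers a month per remaining designer, consistent with the reported scramble for capacity.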

2 Likes
  • “I think this is an opportunity for us to write the next chapter of the AMD growth story,” Su told Fortune in a mid-September interview. “There are so few companies in the world that have access to the [intellectual property] that we have and the customer set that we have, and the opportunity frankly to really shape how AI is adopted across the world. I feel like we have that opportunity.”

  • “There will certainly be scenarios in 2024 when we would imagine Nvidia GPUs are sold out and customers only have access to AMD, and AMD can win some business that way, just based on availability,” says Morningstar tech sector director Brian Colello.

  • Gregory Diamos, cofounder of the AI startup Lamini and a former CUDA architect at Nvidia, says he believes AMD is closing the gap. “AMD has been putting hundreds of engineers behind their general-purpose AI initiative,” he says.

  • The forthcoming MI300-series data center chip combines a CPU with a GPU. “We actually think we will be the industry leader for inference solutions, because of some of the choices that we’ve made in our architecture,” says Su.

  • But many analysts believe the bigger part of the AI market lies not in training LLMs, but in deploying them: setting up systems to answer the billions of queries that are expected as AI becomes part of everyday existence. This is known as “inference” (because it involves the AI model using its training to infer things about the fresh data it is presented), and whether GPUs remain the go-to chips for inference is an open question.

  • “It has become abundantly clear, certainly with the adoption of generative AI in the last year, that this [industry] has the space to grow at an incredibly fast pace,” she said. “We’re looking at 50% compound annual growth rate for the next five-plus years, and there are few markets that do that at this size when you’re talking about tens of billions of dollars.”
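A quick back-of-the-envelope on what a 50% CAGR compounds to over five years (the $45B starting figure is purely hypothetical for illustration, not from the quote):

```python
# Compound a hypothetical market size at 50% CAGR for 5 years.
base = 45.0          # hypothetical starting market size in $B (illustrative only)
cagr = 0.50
years = 5
final = base * (1 + cagr) ** years   # 1.5^5 ≈ 7.6x
print(f"~${final:.0f}B")             # ~$342B
```

In other words, "tens of billions" growing at 50% a year implies a market several times larger within five years, which is the point Su is making about the rarity of markets this size with that growth rate.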

1 Like

The question remains: how can we play the semiconductor industry in the Indian market?

The first line of the first post in this thread makes it clear this thread is not for Indian-listed companies. I can add a caveat: “unless India produces companies that make CPUs/GPUs and sell them”.

1 Like

Intel's asset fire sale continues. Keeping only the fabs seems to be the end goal (IMHO). At this rate, they might have planned spinning off the design team too.
They are going to IPO Altera, which they acquired in 2015.

1 Like

Note she is actually sandbagging MI300 here by only saying >$2 billion. We will share our numbers below, but note there is supreme visibility into AMD's MI300 supply chain because of its complexity: it takes ~7 months from the moment TSMC starts working on the wafers for AMD to have an MI300X 8-GPU baseboard ready to ship.

1 Like

I agree with the author that AMD is not the ‘turnaround story’ anymore. The reason I am still sticking with AMD is that this is the industry I know, plus there is still steam left for AMD in AI (AMD’s Instinct GPU business is coiled to spring). They are the only alternative to Nvidia for programmable GPUs in AI. I will exit as soon as there is a slip in execution. Heavily invested. On another note, Intel fits the bill of a ‘turnaround story’ better, but maybe around $20 (IMHO) is the level to throw some money in and forget. Currently, though, they have a very short runway to fly with their fabs.

https://twitter.com/realmemes6/status/1719711419191427354

This is about why I am using this pop to get $AMD puts this morning, why I think that way & how I could be wrong & how this situation is unusual and risky. I’ve followed $AMD since 2016, it was my main (sometimes only) holding into 2020. This is a difficult moment. 1/

1 Like
  1. New Microsoft Maia AI Accelerator: better than AWS, but less HBM memory than NVIDIA and AMD for large AI model training and inference
  2. New AMD MI300 instances for Azure: A serious challenger to NVIDIA H100
  3. New NVIDIA H200 instances coming: more HBM memory
  4. Maia is built on TSMC 5nm and has strong TOPS and FLOPS, but it was designed before the LLM explosion (it takes ~3 years to develop, fab, and test an ASIC). It is massive, with 105B transistors (vs. 80B in the H100). It cranks out 1,600 TFLOPS of MXInt8 and 3,200 TFLOPS of MXFP4. Its most significant deficit is that it has only 64 GB of HBM, but a ton of SRAM.
  5. Microsoft Cobalt Arm CPU for Azure: 40% faster than incumbent Ampere Altra
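On point 4, a rough sketch of why 64 GB of HBM is the pinch point for large-model work (a minimal illustration; the model sizes and datatypes below are generic assumptions, not Maia benchmarks):

```python
# Rough weight-memory footprint for large models (weights only; ignores
# KV cache and activations, which make the real requirement even larger).
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """GB needed just to hold a model's weights."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

maia_hbm_gb = 64  # Maia's reported HBM capacity
for params, dtype, bpp in [(70, "FP16", 2), (70, "INT8", 1), (175, "FP16", 2)]:
    need = weight_memory_gb(params, bpp)
    verdict = "fits on one chip" if need <= maia_hbm_gb else "needs multiple chips"
    print(f"{params}B params @ {dtype}: {need:.0f} GB vs {maia_hbm_gb} GB -> {verdict}")
```

Even a 70B-parameter model at FP16 (140 GB of weights) spills well past 64 GB, which is why the HBM capacity, not the TFLOPS, is flagged as Maia's most significant deficit.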
2 Likes

Regarding point 5: note that it is enabled by Arm Neoverse CSS (Compute Subsystem, essentially a ready-made boilerplate platform).

The basic idea is that Arm is providing not just the Core and fabric IP, but also ready-built IP blocks of many cores. The key building block Arm showed during that announcement was a 64-core IP block arranged in dual core compute tiles.

Arm is enabling CSPs to ‘backward integrate’, like Apple did with its own silicon.

1 Like

AMD is also diversifying CPU production with Samsung's foundry.

1 Like

A good report on what is happening in AI.

3 Likes

I would say this is a big win for Samsung if true. We are not yet sure whether this is for the IO die or the compute die.
My bet is that it is for the IO die, which does not shrink well with newer nodes; compute dies will still go to TSMC. AMD can do this because chiplets let them mix foundries for the same processor.

Medium term, AMD will benefit. Long term, the whole industry will benefit from having more players.

EDIT: I read just now that Samsung's 4nm is GAAFET (though Samsung officially introduced gate-all-around only at 3nm). GAA is the next big phase in process technology, so it appears they have beaten Intel to GAAFET orders. Intel's GAA node (roughly 2nm-class) is targeted for next year, but I doubt they have any big orders yet, and yield numbers are not known, while there are rumours of improving yields at Samsung.

1 Like

I stand corrected. I said “chiplets allow companies to mix foundries for a single processor”, but I don't think we are there yet. Read the article above to understand why.

Very likely AMD will fab one of their non-high-end processors at Samsung.

Polymatech Electronics is coming out with an IPO soon.

It looks like a good company. Opinions, please.

1 Like

It sure looks interesting, with a long history as mentioned on their website; it was earlier a Japanese company. I don't know how safe it is to buy shares from the open market, but it is quoting at around Rs 730 now.

Their DRHP is very exhaustive, with a report from Care Ratings on the industry outlook. It is available on their website as well.

This throws some light on the recent history of the company:

They have no prior experience in the products they are currently in; the parent company's financial health is also a concern.

The company started earning revenues only from 2021 and is not cooperating with credit rating agencies (a major red flag for me).
https://www.crisilratings.com/mnt/winshare/Ratings/RatingList/RatingDocs/PolymatechElectronicsPrivateLimited_March%2030,%202023_RR_316143.html

1 Like

I suppose when one cannot find a thread that discusses a subject, we post in a thread that has some matching words. Polymatech has nothing to do with this thread. Lay off, guys. Start a new thread or find another generic semiconductor thread.

2 Likes

https://www.bloomberg.com/news/videos/2023-12-06/amd-ceo-says-compute-demand-is-driving-opportunity-video

Wow. She is saying $2B is a clear line of sight, and they have supply beyond that. “We plan for success,” she says when asked about supply constraints beyond $2B. Nvidia has backlogs till 2025. This is a bumper for AMD. Unbelievable. I invested in AMD for its CPU lead over Intel and ended up staying for AI (the DC GPU).

A good article on what happened at AMD's AI day, and on the implications of open source and of Microsoft/Dell/Meta/Supermicro being on stage. Stellar execution till now.

Out of words.

4 Likes

@kenshin Looks like good days ahead for AMD, as the pipeline seems visible and execution is on track. I see a lot of media coverage now.

Kudos to your deep conviction.