Sat Jan 13 2024
On AI hardware, the most prominent content growth will be in: Computing (servers), Throughput (switches), and Connectivity (transceivers and interconnects). Due to space limitations, we will cover only computing this time.
Google is using TPUs (developed with Broadcom), Microsoft is using GPUs (as of now), and Amazon uses a mix of self-developed ASICs (working with the Taiwanese ASIC enabler Alchip) and GPUs. Facebook and Microsoft are said to be engaging different partners for self-developed ASICs, but their front-end design capabilities lag Amazon's and Google's. Meanwhile, we believe China is also finding ways to develop its own AI chips despite various restrictions. Some players are downgrading from 7nm to 28nm, combined with chiplet technology, in order to get past the restrictions.
We see ASICs in the datacenter as a key trend, but meanwhile Nvidia, AMD, and Intel GPUs/ASICs are also evolving fast to compete, launching a greater variety of chips. This is just like when the tablet was introduced 12 years ago: Nvidia, TI, and Intel all launched their own main computing chips (if you still remember), but in the end the ecosystems of ARM and Apple won out.
The key for an ASIC to win out, in the end, is still what TSMC always mentions: the best "PPA" (Power, Performance, Area, where area is a proxy for cost). Over the next 6-12 months we may hear about more chip launches, but within 2 years we expect market dynamics to become more concentrated, with the inefficient players eventually giving up on mass production of their chips.
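To make the PPA idea concrete, here is a minimal sketch comparing two hypothetical chips on a simple figure of merit (performance per watt per mm² of die). All chip names and numbers below are made up for illustration; real evaluations weight the three axes by workload and use actual product specs.

```python
# Illustrative only: chips and numbers are hypothetical, not real products.
from dataclasses import dataclass

@dataclass
class Chip:
    name: str
    perf_tflops: float   # sustained throughput
    power_w: float       # board power
    die_area_mm2: float  # die area, used here as a proxy for cost

def ppa_score(c: Chip) -> float:
    """Naive figure of merit: performance per watt per mm^2 of silicon.
    Higher is better under this (simplistic) weighting."""
    return c.perf_tflops / (c.power_w * c.die_area_mm2)

# A big general-purpose GPU vs. a leaner workload-specific ASIC (made-up specs)
gpu = Chip("GeneralGPU", perf_tflops=400.0, power_w=700.0, die_area_mm2=800.0)
asic = Chip("CustomASIC", perf_tflops=300.0, power_w=350.0, die_area_mm2=500.0)

for c in (gpu, asic):
    print(f"{c.name}: {ppa_score(c):.6f} TFLOPS per W*mm^2")
```

Under these assumed numbers the ASIC scores higher despite lower peak performance, which is the point of the PPA framing: raw throughput alone does not decide the winner once power and silicon cost are counted.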
We will talk about switches and transceivers next time. Any feedback is welcome!