For advanced chip design, AI is the game changer
As we progress into a more digitally connected world, the demand for next-generation chips is on the rise. Today, computing systems must process data and perform complex calculations at high speeds to support hyperconnected devices such as smartphones, wearable devices, autonomous vehicles, and a plethora of other electronic gadgets that we use every day.

This has also created a new class of challenges that chip designers must explore and overcome. With the application of artificial intelligence (AI), semiconductor and systems companies can not only design better chips, but also speed time-to-market and save cost. Deloitte Global predicts that the world’s leading semiconductor companies will spend $300 million on internal and third-party AI tools for designing chips in 2023, and that this figure will grow by 20% annually for the next four years to surpass $500 million in 2026.
Artificial intelligence and big data are transforming the world around us, just as they are transforming the way we think about electronic design automation (EDA). EDA is the backbone of chip design: it encompasses the software, hardware, and IP that chip designers use to create cutting-edge semiconductors. The latest innovation in EDA is the integration of AI into the software. This is a game changer because it boosts engineering productivity and shrinks time to market, both critical considerations in chip design. AI gives users automated, intelligent design insights and the ability to greatly scale engineering team productivity.
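To make that idea concrete, here is a minimal sketch of the kind of automated design-space exploration an AI-assisted flow performs. Everything in it is a simplified assumption: the knob names, the value ranges, and the evaluate_ppa() stub are hypothetical placeholders for launching real synthesis and place-and-route runs and scoring the results.

```python
# Hypothetical sketch of AI-assisted design-space exploration.
# The knobs, value ranges, and evaluate_ppa() stub are invented for
# illustration; a real flow would run the tools with the chosen settings
# and score the resulting power, performance, and area.
import random

SEARCH_SPACE = {
    "target_clock_ns": [0.8, 1.0, 1.2],
    "placement_effort": ["medium", "high", "extreme"],
    "max_fanout": [16, 32, 64],
}


def evaluate_ppa(settings: dict) -> float:
    """Stand-in for running the tools and returning a combined PPA score."""
    return random.random()  # placeholder: higher is better


def explore(num_trials: int = 20) -> tuple[dict, float]:
    """Randomly sample tool settings and keep the best-scoring configuration."""
    best_settings, best_score = None, float("-inf")
    for _ in range(num_trials):
        candidate = {knob: random.choice(values) for knob, values in SEARCH_SPACE.items()}
        score = evaluate_ppa(candidate)
        if score > best_score:
            best_settings, best_score = candidate, score
    return best_settings, best_score


if __name__ == "__main__":
    settings, score = explore()
    print(f"Best settings found: {settings} (score={score:.3f})")
```

In practice the random sampling above would be replaced by a learned search strategy, but the productivity argument is the same: the optimizer explores far more of the settings space than an engineer could by hand.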
Previously, when a chip was taped out, the design data was typically deleted to make way for the next project. That legacy data holds valuable learnings, and with the application of AI it has become practical for engineering teams to access those learnings and apply them to future designs. This enables optimal engineering productivity and, ultimately, more predictable, higher-quality product outcomes.
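As a rough illustration of what mining that legacy data might look like, the sketch below trains a simple regression model on metrics archived from past tape-outs and uses it to predict an outcome (here, total power) for a proposed new design. The feature columns, the numbers, and the choice of a plain linear model are all illustrative assumptions, not a description of any particular tool's approach.

```python
# Hypothetical sketch: mining archived tape-out data to guide a new design.
# The feature columns, values, and plain linear model are illustrative
# assumptions, not any vendor's method.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows archived from previous projects:
# columns = [cell_count_millions, clock_freq_ghz, utilization]
legacy_features = np.array([
    [12.0, 1.2, 0.68],
    [18.5, 1.5, 0.72],
    [25.0, 2.0, 0.75],
    [30.0, 2.2, 0.80],
])
legacy_power_watts = np.array([3.1, 4.6, 6.8, 8.2])  # measured after tape-out

# Learn the relationship between design characteristics and the outcome.
model = LinearRegression().fit(legacy_features, legacy_power_watts)

# Predict the outcome for a proposed configuration before running the flow.
proposed_design = np.array([[22.0, 1.8, 0.74]])
predicted_power = model.predict(proposed_design)[0]
print(f"Predicted power for the proposed design: {predicted_power:.2f} W")
```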
Cloud-enabled EDA
Enterprises require AI platforms that are designed to run on on-premises equipment and are also cloud-enabled. By offloading the inherently compute-intensive AI algorithms to advanced high-performance servers in the cloud, companies can free up their on-prem capacity for more traditional EDA workloads. AI and ML workloads in EDA and systems will power the next explosion of cloud compute.
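A minimal sketch of that hybrid routing idea is shown below. The Job fields, the core-hour threshold, and the destination names are hypothetical assumptions used only to illustrate the policy described above: burst compute-heavy AI workloads to elastic cloud capacity and keep routine EDA runs on existing on-prem machines.

```python
# Hypothetical sketch of a hybrid on-prem/cloud routing policy for EDA jobs.
# The Job fields, threshold, and destination names are assumptions, not a
# real scheduler API.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    kind: str           # e.g. "ai_optimization" or "traditional_eda"
    core_hours: float   # rough estimate of the compute the job needs


def route(job: Job, on_prem_free_cores: int) -> str:
    """Decide where a job should run under a simple hybrid policy."""
    if job.kind == "ai_optimization" or job.core_hours > 10_000:
        return "cloud"      # burst compute-heavy AI/ML work to elastic capacity
    if on_prem_free_cores > 0:
        return "on_prem"    # keep routine EDA runs on existing machines
    return "cloud"          # overflow to the cloud when on-prem is saturated


if __name__ == "__main__":
    jobs = [
        Job("AI-driven PPA optimization", "ai_optimization", 50_000),
        Job("nightly regression", "traditional_eda", 800),
    ]
    for job in jobs:
        print(f"{job.name} -> {route(job, on_prem_free_cores=128)}")
```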
Use of the cloud for EDA tasks is growing rapidly as the industry moves towards advanced nodes and pursues better power, performance, and area (PPA), higher bandwidth, and lower latency.
As we head towards 3nm and below, compute infrastructure requirements increase by multiple orders of magnitude, which in turn drives the need for more advanced chips. It is a virtuous cycle, and it is easy to see why EDA in the cloud is rapidly becoming a necessity, even for companies with near-unlimited on-prem capacity.
With the adoption of the cloud, companies are also discovering that the performance of their current EDA tools increases by an order of magnitude; everything just works that much faster. And because the next generation of processors is often available instantly only in the cloud, they can increase engineering productivity and slash costs and project timelines.
In summary, AI and the cloud together bring unprecedented functionality, scale, and access, enabling the next wave of innovation in semiconductor and electronics design.
Alok Jain is vice president of R&D at Cadence Design Systems.