
‘Heterogeneous compute capabilities are needed to democratise AI,’ Intel top executive says

For several years now, Intel has lagged its competitors in advanced chip making. To close the yawning gap, CEO Pat Gelsinger embarked three years ago on an ambitious development roadmap to deliver five nodes in four years: Intel 7, Intel 4, Intel 3, Intel 20A and Intel 18A. At that point, there was no ChatGPT, and GPU chipmaker Nvidia’s market capitalisation was under a trillion dollars.

Fast forward to 2024: Nvidia leads the semiconductor industry with a market value of over two trillion dollars, making chipsets for the power-hungry machines that train large language models. And OpenAI’s chatbots have set in motion an AI revolution no one expected even two years ago.

Speaking exclusively to The Hindu on the sidelines of the Intel AI Summit in Bengaluru, Gokul Subramanian, Intel India’s President and VP, Client Computing Group, shared how the company is implementing its roadmap, India’s role in it, the importance of the AI PC, and competition in the chip business.

The Hindu: Intel’s five-nodes-in-four-years roadmap was an ambitious one. How is the company implementing it?

Gokul Subramanian: We’re already ramping products on 4nm and we’ll start manufacturing on the 3nm node by the end of this year. By 2025, we will move into 18A, and that is when we’ll regain our leadership. So far, we’ve been firing on all cylinders. It’s not easy to do that in three years, but we are extremely committed and the progress has been really good.


TH: TSMC is already shipping 3nm chips. How do you see that in the context of competition?

GS: At Intel’s Foundry Day about two weeks back, we spoke about how innovative 18A will be. It’s about a system of chips, which means we’re looking at a few things. Through the system of chips, we’re bringing a variety of capabilities together in a heterogeneous manner. With 18A, you will see a number of packaging technologies coming out, including backside power delivery, which is going to give a tremendous boost in performance.

We’re delivering power from the [chip’s] back side, which allows us to reach the transistors in a more compact and dense way without all of the signal and power integrity issues. So that’s a big thing we are doing uniquely, based on our packaging technologies.

TH: Where does India fit in your overall strategy on developing next generation chips?

GS: There are many aspects to manufacturing: SoC, IP, packaging, and foundry services. If you take Intel India, we collaborate pretty much globally across product lines. We have teams working on client products, data centre products, and network and edge products. We also have teams that are part of the foundry services. They’re engaged in this journey both as a foundry business and as fabless product teams that use the libraries coming out of the foundry for their products. We don’t do a contribution breakdown for every country or region, but the engineering centres in India have contributed to every product line.

TH: Given Intel has no plans to set up a foundry in India, how will the company help India play a significant role in semiconductor manufacturing?

GS: We make passives, PCBs, resistors, capacitors and chassis material. Intel has invested a lot of time building an ecosystem. Think about Taiwan three decades ago and compare it to what we have been trying to do over the last 3-4 years with the [original design manufacturer] ODM ecosystem, or the manufacturing ecosystem that builds products together. We’ve been working with that ecosystem, giving them early access to [products], providing multiple phases of training, and trying to solve the challenges there. We have gotten to a point where eight ODMs are going through the entire cycle and are able to tie that back into business, where they are able to sell it. The whole idea of doing electronics product manufacturing is to increase the domestic value.

And the domestic value goes into everything from a connector to a battery to the [very large-scale integration] VLSI fab; more than 65% of the components are not VLSI-specific. But it is vital to the nascent ODM ecosystem. And they are progressing really well; we had a lot of them [ODMs] participating and building using the 4th Gen Xeon. Now, they are ready to build with the 5th Gen Xeon as well. In fact, the 4th Gen processor was indigenously made. We have several laptop designs done by these ODMs, and we’ve been collaborating on others. Intel is kind of leading the way on this compared to some of our competitors.

TH: What’s your outlook on talent within the country?

GS: Shifting from a service-oriented economy to a manufacturing-led economy is a huge transition. In general, if you were to go back five or ten years, most people doing engineering would pick either electrical engineering or computer science, and then maybe mechanical and civil engineering as a third option. When you have a manufacturing economy, the kinds of skills and engineering and technical capabilities that open up are huge, because it takes a tremendous amount of engineering spanning electrical, computer, chemical, materials science, mechanical, industrial, robotics, and supply chain. And there is also the vocational and hands-on training.

Obviously, the automotive and phone manufacturing industries, which preceded this [chip making industry], have gone through that process already. So, we are in a good space now, but we still think there are a lot more skill sets to be developed.

On the VLSI side, universities are really collaborating with the industry to see how to get to the next generation of VLSI design, ranging from just digital to analog to packaging and others. I think there’s a tremendous opportunity here, and curriculums are being fine-tuned now.

TH: What is your thinking around the AI PC, and how are you building it sustainably?

GS: Intel has been leading on sustainability and aims to be one of the top semiconductor companies and one of the most sustainable companies in the world. When you take AI PCs, we look at sustainability as an end-to-end lifecycle, and there are three parts to that lifecycle: manufacturing, wear and tear from usage, and end of life. At Intel, we look at these three phases in a way that allows customers to build a more sustainable PC.

It starts with manufacturing, where renewable electricity for our silicon plays a big part. Intel foundries run on renewable electricity and pay a lot of attention to how they handle waste and chemicals. So, we’re leading in that. We don’t stop there. We try to make motherboard reference designs that use conflict-free minerals, with no tantalum usage, and we try to reduce the number of components. And we are now doing a modular architecture where it’s not a monolithic motherboard; you have different pieces to it, so our customers can just repair a port if it’s broken rather than replace the entire laptop.

And then we have a lot of these capabilities built into software telemetry that gives us an entire report of how the device was used. On Meteor Lake, we have 10-plus technologies we enabled beyond the silicon that improve the sustainability of the end product. A lot of our OEM customers have used these to build their laptops. So, we look at it very holistically and thoroughly.

Sustainability is also why we have this capability for AI tools to run on CPUs or NPUs as well as GPUs. The NPU is a very low-power compute IP, which is very sustainable. CPUs give fast responses, while GPUs are for high-throughput and media-related work. We give the ability to use any one of them, and having a seamless software layer that abstracts it out with oneAPI or OpenVINO allows our customers and our developer ecosystem not to worry about the underlying hardware. We’ve built that layer well with open standards and heterogeneous compute, not just at the SoC level, but also in our servers and accelerators, so that it’s easy to use.
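To illustrate the kind of hardware abstraction Subramanian describes, here is a minimal sketch using OpenVINO's Python runtime. The model path and device names are illustrative assumptions, not details from the interview; the point is that the same application code can target the CPU, GPU, or NPU, or let the runtime pick a device.

```python
# Minimal sketch of OpenVINO's device abstraction (illustrative; not Intel's reference code).
# The same compiled pipeline can target "CPU", "GPU", "NPU", or "AUTO" (runtime decides).
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# "model.xml" is a placeholder for a model already exported to OpenVINO IR format.
model = core.read_model("model.xml")

# Swapping the device string is the only change needed to retarget the workload.
compiled = core.compile_model(model, device_name="AUTO")  # or "CPU", "GPU", "NPU"

# Run one inference with random input matching the model's (assumed static) input shape.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled(input_tensor)[compiled.output(0)]
print("Output shape:", result.shape)
```

In this sketch the application never branches on the hardware; device selection is a single configuration string, which is the "seamless software layer" role that oneAPI and OpenVINO play in Intel's description.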

 
