|Jean Hu – EVP, CFO, & Treasurer|
|Lisa Su – Chair & CEO|
|Mitch Haws – Head of IR|
Conference Call Participants
|Aaron Rakers – Wells Fargo|
|Blayne Curtis – Barclays|
|Christopher Rolland – Susquehanna|
|Harsh Kumar – Piper Sandler|
|Joseph Moore – Morgan Stanley|
|Matthew Ramsay – TD Cowen|
|Ross Seymore – Deutsche Bank|
|Stacy Rasgon – Bernstein Research|
|Timothy Arcuri – UBS|
|Toshiya Hari – Goldman Sachs|
|Vivek Arya – Bank of America Securities|
Thank you, Mitch. We will now be conducting a question-and-answer session. [Operator Instructions] And the first question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.
Great. Thank you so much. Lisa, I had two questions. My first one is on the Data Center GPU business. You talked about ’24 revenue potentially exceeding $2 billion. I was hoping you could provide a little bit more color. What percentage of this is AI versus supercomputing or other applications? Within AI, maybe talk about the breadth of your customer lineup. And how should we think about which workloads you’re addressing, again, within the context of AI? Is it primarily training or inference or both?
Great. Thanks, Toshiya, for the question. So look, we’ve made significant progress on the overall MI300 program. I think we’re very happy with how the technical milestones look. And then also, we’ve made significant progress from a customer side. Your question as to how the revenue evolves, so the way to think about it is, in the fourth quarter, we said revenue would be approximately $400 million, and that’s mostly HPC with the start of our AI ramp.
And then as we go into the first quarter, we actually expect revenue to be approximately similar in that $400 million range, and that will be mostly AI, with a very small piece being HPC. And as we go through 2024, we would expect revenue to continue to ramp quarterly, and again, it will be mostly AI. Within the AI space, we’ve had very good customer engagement across the board, from hyperscalers to OEMs, enterprise customers and some of the new AI start-ups that are out there.
From a workload standpoint, we would expect MI300 to be on both training and inference workloads. We’re very pleased with the inference performance on MI300, especially for large language model inference, given our memory bandwidth and memory capacity. We think that’s going to be a significant workload for us. But I think we would see a broad set of workloads as well as broad customer adoption.
Thank you. And then as my follow-up, a question on the server CPU side. You talked about Genoa growing really nicely in the quarter. I think you talked about both units and volume being bigger than its predecessor. Is the growth that you’re seeing or the growth that you saw in Q3 and the growth that you’re guiding to for Q4, is this primarily a function of share growth or are you actually seeing a pickup in the overall market?
And I ask the question because, obviously, year-to-date, there’s been a significant shift away from traditional compute to accelerated computing, but are you actually starting to see signs of stabilization or even improvement on the traditional compute side? Thank you.
Sure. So the way I would frame it is we’re very pleased with our third quarter performance as it relates to EPYC overall. I think 4th Gen EPYC, so that’s Genoa plus Bergamo, actually ramped very nicely. We got to a crossover in the third quarter, which is a little bit ahead of what we had previously forecasted. And when I look underneath that, I would say we saw strong growth in cloud, strong double-digits. The adoption is pretty broad across first-party and third-party workloads and new instances.
And then on the enterprise side, we’ve also seen some nice growth across our OEMs. And so from the standpoint of, is it the market recovery or is it share gain? I think it’s some of both. From a market standpoint, I would say it’s still mixed. I think enterprise is still a little bit mixed depending on the region from a macroeconomic standpoint. Cloud depends a bit on the customer set. But overall, I think we’re pleased with the progress, and the leadership of EPYC has allowed us to grow substantially in the third quarter and then into the fourth quarter.
And the next question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.
Yeah. Thank you for taking the questions. Just to build off that last question, Jean, I think last quarter, you kind of endorsed the notion that your Data Center business would grow. I think it was in the high-single digit range. I think you started the year thinking like 10. So I guess the question is, do you still see that kind of growth rate setup? And how has that $400 million evolved underneath that? Has that — was it $300 million now going to $400 million? Just how has that changed over the course of the last quarter just to level set that Data Center expectation?
Yeah. So for the second half, we said we expect the Data Center business to grow approximately 50% versus the first half. And right now, based on what we are seeing, we continue to be in that same 50% range. So we are very happy and pleased about the strong momentum of our Data Center business. On the GPU side, Lisa mentioned around $400 million. As we went through the quarter, we had strong engagement with customers, we saw the progress continue and we saw customers placing orders. That is why, as we went through the quarter, we became increasingly confident about the Q4 revenue profile we are guiding to.
Yeah. And Aaron, if I could add to that — if I can just add to that. I think what we’ve seen is the adoption rate of our AI solutions has given us confidence in not just the Q4 revenue number but also sort of the progression as we go through 2024.
Yeah. That’s helpful. And maybe just the follow-up, how would you characterize the supply side of the equation? As you look at that $2 billion number, do you feel confident that you’ve got adequate visibility in the supply side to hit those expectations, any update on that side?
Sure, Aaron. So we’ve been planning the supply chain for the last year and we’re always planning for success. So certainly, for the current forecast of greater than $2 billion, we have adequate supply. But we have also planned for a supply chain forecast that could be significantly higher than that, and we would continue to work with customers to build that out.
And the next question comes from the line of Joe Moore with Morgan Stanley. Please proceed with your question.
Great. Thank you. Following up on the Data Center GPU, can you talk about the breadth of customers that you might see there? How — I assume it’s fairly concentrated in year one, but you also did mention multiple hyperscalers. Can you just give us a sense for how concentrated that might be?
Yeah. Sure, Joe. So we’ve been engaging broadly with the customer set. I think in the last earnings call, we said that our engagements had increased 7 times and so there is a lot of interest in MI300. We will start, let’s call it, more concentrated in cloud, sort of several large hyperscalers. But we’re also very engaged across the enterprise and there’s a lot of interest. Our partnerships with the OEMs are quite strong. And when we think about sort of the breadth of customers who are looking for AI solutions, we certainly see an opportunity, especially as we get beyond the initial ramp to broaden the customer set.
Great. And now that you’re getting a look at volume in that space, can you talk about, are the gross margins there going to be comparable to your other Data Center businesses?
Yeah. So on the gross margin side, we do expect our GPU gross margin to be accretive to the corporate average. Of course, right now we are at the very early beginning of the product ramp. As you probably know, when you ramp a new product, it typically takes some time to improve yield, test time and manufacturing efficiency, so it typically takes a few quarters to bring gross margin to a normalized level. But we are quite confident because our team is executing really well.
And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.
Lisa, I also wanted to ask about that $2 billion number for Data Center GPU next year. That’s still a pretty small portion, obviously, of the total TAM. Where do you think that can go? Do you think when we look at this out a couple of years, do you think you can be 15%, 20% share for total Data Center GPU or do you have aspirations to be even larger than that?
Yeah, Tim. I mean, I would say that, first of all, this is an incredibly exciting market, right? I think we all see the growth in generative AI workloads. And the fact is, we’re just at the very early innings of people truly adopting it for enterprise, business productivity applications. So I think we are big believers in the strength of the market. We previously said we believe that the compound annual growth rate could be 50% over the next three or four years. And so we think the market is huge and there will be multiple winners in this market.
Certainly, from our standpoint, we want to be — we’re playing to win and we think MI300 is a great product, but we also have a strong road map beyond that for the next couple of generations. And we would like to be a significant player in this market so we’ll see how things play out. But overall, I would say that I am encouraged with the progress that we’re making on hardware and software and certainly with the customer set.
Thanks a lot. And then Jean, I just wanted to ask about March. I know that there are a lot of moving parts. It sounds like Data Center is up, but PC is going to be down with normal seasonality, and Embedded and Gaming sound down as well. So can you just help us shape how to think about March? Is it down a smidge? Is it flat? Could it be up a little bit? And maybe then how to think about first half versus second half next year, if you even want to go there. Thanks.
Hey, Tim. We’re guiding one quarter at a time. But just to help you with some of the color, as Lisa mentioned earlier, we said the Data Center GPU revenue will be flattish sequentially. That’s the first thing, right? The mix will shift from majority El Capitan in Q4 to predominantly AI in Q1. Because of the long lead time manufacturing cycle, we feel like it’s going to be a similar level of revenue for Data Center GPU.
But in general, if you look at our business, we do have seasonality. Typically in Q1, the Client, server and Gaming businesses are seasonally down. Of course, right now we definitely have a little bit more than seasonality, given the Embedded and Gaming dynamics we are seeing. But overall, I think we are really focused on execution. We can probably provide more color when we get closer to Q1 2024. And Lisa, please add any color we can provide on the whole year 2024.
Yeah. No, I think that covers it. When we look at the various pluses and minuses, we feel very good about the Data Center business. It continues to be a strong growth driver for us as we think about 2024, for both server as well as MI300. Client as well, we think, incrementally improves from a market standpoint, and we believe we can gain share given the strength of our product portfolio. And then we have the headwinds of the Embedded inventory correction that we’ll go through in the first half, and the console cycle. So I think those are the puts and takes.
And our next question comes from the line of Vivek Arya with Bank of America. Please proceed with your question.
Thanks for taking my question. Lisa, on the MI300, many of your hyperscaler customers have internal ASIC solutions ready or in the process of getting them ready. So if inference is the primary workload for MI300, do you think it is exposed to replacement by internal ASICs over time or do you think both MI300 and ASICs can coexist, right, along with the incumbent GPU solution?
Yeah. I think Vivek, when we look at the set of AI workloads going forward, we actually think they’re pretty diverse. I mean, you have sort of the large language model training and inference then you have what you might do in terms of fine-tuning off of a foundational model and then you have, let’s call it, straight inferencing what you might do there.
So I think within that framework, we absolutely believe that MI300 has a strong place in the market, and that’s what our customers are telling us and we’re working very closely with them. So yes, I think there will be other solutions, but I think for the — particularly for the LLMs, I think GPUs are going to be the processing of choice and MI300’s very, very capable.
Got it. And then a question, Lisa, on just this interplay between AI and traditional computing. It seems like, especially as it relates to ASPs and units, server CPU makers are kind of holding the line on price per core. But at the same time, the cloud players are extending the depreciation and replacement cycle of traditional server CPUs. So I’m just curious to get your take. What do you think is the interplay between units and ASP, right?
If you were to take a snapshot of what you have seen in ’23 and how it informs you as you look at ’24, is it possible that maybe unit growth in server is not that high but you are able to make up for it on the ASP side? So just give us some color on, one, what is happening with traditional computing deployments? And secondly, is there a difference in the unit and ASP interplay on the server CPU side?
Yeah. I think it’s a good point, Vivek. So I mean, if I take a look at 2023, I think it’s been a mixed environment, right? There was a good amount of, let’s call it, caution in the overall server market. There was a bit of inventory digestion at some of the cloud guys and then some optimizations going on with enterprise, again, somewhat mixed. I think as we go forward, we return to growth in the server CPU market.
Within that realm, because these — like for example, 4th Gen EPYC, somewhere between 96 and 128 cores. I mean, you just get a lot of compute for that. So I do think there is the framework that unit growth may be more modest, but ASP growth, given the core count and the compute capability will contribute to overall growth. So from a traditional server CPU standpoint, I think we do see those trends. 2023 was a mixed environment and I think it improves as we go into 2024.
And the next question comes from the line of Blayne Curtis with Barclays. Please proceed with your question.
Thanks for taking the question. I want to ask on the Embedded side. I think last quarter, you talked about the headwinds being mostly in the communications end market. You’re guiding it down in December, and I was curious if that weakness has spread. And then your competitor talked about a reset getting back to pre-pandemic levels. Just kind of curious how you frame that reset? You said it would be weak through the first half.
Yeah. Absolutely, Blayne. So I think when we look at end markets, I think communications was weak in sort of last quarter and it certainly continues to be weak. We see 5G sort of CapEx just down overall. On the other market where we see a little bit of, let’s call it, soft end market demand would be industrial and that’s a little bit more geographic, so a little bit worse in Europe than in other geographies.
The other end markets are actually relatively good. What we see is that inventory is high, just given where lead times were coming through the pandemic and the high demand that was out there. As lead times have normalized, people are drawing down their inventories, and they have the opportunity to do that given the normalization. So from an overall standpoint, we think demand is solid, and our view is that we have a very strong portfolio in Embedded.
We like sort of the combination of the, let’s call it, the classic Xilinx portfolio together with the Embedded processing capabilities that we add. Customers have seen sort of that portfolio come together, and we’ve gotten some nice design win traction as a result of that. So we have to get through sort of the next couple of quarters of inventory correction, and then we believe we’ll return to growth in the second half of the year.
Thanks. And then I just wanted to ask on the PC market. I think you and Intel were under-shipping in the first half, and maybe you’re over-shipping a little bit now with restocking. I’m just curious about your perspective on what that normalized run rate is in terms of the size of the PC market, and any perspective on whether inventory levels are starting to move back up.
Yeah. I would say, again, Blayne, when we looked at sort of the third quarter and sort of the environment that we’re in now, I think inventory levels are relatively normalized, and so the selling and consumption are fairly close. We were building up for a holiday season that is a strong season for us overall.
When I think about the size of the market, I think from a consumption standpoint this year is probably somewhere like 250 million to 255 million units or so. We expect some growth going into 2024 as we think about sort of the AI PC cycle and some of the Windows refresh cycles that are out there. And I think the PC market returns to, let’s call it, a typical seasonality, in which underneath that, we have a strong product portfolio. And we are very much focused on growing in places like high-end gaming, ultrathins, premium consumer as well as commercial. So those are — that’s how sort of we see the PC market.
And the next question comes from the line of Matt Ramsay with TD Cowen. Please proceed with your question.
Thank you very much. Good afternoon. Lisa, I wanted to maybe ask the AI question a little bit differently, not just focused on your GPU portfolio, but more broadly. I think one of the big surprises to a lot of us is, how quickly the AI market changed from accelerator cards to selling full servers or full systems for your primary competitor. And they’ve done a lot of innovation not just on GPU, but on CPU on their own custom interconnect, et cetera. So what I’d like to hear a little bit of an update on is just how you think about your road map going forward across CPU, GPU and networking and particularly the networking part as you look to continue to advance your AI portfolio. Thanks.
Yeah. Thanks, Matt. I think it’s an important point. What we’re seeing with these AI systems is they are truly complicated when you think about putting all of these components together. We are certainly working very closely with our partners in putting together sort of the full system, CPU, GPUs as well as the networking capability. Our Pensando acquisition has actually been really helpful in this area. I think we have a world-class team of experts in this area, and we’re also partnered with some of the networking ecosystem overall.
So going forward, I don’t think we’re going to sell full systems, let’s call it, AMD-branded systems. We believe that there are others who are more set up for that. But I think from a definition standpoint and when we’re doing development, we are certainly doing development with the notion of what that full system will look like. And we’ll work very closely with our partners to ensure that, that’s well defined so that it’s easy for customers to adopt our solutions.
Got it. Thank you for that perspective. As my second question, Jean, I wanted to dig into gross margin a little bit and, I guess, compliment you and the team on being able to guide up sequentially for the fourth quarter. If we rewound the clock back to the beginning of the year, the Embedded segment would be down from the peak to where you’re guiding the fourth quarter, maybe down by a third.
I wouldn’t have thought gross margin would have hung in as well and grown sequentially each quarter through the year. Obviously, Client margins got better. But maybe you could walk us through some of the puts and takes on gross margin, and inside of each segment, where you’re making progress because I imagine some of that progress is pretty positive underneath. Thanks.
Yeah, Matt. Thank you for the question. Yes, there are a few puts and takes, especially in a mixed demand environment. So let me just comment on Q3 first. We are very pleased with our sequential gross margin expansion of 140 basis points, even as Embedded segment revenue declined double-digits sequentially. There are two primary drivers. The first one is that Data Center grew 21% sequentially, which provided a tailwind to our gross margin.
Secondly, as we went through the inventory correction in the PC market, we had encountered some headwinds in Client segment gross margin, and in Q3 we saw very significant improvement there. Going forward, the pace of Client segment improvement will moderate, but it will continue to drive incremental gross margin improvement. So that is really why we were able to drive sequential expansion in Q3.
And in Q4, I would say the major dynamic is that very strong double-digit growth in the Data Center business gives us a tailwind that more than offsets the Embedded segment declining double-digits sequentially again. Going forward, it is really mix that primarily drives our gross margin. We feel pretty good about the second half of next year, when Data Center expands significantly and, especially, the Embedded segment starts to recover; we should be able to drive more meaningful gross margin improvement in the second half.
And the next question comes from the line of Ross Seymore with Deutsche Bank. Please proceed with your question.
Lisa, I had a question on the MI300 side of things. When you go to market, obviously, there have been shortages this year of GPU accelerators, and so a second source is definitely needed. But beyond just providing that second-source role, can you walk us through some of the competitive advantages that the customer list you’re going to talk about on the sixth is finding to be so attractive relative to your primary competitor?
Yeah. I think there’s a couple of different things, Ross. I mean, if we start with, it’s just a very capable product. The way it’s designed from a chiplet standpoint, we have very strong compute as well as memory capacity and memory bandwidth. In inference, in particular, that’s very helpful. And the way to think about it is, on these large language models, you can’t fit the model on one GPU. You actually need multiple GPUs.
And if you have more memory, you can actually use fewer GPUs to infer those models, and so it’s very beneficial from a total cost of ownership standpoint. From a software standpoint, this has been perhaps the area where we’ve had to invest more and do more work. Our customers and partners are actually moving towards an area where they’re more able to move across different hardware so really optimizing at the higher-level frameworks. And that’s reducing the barrier of entry of sort of taking on a new solution.
And we’re also talking very much about, going forward, what the road map is. It’s very similar to our EPYC evolution. When you think about sort of the — our closest partners in the cloud environment, we work very closely to make each generation better. So I think MI300 is an excellent product and we’ll keep evolving on that as we go through the next couple of generations.
For my follow-up, I want to focus on the OpEx side of things. You guys have kept that pretty tight over the years. Jean, I just wondered what the puts and takes on that might be heading into 2024. I think you’re exiting this year up in the high single digits, maybe 10% year-over-year. Any unique puts and takes, especially as you’re driving for all that MI300 success, as we think about OpEx generally in 2024?
Yeah. Thanks for the question. Our team has done an absolutely great job in reallocating resources within our budget envelope to really invest in the most important areas in AI and the data center. We are actually in the planning process for 2024, so I can comment only at a very high level: given the tremendous opportunities we have in AI and the Data Center, we definitely will increase both R&D investment and go-to-market investment to address those opportunities.
I think the way to think about it is our objective is to drive top line revenue growth much faster than OpEx growth, so our investment can drive long-term growth. And we also can leverage our operating model to really actually expand earnings much faster than revenue. That’s really how we think about running the company and driving the operating margin expansion.
And the next question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.
Hi, Lisa. I had a strategic one for you and then somewhat of a tactical one. On the strategic side, as your key competitor is sort of getting their act together on the manufacturing technology and the nodes, would it not be feasible to think that their manufacturing cost could be significantly better, let’s say, than that of yours? And so if that’s the case down the line one year or two years out, I’m curious what kind of value-add offerings would AMD have to provide to a customer to keep the market share that you have in the server space, data center space and then keep that growing as well?
Yes, Harsh. Maybe I should just take a step back and just talk about sort of the engagement that we have with our data center customers. When we think about sort of the EPYC portfolio and what we’ve been able to build over the last few generations and what we have going forward with Zen 5 and beyond, process technology is only one piece of the optimization. It’s really about process technology, packaging. We’re leading sort of the usage of chiplets and 2.5D and 3D integration and then when you go to architecture and design. So it’s really the holistic product.
And from a pricing standpoint, actually, price is only one aspect of the conversation. Much of the conversation is on how much performance can you give me at what efficiency. So from a — from an overall efficiency standpoint, I think we’ve developed fantastic products. We are working closely with our customers to ensure that we continue to evolve our overall portfolio. So I think from a value-added standpoint, it’s providing the best TCO is what our customers are looking for, and that’s where our road map is headed.
Going forward, I think having the CPU, the GPU, the FPGAs, the DPUs, I think it gives us actually a nice portfolio to really optimize not just on a single component basis but on sort of all of the different workloads that you need in the data center.
Very helpful, Lisa. And then for my follow-up, a lot of folks that we talk to think that the compute game is shifting completely from CPUs to GPUs. [Technical Difficulty] So it was actually very encouraging to hear you talk about your core EPYC CPUs and the traction that you’re seeing with the new generation of CPUs. So I’m curious, if I were to ask you what you think the long-term growth prospects for the next, call it, two to four years are for your CPU business, not the GPU but the CPU business, what the answer would be.
Yeah. So look, I’m a big believer that you need all types of compute in the data center, especially when you look at the diverse set of workloads. There’s a lot of excitement around AI, and we are very clear that that is the number one priority from a growth standpoint going forward. But in the EPYC CPU business, we feel like we’ve consistently gained share throughout the last few years.
And even with that, we’re still underrepresented in large portions of the market, right? We’re underrepresented in enterprise. We’ve seen some nice sequential growth and nice prospects there, but there’s a lot more we can do in enterprise. And we’re still underrepresented in cloud third-party workloads, which, again, you have to sell through the cloud providers. So overall, I think we feel good about our EPYC leadership and also about our go-to-market efforts that will help us continue to grow that business in 2024 and beyond.
Operator, we have time for two more questions.
Okay. And the next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed with your question.
Hi, guys. Thanks for taking my questions. First, I wanted to dial in on the Q4 guidance. If you are going to grow Data Center 50% half over half and I assume Client is up sequentially, that implies Gaming and Embedded are both likely down sequentially in the 20% range. I know you said double-digits. But is that right?
And if that is true, especially for Embedded, what does that mean going forward into next year? I know you said it’s going to be weak in the first half. Does that mean — I mean, is it stable at these levels or does it continue to decline through the first half until things stabilize? Just how do we think about that in the context of the guidance that you’ve given for Q4?
Yes. Sure, Stacy. Let me take that and then Jean might add a few comments. So without getting very specific, I would say I think your comments about Data Center and Client are correct. And then from an Embedded and Gaming standpoint, we would say Embedded, think about it down similar levels sort of in the teens compared to sort of Q3 was down in the teens and Q4 will be down in the teens.
And then Gaming, from a console standpoint, we do expect that to be down a bit more than that. And then as we go into Q1, again without being too specific, since there are lots of things that still need to happen, we would expect that both Gaming and Embedded would be down into Q1 as well, and the other comments would be more around seasonality. Does that help?
That does help. For my follow-up, again, I wanted to ask about gross margins. I know they’ve been expanding through the year, but for the full year, they’re actually down. And I get the mix things and everything else. But as I look into next year, how do I think about this? Because it sounds like Embedded is going to be pretty weak next year. Client is what it is.
Data Center is growing but it does feel like even if the GPUs are accretive, they’re not accretive yet, and it’s going to take them a while to get to be accretive. Like how much do you think you can expand gross margins year-over-year like in ’24 versus ’23, given the trends that we have entering the year?
Yeah. Hi, Stacy. The first thing I’ll say is, if you look at 2023, it’s a very unusual year for the industry, right, especially the PC market. It’s one of the worst down cycles in the last three decades. During that kind of down cycle, we definitely had headwinds on the gross margin side in our Client business, where we have made significant progress in Q3 and Q4, in the second half.
Going into next year, mix is primarily the driver of our gross margin. The way to think about it is that Data Center is going to be the largest incremental revenue contributor next year, and with both Gaming and Embedded facing continued sequential declines, it’s all about the mix. We do expect next year to improve gross margin versus 2023, especially in the second half. So that’s how we think about it right now.
And our final question comes from the line of Christopher Rolland with Susquehanna. Please proceed with your question.
Thanks for the question. There was an article suggesting that you guys could be interested in doing some ARM-based CPUs. I guess I’d love any thoughts that you have there on that architecture for PC. But also Apple has their M3 out now. It seems pretty robust. Qualcomm has an X Elite new chip. It was rumored NVIDIA might be doing that as well. Would love your expectations for this market. And what does that mean for the TAM for AMD moving forward?
Yeah. Sure, Chris. Thanks for the question. So look, the way we think about ARM, ARM is a partner in many respects so we use ARM throughout parts of our portfolio. I think as it relates to PCs, x86 is still the majority of the volume in PCs. And if you think about sort of the ecosystem around x86 and Windows, I think it’s been a very robust ecosystem. What I’m most excited about in PCs is actually the AI PC. I think the AI PC opportunity is an opportunity to redefine what PCs are in terms of productivity tool and really sort of operating on sort of user data.
And so I think we’re at the beginning of a wave there. We’re investing heavily in Ryzen AI and the opportunity to really broaden sort of the AI capabilities of PCs going forward. And I think that’s where the conversation is going to be about. It’s going to be less about what instructions that you’re using and more about what experience are you delivering to customers. And from that standpoint, I think that we have a very exciting portfolio that I feel good about over the next couple of years.
Thank you, Lisa. And one quick one on FPGA for the Data Center in particular. That was a really cool fintech win. I understand that [Technical Difficulty] AI? And could we even mix in an FPGA into the MI300 tile at some point or is there really, at this point, not an AI market for FPGA?
Yeah. I mean, Chris, the way I think about sort of FPGAs in the data center, it’s another compute element. We do use FPGAs or there are FPGAs in a number of the systems. I would say from a revenue contribution standpoint, it’s still relatively small sort of in the near term. We have some design wins going forward that we would see that content grow but that won’t be so much in 2024, that it will be beyond that.
And part of our value proposition, I think, to our data center partners is, look, whatever compute element you need, whether it’s CPUs or GPUs or FPGAs or DPUs, we have the ability to bring those components together. And that is a strong point as we think about just how heterogeneous these data centers are going forward. So thank you for that.
At this time, we have reached the end of the question-and-answer session. Now I’d like to turn the floor back over to Mitch for any closing comments.
Great, John. That concludes today’s call. Thank you to everyone for joining us today.
And ladies and gentlemen, this does conclude today’s teleconference. You may disconnect your lines at this time. Thank you for your participation.