Generative AI is fundamentally reshaping how data is stored, processed, and analyzed.
As enterprises look to apply GenAI to unlock deeper insights for customers and internal teams, I’ve been speaking with technology leaders about how they’re responding to this paradigm shift.
At AMD, a leading semiconductor company specializing in CPU and GPU design, AI plays a central role in how the company leverages data across IT and operations.
My conversation with Hasmukh Ranjan, AMD’s CIO, was the latest installment in a series of interviews with innovative CIOs at some of the world’s top companies.
Asheem Chandna: What part of AMD’s business has been materially improved by the introduction of GenAI?
Hasmukh Ranjan: We’ve been looking at AI from two angles. One is how we’re shaping the AI market, building products like GPUs that enable high-efficiency AI applications. The second is how we’re using AI to change our own operations.
We think of it in two streams: engineering, where most of the workforce sits, and everything else—our SG&A functions.
For engineering, we’ve embedded AI at a high velocity. We don’t talk much about it externally because it is core to our business. But things like code development and access to AI tools have already made a huge impact.
On the SG&A side, no function is untouched by AI. I lead IT, and about two years ago, we set clear goals. We made some early missteps, but once we settled on a strategy, we defined our objectives, identified the KPIs, and lined up a range of projects to achieve those goals.
What changed with the introduction of AI compared to how your teams previously operated?
I’ve been in IT for 35 years, always in semiconductors, so everything I say comes from that lens. From day one, it’s been about doing more with less. IT has always been squeezed, and we used whatever tools were available to drive efficiency.
In semiconductors, one of IT’s biggest costs is computation: massive simulations using hundreds of thousands, even millions of CPUs. That’s what we provide engineers with to design new chips, so it’s always been a major focus.
Now, AI adds a whole new dimension, changing what’s possible.
Would you call that change incremental or a real sea change?
It’s a sea change, especially for IT operations.
For the first time, I can access data across the entire enterprise myself. I don’t have to chase anyone down for it. We built a company-wide data lake, and now all IT data sits in one place.
Say a user files a case. The IT support person can instantly pull up their full history: who they are, past issues, which apps they use, Citrix data, tunneling behavior.
On top of the data lake, we’ve built a chat agent called Air, an AI layer that can query anything across the enterprise.
We used to dream of this possibility. Now it’s real. The IT enterprise doesn’t feel so massive anymore.
Has this resulted in measurable improvements, or just better visibility?
The output has significantly improved. I use two metrics to measure this. If someone doesn’t know IT but wants to understand efficiency, these two are all they need.
First is coverage ratio: how many employees one IT person supports. Since launching our AI initiative, each IT person supports significantly more employees—and that trend continues.
Second is IT spend per employee. We’ve brought our numbers down by about 30% in the past two years.
Together, coverage ratio and IT spend make the impact clear.
Do people across AMD feel this change?
I think so. We’ve taken an organic, cross-functional approach. Every function has active AI projects. I drive AI adoption across all non-engineering teams, and our Chief Software Officer leads it for engineering.
We framed our approach in four stages:
- Assist: copilots, chatbots, and productivity tools. This is where most companies are now.
- Act: agentic AI that can take limited action.
- Automate: systems completing tasks end-to-end.
- Autonomous: systems operating with minimal human input.
As you move up this ladder, productivity gains rise gradually at first, then accelerate as you approach autonomous. But so do your data requirements—not just data volume, but also quality and usability.
That’s where the data lake comes in. It’s about curating data so AI can scale meaningfully. Without a strong data foundation, you can’t move from “assist” to “autonomous.”
Does data become the key differentiator?
Yes. There’s a lot of focus on LLMs right now, but over time they’ll all start to look the same. They’re being trained on mostly the same public datasets.
The edge will come from companies with internal data that is clean, governed, curated, and ready to be used. That’s what we’re building.
You’ve suggested the SaaS model itself might be shifting. Can you elaborate?
We’re already seeing it happen. We’ve gone from on-prem to SaaS. But even with SaaS, we’re only sending a small portion of our data to those platforms. The rest stays internal.
Now, we’re entering a phase where the model flips: instead of sending data to applications, the algorithms will come to the data.
Sending data out only to bring it back for analysis is inefficient. The future is keeping data in place and bringing compute to it.
What makes that shift so urgent now?
We’re entering the AI era.
In the CPU era, we analyzed less than 10% of enterprise data. In the AI era, we’re looking at analyzing 30-35% of it—for us, potentially as much as 100 petabytes. No system today can do that at scale. But AI is pushing us there.
That’s why data strategy is so critical. Your enterprise data holds huge potential for AI. Everyone else, from SaaS vendors to LLM providers, will have to follow the enterprise’s lead to get this right.
Will this fundamentally impact your core business, like chip design?
It already has. Chip design cycles that would take two years now take one. AI is just the latest enabler; before that it was cloud. In the future, it may be quantum. The specifics change, but the principle holds.
How do startups factor into your strategy?
Startups are important in our ecosystem. I oversee procurement, so I meet almost all of them. We’ve seen waves: automation-heavy, data-focused, networking-oriented.
We’ve built a strong vetting process. On average, we engage with about 5% of the startups we meet. That model has worked well, and most of those partnerships have delivered.
What kind of problems do you wish more startups were tackling?
We break AI down into three layers: plumbing, curation, and harvesting.
Most startups focus on harvesting, but we need innovation across all three layers—above all in plumbing: automating infrastructure, particularly as GPUs and liquid cooling enter the data center.
We need better tools for data curation. No one has cracked how to manage 100+ petabytes effectively. Someone will, and a startup will be involved.
Harvesting matters, but it’s just one part of a much bigger system.
Are your customers coming to you for guidance on AI strategy?
Yes. I work closely with tier-two cloud providers through strategic partnerships to make our GPUs more functional. I also host CIO forums around the world—recently in Brazil, Mexico City, and China—to share how we’re using AI in real-world ways.
The content evolves as we do. I’m not in sales, but I spend a lot of time sharing what’s possible and helping others move forward.
At the end of the day, it’s not about the noise and hype. It’s about what you’re doing to move the needle. That’s our focus.