
No country left behind with sovereign AI

Ryan welcomes Stephen Watt, distinguished engineer and VP of Red Hat’s Office of the CTO, to chat about digital sovereignty and sovereign AI.

Credit: Alexandra Francis

They explore the major infrastructure constraints, like power, cooling, and scarce hardware, that cause the regional disparities we see in sovereign AI, plus why we need to extend Kubernetes and integrate the PyTorch stack not just for a sovereign cloud but for sovereign AI.

Red Hat’s Office of the CTO is a division of 150 software engineers and researchers working in its Research and Emerging Technologies arms, helping to shape the vision and strategy of Red Hat’s technology.

Connect with Stephen on LinkedIn.

Congrats to user Ittiel for winning a Populist badge on their answer to Print timestamps in Docker Compose logs.

TRANSCRIPT

[Intro Music]

Ryan Donovan: Hello, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ryan Donovan, and today we're talking about AI sovereignty and how engineers are extending Kubernetes, integrating the PyTorch stack, doing all those good things so [that] we can have open-source AI sovereignty. And my guest for that today is Steve Watt, who is a distinguished engineer and VP of the Office of the CTO at Red Hat. So, welcome to the show, Steve.

Stephen Watt: Thanks, Ryan. Excited to be here.

Ryan Donovan: Before we get into our topic today, we like to get to know our guest. How did you get into software and technology?

Stephen Watt: [A] long time ago, I actually started in South Africa, which is where I'm from. We were working on the early internet service providers, trying to get people connectivity over somewhat problematic African phone lines on 14.4 kbps modems. From there, I started building web applications, early Java, and then I went into startups in the United States and IBM, building large systems around web service integration. And then my career went into emerging technologies and large big data analytical systems. Distributed systems have been my focus, I'd say, in the last 15 to 20 years: then Spark, then Kubernetes, and now the PyTorch ecosystem and LLMs.

Ryan Donovan: So, I've heard a lot of folks in AI and data talking about AI sovereignty and data sovereignty. Let's take a little step back and define that. What are we talking about [with] sovereignty?

Stephen Watt: Yeah, I think this is a great question. There are two ways to articulate this, two different lenses. Sovereignty is about getting a set of sovereign guarantees for your application. Digital sovereignty is one lens, and I think it's the one most commonly talked about. That's [when] you're running your application: can you guarantee that it's running in a particular region, that it's being operated by people in a particular region, and that the data lives in a particular region? And can you, from a compliance standpoint, actually instrument all of that and provide assurances to meet those compliance requirements? And then, there's what I would say is more the 'sovereign cloud' piece. That lens has these subsets of 'sovereign cloud' and 'sovereign AI,' but essentially what we're seeing there is there's a region, a nation, or a state, and that region wants to provide infrastructure for its constituents to be able to run these applications and get those sovereign guarantees. But there are additional incentives on this, especially when it comes to sovereign AI. Specifically, that nation or state wants to ensure that their constituents aren't getting left behind. So, that's one. Two, this infrastructure is complicated and expensive, and requires specialized skills and specialized infrastructure to run, and they want to be able to provide that and offer some sort of discounted access to it for their constituents. So, it often involves deploying and operating an infrastructure for their constituents, and there's almost always an attached mechanism that researchers, startups, [and] citizens can come through to get discounted access to that infrastructure.

Ryan Donovan: Okay, so it does have a sort of state-level control to it, right? Because sovereign, you think of the king, and this is inference under consent of the king, right?

Stephen Watt: Yes, exactly. It's a great way to describe it. And that is literally what's happening. There are kingdoms that are doing this. Saudi Arabia, I would say, was a first mover. And UAE is also a monarchy. And so, both of those were early movers in the sovereign AI space.

Ryan Donovan: We're talking today about how people are implementing that on a technical level. And I think you wonder: what is different from just setting up a data center on that country's soil and cutting the pipes outside of the country? What does it require to create sovereign AI and data?

Stephen Watt: Yeah, I think this is fascinating. This is such an interesting space. That's a loaded question, and I'll explain why: if you separate sovereign cloud from sovereign AI, basically, imagine sovereign cloud is all the same things without ever bringing AI into the conversation. So, let's say it's primarily cloud native; that stuff all runs on CPUs and can be powered and cooled in the data centers that all these regions have today. So, that's by and large a pretty simple thing. It's more focused on who's operating it and the guarantees that the data doesn't leave the region, but the compute's already there, assuming it's a data center in region, and it's not too complicated. Sovereign AI is way different. And what I mean by that is, as soon as you bring in the latest infrastructure, the latest Nvidia and AMD chips, there's a whole lot of additional questions that get asked. Can your in-region data centers power and cool these things? Do you have the power? And as far as your data center goes, most of these are liquid-cooled. So, can your data center provide liquid cooling? Can you retrofit your data center to do liquid cooling? Is that cost-effective? And then, you start to see these regional dynamics play out. If you go through this sort of rubric, then you start to see, okay, does land become a factor? And then, policy becomes a factor. Can you pour new concrete? Do you have the land to pour new concrete to build new data centers like the Stargate data centers that are being built in Texas? And do you have the water to provide liquid cooling for these? That's an issue, by and large, that we don't have in the United States, but in other geographies, say Western Europe, there isn't a surplus of available land, and building out this new infrastructure is complicated. And so, at least until 2030 or 2035, when maybe new installations are completed, they're having to do it with what they have.

Ryan Donovan: The politics and the concerns around water for data centers is a big sticking point for AI for a lot of people. And you talked about some of the forerunners in sovereign AI, Saudi Arabia, UAE, they're not known for their reserves of water, right?

Stephen Watt: Yeah, exactly. I'm not quite sure how much freshwater versus saline water makes a difference there, and maybe it does or it doesn't, but what they do have challenges with is their thermal footprint. They're running hot data centers in a very hot climate, and that can really put an additional load on the grid to liquid cool them. It is interesting; that thermal footprint is why the Nordics are really popular for building these data centers. And Finland, especially, is doing some really incredible things: as they build out this new infrastructure, they're integrating it into their cities. They're actually moving the excess heat away to heat their cities and neighborhoods, which I think is just very clever.

Ryan Donovan: Yeah, to make these data centers work, there's a lot of software, too. And we talked [at] the beginning about extending Kubernetes and the PyTorch stack to enable a sort of sovereign AI. What sort of extensions are needed for this?

Stephen Watt: You asked about the software stack in the different regions around AI sovereignty. If I go into the United States, it's primarily focused on sovereignty concerns around open weight models that are built in the US. Most of the open weight models at the top of the leaderboard right now, the top six, are all from China. And if you look at Europe, there's a very strong focus on self-determination, and there it's slightly different; there isn't such an open weight focus. Theirs is more focused on a fully open stack from top to bottom that they fully control, even to the point of the silicon, where they're exploring RISC-V inference processors. And so, the stack is different based on those different needs. In the US, you will see, depending on the provenance of the infrastructure they're running on, if it's specifically an HPC infrastructure, some mixture of Slurm, which has been around for 20-plus years as a large-scale cluster orchestrator, and Kubernetes. There's also this journey around familiarity. So, if you're in an HPC world where you've been focused on running jobs (jobs are things that start and then finish), you're really focused on the metric of how long the job takes. But once a job is done, you basically don't have a demand on the system until the next run of the job. That's really like model pre-training and post-training, so creating a model or improving one. Model inference, which is serving the model, is a very different dynamic. It's an app, and when you have an app in production, people depend on it, and when the app goes down, problems ensue. Inference is way more like that. It's an operational concern, and there'll be SLAs and SLOs around that, which is what Kubernetes is really good at. Kubernetes is like, 'your app's running on this thing. If it fails, we don't want you to even have to know, because the pod just gets scheduled on another system. The API endpoint won't change. The service interface won't change. You'll be none the wiser.' And it's designed to not have anyone getting paged. So, there is this sort of educational aspect where we're explaining to folks that have that background: look, it's really a case of Slurm and Kubernetes – better together – or whatever your pre-training stack is. And there's an interesting project called Slinky, which allows you to run Slurm on Kubernetes, and it actually provides additional guarantees to make Slurm more reliable and robust. It's a community project that folks can check out. But we're very focused at Red Hat on this; basically, our bread and butter has always been providing guarantees for business-critical applications through our platforms, and inference is just another version of that.
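
To make that distinction concrete, here is a minimal sketch, using the Kubernetes Python client, of the "inference as an app" pattern Stephen describes: a Deployment keeps vLLM replicas running, and a Service gives callers one stable endpoint, so a failed pod gets rescheduled without the endpoint changing. The container image, model name, and ports are illustrative assumptions, not a Red Hat reference configuration.

    # Minimal sketch: N vLLM replicas behind one stable Service endpoint.
    # Image, model, and ports are assumptions for illustration only.
    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="vllm-server"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "vllm"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "vllm"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="vllm",
                    image="vllm/vllm-openai:latest",        # assumed serving image
                    args=["--model", "my-org/my-model"],     # hypothetical model
                    ports=[client.V1ContainerPort(container_port=8000)],
                )]),
            ),
        ),
    )
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="vllm"),
        spec=client.V1ServiceSpec(
            selector={"app": "vllm"},
            ports=[client.V1ServicePort(port=8000, target_port=8000)],
        ),
    )
    # If a pod fails, Kubernetes reschedules it; the "vllm" Service name stays the same.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    client.CoreV1Api().create_namespaced_service(namespace="default", body=service)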

Ryan Donovan: I wanna dig into a piece you talked about there: Kubernetes being good for the SLAs and SLOs, right? I wonder if there are complications if you're running your own models on your own infrastructure in your own Kubernetes pods. Those are pretty memory-heavy operations, right? What is the sort of load risk? How many LLMs do you have to be running simultaneously to maintain good SLA requirements? Do you have to be running five different copies?

Stephen Watt: There are different strategies. This is a complicated question, because with x86 CPU cloud native, a lot of folks were just basically using pizza boxes as servers, very low-cost industry standard, and they just had hundreds of them, racks and racks of stuff. And so, there was this concept of ephemeral hardware. You know what's not ephemeral? A Blackwell. That's very expensive. It's actually closer to the mainframe mindset: this is a highly expensive piece of equipment, and we're going to be very thoughtful about what runs on it and what doesn't. All infrastructure can fail, but it's not thought of as ephemeral. I may have a server in a rack in my data center that catches fire. And so, you do see what you're talking about to increase scale, which is vLLM as the inference server, and vLLM currently is one model per server instance. So, you can use Kubernetes to spin up a lot of these, and then you can basically round robin, like we've done with Nginx and Kubernetes forever, right? You can do that. You can do a more sophisticated version of this, where you can actually start caching the inference requests and the inference responses across these. There's a really interesting project that we created called the vLLM Semantic Router. Basically, it's a router that sits in front of your different models, and it looks at the inference requests coming in and the responses coming back. And in the gateway, it can do classification of what's being asked, and if it's the same thing, it can take a cached response and send it back, and greatly reduce the toil and load on your scarce hardware. And then, there are more advanced versions of this. llm-d is another project that we created, where we basically disaggregate the core components of the LLM servers, so the prefill step, the decode step, and the KV cache in between, across different servers, and basically allow your infrastructure to increase the amount of inference requests and responses going through. So, we're tackling this from a number of different spaces: one, drive down TCO, make it more performant, and then add your standard Kubernetes machinery around the core inference servers to basically make it more reliable, more scalable, and more optimizable, so you're getting the most out of your scarce equipment.
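
As a rough illustration of the caching and round-robin ideas Stephen mentions, here is a toy Python sketch. It is not the actual vLLM Semantic Router project; the replica URLs, embedding model, and similarity threshold are all assumptions made up for the example.

    # Toy semantic cache plus round-robin in front of two vLLM replicas.
    import itertools
    import requests
    from sentence_transformers import SentenceTransformer, util

    REPLICAS = itertools.cycle([
        "http://vllm-0:8000/v1/completions",   # hypothetical replica endpoints
        "http://vllm-1:8000/v1/completions",
    ])
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model
    cache = []                                          # (prompt embedding, response) pairs
    SIM_THRESHOLD = 0.95

    def route(prompt: str) -> str:
        emb = embedder.encode(prompt, convert_to_tensor=True)
        # Serve a cached response if a semantically similar prompt was already answered.
        for cached_emb, cached_resp in cache:
            if util.cos_sim(emb, cached_emb).item() >= SIM_THRESHOLD:
                return cached_resp
        # Otherwise round-robin across the replicas (vLLM's OpenAI-style API).
        resp = requests.post(next(REPLICAS), json={
            "model": "my-model", "prompt": prompt, "max_tokens": 256,
        }).json()["choices"][0]["text"]
        cache.append((emb, resp))
        return resp

A production gateway would also handle cache eviction, request classification, and observability, but the basic shape is the same: look at the request before it ever reaches the scarce hardware.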

Ryan Donovan: It's interesting. What you're talking about fits in with what I've talked about with a lot of people, 'cause you're talking about setting up caching for AI, you're separating the AI stack into smaller pieces. It's like AI is speedrunning the path to microservices, right?

Stephen Watt: Yeah. That's another way of describing what an agent is: it's just a fully autonomous microservice that's powered by an LLM. It's still got a service interface and all those things that we've been working on for 15 years about how you interact with those. We're standing on the shoulders of those things to develop the agentic platform.

Ryan Donovan: I almost wonder if these are basically the same concerns as distributed applications. Is there something that is actually different about this process we're going through now with developing agent systems?

Stephen Watt: Yes, but I also think that if you don't know history, you're doomed to repeat it. There are a lot of folks building the agentic platforms and futures [who] might not have gone through the same arc that we went through when we first defined services. If you harken back to the 2000s, when web services were being created, you had the SOAP standard, and, if you remember, you had WSDL, the service definition. The agent card, right? That's getting reinvented. The agent registry? UDDI, the service registry – that's getting reinvented. Inter-agent orchestration and how they team? BPEL, the Business Process Execution Language, was how we worked through all of that. So, I do wonder how much the lessons of the past are being incorporated today, because sometimes when I sit in these projects and listen to these discussions, there seems to be a lack of awareness that this conversation happened in the industry in the first place, and what we learned from it. Because I'll tell you what we learned from that: we all arrived on the REST protocol because the rest of the stuff became so insanely complicated, with so many different specifications, that nobody wanted to deal with it, and it became really hard for the IDEs and the runtimes to keep up with it. And someone was like, 'how about we just do this really simple thing?' And everyone's like, 'let's do that.'

Ryan Donovan: Yeah, you never hear about the WSDLs, the SOAPs, the Ajax, and I think, like you said, those paradigms are now incorporated into what has become the accepted stack. So, with a sovereign AI stack, we talked about building from the silicon up. What is the sort of MVP of sovereign AI? What is the minimum that you need? Because I assume you don't need to make your own chips at the fundamental level, right?

Stephen Watt: Yeah, I think there is a playbook that we're seeing. I'm going to go top to bottom: I'll start with how the nation states organize it, and I'll finish with how they deploy the technology underneath it. Typically, the first thing is they create a function that deals with the users of the actual infrastructure they're going to deploy and have operated for them. That function will often be tasked with incentivizing local startups, to stimulate the local economy and help it learn how to use all this new infrastructure they're building. And it comes with the main thesis that they have, which is: we don't want our constituents, both commercial and private, to get left behind. So, there's a function that's that, and then there's a function that will deploy, manage, and operate the infrastructure. So, there's always an operator, some sort of systems integrator. This can vary. Sometimes it's a neocloud provider, sometimes a state. I'll give you an example: the Massachusetts Innovation Hub that's doing one of these in the States delegated that to the MGHPCC, which is a coalition of universities. It's a giant data center that pre-exists in New England. They delegated that to the operators of that environment, so that essentially became the decision maker around the stack. And then, what stack actually gets deployed does sometimes depend on who the operator is, but there are patterns that we see. The patterns are typically some technology for pre-training and post-training that's job-oriented, so that's often Slurm, and it could be Kubernetes as well. Kubernetes does that. It just depends on the scale. Kubernetes has a known limit of around 5,000 nodes, and I think Slurm can go up to 37,000. So, it depends how much capital you have and what your model training ambitions are. I will say, by and large, I think there's a lot of post-training happening in sovereign AI: basically taking a foundation model that's open weight and post-training it, adding local information, local sensitivities, local culture, local languages. One of my favorite quotes is from The Lord of the Rings, where Tolkien talks about how he couldn't create the history of The Lord of the Rings without first creating the languages, because the languages actually reflect the journey of the nation and the people. And so, there's a lot more that goes into a region-specific model than just the language. It actually reflects part of the culture and the mindset and thinking, and so they're trying to inject those into the models through post-training. And then, there's the inference piece, which is critical, right? People are building solutions on this, and they need to be able to depend on them. So, you need something with strong operational guarantees, like Kubernetes, in there. And I will say there's a third nuance here, which is reinforcement learning, which is a specific kind of post-training technique, and that actually involves inference. So, if you think of steps one, two, and three: step one is pre-training on Slurm; step two is post-training. If you're using reinforcement learning, that might not be Slurm; that could actually be Kubernetes, working with vLLM and llm-d in a reinforcement learning loop. And then, the operational piece, inference, is Kubernetes.
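
To show roughly why reinforcement learning pulls inference into the training loop, here is a highly simplified Python sketch: vLLM generates rollouts, a toy reward function scores them, and the actual policy update is left as a placeholder comment. The model name and the reward are invented for illustration; real RL post-training would use a full training framework for the update step.

    from vllm import LLM, SamplingParams

    llm = LLM(model="my-org/open-weight-base")  # hypothetical open weight model
    params = SamplingParams(max_tokens=128, temperature=1.0, n=4)  # sample several rollouts

    def reward(completion: str) -> float:
        # Toy reward: did the rollout mention the expected answer?
        return 1.0 if "helsinki" in completion.lower() else 0.0

    prompts = ["What is the capital of Finland? Answer in one sentence."]
    for step in range(3):                        # a few illustrative RL iterations
        outputs = llm.generate(prompts, params)
        scored = [(o.text, reward(o.text)) for o in outputs[0].outputs]
        # ...hand (rollout, reward) pairs to the trainer and update the policy weights here...
        print(f"step {step}: best reward = {max(r for _, r in scored)}")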

Ryan Donovan: How much of a risk for sovereign AI is giving up any piece of that pathway? Because when you see these open weight models, they're not fully open source or open training, right? You don't entirely know what's gone into the pre-training part of it. So, how do sovereign AI builders measure that risk and weigh that risk against the cost of doing the full training stack?

Stephen Watt: I think the challenge is that evals are a hard problem in generative AI, and even if it's open weight, you don't necessarily know what predetermined decisions have been built in around what the model is meant to know and not meant to know, depending on where it came from. I think this creates some anxiety and some risk, and I'll just say that there are starting to be regional open weight alternatives. In the United States here, we have Reflection AI, which is trying to be the US answer to DeepSeek, and so they're particularly popular both in the US and increasingly in Western Europe. I think, as they become popular, they're going to start being engaged more and more by those nation states, because there's more clarity around the pipeline that's being used and the data sets that are being used to actually end up with the open weights. So, it is a good point that you make, Ryan, which is the idea of open source and being able to say 'open-source AI' – what does that mean? Because most of these folks are saying, 'I've got an open-source AI,' and you're like, 'really? Do you, though?' Because it's really just open model weights, and I have no idea what pipeline you used, and I have no idea what data you used. We, [Red Hat], tend to think open-source AI is minimally the open-source software components, so Linux plus vLLM plus llm-d, and an open model weight. That's the minimal definition of it, but really what we'd hope to see in the future is a definition that includes the pipeline as well as the data sets. And I think at that point you really get to sunlight as the best disinfectant, where people can truly see what they're getting and what they're building on.

Ryan Donovan: And they can recreate it if they need to. I think for some folks there's also legal exposure to open weight models, whether it's using copyrighted materials, or the fact that a few of those open-weight models have been accused of distillation of more powerful models. Do you think that legal risk is part of it?

Stephen Watt: It's a great question. IBM Granite was one of the first open-weight models to come out, and what was unique about Granite is that it came with indemnification. IBM was so confident in what it had been trained on that they basically said, 'look, if you use this thing and you get sued, we'll indemnify you.' And this, I think, came on the heels of one of the frontier model providers getting sued, I think by the New York Times, where it had become evident it had been trained without authorization on their content. I think it's a legitimate concern, and until there is transparency of the data sets, the legal system, which varies from country to country, is still figuring this out. There's this fascinating problem that we deal with in tech of linear government colliding with exponential tech. It's always hard for the legal system to keep up with how fast tech is moving, and I think we're going to continue to see the legal system running behind, trying to keep up with this stuff.

Ryan Donovan: In terms of future sovereign AI concerns, what do you think is going to start being the thing that people think about that they aren't thinking about now?

Stephen Watt: The topic now is the different regional sensibilities. In the States, there is the ability to keep building power stations and data centers, so the States has a very smooth path to consume the latest AI accelerators. The rest of the world, not so much. And so, if you're in a region that is challenged to build the infrastructure to power it and to cool it, then you are dealing with some different challenges. So, a couple of things that maybe aren't well understood today. There's something called the 'sovereign paradox,' where basically, if you can't build and run this infrastructure in your own country (and there are countries in Western Europe that can't, or can't until 2035, when new construction is completed), you have to go operate it out of country, which actually negates the whole concept of sovereignty, or at least national sovereignty. You switch it for regional sovereignty, and you go build that in data centers in the Nordics. So, there's that thing that I don't think is very well understood in the conversation. But then, there's the second piece: okay, what do we do between now and 2035? And from a technology standpoint, this is pretty interesting, because they either trigger that sovereign paradox and go out of country, or they meet the power and cooling budget of the floor tiles of the data centers they have today, and that could mean CPU inference. So, there's a technology called vLLM CPU that we're working on, which allows you to take all the generative models that you can run on vLLM today and operate those on CPUs. Now, CPUs are not GPUs. They're, I think, up to two orders of magnitude different in the number of cores that they have. But the latest x86 instruction sets have this thing called AMX, matrix multiplication capabilities inside the cores. The Intel Xeon 5s and the Xeon 6s include these, and those are in these data centers today. And when you take that and marry it to Kubernetes, which can do broad scaling, then, going back to one of your original questions, you take vLLM, put it on a CPU, and you set expectations around token throughput: if your use case or your agent can tolerate inter-token latency of a hundred milliseconds or greater, which is certainly fine for experimentation and testing and things like that, then you can just spin up lots and lots of model instances across your already large cluster and get aggregate performance that way. So, that's one approach. The other thing that we're doing is in core PyTorch, which is the project that's basically being used in pre-training, post-training, and inference, in different parts of the project. For the first time, we've moved the accelerator support out of tree, and that makes it way less complicated to develop new accelerators, because effectively we've just defined an interface that plugins can comply to, and so we've taken a lot of the friction out of accelerator development. This is particularly interesting because another way you can meet the power budgets, the performance budgets of the floor tile, is to go to lower-power, inference-focused accelerators.
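
As a rough sketch of that CPU-inference path, here is what the offline vLLM API looks like, assuming a CPU build of vLLM (the "vLLM CPU" work mentioned above). The model name and prompt are placeholders; the point is that the same generate call runs on x86 cores, with latency expectations simply set lower and throughput recovered by running many replicas on an existing cluster.

    from vllm import LLM, SamplingParams

    # Assumes vLLM was installed with its CPU backend; on newer Xeons the
    # AMX matrix instructions help, but inter-token latency will still be
    # far higher than on a GPU.
    llm = LLM(model="facebook/opt-125m")          # small placeholder model
    params = SamplingParams(max_tokens=64, temperature=0.7)

    outputs = llm.generate(
        ["Summarize why CPU inference can be good enough for experimentation."],
        params,
    )
    print(outputs[0].outputs[0].text)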
So, GPUs today are things that are great at video games, they're great at cryptocurrency mining, they're great at AI. But what if you created something that's inference-focused? Inherently, because it's so targeted, you can really control what it does and the power it consumes. And that can start to get closer to what a CPU demands, which is what these data centers were designed for. It changes the thermal dynamics, the performance dynamics, the power dynamics. And I believe that's what we're going to start seeing in the industry. This challenge around being able to power and cool these things is going to create this opportunity. We already saw this with Groq and Cerebras, which are two accelerator companies. I think the latest OpenAI Codex model is a partnership with Cerebras, and we saw Nvidia recently do, it's hard to articulate what it was, some sort of pseudo acquisition around Groq. But I think that signals a recognition of, we need to get in early on this market. And then, there's also RISC-V, which is a much longer play, but for folks listening that aren't familiar with RISC-V, it is basically open-source silicon, right? It's an open processor architecture, and people are already developing RISC-V chips for inference. And so, I think that's very appealing if your sovereign stack is just all about open, from agentic all the way down to the silicon; there are people exploring RISC-V in that space. So, I think we're going to see an explosion of inference-focused accelerators. If you think about open source, you spoke a little bit about agency; you implied people could rebuild the pipelines themselves to produce models, and that reproducibility, although not technically inside the OSI definition, is something that we all believe in as part of the core value of open source, which is: I can pull this thing down, and I can reproduce it, and make my own thing from it, because I have the source and the build instructions. So, that agency in being able to tinker with any part of the solution is tricky when you're dealing with a monolith model. I want to pull this model down. Why does the business world care about that? Because you can do business stuff with it, right?

Ryan Donovan: The business world loves business stuff.

Stephen Watt: Exactly. And so, to do business stuff, you actually have to get that model to do the thing that you want it to do, in a manner that's good enough, and that can imply post-training, taking that model and changing it. A huge monolith model is unwieldy to deal with. You need a specific, beefy enough server to stick that sucker in; it's got to fit into all the VRAM that you have. And then, you may have to go multi-server and use something like llm-d if it's big enough. So, there's another way of thinking about that, which provides more agency, and control, and reproducibility: what if you can get to that same level of reasoning performance by combining a series of smaller models? So, small language models, okay? And that's great, but then how do you stitch them together so that it looks like you're dealing with a monolith model, but actually you're dealing with a disaggregated, modularized setup, like we're more used to in cloud native? And so, this is a project like the vLLM Semantic Router, which I think is quite interesting, because it's more broadly talked about as the concept of inference gateways. And part of this is they can do a lot of things, like static routing between models, or semantic analysis. So, you just ask it a question in the same way you'd ask a general question to a monolith model, and it can figure out where to route the thing. And there are all different kinds of capabilities in this. Maybe routing is just policy-driven, and that can be pretty arbitrary: the policy might be checking, is there PII inside this inference request? 'Hey, this request is from Stacy. Stacy's team gets access to the gold-standard equipment. This request is from Sally – Sally's on a research experimental team, so that goes to the A100s, the older stuff.' There are all kinds of different capabilities you can put into this, and it's starting to become disaggregated, which allows this idea of control, and agency, and fine-tuning, which gets us closer to what we're more used to. And it allows us to build more purpose-built systems for what we need.
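
Here is a toy Python sketch of the policy-driven routing Stephen describes: screen each request for PII, then pick a backend tier based on who is asking. The team names, endpoints, model name, and the crude regex check are invented for this example; real inference gateways such as the vLLM Semantic Router do far more.

    import re
    import requests

    BACKENDS = {
        "gold": "http://vllm-latest-gpus:8000/v1/chat/completions",   # hypothetical endpoints
        "research": "http://vllm-a100:8000/v1/chat/completions",
    }
    PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude SSN-style check

    def route_request(team: str, prompt: str) -> str:
        # Policy check: refuse requests that look like they contain PII.
        if PII_PATTERN.search(prompt):
            raise ValueError("Request rejected: possible PII in prompt.")
        # Policy routing: premium teams get the newest hardware tier.
        backend = BACKENDS["gold"] if team == "gold" else BACKENDS["research"]
        resp = requests.post(backend, json={
            "model": "small-task-model",          # hypothetical small language model
            "messages": [{"role": "user", "content": prompt}],
        })
        return resp.json()["choices"][0]["message"]["content"]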

Ryan Donovan: Yeah, definitely seen a push towards smaller, more focused models.

Stephen Watt: Yeah, and I think agentic is driving that, right? If an agent is a very specific thing to accommodate a specific task, it just needs a small model that understands that domain, say tax. It's a really good fit, and I think there are a lot of exciting things coming from that, but what's challenging is that this AI space is moving so fast. And so, what we have is a very fragmented user base of people at different levels of maturity and consumption. You have people that are still trying to figure out what the business use is, how to build agents, how to get value out of this; other folks are super users that are using agent teaming and doing really incredible things. And there's the SaaS paradox, where folks have become so proficient at this, and have built so much of their business around it, that using the SaaS model is eating up so much of their margins that they're now figuring out how to come back on-premise to a self-managed setup. And then, they have different concerns, and that's probably the furthest end of the spectrum. So, a lot going on in the space.

Ryan Donovan: Alright. It is that time of the show again where we shout out somebody who came on Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. Today, we are shouting out somebody who won a Populist badge. That's where they dropped an answer that was so good, it outscored the accepted answer. So, congrats to Ittiel for answering, 'Print timestamps in Docker Compose logs.' If you're curious about that, we'll have the answer for you in the show notes. I'm Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you have questions, concerns, comments, topics to cover, et cetera, et cetera, et cetera, email me at podcast@stackoverflow.com, and if you want to reach out to me directly, you can find me on LinkedIn.

Stephen Watt: My name is Steve Watt. I'm the VP of the office of the CTO, which is our AI research and emerging technologies function at Red Hat. You can find me on LinkedIn at Steve Watt. I look forward to chatting to you if you're interested in these topics.

Ryan Donovan: All right. Thank you for listening, everyone, and we'll talk to you next time.
