
Transforming enterprise workflows: How IBM is unlocking AI's potential

Learn how IBM deployed and integrated AI tools in the ultimate enterprise environment.


In this episode of Leaders of Code, Stack Overflow Chief of Product and Technology Jody Bailey chats with Matt Lyteson, CIO of Technology Platform Transformation at IBM, about the processes and challenges of adopting AI within an enterprise environment. They explore IBM's strategic approach to integrating AI into workflows and emphasize the importance of fostering the right behaviors among employees, particularly regarding automation and AI assistance.

The discussion also:

  • Explores what it means for a company like IBM to truly embrace AI, with Lyteson sharing strategies for integrating AI into every workflow to maximize productivity across the organization.
  • Highlights key challenges like data privacy, security risks, and the critical need for workforce reskilling in an AI-enabled world.

Notes

TRANSCRIPT:

Eira May:

Hi, everyone. Welcome to the Stack Overflow Podcast. Today we have another episode of Leaders of Code, where we're chatting with tech leaders about the work they do, how they build great teams, and the challenges they're facing. My name is Eira May. I am the B2B editor here at Stack Overflow. And I'm joined today by Jody Bailey, who is our Chief of Product and Technology, as well as Matt Lyteson, who is the CIO of Technology Platform Transformation at IBM. How are you guys doing today?

Jody Bailey:

Great. Thank you.

Matt Lyteson:

I'm doing fantastic. Thanks for having me today.

Eira May:

Oh yeah, our pleasure. So glad you could join us. So I wanted to just kick us off by posing a question to Matt. So as IBM's CIO, I imagine you're at the center of the company's internal AI projects, AI transformation. So I'm wondering what does it mean for a company like IBM to embrace AI, to lean into AI? And what is that looking like on a day-to-day basis for your teams?

Matt Lyteson:

Well, thanks, Eira. You started off with such an easy question here. As you can imagine, we've got a lot of AI and technology. Core to our strategy is hybrid cloud and AI, and of course, the emergence of quantum computing. Internally at IBM, I'm responsible for delivering the solutions and technology platforms that every IBMer operates on to ultimately be more productive. We try to turn that into: how do we use our own products, in some cases before they get into the hands of our clients?

But I would say more importantly, how are we looking to inject AI into every single workflow, into every single task that IBMers are participating in, in order to help them be the best at what we need them to do to help the company grow and accelerate.

So I would say that's a high level summary, and I think we're going to get into a lot of the specifics and details around that.

Jody Bailey:

What I like about what you said is it seems like you have an outcome defined, in terms of what you want to do. You want to help everybody do their jobs better and more effectively. I'm curious, what is the desired outcome of implementing AI at IBM?

Matt Lyteson:

Well, I think it all comes down to what I said a few minutes ago. It's how do we have every IBMer be the most productive they can be? And I think you're absolutely right that this means let's focus on the outcomes. Ironically, this is what we CIOs were talking about 10, 15, 20 years ago. I think in some cases we got it right, and in other cases not, but what's the value of the technology that you're implementing for the organization? I think AI almost gives us an opportunity to have that conversation much more intentionally than we did before.

So when we think about our internal AI use cases and where we're going to focus in the workflow, we do a couple of things that I like to think are interesting and that resonate with a lot of my peer CIOs. First of all, we distinguish what we'll call everyday productivity, where we're putting AI into simple tasks. That's how do you and I maybe save 15 minutes developing a presentation, or how do we summarize email, or how do we read documents faster because we can summarize and use RAG patterns to get intelligence much faster. That's helping all of us make better decisions, operate faster, take some of the mundanity out of our work, and do more exciting things. But that's on one side of the equation.

On the other side of the equation, we're thinking about how do we put AI into the end-to-end workflow? And when I think about it through that lens, I can start to talk about my outcomes in terms of, are we growing revenue faster? If we're focused on the operations functions, are we getting better at operations? Which means, am I doing a workflow faster? Am I producing the output of that workflow at a lower per-unit cost? Which helps me to think about how my company can scale a little bit differently. And then there's a third bucket we think about: how am I using AI to reduce my risk posture or manage risk overall effectively?

With those three buckets, we can have very focused conversations as we're running experiments and building AI solutions in the workflow. That's a little bit different from, I'm going to save everyone on the team 5 to 10 minutes because we don't need someone taking notes in the Teams meeting that we're having, for example.

Jody Bailey:

I love that. I'm curious, are there, in any of those three categories, any specific examples that you'd want to call out where you feel like you and the team got it right and had a pretty significant impact?

Matt Lyteson:

I think there are a couple of examples. I'll maybe list two here. One is an example of how we, as CIOs, with engineers on our teams, lead by example. I think we are seeing a shift toward how do we get this into every single business function.

Internally, we did this with what we call Ask IT, where we really took out some of our level one, level two support and put AI in front of that, and did that in about 100 days. This was about a year and a half, almost two years ago. We said, "What if every single one of our 280,000 IBM employees asking for IT support gets that first through an AI-based approach?" And then on the backend, for those things the AI isn't able to handle on its own, we have multilingual translation, and now my IT support agents are able to handle much more complicated tasks much more easily and actually get better satisfaction from the role, because they aren't trying to tell you, Jody, how you can reset your password when we've given you clear instructions and there's an automatic password reset. I think that's frustrating to people who love to help people with their technological needs.

A more recent use case that I'm super excited about: if you think about it, there are a lot of steps required when we're thinking about AI from an enterprise perspective that are maybe a little bit different than if we were running a startup. I need to make sure that I'm focusing AI on the right things within my operation, that I can trust it, that I have the right human in the loop, and that I'm respecting data privacy and cybersecurity. And typically, this touches a number of different teams when you're contemplating or considering that type of use case. So we've got an AI ethics review, we've got an office of responsible use of technology, and we've got our CIO teams that own the platforms. We really started to take a look at that, on the mantra that we've had for our overall evolution here at IBM over the past couple of years: eliminate, simplify, automate, and some of us will even add "AI-ify" at the end as a fourth category.

So we said, which steps in this process can we eliminate? How can we simplify? Because a lot of these teams during this review are asking for similar types of information: What type of data are you using? Which business process is this associated with? And then I, as a CIO, need to understand where this is running on the platforms. What's it connecting to? How important is it? Same as we do for any business-critical application, so if something goes bump in the night, which hopefully it doesn't, we're able to communicate to IBM employees what impact that's going to have on the operation. So with that broad context, we said, "How do we make this faster?" We literally went from a two-week process of doing all this back and forth with the business case to now, in about five or six minutes, you can have an entire environment provisioned on what we call our enterprise AI platform in order to build your thing.

We've connected it with all the necessary data privacy and AI ethics reviews, with the right information, and really streamlined this process. And I think that's really a game changer when we think about how many agents employees are going to develop over the course of the next several years, because I don't think the IT department is going to do all of that. But how do we know what's going on? How do we do this in a safe, consistent way? And just as important, I will say, to your earlier comment, how do I understand the value that I'm getting out of that? So we tie it end to end, from the time you think about doing something, to the platforms, to what it's connected to, to how we're tracking the value on the backend through our technology business management and our enterprise business management as well.

I think a lot of enterprises are going to need to think through this framework. So that's just another example of how we've implemented some of this internally.

Jody Bailey:

You mentioned agents being developed and it won't all be done by IT. I'm curious for you, and at IBM, who do you envision building agents? Do you imagine a world where just about anybody can build an agent for themselves or how are you thinking about that?

Matt Lyteson:

I think we need to be cautious on that. Part of the reason is that I think a lot of CIOs like myself still have a little bit of anxiety and stress over what happened in the early days of cloud computing, where everyone somehow found a way to get access to a cloud account, and now, 10, 15, 20 years later, we're still cleaning some of those things up and trying to articulate cloud optimization. So we were very intentional here, I think partly because of my own personal anxiety about not wanting to repeat that or be cleaning some of this up 10 years from now. But very early on we said, "We need something that's really an enterprise AI platform that gives people the ability to safely build these AI things, whether it's an agent or a digital assistant, or maybe even traditional AI machine learning, at enterprise scale."

So we asked ourselves, "What does that look like?" And some of this is that intake-to-value mechanism that I just described. I think it's also reconsidering, okay, if you've got a platform like this, who are the people that are going to build on it? So one of the things that we did late last year was develop what we call an AI license to drive. Understanding that, yes, of course, as a technology company we may be similar to or different from other enterprises in that we've got a lot of people who like to play around with tech. But I said it doesn't make sense that just because of where you sit on the organizational chart, that dictates whether you can do that or not. What do we really care about?

And when you think about a traditional IT organization, you'd say these are really the people that understand data privacy, that understand information security, that understand some of what it means to connect into backend enterprise systems and to make sure that you don't hammer them so hard that they tip over through a for loop, like maybe happened sometimes in the past.

So we developed this AI license to drive that basically gives you the stamp of approval that you know what you're doing and you are going to care for and feed this thing. Because I also, as a CIO, don't want you, Jody, building something and just saying, "Matt, I don't have the skills or resources to maintain this going forward. Can you take it over?" That's an unhappy conversation for us in the CIO department. So we wanted to get out of that, and we think the license to drive is one way of doing it. What does that mean for who can build? I think every organization is figuring this out as they go along. Some organizations have opened up platforms that allow everyone to build. I think we do that in a controlled enterprise way that's a little bit agnostic to what organization you're in. But if you're following these rules and guidelines, you're building on our enterprise AI platform, and you're coming through the front door, then I understand enough of what's going on to allow you to rapidly experiment and move forward at the speed that I think enterprises really need to.

Jody Bailey:

The AI platform, is that an IBM platform? Have you built your own?

Matt Lyteson:

I would say this uses our own technology, but I think what every CIO needs to think about is, what's your opinionated version of that technology? Even our foundation is WatsonX Orchestrate, along with WatsonX Data and WatsonX Governance. You may choose to use one of those components, but then we talk about, well, what's your CRM? What's your productivity stack? Are you using the Google enterprise stack? Are you using the M365 stack? Are you using something else? All of these are considerations, because these need to get plugged into that platform. If you start developing agents that don't have access to data, that don't have access to your productivity stack, they aren't going to be very useful agents.

Jody Bailey:

Right.

Matt Lyteson:

And so I like to refer to this as a hyper-opinionated installation and configuration of our products, along with some of our partner products, so that we have assurance that we're following our cybersecurity policies and doing this in a trusted way. And I think every enterprise is going to need to make their own decisions about that. Ideally, they're using some of our tools, but you're going to have a mix and match. And even if you're using some of the same tools that I am, you're going to be doing things a little bit differently because of your context. But it's that hyper-opinionation that allows us to move faster. And again, we're solving for that 80% use case to allow things to move at the speed they need, because we've got one way to integrate with your chat system, one way to integrate with your email and calendar system, one way to integrate with your CRM, your IT service management system, et cetera. And that's really where the opinionation comes into play.

Jody Bailey:

Within IBM, obviously it's different for all organizations, but do you have people choosing their own tools? Are they able to? I'm thinking of the shadow IT example, where everybody can go out and get access to something. Is that something that you experience? And either way, how do you think about what tools to empower people with? There are so many tools, and things are moving so fast now. How do you think about that?

Matt Lyteson:

So I think we, like a lot of organizations, have a top-level corporate policy that, say, sets the high-level guardrails, and then for our internal use cases, and that's really where I get involved, I'll have that be discrete from the client-facing teams and product teams that have their own guardrails within that top-level corporate umbrella. But we want to understand what tools people need and make sure that we are using the fewest tools that are going to get us the biggest value fastest. What we don't want is every single individual having their own tool that does the exact same thing. And really, if you think about it, Jody, this is no different than how you probably don't want to have 10 project management tools. So what's the one project management tool that we're going to standardize on? I think CIOs have a large role to play in that.

I think what we are finding that's different is that people are coming forward with new, different, innovative ideas. So how do we triage those quickly, maybe run a quick evaluation and proof of concept to say, does this make sense? Is this an area that we want to invest in to scale out a little bit broader? Or is this an area where, okay, there's a problem someone was trying to solve with this: what's the outcome that we're after, and how do we solve that with something that's built into our enterprise platform? But certainly, we're seeing a lot of people with a lot of innovative ideas, and I think we're keeping pace with that and with what's going on in the world, and how do we translate maybe what you and I are using in our home lives to plan a family vacation or just get insight. My kids are using AI all the time to learn new things. It's a little bit different than what we need here in the enterprise context.

Jody Bailey:

One of the questions I'm asked about often, and I imagine you are too, or you've thought about it: one, I think we're getting more developers rather than fewer, because just about everybody can vibe code and do something productive. The question or comment I get a lot is, okay, but they don't understand what they're doing, and it's really valuable for senior developers, but then you have people who don't understand the code and get stuck. And there's this fear that we're never going to have any more senior engineers because nobody bothers to understand the code. What's your take on that?

Matt Lyteson:

I think there's a shift in the skills that are required. And the interesting thing is, we ran some early experiments with one of our products, the Watson Code Assistant for Ansible, within my software reliability engineering team. And the initial feedback I got was, "Well, this is great for our junior SREs. This is not so great for the senior people on the team." And then they started sharing information and creating what I think a lot of teams have today: "Here's my prompt library, here's what really works well with this tool." And then the tool starts to evolve a little bit.

And then it shifted to our senior-level engineers getting the most out of this, because they're able to move much more quickly, they're able to assess the code quality, and they're able to adapt their prompts much more quickly. So you apply that, then, to the use cases that you're doing in a business setting, like our procurement team; we're developing a lot of agentic capabilities for procurement now that sit between and across our core platforms to do procurement. And what we're finding, interestingly enough, is that we call these AI fusion teams, where we take people who really understand that business function and complement them with technologists who come from the CIO function to make a new hybrid team. And what this forces the team to do is focus on that outcome: you've got the procurement people who are learning how to do prompt engineering and who really understand that business function and the workflow of that function.

That's something that I think a lot of IT organizations have not hired for, historically. We say, "Jody, I need you to run this procurement system." And maybe you'll synthetically absorb what procurement actually does over a period of time.

Jody Bailey:

Right.

Matt Lyteson:

I think I did that early in my career, because I was in a consulting role before I moved internal at some of my previous organizations. So I got to know the client, got to understand the workflow. Internal IT organizations, I think, have traditionally been a little bit different. And especially with the agile transformation that we all went through a few years back, the focus was really on the engineering and, I would say, more on the listening skills rather than appreciating how the function operates. That's got to change. And so we've got people who are in that function, like my procurement example, who need to understand how to prompt engineer, maybe not so much low-level coding, but they will begin to get good at vibe coding if we give them those tools.

And then we've got the engineers from the IT organization who are taking the best of their engineering skills and really need to understand the function that they're operating on. Okay, what's the data that's required for procurement? What does that mean? What's the flow from the time someone thinks they want to buy something to the time accounts payable actually writes the check to that company, and everything that's involved? I think that is a huge shift in skills, and I think our organizations right now need to adapt for that. I think we're finding a lot of success with people who understand these functions. I think we're finding ways to get new talent who really understand these AI tools up to speed quickly on the functions. And I think it's an entirely new world for all of us that we're in the middle of right now.

Jody Bailey:

Yeah, that's super cool, because one of the places I've seen a lot of benefit is on our design and product side, where designers and product people are able to vibe code something that more or less works as a way of communicating. But if you take that a step further, which it sounds like you're doing with the AI fusion teams, you actually have the expert doing the vibe coding. As opposed to explaining it to a product or design person, who then does the vibe coding, who then gives it to the engineer, you're collapsing that whole cycle and getting right to the meat of it.

Matt Lyteson:

Yeah, that's exactly right. And again, it goes back to some of the concepts that I mentioned earlier, where I've got this enterprise AI platform that we've built that's connected to our enterprise data platform, because AI doesn't work without that data. And that's done through a number of different mechanisms: traditional APIs and, increasingly now, tools and MCP servers. I've got this license to drive. So now I've got the guardrails, I've got the signposts on the road, the speed limits, I let you drive, and now you can learn how to write the prompts by using AI. And then you can have the people on that team who are maybe traditionally in IT say, "Okay, I need an interface to another system. I need an MCP server. I need an API." They know how to do that. Maybe the people in procurement don't, but the people in procurement know how to access that data. They know what needs to get solved. You bring them together and you start to see amazing results.

Jody Bailey:

I'm curious, I've got to imagine, especially at a company as large as IBM and with all the people that you work with, do you have a group of people who are just like, "No, AI is a fad, it's a bubble, I don't believe it," or "It's going to take over the world and I'm not going to do it," or some degree of that?

Matt Lyteson:

I don't think it's as overt as you just described there.

Jody Bailey:

Right, right.

Matt Lyteson:

Maybe it is in their own heads. I don't think anyone thinks this is that.

Jody Bailey:

They're not saying that out loud to you anyway, right?

Matt Lyteson:

Yeah, they wouldn't. But think about it, and I love this. I think I heard Ethan Mollick at the Wharton Business School talking about this a few weeks back, and he highlighted some very important points. First, you've got a lot of organizations that have these corporate policies which set the guardrails. And I think at first, it was like, "No," or, "No, and you can only use this one tool." And then we started to say, "We're going to be an AI-first organization," so how do you square that? And then I do think there is a shift of roles within the organization, rightfully so. I need fewer people doing tier one IT support. What does that mean? And people start to internalize that. You combine that with the fact that, in some organizations, we've really patted people on the back for "working hard," and then we've had conversations like, am I cheating if I got some help on this research assignment, things like that.

So I think all of those are natural reactions as human beings. And I think what we're trying to do as leaders is really encourage the right behavior that we want to see. I do want people working hard, and I do want people focusing on the things where only a human is going to be doing that now. I don't want people doing the rote things over and over, like summarizing incidents and things like that. Some of that's AI, some of that's automation, but I try to be very intentional with my team.

And I think even in the example of, Jody, if you were up working all night and around the clock on the weekend fixing something that could have been avoided, I want to say, "Nice job," and then move on from there. But I don't want to give you a gold star for that, because then I've implicitly, if not explicitly, reinforced that our culture here is about working hard, when really I want you thinking differently about how we move.

And how we can apply technology to further the mission. So yes, I think we, like many other organizations, are struggling with that, and we look at how do we enable people? How do we get people comfortable with this? Because again, early on, we had a lot of mixed experiences with this. Some of it was the tools weren't that great. Some of it was, I didn't know how to write a great prompt, for example. Some of it was, I got some unexpected hallucinations. We saw that with some of our internally developed tools, and you ask yourself why; that turns some people off. So very intentionally, we're starting to look at our tooling, and internally, yes, we use industry tools from our partners for this everyday AI.

We also build some everyday AI capabilities, and one of these is called Ask IBM. That's our front door to any agentic AI experience across the company. It's getting to that point because we know all these assistants and agents are very difficult to find, so how do I know which one to go to? So we're starting to plug things into an overall framework. But the early instantiation was that we trained this on our internal intranet, and we got some interesting results, just to give people the experience of working with it. When we deploy these tools, we need to make sure that we're following up with enablement. What are you learning about this that's a little bit different from what I learned? What's working well for you? What's maybe not working well? And then we go and do brown bag sessions, we do enablement sessions, we do town halls, and we see what that does.

But I think all these things are about how we get people comfortable with working with the technology. Because, if nothing else, I am convinced that the people who are going to be most effective in their roles are the ones who are finding how to use the technology to produce results and validating the technology through the human knowledge that's not going to come natively from the technology. It's that human validation in specific use cases. I think those are going to be the people who are most effective moving forward. And the more folks across the organization I can get comfortable with that, that's what I want to see, because that helps everyone win and succeed.

Jody Bailey:

I know human validation is something we focus on a lot at Stack Overflow, both on our public platform as well as our SaaS products: really figuring out not necessarily how to make things humanless, but how to need less human effort. There's so much knowledge within an organization, especially one the size of IBM, and so often it's duplicative, or some is new and some is old. How do you sort through that, and how can you use AI for that? But then also ensuring that validation piece is tricky, or can be tricky, in terms of getting to the right subject matter expert, et cetera. So it's a super interesting journey in terms of enabling people and dealing with hallucinations or errors, because it's going to happen, right?

Matt Lyteson:

Yeah, precisely.

Jody Bailey:

It's non-deterministic and even testing things is challenging, right?

Matt Lyteson:

I think you're exactly right. And I think we've even seen instances where you put it out there and then a week later, it's producing different results than you initially tested for. So I think that's one of the big things that we coach people on. This is not a one-and-done. When you and I would maybe develop a web system in the past, yes, we'd maintain it or fix bugs, but you could say, "Hey, this was done," and move on to the next thing. I think we learned through Agile that that doesn't keep pace with the business. But now, even with AI, we kind of need to remind ourselves of that: you're going to have drift, you're going to have unexpected results.

So that's where we use our WatsonX Governance platform to help us monitor and detect that over time. And that is so critical when you think about these enterprise business workflows. It also starts to become critical because you may say, "Okay, I'm just going to reprompt this now and get a better result." Well, then what is that costing us? I need to understand the cost of that, whether I'm using tokens from a provider or whether I've got an internal team, which, again, is why we've got this plumbed start to finish all the way into our value framework for the workflows. If I can't detect, for example, that you're now having to prompt twice in order to get the same result, and that's costing me twice as much, and I need to go back and engineer a fix for that, then that's resources I could have put on something else. And so that becomes critically important. And I think we have to understand that we aren't done; we're going to keep iterating as the tools and tech get better, as they give us different and unexpected results along the way.

Jody Bailey:

What are the key metrics you're looking at? You mentioned token counts and cost, obviously, especially from an infra-ops perspective; I'm sure you're paying attention to that. But in terms of AI usage, governance, and security, once things are rolled out, I'm sure you have a process for the design and build, but how do you monitor things going forward, and what kind of metrics are you really focused on?

Matt Lyteson:

We use our WatsonX Governance platform for that, and we look at things like drift. We also have feedback baked into all our tools, so the thumbs up, thumbs down, to make sure that people are reporting some of these things. And then even the traditional CSAT surveys that we're using right now to understand what's going on, what's working well, what's not working well. I think all those are important metrics, but I think just as important is the human feedback, in order to not just understand, is this thing still working, am I producing the results? And I can see that. Let me double-click back to that example of our Ask IT. I am monitoring my traditional business metrics in terms of how quickly we're resolving an issue. Take the share of issues the AI is able to handle on its own: right now we're at about 81, 82%, but if I start to see a variance in that, then I know that's going to have an impact on the humans behind the scenes who should be handling the more complicated things.

So we're tracking the traditional operational metrics, and we're also tracking the performance of this. And then that gets into, well, what is it costing me, on average, to service that ticket? How quickly can I get that done? That gets back into some of the value things that I brought up earlier. But I think all of those come into play. And then with the feedback that we're getting, we can quickly detect, did we do something differently here? Is it getting access to different data in our tickets, things like that, that's causing it to respond abnormally? And then we can go in and try to rectify that pretty quickly.

So all of that becomes important. And I would argue that in cases where we haven't used a lot of data in the past to run the operation, it becomes even more important. You could say, "Well, Matt, any customer support function probably has a good handle on its data." Everyone's tracking NPS and CSAT, and you would hope that any IT organization, though we're all at different levels of the maturity curve, has that data. But in cases where we don't, such as how long it takes me to respond to a request about the state of a purchase order, for example, I don't think we had really great data on that. Well, now we need to benchmark that, because part of that's in my value equation, my value conversation.

Part of this is also, is the tooling that we're putting out there, is this agent, actually working correctly? Because if it's not, again, we've got to go in and remediate it pretty quickly. And that it will change over time is, I think, probably one of the unanticipated surprises we've had as we roll out more and more of these use cases.

Jody Bailey:

I would imagine those AI fusion teams are pretty good at identifying the outcomes and the benefits of those projects, but then you also have to monitor the other side of the equation, in terms of what is it costing you and is it generating the value?

Matt Lyteson:

Exactly.

Jody Bailey:

It's really useful to that person, but in the grand scheme of things, how does it fit into everything?

Matt Lyteson:

Yeah, that's exactly right. And again, that's partly why we set up this mechanism to get it all plumbed in from start to finish. So you're building on my AI platform, and I need to provision that for you. And again, this is not self-service. I tried to do away with that a number of years ago and said self-service is insufficient. That's like the self-checkout line at the grocery store: it's hit or miss, and if you get there on a Friday and the person in front of you forgets the code for the organic tomatoes, it's going to be a long time in that line. So we said we can't really do that with the platforms that we host and build things on.

Jody Bailey:

Right.

Matt Lyteson:

So we want this to really anticipate. Imagine you're at a fine dining establishment; they anticipate everything that you need. That's what you want in these enterprise AI platforms. So we ask you upfront, "What is the workflow that you're going to be injecting AI into?" And then, because we're provisioning every aspect of this, we're able to take the cost of the platforms that we're tracking, whether that's our internal platforms or our cloud platforms, and really tie that to your use case in our technology business management framework, and then tie that to our enterprise business management framework. So I can see, on a daily basis, what it cost me last week for this specific AI use case. Why did that spike? Why did that not spike? Do we have unanticipated costs because they needed more GPUs or are using more tokens than we originally thought they were going to use? That becomes extremely important. Because I do that upfront and I get that whole system plumbed, I can let the AI fusion team really focus on those use cases.

And then with the top-line executive of that business function, we can do that reconciliation. Are we seeing the results we think AI can have on that flow velocity? Because it may be okay if I'm spending a little bit more if I can get something done a lot faster. It also may be that I want that unit cost to come down, whatever the unit is. So creation of a purchase order or payment of an invoice, for example: those are all things we can get down to the unit level, and we can compare before and after. We try to do that with as many of our use cases as we can, and it is a new way of thinking. It is making sure that we get grounded in some of that data so we can actually see the impact.

And to your point, that may not be exactly the outcome that the AI Fusion team is after because they're focused on a specific use case, maybe a specific part of the workflow, but it's all got to plug into the system in the way that your company operates.

Jody Bailey:

Having the platform, not quite self-service, but anticipating and being able to track everything, being able to understand how AI is being used, what it's costing, where the bottlenecks are, and setting that up and building it upfront to enable the other teams: maybe it seems obvious, but I think it's brilliant, because that's something everybody is challenged with. Everybody's trying, especially in smaller companies, but I'm sure at a larger company too, everybody has an idea, everybody wants to use AI to solve for something. But then how do you know what the real cost is, what the real risks are? If you're able to funnel that through your own systems and infrastructure, then it seems like you're head and shoulders above where you might be otherwise.

Matt Lyteson:

I think it's a level of maturity and discipline. And I like that we've all talked about business outcomes and the value of IT for what seems like ages now. I think some organizations were able to do that and some struggled, but I think you can let people experiment very rapidly and have a hypothesis, really become that scientist. And interestingly enough, this is where a lot of smaller organizations are good, because they're willing to try something out very quickly, see if it works, and if it doesn't work, move on to something else.

I think larger enterprises can sometimes struggle with that, not in all cases, but if you really have that hypothesis-driven, scientific mindset, you can even back-of-the-napkin some of this stuff and say, "Okay, this is great. Did it do what I thought it was going to do?" "Yes." "Okay, what happens if I scale this out?" Some of those basic principles: you don't need to have it plumbed and systematized end to end like we're trying to do here, but at least think that way and get that in people's minds as they're starting to build.

Another thing we did, for example, is that when you come onto my platform, I can provision you a sandbox or playground environment to do stuff in a safe way, so you can kick the tires on it before it becomes a big thing that's really important to your particular team. And if I can let you do that, then I'm going to get you thinking about, okay, what is the real value of that? Let's figure out the cost. So I think we all know how to do these things. It's what all the best articles we read about operating businesses tell us.

Jody Bailey:

Right.

Matt Lyteson:

It's a matter of what's going on in our own organization that's either closer to that or maybe a little bit farther away, and how do we capitalize on that and figure out what's the behavior that we need to help influence in the right direction. I think if we can all do that, then we're going to see more value creation out of this AI world at enterprise scale than I think some organizations have seen in the early days.

Jody Bailey:

What are you most excited about and what are you most worried about, in terms of rolling AI out?

Matt Lyteson:

Look, I think what I'm most excited about is that the opportunity is limitless. And I am a proponent of, what can you learn every day? I joke with my kids in the morning when they're going to school. I say, "I'm going to work and I'm going to learn something." We compare notes on what we learned when we come back.

So to me, that's super exciting, because I really think this is the reinvention of the business world, and we're all at different stages in our journey and there's a lot we can learn from each other. People are going to hopefully learn from some of this. People are going to ping me and say, "Matt, here's something that we're doing that maybe you should consider." I'd love to learn from that. I am worried that if we don't have the right guardrails for the enterprise, it is too easy to miss, "Hey, we've got a data leakage here," or, "We've got a cybersecurity issue here."

So I think we're going into this with eyes wide open, and my team constantly has conversations with our CISO organization and our data privacy organizations: how are we thinking about this? What are the different implications now that we're deploying these tools, and what are the different risks that maybe we weren't considering years ago? What are the remediations we need to do on the backend with how we authorize and give these tools access to new and different data? I think that's going to be an ongoing challenge that we need to face head-on.

Jody Bailey:

Well, Matt, I've really enjoyed the conversation. I've learned a few things. I'm sure others will, as well. Thank you so much for your time. It's been great having you here.

Matt Lyteson:

Great. I enjoyed it, as well.

Eira May:

Thank you for listening to this episode of Leaders of Code. If you have any suggestions for topics you'd like to hear about or guests you would like us to speak with, you can email those suggestions to podcast@stackoverflow.com. And I have been joined today by Matt and Jody. Do you want to let folks know where they can find you?

Matt Lyteson:

Hi, this is Matt Lyteson, CIO of Technology Platform Transformation here at IBM, and you can find me on LinkedIn.

Jody Bailey:

And Jody Bailey, CPTO at Stack Overflow, also on LinkedIn, Jody Bailey.

Eira May:

Thanks guys. And I have been Eira May. I'm the B2B editor at Stack Overflow. You can also find me on LinkedIn, and we will see you on the next episode of Leaders of Code.
