
How Braze’s CTO is rethinking engineering for the agentic era

Jon Hyman, co-founder and CTO of Braze, shares how he's led the company's engineering organization over nearly 15 years of growth — and how they transformed into an AI-first team in just a few months.


Jon Hyman, co-founder and CTO of Braze, joins Stack Overflow CPTO Jody Bailey on Leaders of Code to share how he's led the company's engineering organization over nearly 15 years of growth — and how they transformed into an AI-first team in just a few months.

Jon recounts some pivotal moments where his thinking shifted (watching his team ship an MCP server six weeks ahead of schedule will do that!) and talks candidly about the cultural and practical challenges of driving adoption across a 300-person engineering org. He explains how model quality, not mandates, was the key factor in winning over skeptics, and why over 60% of Braze's committed code is now AI-generated.

Jon also addresses the harder questions: how to measure AI's real business value, the surprisingly steep cost of inference at scale, why "vibe-coding your way to scale" is folly, and what comes next as autonomous agents start building features overnight.

Connect with Jon on LinkedIn.

TRANSCRIPT

Eira May:

Hello and welcome to Leaders of Code. If this is your first time joining us, this is a segment on The Stack Overflow Podcast where we get senior engineering leaders together and we talk about the work they're doing, how they build their teams and the biggest challenges they're dealing with right now. My name is Eira May. I'm the B2B editor at Stack Overflow, and I'm here with Jody Bailey, who is chief of product and technology here at Stack.

Jody, thank you for joining us again.

Jody Bailey:

Oh, my pleasure. Great to be here and excited to talk to Jon and learn all the things he's up to.

Eira May:

Yes, me too. So our guest today is Jon Hyman. He is co-founder and CTO of Braze, which is a customer engagement platform that I have to say is very well loved by our marketing team.

Jon, welcome to the show.

Jonathan Hyman:

Hi there. Thank you so much for having me on the podcast.

Eira May:

Jon, I wanted to ask you, you've been at Braze for over a decade, I think. So you've taken it from startup to global leader. When you think about your definition of good engineering leadership and what that entails, how has your understanding of that changed over time?

Jonathan Hyman:

So you're exactly right. We founded the company almost 15 years ago, and that is basically a lifetime in technology. It's been really exciting to be there from the birth of mobile and really the growth of that, to build a business that was catalyzed by the world's shift to mobile, and now to be in a world that's being catalyzed and changed by AI and see the transformation on our business from both of those shifts. If I look at the mobile aspect of it, one of the big things that was a driver from the engineering side was just how much scale and speed you needed in order to operate in the mobile world. When businesses decided to operate in the mobile space, they gained access to a global audience, and users wanted to be able to interact with them whenever they wanted, and they demanded interactivity as their definition of real time.

And I always used to say to folks, "You've got high responsibility now. You have a device in my pocket you can talk to me through. When I'm with my friends or family or with my wife, you need to say something that's really important." And so when I think about that from the engineering aspect of it and the lessons that I've just learned, I think it's just really crucial that engineers and engineering managers just have a strong understanding of how to scale and how their product really works.

For me, being involved with the technical underpinnings and the technical architecture of the products since day one has let me be very effective as a leader. I often refer to myself as an on-the-ground general. I'm not in the Pentagon, just waiting in the back. I'm literally out with the troops, able to understand the operational challenges that we have, the complexities and difficulties we may be running into with a new feature or a new product space, or really just help us ultimately scale together. And I think that aspect has been very important to my ability to be effective and motivating for the team.

And when I look even now to the AI aspect of it, I think the same is true: I'm getting my hands deep in AI, launching all sorts of agents. I'm sure we'll talk about it later on the podcast. And I have a lot of my own point of view and perspective, and I'm able to go toe to toe with some of our best builders who are building really cool pipelines and setting up great automations and using AI to change their workflow. And my proficiency and understanding of that has allowed me to continue to just be effective in conversations and, I think, as I said, be motivating to folks.

Jody Bailey:

Y'all have grown a ton over 15 years, right? I assume the leadership style and what you need to do has evolved with that. How do you lead leaders differently than the ICs and then maintain that balance of being able to go toe to toe, so to speak?

Jonathan Hyman:

Well, we've certainly had to evolve our organization a lot as we got past the point of everyone being able to be in the same room and understand what folks are working on. And specialization was a big part of that. When we think about how we organized ourselves originally, sure, like many startups, you just had a pool of engineers, and then you would separate into, say, front end or backend engineers.

But at that point in time, early on in our company's existence, we had engineers who could work on anything and product managers who would work on anything. And we referred to the prioritization challenge that we would face to build new things as a Sudoku problem, where you needed to get front end engineers or backend engineers and product managers and designers to offer up their schedules and align together in order to build something. And we very quickly ran into problems there that led us into teams and now a divisional structure, where we have engineering managers who report up into directors of product spaces that report up into VPs of divisions that ultimately report up to me and our SVP of engineering and our chief product officer. And the evolution here has just been that engineering managers, and the roles as you go up that chain, need to manage increasing levels of complexity and lead work streams beyond just the day-to-day operations of the team.

And when we get to say the divisional leaders, we meet with all of our division leads on a weekly basis. And those are the ones where they've got to be leaders through goal setting, through architecture and design review. We're asking them to articulate visions for large sectors of the business. And so on one hand, I'm having conversations with the division leads around what ought they be building, what and how they should think about their product areas. Do they need to be improving the product health of a product space? Do they need to be going for adoption of features? Do they need to be going for driving monetization of those features? Ultimately, what are we doing here? How competitive is our product in the market?

But I also like, as you were calling out, Jody, on the engineering side, that you've kind of got to get deep down into the weeds sometimes in order to know how effective the team is being. And for me as a leader, I've even seen this be effective at times when you're working with engineers on estimates, trying to break the back of a problem. A lot of times folks maybe aim too high, they think about a problem a little bit too large and aren't seeing the forest for the trees and all the pieces there.

And you can just kind of come in and say, "Wait, this doesn't make sense to me. I don't understand why we can't do it this way." And then being able to actually prototype it myself can also help, as I said, break the back of that problem and show a team the path forward, and be able to then let their leader take it from there and usher it forward.

Jody Bailey:

I assume you're leveraging AI to help with that rapid prototyping for those conversations. Is that a fair assumption?

Jonathan Hyman:

Yeah. The explosion of AI has been really cool to see as we've evolved here at Braze. And I'm just in awe of how fast we've gone from AI is helping us autocomplete code, to AI is a junior software engineer who we can give guidance and direction to and it can do some stuff, to AI is now a senior software engineer that doesn't need as much guidance, to now thinking about how we can use AI automatically to build things in response to triggers. If a bug report comes in, can we automatically have AI submit a pull request in order to fix that? If we have a pull request that goes up and there are test failures, can AI just automatically fix that so the engineer doesn't need to do those things? There's a lot of automation that we're just starting to get there, and it's just so early innings, but in the last three months, we've just completely transformed how we're doing engineering at Braze to put AI in front of every way in which we're approaching projects.
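To make that kind of trigger concrete, here is a minimal sketch of what a bug-report-to-draft-PR hook could look like. This is not Braze's actual pipeline; the webhook payload shape, the branch naming, and the headless agent invocation (here, Claude Code's non-interactive `-p` mode) are assumptions for illustration.

```python
# Illustrative sketch only: a bug report arrives, a coding agent drafts a fix,
# and a human reviews the resulting draft pull request. The payload fields,
# branch naming, and agent CLI flags are assumptions, not Braze's real setup.
import subprocess

def handle_bug_report(issue: dict) -> None:
    """Kick off a headless coding agent for a new bug report and open a draft PR."""
    branch = f"ai-fix/issue-{issue['id']}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)

    prompt = (
        f"Bug report #{issue['id']}: {issue['title']}\n\n{issue['body']}\n\n"
        "Reproduce the bug, fix it, and add a regression test."
    )
    # Run the agent non-interactively, restricted to editing files and running commands.
    subprocess.run(["claude", "-p", prompt, "--allowedTools", "Edit,Bash"], check=True)

    # Push the branch and open a draft PR so an engineer stays in the review loop.
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    subprocess.run(
        ["gh", "pr", "create", "--draft",
         "--title", f"AI-drafted fix for #{issue['id']}",
         "--body", "Automated draft fix. Needs human review before merge."],
        check=True,
    )
```

The same shape covers the test-failure case Jon mentions: the trigger becomes a failed CI run instead of a new issue, and the prompt becomes the failing test output.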

Jody Bailey:

Three months seems quick, especially given the company's age, 15 years old. How did you make that happen? How did you go from code completion to autonomous agents?

Jonathan Hyman:

So if we look back at the AI track record here, for a long time, the AI state of the art was really code completion. And so we had these great JetBrains IDEs doing some good code tab completion. We're early adopters of GitHub Copilot, which also brought in doing a bit of code completion and asking a little bit about the code. And when Claude Code came out at the end of February of 2025, that for me was a big game-changing moment. I remember playing with it about six or seven days after it came out. And at dinner that night with my wife, I couldn't stop talking about it, just going on and on about how just impressed I was with what happened. I was sending screenshots of what I was doing to my co-founder and our CEO of being like, "I cannot believe AI is doing this now. Wow, this is shocking to me."

And my goal at the time was to say, let's just start to enable the organization. And so over the next two months, from February to call it late April, early May, I was trying to look at how I could just give the team access to different coding tools. We bought Cursor licenses for folks. We doubled down on GitHub Copilot for people who wanted it and started doing automated code reviews. I got Claude Code access for anyone who wanted to use that through AWS Bedrock. And we just started doing AI lunch and learns in order to enable the team. And this corresponded with me trying to push AI across the company as well.

The first phase was what I'd really just call enablement and guidance: let's make sure that people are aware of these tools, they know they exist, they understand how they can access them, and we start to show them examples of what to do. Because even just, gosh, eight months ago, we had lots of folks who didn't necessarily know what to do. I talked to executives at Braze who would say, "What should I use Gemini for? Can you give me an example of how I can use it?" Because we're all old dogs. We've been building either software or doing our non-software jobs across the business for, some of us, 20, 30 years. Now we have this new technology come in, you've got to build that muscle.

We continued to experiment and the models also continued to get better. And for us, it was August of 2025 that I'd say was the start of this big transformation, because in August we had a set of AI features that we wanted to release ahead of our big customer conferences that were coming up. And the first of which was an MCP server. We started seeing demand for that over the summer. And we actually said, "You know what? The MCP server is something that I'd say is low stakes. It's a greenfield. Let's just try and build the MCP server entirely with AI. Let's take two engineers and have them do it." And they went and built the MCP server, and they ended up about six weeks ahead of schedule because of using AI. And it was really exciting because that for me was the moment where I actually got that click of, oh gosh, this is going to be a stepwise increase in efficacy for our teams.
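For readers who haven't built one, here is a minimal sketch of an MCP server using the official Python SDK. The tool shown is hypothetical; the conversation doesn't describe what Braze's actual MCP server exposes.

```python
# Minimal MCP server sketch using the official Python SDK (pip install "mcp").
# The tool below is a made-up example; it is not what Braze's server exposes.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-engagement-server")

@mcp.tool()
def get_campaign_stats(campaign_id: str) -> dict:
    """Return engagement stats for a campaign (stubbed data for illustration)."""
    # A real server would call the platform's REST API here.
    return {"campaign_id": campaign_id, "sends": 12500, "opens": 4300, "clicks": 610}

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client (Claude, Cursor, etc.) can call the tool.
    mcp.run()
```

Part of why an MCP server makes a good greenfield AI project is visible here: the protocol plumbing is handled by the SDK, so most of the work is wrapping existing APIs as well-described tools.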

I'd been reading on Twitter some people saying, "Oh, it's 30%, 40% more effective." But even in my own day-to-day, I wasn't seeing it. I was still struggling with some models. They weren't doing so great. I was spending a lot of time fixing things up. And I surveyed our engineering organization and they felt similarly: for a lot of folks, the models were still sometimes causing a little more work than they were saving. So people were more reluctant to get into it, because you were rolling the dice on whether it was going to save you time or be a bit of a time waster. But once we saw that project go well in August, that was the difference. And we started saying, okay, we now need to incorporate AI into more workflows of what we're doing and start to push for a little bit more adoption.

And then of course, I'll say when Opus 4.5 came out in November, that was a game changer, because we moved on from a model that you need to give a little bit of direction. Opus 4.5 was the first model that I worked with that felt like it needed little direction or correction in order to build a meaningful feature quickly and correctly. And as we've then been promoting the outputs of that, the takeoff in our team has been like wildfire and we've seen just massive adoption. Last week, I was looking at the stats. Over 60% of the code we committed to our main repositories was written with AI. And that flywheel just kind of keeps going as other people on the team are using it and seeing their coworkers use it and getting results from it. And now we need to build a lot of infrastructure around it.

So it's an exciting time for us to be where we are, because we're kind of in the first days. Look at it like this: if AI were one day, we're one minute after midnight of where we're going to go, and we've already accomplished so much.

Jody Bailey:

One of the big challenges with all this, at least my observation is cultural. I mean, one, just getting started, I'm sure your engineering team had its fair share of cynics, not just doubters but people that just didn't believe at all. How did you really win them over or were you able to win everybody over?

Jonathan Hyman:

Well, I think what's won them over has been the massive improvement in model quality in a short timeframe and the demonstration that these things are really working. Because again, the polling that I had, and I would send out surveys to the team pretty regularly, we'd ask about it in our own department employee engagement surveys, how are you using AI? How effective is it? What tools do you need, et cetera, was just that some folks said it was really great. Some folks just said, "You know what? I'm not really using it because all my experiences were bad."

We tried to do the general things you do in engineering. Have some lunch and learns, try to do some show and tells, say that there are people available to help you get set up and running, promote the things that are working really well, and try to get that bottoms-up groundswell that I think is really effective in engineering, because engineers like to tinker, they like to hear what other people are doing. And when they see something cool, they say, "Ah, great, I want that." Because it's all code, you should be able to pull those things into your workflow.

So we tried to do those things, but ultimately the adoption I think really came from just the models being better, to the point where it has let us as leaders raise the expectations of what's possible from the team with AI. Again, you go back six months and people were talking about this stuff, but I wasn't even seeing it myself. Now that I'm seeing it myself, we start to, I'll say, have an expectation that others are using it. A bug report comes in and I'll DM the engineering manager on Slack and I'll be like, "Hey, is this just something that you can get Claude to do?" Or a customer will request ... We have lots of customers, we've got over 2,500 customers. They always have all sorts of features that they're requesting, and we take that into consideration for our roadmap, but as with every company, we just can't build everything that customers want.

And some of these feature requests came in, and I just had one from one of our customers, DraftKings. I was looking at it being like, "Oh yeah, we should just be able to write this up in a prompt and have AI build it." And so now that I as a leader see what's possible, I can hold people to higher standards for the type of output they can produce and how much velocity they can have. And then that trickles down to being able to get more adoption. Of course, this stuff's all very self-reinforcing on the flywheel: the more people see it working for them, the more they're going to want to do it.

Jody Bailey:

So you all did an acquisition last year, right?

Jonathan Hyman:

That's right. We acquired a company called OfferFit. They are a really solid leading reinforcement learning engine for marketing software. And it's been a really fantastic integration since then, and I'm just really impressed with the team and the product.

Jody Bailey:

I'm curious, bringing two organizations together, different backgrounds, different engineering philosophies, et cetera, how have you navigated that and how has that played into your AI journey?

Jonathan Hyman:

So one thing I'll say is that our cultures were very compatible. And one of the things that we really liked about speaking with their whole team is that it fit very well into the way that we like to operate: very open, very direct, very innovative. And our leadership styles are complementary and overlapping.

But there are certainly some challenges. For example, they were a 100% remote company and Braze is a bit more hybrid than that. We've got 15 offices globally and we like to hire folks around our offices. That way they can come into the space, they can come to the events that we throw, and we can all collaborate in person during the week. And so there's just a little bit of, I'll say, that kind of COVID, post-COVID tension of having to align on that. But ultimately there are things that we tried to blend in and merge together.

So for example, they're a smaller company. They were obviously much smaller than us. We had about 1,800 employees at the time of acquisition. They had like 150, 170-ish maybe. And because they're smaller, they're not public, they can move a lot faster. So they had a whole set of tools, and it's easier to experiment and easier to get running with things than we can at a public company like Braze. But we've been able over time to, again, blend in the things that work well. We moved them over from Linear to our Jira system. They are using Graphite for pull request and stacked pull request management; we're now trialing Graphite across our overall organization, something that we took from them and some of their practices there.

And ultimately, I'd say that it's just something where we've been going at it in a deliberate way, thinking about where their ways of working can be brought to us in order to have us move faster, but also trying to get them to integrate into our ways of working as fast as possible, so that we demonstrate that we're all one team, which was really important to us. I mentioned earlier that we're in a divisional model, and so we've moved their engineering group into a division. It works the same way as all the other divisions, with the same reporting, the same structure and the same processes there. And that's just let us be really effective at actually integrating our products very quickly and having customers see value from it rapidly and continuing to increase that.

Jody Bailey:

And I'm sorry, did you say they operate as their own division or are they spread across multiple divisions?

Jonathan Hyman:

Great question. So on the engineering side, we are divisional based. So for example, we have a division for our channels, like say the division that sends email, WhatsApp, SMS messages, et cetera; all of that's a single division. We have another division for orchestration and data, another division for DevOps engineering. They are now an additional division on top of that, for machine learning and reinforcement learning. But that structure is just parallel to all of the other divisions here at Braze. And so we were able to align them to our processes really quickly. Across the business, we had other challenges to integrate them, too. I was just speaking about engineering, but of course they had salespeople and they had marketing folks and product marketers and they had support engineers and all those other things there. And we've definitely been fitting them in where it makes the most sense to keep processes and everything consistent.

Jody Bailey:

Of course, engineering's going the best, right?

Jonathan Hyman:

Yeah. One of the things that's good about engineering is it's very easy to measure. Engineering and sales. Sales, you can see how many deals you close. Engineering, you can see the velocity and the state of the integration. So it's great that it's objective, because then we can actually look at it and we can be proud of ourselves with what we've done. And also we can point back at it and retro it and say this went well or this didn't go well, because those engineering processes exist and they work just as well with onboarding a company as they do with onboarding a new feature.

Jody Bailey:

You mentioned measure, which reminds me or makes me wonder, how are you measuring the success of AI? And how did you ... At this point, from what you're saying, it sounds like, well, you've got the results, right? I mean, you're saying things ship, but as you started your journey, how did you think about measuring adoption or success and value of AI?

Jonathan Hyman:

That's a great question and one that we're literally still trying to define right now. Well, I'll talk a little bit about the AI evolution overall at Braze too and the way I've thought about this, and then I'll use engineering as the example here. Now, as CTO, I'm not just looking at the engineering side of it. I'm also trying to think of the technology work at Braze. And I looked at AI and said there are three work streams.

The first bit is what I described earlier, what I'll just call enablement and awareness. Let's get people across the company familiar with these tools, let's get them access to these tools, let's show them great use cases and encourage their adoption. And what we can see from that is we find folks say, "Gosh, you know what? I'm so much more effective now because I have AI tooling." Whether that's writing a feature, or it's a customer success manager summarizing a meeting and their email and taking the Gong call transcript or whatever, we're able to get some efficiencies there. That bottoms-up approach, I think, doesn't add up to being able to really transform a company. You can help with everyone's individual productivity, but even if I make everyone 20% more productive, it's unclear how that's going to mix into making Braze, say, grow 20% faster or be 20% more profitable.

So we've got to move into the next work stream, which goes into having departments actually start to think about the tasks that they're doing and move toward AI, not just as an advisor or as a guide, but more like a coworker. And this is that agentic space where you start thinking about saying, instead of building something like a Gemini Gem where I might go to it and upload a document and ask for some feedback on that document, how can I have AI produce output that's indistinguishable from my coworkers where I go to it with a document and it gives me back an updated document or something ready to send or it creates a deck for me.

And that's kind of the next phase; we're working on that in some areas too. And that comes into what we're doing in engineering: trying to think about how we can have pull requests automatically generated from bug reports, how we can think about taking Jira projects and Jira epics and having AI code all of that. We're still really early in that space, because everyone's trying to build the automation for that right now, and we're in that phase of just getting that arsenal really set up and that machine in a good place.

Then the third pillar comes into how you actually change the business metrics that affect your company's top and bottom line. I look at this on the go-to-market side as things like, if we can reduce the ramp time that it takes an account executive to come up to speed, that literally is used in financial projections and will lead toward more revenue and more dollars for the business. And so that's something where we can then start to hone in and say, can we use AI in order to reduce the ramp time for an account executive? Obviously you've got customer support, where folks are looking at how can I reduce the dollars being spent on customer support and make sure that they're more effective? Or how can we build the product in order to deflect that support ticket from even being made? Those are great metrics to optimize for.

On the engineering side, we haven't quite defined those, but I will tell you that one of the big challenges that we're running into in engineering is we're realizing just how expensive AI inference is. Ultimately, these models are very good now, but they cost a lot of money, especially when people are using them every day as part of their jobs. We used to joke at Braze that one of the best investments we could make was putting me on a plane, because whenever I'd be on an airplane, I would code a bunch of stuff. We'd go on a long haul flight to one of our other offices, and I'd get off that plane and push up a pull request and be like, "I built this feature and it was really great."

And the last time I was on a long haul flight, I was coming back from Tokyo and I was thinking, "I kind of don't want to code by myself anymore. I need internet connectivity so I can actually use agents." Because the way that I'm working is completely changing around needing and wanting AI for my job. And so we're seeing that engineers are changing too, where some folks are using a model day-to-day. I mean, they're working for eight hours a day, and they're working with a model for probably six hours a day in the time between meetings or whatever, maybe even the whole time or longer. And it's expensive.

And one of the things that we're going to need to figure out pretty soon is how we can drive the most efficient use of large language model inference, and also how we can start to measure people's usage of large language models. As I even see today, I was just looking at on-demand spend. It's the first business day of the month of March. We have an engineer who already spent $150 today on inference, and you just play that out: that's $4,500 a month from that engineer in token costs if you count every calendar day, probably less if you count only workdays. But it's just very hard to think about applying that to the 300 engineers that I have, and the sudden cost increase that I've got to absorb.

And there might be another engineer who spends less than that and is able to put out more productive output. Maybe they're doing better context, maybe there's less back and forth, maybe they're using models that are appropriate to the task instead of the latest and greatest of all time. And we're very quickly, in engineering, going to need to focus on how we can get the most effective use of inference dollars per amount of output. And so right now, I'd say of the metrics that I'm thinking about in engineering, cost is a really big one. I just don't have that figured out yet because we're just embarking on that. And then how that ties into velocity, and how we can raise the expectations on the team in a durable and measured way, are going to be the first two focus areas I'll say we're going to go in on with this group.
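To put those numbers in context, here is a back-of-the-envelope cost model. The $150/day figure and the 300-engineer org size come from the conversation; the split between heavy and light users and their spend levels are illustrative assumptions.

```python
# Back-of-the-envelope inference cost model. The $150/day and 300-engineer
# figures are from the conversation; the user mix below is an assumption.
DAILY_SPEND_HEAVY_USER = 150   # USD/day, as quoted for one engineer
DAILY_SPEND_LIGHT_USER = 25    # USD/day, assumed for lighter users
HEAVY_USER_SHARE = 0.25        # assumed fraction of the org at the heavy level
ENGINEERS = 300
WORKDAYS_PER_MONTH = 21

heavy = int(ENGINEERS * HEAVY_USER_SHARE)
light = ENGINEERS - heavy
monthly = WORKDAYS_PER_MONTH * (heavy * DAILY_SPEND_HEAVY_USER + light * DAILY_SPEND_LIGHT_USER)

print(f"{heavy} heavy + {light} light users: about ${monthly:,.0f}/month in inference")
# With these assumptions: 21 * (75*150 + 225*25) = $354,375 per month,
# which is why cost per unit of output becomes a first-class engineering metric.
```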

Jody Bailey:

Yeah, that makes perfect sense to me. It aligns with some of my observations as well. I think it's interesting: how do you measure the productivity of an agent? It's like, we never figured out how to measure the productivity of a developer, so why do we think we can measure it for an agent? And I mean, it's kind of the same problem, right?

Jonathan Hyman:

Because you're paying per consumption, you're always just asking yourself, "Did we do this in the most optimal way, and could we do it more cheaply?" And folks in business ask themselves that all the time, and engineering managers and PMs have always been asking, "What's the MVP that we can get out that's as cheap as possible, fastest to market?" And because with AI we're getting a dollar figure every time the task completes, I think we're actually now starting to say, "Gosh, is this kind of worth it?" Chipping away at it: someone spent 150 bucks today; is the output that they did worth the $150 on top of just the other expenses that we have for an individual?

It's going to be a challenge because it's a budget shock. This stuff is so great and so valuable for productivity, but high inference costs weren't something we realized we were going to have when we went through budgeting cycles a few months ago, and now we're looking at it and going to have to adjust to it. And by the way, I think that the large cost of inference is going to steadily erode the thesis that folks are going to vibe code their own software in-house, because ultimately that works if, as Citrini says, inference is going to be as cheap as electricity or as free as water, but we all know that's not the case. OpenAI's got to spend $100 billion to build data centers. It's not cheap. And when you look at the bills and you say, "Well, my team's burning 150 bucks a day, do you want to have that resource tied up doing that or do you want to buy software?" I think it's going to change the conversation as soon as we move toward cost in the discourse.

Jody Bailey:

Yeah, that makes perfect sense. I also think that it still requires somebody to set things up to do the work, and businesses have their core business to deal with. Do they really want to build all the solutions for all the parts of their business themselves? They too have limited budget and time to spend. What are they going to focus on?

Jonathan Hyman:

Sure. I feel like it's even a little bit bigger than that too, because what you look at is that everyone instantly, globally, got this stepwise increase in productivity. And so if you had 100 engineers and all of a sudden now you have the output of, call it, 180 engineers, is the first thing you do, is it to go and build Salesforce? Because all your competitors also went from having 100 engineers of output to 180 engineers of output, and they're working on their roadmaps and you have to run fast right now. And tomorrow when AI is even bigger and even better and even faster, you have to run even faster than that.

And so what you would see if AI were free and infinite is what I call a singularity event, where everyone would go and build their roadmaps tomorrow. But you don't see that. You do see an increase in velocity, but you see it everywhere, which just makes competition harder and harder. And I think folks really need to internalize that. If they say, "Oh, we'll try to go and build this software ourselves, we can vibe code it in two weeks," then by the time you get two weeks from now, whoever built the software you just vibe coded a version of is going to have built so much more stuff that you then have to continue to play catch up on.

It's a really naive take. And I think that what's lacking in the public discourse is really a sense of trade offs, a sense of competition, the second order effects of AI and the cost and the future maintenance of all of that. And that's things that we as engineers are dealing with in our organization, and it's a really hard problem. And there's a lot of difficult challenges that we're facing over just what are the second order effects of having AI generated code in order to build our roadmaps and service our needs?

Jody Bailey:

Well, I think the comment about roadmaps is really key because at the end of the day, you can build all sorts of stuff and you can build up faster. But if it's not useful, if it's not what your customers need, if it doesn't add value, then should you build it? And then the cost piece comes in as well. How do you think about that and how is your relationship with product management and the way you think about roadmaps and iterating, is that evolving? Has that changed?

Jonathan Hyman:

Well, it certainly has. And I'll just talk a little bit about how that works and say that at this stage in our company, we're about 2,000 employees. We've been public for about four years. We've got a lot of great names of customers. I work with a lot of businesses across different verticals and also across different sizes. We're in a privileged position where when we build products, they're very well researched and very well thought out. We have a lot of customers who give us feedback. We've got a great team of UX researchers, got a great team of product managers who go off and talk to our customers around what we're working on, and we then build things and go into early access with them. And by the time we get to general availability, we really know that what we've got both is what customers want and it has a user experience that is very easy to understand and works really well for our customers. We're putting stuff out there that we know the market of our customers is really going to like.

Now, what AI is doing for us is two things on the product management side that are just really immediate. One is it makes it much more possible for product managers and designers to self-service some of their needs and fix some of the small chores. That could be designers who fix the spacing on a certain page or adjust the layout of a certain page in some way. And these are things that we call UX debt. Ultimately, stuff that is confusing for customers, maybe it causes a support ticket, maybe it requires an extra hop. One example of this that's really easy to understand: in our email composer, if you didn't have an unsubscribe link, we just threw up a validation and said you must have an unsubscribe link. And the UX debt there was, we don't tell people how to do that. So you should say "click here to add one," or link to the documentation, or just something in the error message that makes it a lot easier to understand.

And we have an OKR for every single team; it's an evergreen OKR. Every quarter they've got to do five UX debt items. This is part of our product health initiative to make our product easier for customers to use, carry less of a support ticket burden, and be better for our customers. The designers can now do these things by themselves, which has been really cool to see, as people have been building stuff without needing engineers at all.

The second part of it is that we're seeing rapid prototyping happening with our product managers using Vercel and Cursor in order to create interactive mock-ups that we're then able to have conversations about with a very fast turnaround. And so we're moving on even from the Figma mock-ups that I thought were incredible, like, wow, we have a Figma mock-up that I can go through and click a dropdown and click the next page and it all actually works. Even faster than that, we're able to build things in Vercel's v0 and just kind of get that out there for us to talk about, show customers, and align on, and make sure that we're building in the right direction.

And maybe I'll add as a third part to that, which is that there's some things where our building can now go a little bit ahead of the design. What I mean by that is historically a lot of the designing, the building that we're doing happens after we've already got the product spec and the design mock-ups are there. And obviously they can go to an engineering team to kind of build it to the spec. But because it's so much easier for us to change the user interface and apply that back to a design mock-up, we can be building while the designers are still figuring out the perfect flow with our UX team and with the product managers here, which has also been allowing us to speed up and get things out for beta customers who are totally willing to accept a little bit of a subpar user experience in order to get the best leading cutting edge features that solve the business problems there.

And that's been really cool to see because we would never have done that in the past because rework costs a lot, but in a world where rework is less expensive and an engineer creating their own, I'll say like UI themselves is less expensive, this becomes possible. So it's been cool to see how, again, it all leads back to we're getting more stuff done much faster than we were before.

Jody Bailey:

That's awesome. One of the things, from what you're saying, that I really like is you're getting better, you're getting faster, and you're figuring out how to do more. As opposed to a lot of people who are publicly saying recently, "Well, because of AI, I can cut my engineering team by 70%." And I think the approach of how do we deliver more for our users and our customers with what we have is largely the right way of thinking about it.

Jonathan Hyman:

That's right. And that explanation that we don't need as many engineers never made sense to me as the technology leader at a growing technology-first business. I mean, I'm sure there are lots of businesses out there that are like, "We were going to hire four or five people, but we don't need to." I've got a buddy who's building a startup right now and he said that they're saving a bunch of money and headcount by being able to build an iOS app that they thought they had to hire someone for. And that's been fantastic for them.

But I'm in a very competitive space with a massive roadmap. I've got way more great ideas than I have people to build them. And in a world in which I had 100 great ideas and I used to only be able to build 20 of them, now I've got 100 great ideas and I can build, say, 40 of them with my team. But that doesn't change the fact that I still have 100 great ideas and I want to build as fast as I can. And I think about the fact that the engineers that I can get can continue to do and build more. It's just a very appealing aspect of team building for me, to think that we can just continue to grow really fast because of AI.

I think there's a lot of discussion and understanding that AI is actually inducing demand for more software. People want to build more things. We've got more roadmap to build, or as you were saying, there's more stuff that anyone can create. And now it's like you're able to build more of your roadmap. And I think that we're absolutely seeing that. And so for us here at Braze, we are nowhere near at peak engineering headcount, and we just have a lot that we want to continue to do and just that we're just hungry for.

Jody Bailey:

Yeah. There's never a shortage of ideas, at least not in my career. There's always more things to do than you can possibly do. So it doesn't seem like that's changing.

Jonathan Hyman:

That's right. Yeah, that's totally right. And one thing I'll also call out is that you can't vibe code scale. You can't vibe scale, really. And the thing is that you can get a lot of code out there, but ultimately being able to run that for high-complexity, high-scale, demanding use cases is something that requires deep understanding of what you're doing, the business problem that you're solving, and then how all the systems work together.

And right now, I'll say even with a million token context, I've got a much bigger token context window in this brain than the models do of how everything works, not just the code, but the business process, the business system, the customer challenges, the customer use cases, and all that domain knowledge of just what are folks trying to do, how are they trying to use our product? So that way when we're going and building something, we can make sure it fits really well into what they want to do.

Jody Bailey:

Yeah. It's impossible to fit all that into a context window, right?

Jonathan Hyman:

Yeah. At least for now it is.

Jody Bailey:

Right. Yeah. It does bring up a good question. I mean, just that knowledge that's spread across your brain, but then all the people within the organization, how do you convert that to useful information that the agents, et cetera, can use? And how do you balance that with somebody with the brain walking out and taking all that knowledge with them? Do you all capture that in some way or how do you think about it?

Jonathan Hyman:

Well, I can tell you what we're doing now and what I expect to happen. Right now, we are trying to codify our ways of working and our best practices. And the easiest kind of example in coding has been the ways in which we apply React standards for the front end, the ways in which we expect to write front end tests, just ultimately the usage of the frameworks that we expect. We're trying to write that down, because we're seeing things like the scaffolding of endpoint creation in the front end is just abysmal, because it's kind of this, what I'd say, vanilla off-the-shelf stuff that doesn't fit into how we've built the application.

And our front end teams look at this saying like, "Oh my gosh, it's using tons of anti-patterns." And if we can then codify how we do scaffolding, how we want to write tests, what types of stuff we want covered in Cypress end-to-end tests versus unit tests versus other front-end unit tests there, that is really helpful for the models. And so we're trying to do some of that now, which has ultimately been building some skills, asking models to write back the skills when they've learned something in a chat session, and we're kind of piecemeal going at that.

But I will say that I think this is leading to a bunch of spaghetti skills and ways of working, and we're going to have this challenge on our hands that I think we already have but haven't really said out loud, where people are using different tools and they have different sets of skills in each of those tools. Maybe there are different standards there. There are some things that aren't there that people are putting up with. And we have this messy inefficiency of not standardizing that stuff, which we're going to pay for in terms of rework, slower velocity, and higher cost. And so what I expect is going to end up having to happen is we're going to have to write a lot more things down around how we do things and have humans take that knowledge from their brains and really transcribe it into a standard process workflow.
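As a sense of what "writing it down" can look like, here is a hypothetical excerpt of a codified front-end standard that a coding agent could be pointed at. The file name and the specific rules are invented for illustration; they are not Braze's actual conventions.

```markdown
<!-- Hypothetical skill/standards file (e.g. frontend-standards.md) an agent could load.
     The rules below are illustrative, not Braze's actual conventions. -->
# Front-end coding standards (excerpt)

## Components and scaffolding
- Use function components with hooks; no new class components.
- New API endpoints are wired through the existing data-fetching layer,
  not ad hoc fetch calls (avoids the "vanilla scaffolding" anti-pattern).

## Tests
- Unit tests cover component logic and reducers.
- Cypress end-to-end tests cover only user-critical flows.
- Every bug fix ships with a regression test.
```

Checked into the repository, a file like this becomes shared context that every tool (Claude Code, Cursor, Copilot) can load, which is one way to head off the "spaghetti skills" problem Jon describes.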

And I think that at some point, we haven't done this yet, but at some point we'll want to standardize how people are building with AI inside Braze. Right now, I care so much about experimentation and for experimentation to be successful, you have to let people do their own thing and not hold them to a standard. But I think that we're going to see the limits of that where we get to folks being like, "Gosh, your front end tests are always having to get rebuilt. You should be using this skill" or someone saying, "That's not the way that I ... I don't want to have to know how to do these 10 things. The model should know how to do it for me."

We're going to have teams that are focused on agentic workflows and building those, just like we have teams focused on building CI pipelines and building testing environments and building local platforms, local dev environments and containers. We need that same agentic infrastructure to exist for coding. And so I think that we're going to see new teams form and new responsibility sets and a lot more standardization in 2026.

Jody Bailey:

This has been really fascinating for me and I've really enjoyed the conversation and could go on and on. Is there anything you'd like to share before we wrap up that we haven't covered?

Jonathan Hyman:

One thing I'll just call out again is that getting your hands involved with this is just total no-regrets for your ability to understand what's changing in the tool set and be able to adapt your SDLC and your expectations to the modern AI era. If we look at where we were, winding the clock back to early to middle of last year, I was saying things like, "Oh, we should be able to require you to write more tests because AI can write tests for you." Or if you're doing a PRD, you should have competitive research in there because it's really easy to have Gemini Deep Research just look at what's in the competitive market. We can say that you can stop cutting the corners that we all may need to cut from time to time just due to time and resource trade-offs.

And that was the way I thought about it in the past, but it's really changed with what's possible for what AI can build for us and how we can change our processes, moving from what can I expect of an individual to what can I expect of the team overall. And I think the world's going to go to a place where all of our teams are going to have agents that are building stuff 24/7, that are responding to bug reports, that are responding to product questions, that are helping build roadmap. People are going to go to sleep and wake up with features being built by AI. I'm excited about that. And it takes your own knowledge in this space in order to transform your business like that.

And so if you're not already launching a bunch of agents in parallel and having them work on different features, I would say go do something like that. If you don't have your AI app on the home screen of your phone, go ahead and put it there, so that your first instinct is to open that up instead of going to Google and doing a search. Really just start engaging with the AI.

So I would just say you've got to tinker with this stuff. I'm having my third child, and in the next couple of days we go on parental leave, and I was saying I'm going to just start playing with OpenClaw. That's just something where I want to experiment a little bit more in my own personal life around what's possible with AI, just so I can be involved with it and think about what I can bring to the business.

Jody Bailey:

That's amazing. Great advice. Thank you so much for sharing and thank you for joining us.

Jonathan Hyman:

Yeah, likewise. Thank you for having me on the podcast.

And for folks who are out there listening, you can follow me on LinkedIn. I post a little bit about our AI ventures here at Braze, as well as just great stats about scale and exciting challenges that we're working on in the technology realm. So feel free to follow me on LinkedIn, love to have you.

Jody Bailey:

Jody Bailey, LinkedIn, Stack Overflow.

Eira May:

Thanks for a great conversation, gang. I think that's going to do it for this episode. My name is Eira May. I am the B2B editor at Stack Overflow. And if you enjoyed this conversation, if you have suggestions for topics you'd like us to cover or guests you'd like us to speak with, you can always email me at podcast@stackoverflow.com. Thank you so much for listening, and we will see you on the next episode of Leaders of Code.
