AI is a crystal ball into your codebase

Ryan is joined by Kayvon Beykpour, CEO and founder of Macroscope, to dive into how AI-powered code review can help manage large codebases, why humans still belong in the loop on PR reviews, and how summarization built on the abstract syntax tree can give teams visibility and keep code review comments high-signal.

Macroscope helps you understand your code through AI-powered code review, automated PR descriptions, and real-time status reports

Connect with Kayvon on Twitter and LinkedIn.

This week’s shoutout goes to user Jesper Grann Laursen for winning a Populist badge on their answer to Exclude Table during pg_restore.


TRANSCRIPT

Ryan Donovan: Tired of database limitations and architectures that break when you scale? Think outside rows and columns. MongoDB is built for developers by developers. It's ACID-compliant, enterprise-ready, and fluent in AI. Start building faster at mongodb.com/build.

[Intro Music]

Ryan Donovan: Hello, everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ryan Donovan, and today we are talking about an AI that understands your codebase. We have a lot of code-gen things out there, but this will understand it, and you can ask it questions, you can give your C-Suite folks reports, et cetera. My guest today is Kayvon Beykpour, who is CEO and founder of Macroscope. So welcome to the show, Kayvon.

Kayvon Beykpour: Thanks so much for having me, Ryan. Great to be here.

Ryan Donovan: So, at the top of the show, before we dive into the discussion, we like to get to know our guests a little bit. How did you get into software and technology?

Kayvon Beykpour: Oh man. The TLDR is video games.

Ryan Donovan: Of course.

Kayvon Beykpour: Back when I was a wee lad, video games led to building servers, which led to programming, which led to making websites for people, which led to starting companies, which led to eventually realizing I should probably major in computer science because I liked it more than anything else. And just like hobbying, you know, I've always loved building stuff, and software engineering is a pretty useful skill when you wanna build stuff. So, that's how it all began.

Ryan Donovan: Right. And this is not your first rodeo with founding companies, is that right?

Kayvon Beykpour: Yeah. This is my third company. I started my first company while I was in college with the same co-founder. And then, my second company in 2014 – we started a company called Periscope, which is a live streaming app. It got acquired by Twitter in 2015, and then we started Macroscope in late 2023. So, yeah, third time. Third time at the rodeo.

Ryan Donovan: All right, well, let's talk a little bit about your current rodeo. I think a lot of codebases now, especially in enterprise projects, get massive, right? Hundreds of thousands, millions of lines of code. Understanding all of that and the changes that are being pushed through can be kind of a chore for individual engineers. Can you talk about what is, you know – that problem that you're solving and how you're solving it?

Kayvon Beykpour: Yeah, I mean, I'll start way back and maybe tell you a quick story around why we even started this. You know, we've worked at a few companies now, some very small companies that we've started, and then really large companies, you know, Twitter acquired our most recent company. And, at the time, Twitter was a total of 2,500 to 3,000 engineers. The consumer team, which I led – I led product for consumer and then eventually engineering for consumer at Twitter – was probably a 1,500-person engineering team at the time. It's a lot smaller now, since Elon took over and downsized things, but, you know, understanding what is happening across a 1,500-person engineering team is extremely challenging. My job was head of product, and you know, in order to be good at my job, I needed to have a really good understanding of what people were working on, right? I could tell you at any given point in time the most important priorities we should be working on, but I had no idea whether we were allocating our engineering effort in line with those priorities. And even my engineering counterparts didn't know, right? With that large an organization, it's hard to know what people are doing, and looking at the codebase is also not practical.

Ryan Donovan: Right.

Kayvon Beykpour: You're not gonna open every PR, open every diff, for a 1,500-person engineering team. And so, part of the root of this problem was, you know, I lived the pain of understanding: what is the current status of the things that are important? What progress are we making on our priorities? What are people doing, and what is the current state of a given project? And the reality is that, so far as I know, the state-of-the-art solution to this problem at the most sophisticated companies in the world is meetings and spreadsheets–

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: And Jira and Linear tickets. And just asking people, fundamentally, like, the most reliable ground truth is just distracting an engineer and saying, 'hey, can you tell me what the status of this thing is?' And that's not an efficient solution to the problem; that's annoying for engineers. It takes them out of the flow of building things, which is more important.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: But nevertheless, flying blind as an engineering or product leader is also not a productive and acceptable state, and so the problem that we were trying to solve with Macroscope is fundamentally closing this gap. We want to help bring instantaneous visibility to the people who need it, right? Your CEOs, your CTOs, your product leaders, your engineering leaders, your tech leads – they all want different levels of understanding, different levels of granularity or coarseness of what is happening and what the state of things is, and they want it quickly. They want it accurately, based on the actual ground truth, and the ground truth for any software company is the code, right? If it's not in the codebase, it hasn't happened yet. And if it is in the codebase, the codebase can tell you with pretty high resolution what happened and how it works. Maybe not the why. The why is often elsewhere. So, that sort of speaks to some of the other integrations we've built with Macroscope. But anyway, solving this problem, building ground truth, is why we started Macroscope. We just selfishly– I wish we had this product at all of my previous companies, and so we figured we'd build it.

Ryan Donovan: Yeah, I mean, getting engineers to document their work on a PR level can be a chore, and obviously, you don't want your engineers just writing reports all day, right? 'Cause you, as head of product, are like, 'what happened?' And to go to 1,500 engineers to summarize what happened– so, building an AI product that understands the whole body of work of 1,500 engineers must have taken– you know, obviously you're not just dumping the codebase into your favorite LLM and asking it questions. What's the engineering effort to get from an LLM to something that actually understands a codebase?

Kayvon Beykpour: Yeah, so, we actually took a very methodical and gradual approach towards building this understanding engine. And I can talk about sort of technically how it works under the hood, but maybe a useful starting point is: we started with the simplest possible thing, which is, 'could we just summarize commits as they happened?' That's the baby step, right? The core atomic unit of Macroscope is every new commit to any repo you've connected, including commits to your feature branches. Macroscope attempts to summarize it in an extremely succinct way that still retains the technical detail that is useful to a stakeholder. And at the commit level, it's really engineering stakeholders – like you, as an IC engineer – who want to know how the codebase is changing around you, right? And so, every time a commit gets pushed, Macroscope is summarizing: how did that commit impact the codebase? That was our baby step, and so we started there. We then built a feature that let you get a newsfeed of how the default branch of a given repository was evolving. Sort of like aggregating – you know, you have a bunch of commits in a feature branch, they get squash-merged into the default branch of the repo. Can we succinctly summarize that? You know, Ryan merged into staging. What did that commit do? Super simple. You know, we weren't revolutionizing the world with this feature, but that was our baby step of: could we deliver that simple newsfeed to a 15-person engineering team, and would that be valuable to them? Would it be valuable for them to see how the codebase was changing around them? And as it turns out, that feature alone, for small teams, was extremely useful. Otherwise, it's kind of hard to understand what's happening in the codebase. From there, we realized, wait a second, if we have these extremely accurate summaries at the commit level, maybe we can cluster these commits in interesting ways and describe them to a different stakeholder – more of an engineering leader, or maybe even a product stakeholder, or maybe even the CTO or CEO of a small company. Maybe we can aggregate those summaries at some useful altitude, at some useful cadence, and start to describe to that stakeholder how the product evolved. Not like, 'what did this commit do?' What changed in the product insofar as a customer might experience it, or insofar as an executive leader might care to understand? And those ended up turning into what we now call 'project summaries,' which is very useful, right? It's like, I know our team is working on changing the onboarding flow for Macroscope.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: There might be 40 commits over a three-day period that impact the onboarding flow. For me personally, I'm not necessarily gonna go read every single one of those commit summaries, but I really wanna understand after two days: what did we change about the onboarding flow? What's left? Where do we currently stand? And so, that became, you know– with the foundation that we'd built, we sort of progressed up from there. So, this is the journey that we've been on. We started at the most atomic level, which for us is a commit, and then continued building our understanding of not just how the codebase has evolved, but how the product has evolved from there. So that's, from a use case standpoint, how we've approached methodically building the product.
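
(For readers who want a concrete picture of that pipeline, here is a minimal sketch in Go, the language Kayvon later says the Macroscope backend is written in. Macroscope hasn't published its internals, so the types, helper names, and prompts below are purely illustrative of the shape of the idea: summarize each commit as the atomic unit, then roll those summaries up into a project-level digest.)

```go
package main

import (
	"fmt"
	"strings"
)

// Commit is the atomic unit described above: a single change to a connected repo.
type Commit struct {
	SHA    string
	Branch string
	Diff   string
}

// summarizeWithLLM stands in for a call to whatever language model does the
// actual inference; the transport and prompt wording here are assumptions.
func summarizeWithLLM(prompt string) string {
	firstLine := strings.SplitN(prompt, "\n", 2)[0]
	return "(model output for: " + firstLine + ")"
}

// SummarizeCommit is the "baby step": a succinct, technically detailed
// summary of one commit, aimed at an IC engineer.
func SummarizeCommit(c Commit) string {
	return summarizeWithLLM("Summarize this diff succinctly for an engineer:\n" + c.Diff)
}

// SummarizeProject clusters commit summaries and re-summarizes them at a
// higher altitude for a product or engineering leader.
func SummarizeProject(project string, commitSummaries []string) string {
	prompt := fmt.Sprintf(
		"Describe how the %q project evolved, as a customer or executive would experience it:\n- %s",
		project, strings.Join(commitSummaries, "\n- "))
	return summarizeWithLLM(prompt)
}

func main() {
	commits := []Commit{
		{SHA: "abc123", Branch: "feature/onboarding", Diff: "…diff text…"},
		{SHA: "def456", Branch: "feature/onboarding", Diff: "…diff text…"},
	}
	var summaries []string
	for _, c := range commits {
		summaries = append(summaries, SummarizeCommit(c))
	}
	fmt.Println(SummarizeProject("onboarding flow", summaries))
}
```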

Ryan Donovan: Right.

Kayvon Beykpour: To answer your question around how it works – like, how do we get this understanding? – it's really a combination of two things. One, we obviously use state-of-the-art language models when doing inference, for anything from generating summaries to doing code review, which we can talk about later. But two, we've made a pretty big technical bet around leveraging the AST in order to generate really useful references that set the LLM up to be successful. And what I mean by that is we found that if you're just doing the simple thing of, like, 'Ryan has a diff, let's send the diff to the LLM and just have it summarize it,' that will generate a summary, right? But what we found is that it is much more robust, and accurate, and oftentimes magical to be able to set the LLM up with a more comprehensive understanding, not just of the change Ryan made, but of how the codebase around that change works. And so, we leverage the AST to create a graph of the codebase. We don't just send Ryan's diff to the LLM; we send, you know, what are the callers of this function that Ryan changed? What are the downstream functions, the in-references, the example usages of the function that Ryan changed? And by supplying the LLM with all of those things – the diff, the references – it allows the LLM to produce a more coherent and robust summary of what that change was. And it starts to kind of fill in– it's the sort of thing that results in a summary that makes an engineer go, 'whoa, this was a better summary than I could have written.'
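
(A rough sketch of what that "diff plus references" bundle might look like. This is an illustration of the idea, not Macroscope's actual schema or prompts; every field and function name here is hypothetical.)

```go
package main

import (
	"fmt"
	"strings"
)

// ContextBundle collects the change plus the AST-derived references described
// above: callers, callees, and example usages of the changed function.
type ContextBundle struct {
	Diff          string   // the change itself
	Callers       []string // functions that call the changed function
	Callees       []string // downstream functions the changed function calls
	ExampleUsages []string // representative call sites elsewhere in the repo
}

// BuildPrompt packs the bundle into a single prompt, so the model sees the
// change in the context of the code around it rather than the raw diff alone.
func BuildPrompt(b ContextBundle) string {
	return strings.Join([]string{
		"Diff:\n" + b.Diff,
		"Callers:\n" + strings.Join(b.Callers, "\n"),
		"Callees:\n" + strings.Join(b.Callees, "\n"),
		"Example usages:\n" + strings.Join(b.ExampleUsages, "\n"),
		"Summarize this change succinctly, keeping the technical detail an engineer would care about.",
	}, "\n\n")
}

func main() {
	bundle := ContextBundle{
		Diff:          "…diff text…",
		Callers:       []string{"func (s *Server) handleSignup(w http.ResponseWriter, r *http.Request)"},
		Callees:       []string{"func validateEmail(addr string) error"},
		ExampleUsages: []string{"createAccount(req.Email, req.Plan)"},
	}
	fmt.Println(BuildPrompt(bundle))
}
```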

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: And that's really where the satisfaction, and the value, and the magic of using Macroscope comes in. And as it pertains to code review, it is the difference between high-signal-to-noise code review and just sort of gibberish code review that feels really noisy and spammy. 'Cause there's a lot out there. You know, we've tried all the code review tools out there, and most of them we wanted to turn off within a week.

Ryan Donovan: Too verbose, right?

Kayvon Beykpour: Too verbose, sloppy errors. They just don't point out high-value bugs. And if you get enough false positives spammed into your PRs as review comments, it's just more of a tax than it is useful.

Ryan Donovan: Right.

Kayvon Beykpour: And so, we're very sensitive to the signal-to-noise ratio. And our AST approach, we've found, is one of the things that allows us to have an extremely high hit rate for our code review pipeline.

Ryan Donovan: So, for listeners' reference, AST is the Abstract Syntax Tree. Usually, it's a sort of intermediary step in compilation, right? So, does that mean you are actually compiling code or running some sort of static analysis on every commit, or you know, the entirety of the codebase every time?

Kayvon Beykpour: Yeah, we compile the code, and then we use whatever AST library or package is available to us in that language. So, in Go – you know, our backend is in Go – one of the first… we call these 'code walkers,' by the way, affectionately, and the first code walker we created was the Go walker, 'cause we were dogfooding our own backend repo. So, we'll compile the codebase, and we'll use the AST package in Go to generate these references, right? Like the callers of the function that you changed and whatnot. The techniques that we use are different from language to language. And this is one of the challenging things about this technical approach that we took: we had to go build these walkers in the most common programming languages. We started with Go, and we eventually added TypeScript, and Python, and Swift, and Java, and Kotlin, and Rust. This was a time-consuming effort. It involved us hiring language experts. We weren't just willy-nilly doing this. We went and found the engineers who have contributed to the AST package in core Python more than anyone – you know, who are the most frequent contributors to the AST package, who understand how Python works and how the Python AST works – and we asked those folks to go build these code walkers for us in the respective languages that I mentioned. And so, it was an expensive effort in terms of time, but our bet was that the way we would develop the best understanding of the customer's codebase would be to leverage this approach, and that that depth of understanding was gonna lead to the best product experience. And so, that's why we did it.
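
(To make the "code walker" idea concrete: Go ships an ast package in the standard library, and a toy walker that finds the callers of a changed function could look something like the sketch below. A real walker would resolve types and walk the whole module; this single-file version only illustrates the kind of references being gathered, and the file and function names are made up.)

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findCallers parses a single Go source file and collects the names of every
// function whose body contains a call to target.
func findCallers(path, target string) ([]string, error) {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, path, nil, 0)
	if err != nil {
		return nil, err
	}
	var callers []string
	for _, decl := range file.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Body == nil {
			continue
		}
		// Walk the function body looking for call expressions to target.
		ast.Inspect(fn.Body, func(n ast.Node) bool {
			call, ok := n.(*ast.CallExpr)
			if !ok {
				return true
			}
			if ident, ok := call.Fun.(*ast.Ident); ok && ident.Name == target {
				callers = append(callers, fn.Name.Name)
				return false
			}
			return true
		})
	}
	return callers, nil
}

func main() {
	// Hypothetical file and function name, for illustration only.
	callers, err := findCallers("handlers.go", "validateEmail")
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("callers:", callers)
}
```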

Ryan Donovan: Which language was the hardest to get a code walker for?

Kayvon Beykpour: Well, some languages don't have really convenient AST libraries that you can use. So, you know, Kotlin was particularly challenging; with Swift, we had to do some creative things. And other languages were more straightforward, in the sense that Python and Go have very well-documented AST libraries.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: There's still a lot of work we had to do to leverage those libraries and just determine what are the– there's so many parameters that we can tune around the depth of references to gather, because it's– you're trading off how much context do you provide the LLM and how much attention do you want the LLM to have on a smaller set of references? And so, all of that has taken a lot of experimentation and tuning of the AST walking approach, and also prompt tuning.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: But just focusing on the gnarliest parts of leveraging the AST, off the top of my head, I would say Kotlin and Swift were particularly challenging.

Ryan Donovan: Yeah. How about for those codebases that have multiple languages? Obviously, you'll have a backend and a front-end language, but there's also a lot of scripting or Python in between. Or, you know, Python wrappers on big C binaries. Was that a challenge?

Kayvon Beykpour: Yeah, there are definitely a lot of, for lack of a better word, scaffolding challenges. We had to figure out the examples you mentioned – you know, a lot of large professional software teams have monorepos, where they have their Rust backend, and their Swift front end, and their Kotlin front end, all in one repo. And so, that presents a challenge for us: to take a real-world repository like that and set up, kick off, and maintain our code walkers at pull request time if we're doing code review, or when we're summarizing stuff. And so, I think, we're still a relatively new company, but we've worked with enough customers now, and seen so many different flavors of development infrastructure and codebase setup, that I think we've solved a lot of these challenges. But there was definitely a period of time, when we were starting, where with every new customer we were like, 'oh wow, okay, we gotta accommodate this approach that we hadn't thought of.' That's the thing – engineering teams can have infinite variations of how they set up their dev environment and their codebase. So, I'm sure we'll continue to run into interesting setups where we may need to tinker with stuff on our side to make it work really well, but I think we've gotten to a good place so far.

Ryan Donovan: Yeah. I mean, engineers are creative and opinionated, right? They'll solve a problem 10 different ways.

Kayvon Beykpour: Totally. Yeah.

Ryan Donovan: So, there's the sort of natural language understanding from the LLMs, the summarization, and then multiple layers of summarization. I think anytime I write something that is about AI, or a possible AI future, I get some salty comments that this is gonna be unreliable, somebody's gonna lose their stack on this. How do you increase the reliability of those summaries?

Kayvon Beykpour: I mean, a big part of it is this AST approach. You know, if you were to really oversimplify it into its component parts, I think it's: one, can you set the LLM up to be successful with the best context? And best doesn't just mean the most; it's the minimal set of useful context. This is the whole context engineering thing, which I think is real. We want to provide the model with the correct, maximally useful set of context for doing the job and nothing more, lest we ruin its attention. And then two is really tasteful prompting. And this is where a lot of it is vibes, to be honest. Our team was very opinionated around it. We've been professional software engineers at all kinds of different companies, and we really put our own eye towards: what is a commit summary that I would want to read, that is useful to me, that I would stamp my name behind? What is a PR description that can check the same box? How do you articulate a code review comment that doesn't make me want to vomit, that is useful and tells me what the problem is, but is as succinct as possible? As an executive stakeholder, how do I get a summary of how a project has evolved? If I'm getting an email digest once a week that's like a 'what did you get done this week?' email, what is the way I want that summary articulated so that I'm not gonna insta-archive that email, because I get a billion emails a day, you know? So, all of these things are above and beyond the technical approach to making sure we're not generating LLM gibberish, which is its own immensely challenging effort.

Ryan Donovan: Right.

Kayvon Beykpour: There's a lot of sort of taste and prompt tuning that goes into how do we articulate this in a way that's actually useful to a reader? And by the way, that reader is different from use case to use case. And so, I think that's been a huge part of it as well.

Ryan Donovan: So, like you said, it's different reader to reader. Can somebody come in and be like, 'well, I want my executive summaries to look a little different'? Can they change that context?

Kayvon Beykpour: Yeah, so we built a feature called 'Macros.' We like our puns.

Ryan Donovan: That's right. It's in scope, I assume.

Kayvon Beykpour: Well done. You're hired. Macros are what you think they are. You can set up a macro to run a custom prompt at whatever custom cadence you want, sort of like posing a question to Macroscope on a cron job. And in those prompts, you can tune it to do whatever you want. So, for example, one of my favorite macros is that we generate release notes for everything we shipped on a weekly basis, so that we can share those release notes with customers. And over the course of many weeks of tinkering with this, I've tuned that prompt to be at a certain reading level. I want it to include emojis. I don't want it to include esoteric technical changes we made that customers won't notice. I wanna really focus on the high-level: what are the features that we built? And I want it to be a certain level of succinct, not too wordy. So, you can just tinker with your own prompt that solves your specific use case and then have it run on some automatic cadence. So, that's been the feature that we've built that lets Ryan customize and prompt whatever use case he wants. You know, the commit summaries that we generate – we have not let customers tinker with that sort of canonical summary. But the dream is, basically: there is a canonical summary that Macroscope has, so that we always retain the understanding of how the codebase and the product changed, and then we give you an automation layer on top that lets you mix and match whatever you want. Think of it like a Zapier, but on top of your ever-evolving codebase and product, where you can create all kinds of workflows, whether it's summarizing your commits in, you know, Shakespeare-esque prose, or generating release notes, or summarizing what the team did. There are just so many workflows that you can enable if you let people tinker with different triggers, different actions, different instructions, and different destinations, right? Maybe you wanna pipe a summary to Slack, maybe you want to pipe a project update to email, maybe you wanna trigger a custom webhook. These are all very complex workflows that you can customize with the right building blocks.
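
(Macroscope configures these through its own product; the Go sketch below is only a hypothetical illustration of the shape of a "macro" – a prompt, a cadence, and a destination – running on a timer. None of the field names or values come from Macroscope's actual API.)

```go
package main

import (
	"fmt"
	"time"
)

// Macro is a guess at the shape of the feature described above: a custom
// prompt, a cadence, and a destination for the answer.
type Macro struct {
	Name        string
	Prompt      string
	Every       time.Duration // a real product would likely accept a cron expression
	Destination string        // e.g. "slack:#releases" or "email:team@example.com"
}

// runMacro would ask the understanding engine the macro's question and pipe
// the answer to the destination; here it just prints a placeholder.
func runMacro(m Macro) {
	fmt.Printf("[%s -> %s] …answer from the model…\n", m.Name, m.Destination)
}

func main() {
	weeklyNotes := Macro{
		Name:        "release-notes",
		Prompt:      "Write customer-facing release notes for everything merged this week. Keep it short, use emojis, skip internal-only changes.",
		Every:       7 * 24 * time.Hour,
		Destination: "slack:#releases",
	}

	// A real service would use a proper scheduler; a ticker keeps the sketch simple.
	ticker := time.NewTicker(weeklyNotes.Every)
	defer ticker.Stop()

	runMacro(weeklyNotes) // run once immediately, then on each tick
	for range ticker.C {
		runMacro(weeklyNotes)
	}
}
```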

Ryan Donovan: Oh, very cool. So, the prompting future that we were promised was supposed to get us away from this sort of esoteric stuff – you know, knowing the keywords, and coding, and all that. But then there's another sort of massaging and vibes, like you're doing: making sure you have this prompt, and testing it. Do you think there's a new sort of esoteric, magic-words thing happening? Are there certain things in your prompts where it's like, 'oh, I have to have this word that triggers the exact thing I'm looking for?'

Kayvon Beykpour: I mean, for certain use cases, my personal belief is that there's still a lot of knowing the right magical incantations to say to the LLM to get it to do what you want it to do. Particularly around summarization and language – this isn't necessarily true for other use cases – but in the context of generating project summaries and commit summaries, the whole purpose of that feature is to generate a synthesis of some body of work that is useful to the reader. And that does require prompting, and there are magical incantations that we've had to learn. And by the way, those magical incantations change with every model: when you move from Sonnet to GPT-5, you know, you gotta massage it in a different way. So yeah, I don't foresee that changing anytime in the near future. And it's necessary only because the user experience commands it, right? Case in point: if you get a review comment in GitHub and it's, like, three paragraphs long, it's the sort of thing that just makes you wanna auto-close it. People have an aversion to reading super-long things, depending on the surface area. And so, sometimes it's not enough to just say 'keep it short.' We've gotta tell the LLM, 'get to the point in describing the issue,' and, you know, 'do focus on this, but don't focus on this.' So yeah, there are definitely magical incantations involved still.

Ryan Donovan: Yeah, yeah. Well, be happy about that – y'all are still wizards. You mentioned code review. So, talk to the folks listening about doing this sort of LLM-powered code review and testing. How do you go about solving that problem from your end? How do you review code in a way that, you know, lets the human still be there?

Kayvon Beykpour: Yeah, well, maybe stepping back and starting with the why: there are sort of two reasons why we're interested in this. One, engineers spend a lot of time doing code review, right? For anyone who's worked on a large engineering team, a significant portion of your time is spent reviewing other people's code. And there are multiple reasons.

Ryan Donovan: Big block or two, right?

Kayvon Beykpour: Totally. And as a result of that – the amount of time spent, and this perception that code review is a gate that I need to go through before impacting my customers – the vibe around code review is one of, like, people just don't love it. It's an annoying part of software engineering, but a necessary evil, if I may be so bold in saying so. And there are multiple reasons why code review exists. One, you want to prevent bugs from shipping into your codebase. Two, it's a sort of cultural norm around educating, enforcing conventions, teaching junior engineers, all that. There are other cultural benefits to code review, right? Our thesis is that, you know, humans should not be spending time reviewing PRs for bugs. We ought to be able to delegate that as soon as possible. A proper, well-instrumented, well-built AI code review tool should be able to identify bugs – correctness issues that could cause problems in production – better, faster, and cheaper than a human would. That is our belief. And I think we're already– there are edge cases to this, but we're already very close to that reality today. Which is not to suggest that there is no room or no reason for humans to be involved in the code review process for other reasons, like the cultural ones. But we imagine a world where, A, you can ship fewer bugs, and, B, you can land your changes faster and engineers can spend more time building rather than on all the work around the work, which is just like, 'oh, I gotta review like five PRs today, and that takes me away from the stuff I'm building.' So, in order for that dream to be possible, an AI review layer needs to be extremely good at finding real correctness issues.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: It needs to be capable of both identifying those issues and, ideally, fixing them. And you need to be able to delegate. You, as a human, need to be able to delegate to that system. Like, 'hey, if Macroscope gives me the green check, it's good to merge.' We're not there today – we are not, in production, auto-merging our PRs as soon as Macroscope gives a check. But we are gonna get there. And that, I think, is where AI code review goes from being extremely valuable to transformational, because it just fundamentally changes the nature of your software development process. There's an enormous percentage of PRs that you open that could merge without being reviewed by a human.

Ryan Donovan: Sure.

Kayvon Beykpour: Like, simple example: I make a copy change to our website. As a SOC 2-compliant organization, we cannot merge PRs without getting a human plus-one right now. But it's an enormous waste of time for a human to spend time reviewing a PR that is a copy change that poses no runtime issue to the codebase. So, I think this can be super transformational over time. Anyway, all of this is the why behind code review, and most importantly, the reason we're interested in focusing on this: given the rate of adoption of AI-assisted coding agents and autonomous coding agents – your Devins, your Codexes, et cetera – the need for an AI review layer on the codebase at PR time is even more important than it already was. And it was very important before. But you have a lot of–

Ryan Donovan: Yeah. At least to get the first pass on it, right?

Kayvon Beykpour: Yep. So, we think this is the right time to be building extremely robust AI code review systems. And as a consequence of the way we architected Macroscope, we found that our system, given our AST crawling approach, was yielding way better results than all the other AI code review tools. We did not start Macroscope with an AI code review feature, and so, as a result, we adopted all of these tools, and we just didn't like any of them. We turned them all off. We were like, this is too noisy, it's not finding real issues. And that then started our own experiment of, 'wait a second, we can use our approach to attempt to do super-high-signal code review.' And we found – and we now know from our benchmarking; we published a benchmark as part of our launch a month ago – we know quantifiably that Macroscope finds more bugs than the competition we tested against, and we do so without spamming you with comments. And so that then began our focus on code review and has led to a lot of the success we've seen with adoption.

Ryan Donovan: Yeah, I think, you know, a lot of engineers get a little squicky around AI-powered code review. But I think this is just another step towards the automation of the build and CI/CD process, right? You already have automated tests. You already have automated pushes to production. This is gonna be another thing once it's just testing for correctness, right?

Kayvon Beykpour: 100%. My hot take would be that anyone who's opposed to these tools and thinks they're a waste just hasn't used a good one, right? Because we felt the same way. Like, we used really bad AI code review tools, and the vibe you get after using 'em is just, 'get this out of GitHub. This is a waste of time.' And many of those products had to come before actually good ones arrived. But once you've used a good AI code review tool, it's a one-way door. You are not gonna go back to not having an AI code review tool. So, I would encourage anyone who's a skeptic: I appreciate your skepticism, but try a good AI code review tool, of which Macroscope is the best one, in our humble opinion.

Ryan Donovan: Humble, humble.

Kayvon Beykpour: One of the things that excites me about Macroscope is if you imagine a future in which, you know, AI-assisted coding agents are abundant, and autonomous coding agents are contributing more code to the codebase than humans are – A, that sounds like a pretty awesome future. I'm excited for that to happen. I think you're starting to see a lot of this stuff become more popular, but we're not yet at the point where autonomous agents are completely taking over from the humans. But if you imagine that world coming sooner rather than later, we feel very strongly that having a system, like an air traffic control system, that can help you understand what is happening and how things are changing becomes even more important than it is today. And it was important– you know, erase AI from the picture: at every software company I've ever worked at, this was important, right? You need to understand what everyone is working on and how things are changing. But in a world where 100x more code is being written by autonomous agents that are just making their own decisions, then having this intelligence layer, this sort of air traffic control system, becomes imperative, and it becomes extremely high leverage. And that's the future that we are building for. There's no doubt in my mind that that product will exist. I hope it's Macroscope, but the product will exist, whoever builds it. And so, that's what we're really excited about.

Ryan Donovan: Yeah, it's a wonderful, interesting future. And, I hope that it holds space for the humble engineer that's still there.

Kayvon Beykpour: I don't think the humble engineer is going anywhere. I think the humble engineer is gonna be able to achieve a lot more in the same amount of time, and be able to focus on architecting large scale systems, or architecting their ideas in a way that allows them to focus on the interesting parts of the problem, and delegate away the more laborious parts of the problem, and just be able to achieve more.

Ryan Donovan: Mm-hmm.

Kayvon Beykpour: There is a doomer angle on this, which is that engineers won't have jobs anymore. That does not resonate with me, personally. I'm way more of an optimist: great tools will make capable people even more capable and even more productive, and I think that's nowhere more true than in the hands of the software developer.

Ryan Donovan: Alright. It's that time of the show again where we shout out somebody who came onto Stack Overflow, shared some curiosity, and earned themselves a badge. Today we're shouting out a Populist badge winner – somebody who dropped an answer that was so good, it out-scored the accepted answer. Congrats to Jesper Grann Laursen for answering 'Exclude Table during pg_restore.' Curious about that? We'll have it in the show notes. I'm Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you wanna reach out to me, gimme comments, suggestions, topics, et cetera, you can email me at podcast@stackoverflow.com, and if you wanna find me on the internet, you can find me on LinkedIn.

Kayvon Beykpour: Thanks for having me. I'm Kayvon Beykpour. I'm the co-founder and CEO of Macroscope. You're welcome to reach out to me on Twitter/X, where my handle is @kayvz. You're welcome to DM me, or you can find me on LinkedIn, but I'm less likely to read my LinkedIn DMs. So, hit me up on X. Learn more about Macroscope at macroscope.com. You can also try Macroscope if you want to use us for AI code reviews, or if you have a team that wants to use Macroscope to understand what's happening and how the codebase is changing – go to macroscope.com. We have a two-week free trial. We'd love to have you.

Ryan Donovan: Thanks for listening, everyone, and talk to you next time.
