
Democratizing your data access with AI agents

Jeff Hollan, director of product at Snowflake, joins Ryan to discuss the role that data plays in making AI and AI agents better. Along the way, they discuss how a database leads to an AI platform, Snowflake’s new data marketplace, and the role data will play in AI agents.


Snowflake provides a fully-managed data platform that developers can build AI apps on.

We’re happy to have Stack Exchange data available on the Snowflake Marketplace.

Connect with Jeff on LinkedIn and Twitter.

Congrats to Timeless for throwing a Lifejacket to Using pandas to read HTML.


TRANSCRIPT

[Intro music]

Ryan Donovan: Hello everyone and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan, and today we are talking about Snowflake, the AI data platform. My guest today is Jeff Hollan, who's Director of Product over at Snowflake. Welcome to the show, Jeff.

Jeff Hollan: Hey, thanks so much, Ryan. It's awesome to be here.

Ryan Donovan: Our pleasure. So, at the top of the show, we like to get to know our guests, see how they got into software and technology.

Jeff Hollan: For me, I was actually really lucky. So when I was in fourth grade, my school was selected in a pilot program where they started this new thing—I even remember it was called Chill—and we got to learn how to do some programming. So, I remember after school, I decided to show up and we learned how to do some QBasic programming and build some really simple little console apps. And I just fell in love right away. Like, I was like, this is incredible. Like, I felt like it was finally my creative juices and it honestly just rolled out from there. So I've always really loved tinkering, building, coding, really from fourth grade on, and you know, all of the places that that took me: from cloud, and out to AI. So I was really lucky. I've really thought back of, at the time it was not common at all to be teaching fourth graders how to code but I got pretty lucky in having that chance.

Ryan Donovan: So we are a couple years deep into the AI revolution, and data is very, very much a big part of that. What does an AI data platform do for somebody who's trying to, you know, implement something AI?

Jeff Hollan: I think about my own personal life – So I'll use various LLMs for various tasks, whether it's for like code editing, or ChatGPT, or whatever else. And it's super useful. You know, my kids were sick earlier this week and I was asking questions of like, ‘hey, is this normal? Is this not normal?’ And I get back, you know, some helpful answers. But when I start my day job as a product manager at Snowflake, those stop being as useful, and it's not because they're not powerful, it's just that the questions that I care about as a product manager at Snowflake are things like: what are Snowflake customers doing with our product? Who are some of our newest customers? Who are maybe some customers who've been declining in their usage? And what should I do about it? Like, what are some things that I could do to help make sure people are having a good experience? If I go paste any of those questions into any LLM model, it's going to give me a big shrug emoji. It's gonna be like, ‘I don't know, Jeff, like, I have no idea what's happening with Snowflake usage.’ That's all context that is specific to my organization, to my enterprise. So what we're striving to make super easy and super secure at Snowflake is: how do you make it so you can still have those rich types of conversations with AI but bring in that missing piece, which is the unique business context, like that unique special sauce that every organization has, which is inside of its data. What's happening in the business? How are sales going? You know, where are things moving? So we've been focusing on a lot of building blocks behind the scenes of: Snowflake has been the home where thousands of organizations have been storing their mission critical data for over a decade now – how do we make that super trivial to then connect with various AI applications, so that now I can ask questions like, ‘hey, I'm starting my week as a product manager, where should I be focusing my time? Who are the customers that I should care about?’ And have AI actually be able to answer those types of questions as well.

Ryan Donovan: The data platform is an interesting specificity there. AI data – you need a database, you need vector storage, probably need something else attached to it. What is the platform beyond that?

Jeff Hollan: Those are the primary ones. If I think, especially in the agentic AI pieces, I think that there's a few building blocks that Snowflake provides out of the box – to hopefully make fairly simple. I'll start with the ones that you kind of mentioned. You need to have very fast vectorized lookups, and specifically for folks maybe who aren't as familiar with LLMs, a very powerful pattern whenever you're bringing data to AI is this retrieval augmented generation, or RAG, which is: how do you bring the right relevant snippets of data that have to do with the question that I'm asking at just the right time? And in order to do that, you have to have really good information about all your data, and what it means, and how related this data is to the question that Jeff is asking. So we have services—Cortex Search—that help you do that really, really quickly. And our hope too is that it's super easy. The anecdote that I've shared is: I remember—this would've probably been like two years ago now—spinning up an open source vector database, generating embeddings for a bunch of my data, storing it in the vector database, doing some lookup… It all worked. It was very cool. Took me about a weekend of tinkering to do. With some of the Snowflake building blocks, you just say like, ‘this is the data I want you to vectorize. Go do it.’ And you know, 30 seconds later you send it a request and it gives you back an answer. The other one on the data platform side that's a bit more unique is accessing queryable data. So for the vast majority of, like, agentic AI solutions out there, they're exclusively on that RAG side that I just talked about. Connect us to your Slack channels, connect us to your Google Docs, and we'll surface the relevant snippets. But if I ask a question like, ‘what has revenue been like in the last three days?’ There's probably not a PDF that somebody published to Google Drive that, like, tells me exactly what the revenue is up to the second. So in order to do that, we built some building blocks that say like, ‘well, how do I generate the right SQL query so that I can actually query the right data?’ So that's the other big building block that we focus a lot on. And then the last one I'll mention is more just on the governance layer. So, a huge consideration here is like, ‘hey, maybe Jeff is asking a question about revenue. Do I have access to all the revenue data?’ Maybe I only have access to revenue data for, you know, West US, and I shouldn't see revenue data for East US. That's a huge consideration, which is like, I want to make sure my agent's not spilling secrets or spilling data that it shouldn't be. So there's this big fundamental building block in the data platform, which is the ‘how do you actually do enforcement, and role-based access controls, and you name it?’ So those are some of the ones that come to mind in terms of, like, making this happen on the platform level.
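The RAG lookup Jeff describes (embed the snippets, rank them against the question, hand the top matches to the model as context) can be sketched in a few lines. This is a toy illustration, not the Cortex Search API: the bag-of-words "embeddings," the document list, and the prompt format are all invented for the example.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term count. A real system would use
    # a learned embedding model, which a managed service handles for you.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every snippet by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    # The retrieved snippets become grounding context for the LLM call.
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Customer usage declined 12% in the west region last quarter.",
    "The onboarding guide covers account setup and roles.",
    "Support cases about query latency spiked in March.",
]
print(build_prompt("Which customers are declining in usage?", docs))
```

In a service like the one described above, the embedding, indexing, and ranking steps are handled for you; the point here is just the shape of the pattern.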

Ryan Donovan: I assume with the SQL querying, you're querying external databases, you're querying some sort of production and/or analytics database, or is that part of the platform too?

Jeff Hollan: Snowflake's bread and butter—what most of the thousands of organizations who've been using Snowflake for over a decade started with—was actually Snowflake's really good ability to query huge amounts of data really, really quickly. So for the vast majority of the data that it is querying, Snowflake itself is driving the query power. So it'll do the processing. Now the data, though, comes from a bunch of spots. To your point, whether it's open table formats, you know, maybe there's data that's living in AWS buckets or in Azure storage, or you name it. But for most of the SQL querying, Snowflake itself is actually providing that underlying engine. I was just talking to a team before this who'd been using AI agents to do their support case work. They were showing me this demo where they were asking about like, ‘hey, what's happening in support cases?’ And within, I don't know, less than five seconds, they had an answer across like three months of support cases that were instantly queried. But to your point, some of it was querying across data that was coming from Salesforce, some of it was querying across data that had been loaded into Snowflake native tables – so you've got some flexibility, but most of it is part of the platform itself.

Ryan Donovan: Yeah. Uh, for those external queries, are you bringing that into the platform or is it a single serve?

Jeff Hollan: The two primary patterns we have here are: bringing it into the platform directly, so we have some tools that help you bring it in; and then the second one is, the storage can remain outside of the platform, but we'll still handle the query compute. So Iceberg is like a very common open storage format that we'll integrate with to let you do that. There are ways to do, like, kind of this pattern of federated querying where you could be like, ‘hey, I actually am gonna give this to Postgres outside of Snowflake,’ and Postgres can go do some querying. That does happen from time to time, but we see it less frequently, and it's usually just that there's a few more layers of the cake that you have to be responsible for when you're doing the federated querying. So usually it's either internal or external, but if it's external, they're just doing external storage through Iceberg. Not necessarily external federated querying, but it's possible. Like, it happens, it's just not the most common. It's like the 20%, not the 80%.

Ryan Donovan: In the last few months, I've seen and talked to a bunch of companies dealing with that querying across storage, all of these different storage formats. What about Snowflake makes those queries so fast?

Jeff Hollan: The big, first, kind of like, underlying innovation that Snowflake was responsible for was what is now fairly common, but at the time was quite novel, which is this separation of storage and compute. So, it used to be that if you wanted to scale your database, you scaled them together: you added more cores, you added more memory, and they were kinda like one and the same, and as you stored more and more data, you just had to continue to add more, and more, and more servers. So Snowflake, which was coming around right when the cloud was, like, taking off, said, ‘hey, in this world where we have AWS S3 and we have EC2 as separate resources that we can kind of treat independently, we don't have to bundle these into one thing.’ And so that underlying architecture just enables a lot of flexibility to say, ‘hey, maybe this query has a tremendous amount of data, and you can flexibly choose: do you wanna put a few cores of power on that data or do you wanna throw all the cores in the world at it?’ Having that separation gives you the ability to, like, massively scale. You can throw tons, and tons, and tons of cores at a massive amount of data, or put a small number of cores on the same set of data. And on top of that flexibility – I'm much more from the application background, so like coding websites, serverless, and containers… There's a whole fleet of people at Snowflake who I have lunch with from time to time and they're, like, the database experts, and they're doing all these optimizations on merge performance, and whatever else. So, there's a bunch of cool secret sauce there, too. But at its core, I think it's that cloud native architecture that provides most of that power that you were asking about.
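As a back-of-the-envelope illustration of why that separation matters: with storage decoupled from compute, you can trade cores against time for the same scan. All numbers and prices below are invented.

```python
# Toy model: the same 1B-row scan, sized two different ways.
def runtime_minutes(rows: int, rows_per_core_minute: int, cores: int) -> float:
    # Assume the scan parallelizes cleanly across cores.
    return rows / (rows_per_core_minute * cores)

def cost(cores: int, minutes: float, price_per_core_minute: float = 0.01) -> float:
    # Pay-per-use compute: cores x time x unit price.
    return cores * minutes * price_per_core_minute

rows = 1_000_000_000
small = runtime_minutes(rows, 10_000_000, cores=4)    # 25.0 minutes
large = runtime_minutes(rows, 10_000_000, cores=100)  # 1.0 minute
print(cost(4, small), cost(100, large))  # same spend, 25x faster
```

In the coupled architecture Jeff describes, you couldn't make that trade per query; cluster size was fixed by how much data you stored.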

Ryan Donovan: We just recently moved our public platform to the cloud and we ran into something where we were like, we don't need all of the compute, but we do need all of the memory just to have it all in memory, to have it move really fast.

Jeff Hollan: That's, to me, one of the coolest parts of the cloud is you have now this ability to kind of a-la-carte put together the perfect architecture rather than just, you know, racking up a bunch of compute and dealing with the constraints.

Ryan Donovan: With the AI, and the LLMs, and the agents, what are the sort of next-level data platform things that you're trying to bring in to optimize that agentic workflow?

Jeff Hollan: A few categories here – and happy to jump into any of them as they'd be of interest. I think one of them is more and more data sources. So, actually, there's a third category, which is actually even very relevant to some of the partner stuff we've been doing with Stack Overflow, which is: Snowflake has this data marketplace. So, some data you're already gonna have in your organization, but there's a lot of data that you might not have in your organization that's gonna exist in the ecosystem, either between different business partners, like different banks in a financial institution, or just in the ecosystem in general. So, one of the things we want to try to do is make it as easy as possible to bring all of the relevant data into the equation. So, we have, like, a marketplace where you can add additional data sources. So, one of the data sources that I'm super excited about, especially as somebody who's been coding since fourth grade and spent many, many, many, many hours and weekends on Stack Overflow, is you can add—through the Snowflake Marketplace—Stack Overflow as a tool to have that data instantly available to your agent. And we've tried to do this in a way that is, like, a win-win for everyone, right? Like, it's been in partnership with Stack Overflow, so that Stack Overflow is able to get credit for this platform and the data itself, but it's available to everyone. This isn't just some, like, one-off agreement where it's like, okay, only Snowflake can use select Stack Overflow data. It's like, no—anyone who wants to go to the Snowflake Marketplace—you have access to all of the posts, comments, you know, wiki data, all this stuff, so that now your agent is just so much smarter, again, at some of those secret sauce elements, at doing a bunch of different tasks, from fine tuning, to conversational apps, to you name it. So that's one category: continuing to expand out the data.

Ryan Donovan: I was excited about that because we realized that a lot of folks were training on our data anyway, but we wanted to find a way to protect that community and also bring that data to everybody, right? We can't just make big deals with the big guys. Like, everybody should have that data.

Jeff Hollan: You restated exactly why we get excited. It's like, ‘hey, this is great for the consumer 'cause they can now get access to this incredibly valuable data set, and it can be fair for the provider,’ like protecting the community and doing things where you're not just going and grabbing content and using it without permission or without credit.

Ryan Donovan: And you can combine these data sources, right? Integrate them into an AI workflow.

Jeff Hollan: You know, I mentioned before – there's an agent that I use every single week inside of Snowflake called the product management agent. It has access to a bunch of technical documentation. It also has access to a bunch of, like, usage data in Snowflake, and the agent is able to kind of blend these different data sources together. It's pretty powerful. You can even see it – like, you know, I don't know if you've used some of the thinking models and watched it, but you ask it a question of like, ‘hey, I'm trying to figure out what to do,’ and you can watch it being like, ‘well, first I'm gonna go look at some technical documentation. Based on that, maybe I'm gonna go look up some of the SQL data. Now from that, maybe I'm gonna go do a web search.’ Like, it's really powerful. And then, you know, 30, 45 seconds later, I get a nice answer that probably would've taken me an hour to do on my own – you know, from a simple prompt into a conversational agent.

Ryan Donovan: How do those marketplace data integrations work? Like, is it just click a button and you have the data now?

Jeff Hollan: Genuinely, it is that easy. Like, earlier today, I did a demo where I went to the marketplace, I added a data source, I said, like, ‘get this data,’ it goes and creates that data share and that data transaction, and then I jumped to the other screen, I said, ‘create an agent,’ and it said, ‘okay, what tools, what modules does this agent have access to?’ And I pointed it to that marketplace data set and said, ‘this is the one that I want.’ I clicked save, and then I was chatting away. Like, that was it. So it literally took 30 seconds and I had an agent that now had access to all of the info on these various marketplace sources.

Ryan Donovan: I know y'all are pushing to become a sort of AI first company. What does that mean to you and what sort of initiatives are you doing?

Jeff Hollan: I guess it spans a lot of places. There are two main categories when it comes to AI. The first is: how can we make every single user of Snowflake just, like, more productive? So that they have, you know, superpowers in doing their day-to-day job. All the things we were talking about before of like, how do I connect the right data? How do I get the data in? How do I clean the data? How do I get the data presentable? All of those tasks that data scientists, and data engineers, and data analysts are doing – can we just make them more productive? Can we help them do in, you know, an hour what maybe would've taken them a day? So, for instance, you know, one of the things we have now is, like, this data science agent where you can go in and say, ‘hey, I need to build a model. This is the data set I wanna build a model on. I think I want to use, like, these patterns.’ You know, ‘hey, I want this to be a this-type-of-a model.’ And the agent will actually go and try to do a bunch of the work for you, where it's like, okay, here's at least a start on that. So, that's one category. I think of them as, like, productivity benefits. The other one is where we started the conversation: how can we make a bunch of tasks in an organization easier, especially tasks that are more time consuming and mundane – not the super complex things that maybe are a little bit energizing and require a little bit more creativity or a human touch, but things like the one I mentioned: I want to know what happened with support cases over the last three months. I could go spend three hours manually digging through a bunch of BI dashboards and trying to figure out the right queries, but do I really love spending that three hours digging through a bunch of BI dashboards? I don't. So, can we make AI democratize those things?
So, can we make it super easy for a company to have a conversational agent that just tells them within 30 seconds what's happening? So, democratizing access to insights from data, and adding productivity tools for people working with data, are the two big categories that we focus on as we're becoming, or have become really, AI-first. It's pretty remarkable even to think back three or four years ago. There's been data science on Snowflake for a good long time; obviously, generative AI itself is fairly recent. Just the number of features and teams and areas within Snowflake that now have AI magic trickled throughout them is really exciting. It definitely makes me much more productive using Snowflake.

Ryan Donovan: In some of our surveys, we found that there's a lot of adoption with AI, but still a lot of mistrust. How do you use these AI workflows in a way that is safe and reliable? How do you ensure that they're not BSing you on the spot?

Jeff Hollan: This is the number one thing. Understandably so, and rightfully so, I would say. Like, I think it's a healthy skepticism for people to have, of like, can I trust this thing? And I would not tell people to, like, throw that to the wind. So our approach has been, kind of, a few-fold. The first one is, you know, obviously anything that we can do to just improve the accuracy all up. I talked some at the beginning about the power of the context—the business context—and we have some tools and techniques that we've employed to just make sure that, like, the context and the quality of the insights we're bringing are as accurate as possible. But it's never going to be perfect. It's never gonna be perfect. We're just trying to get as close to that as we can get. So, the other approach that we take is: making sure that it's at least very clear how an answer is derived and how much confidence you can have in the answer. So, I'll give a concrete example: in pretty much every company, if you ask a question of, like, how many customers do you have? If you ask two different organizations within a company that question, you might get two very different answers. Like, there's often things like, you know, how much money we made, how many customers we had – it's hard to pin down, like, what's the actual source of truth? You know, this organization might be like, ‘are we including free trials? Are we not including free trials?’ You know, ‘what dictates a customer?’ So, AI will have the same challenge. If you ask AI, ‘how many customers do we have?’ is it gonna give you the right answer? So, we try to build in things like the ability for teams to verify different pieces, to say, like, ‘hey, this is our source-of-truth definition of revenue, of customers, of support cases,’ and then when you're interacting with these AI features, we'll actually let the user know.
We'll say, like, ‘hey, this is the answer to your question, but by the way, we actually have some certified queries behind the scenes that we use to generate this answer.’ So again, it doesn't make it foolproof, but to me, those little indicators—like, these little green shields that we put in, these confidence scores that we're trickling in—those are all things where we're trying to just help explain: this is the answer, this is how we got the answer, and here's some breadcrumbs to help you understand, like, how much should I trust or not trust. And it's an evolving art, but at least for now, that's helped. I guess the last one I'd mention in that too is: in our agentic patterns, we ground the answers whenever possible. So anecdotally, again, if I ask ChatGPT a question, I don't know, 70% of the time ChatGPT will just start giving me an answer right away; and then 30% of the time it's like, ‘hmm, I'm gonna check the web, like, maybe the web is gonna give a better answer.’ And it's that 70% that tends to be the most problematic, 'cause, like, I don't know where it got that training data. I don't know why it thinks that that's the right answer. When you use the agentic pieces in Snowflake, it's like 99% of the time, the first thing the agent does is, ‘what data can I use that's gonna make sure that I'm giving an accurate answer?’ It will almost never—in fact, you have to try really, really hard to get it to—just, like, come up with an answer on its own. It will almost always be like, ‘let me go check the Stack Overflow marketplace data. Let me go check the SQL data.’ So that's another technique that we use to just try to boost that accuracy and that trust.
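The "certified query" idea can be sketched as a lookup that prefers a vetted definition over freshly generated SQL and flags which one was used. Everything here (the registry, the table name, the intent keys) is hypothetical; it illustrates the trust indicator Jeff describes, not Snowflake's implementation.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    sql: str
    verified: bool  # True when the query came from the certified registry

# Hypothetical registry: the team's agreed-upon definitions, e.g. that a
# "customer" excludes free trials.
CERTIFIED = {
    "customer_count": "SELECT COUNT(*) FROM customers WHERE plan <> 'free_trial'",
}

def answer(intent: str, llm_sql: str) -> Answer:
    # Prefer the certified query; fall back to LLM-generated SQL,
    # but flag it so the UI can show a lower-confidence indicator.
    if intent in CERTIFIED:
        return Answer(CERTIFIED[intent], verified=True)
    return Answer(llm_sql, verified=False)

a = answer("customer_count", "SELECT COUNT(*) FROM customers")
print(a.verified)  # the certified definition wins over the generated SQL
```

The `verified` flag is what would drive a UI affordance like the green shields mentioned above.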

Ryan Donovan: I think for us as a data provider, it was really important to have the attribution piece where it's, 'where did you get that?' And for me, as, as somebody who is trying to go through documents and come up with intelligent opinions, I wanna know where that opinion comes from, right?

Jeff Hollan: Exactly right. And to verify it, too. Yeah, 'cause, like, if I have the answer, give me the little ‘this is the Stack Overflow conversation’ so I can, you know, go and figure out, like, ‘oh, this has 450 upvotes.’ This thing's probably much more reliable than this, you know, one-upvote thing that I'm not totally sure how much I should trust.

Ryan Donovan: Sounds like agents are a big part of what Snowflake is thinking about in the future, and I think a lot of companies are. What are the chips that you're putting down as part of that bet?

Jeff Hollan: There is a lot there around agents, and I think about it too. In all of these features that we're building: ‘how can we augment those capabilities within agents to make you more productive?’ And then just doing everything we can... it's actually a lot of – even the last question. It's the, ‘hey, we believe that organizations are going to be significantly more productive if they're able to offload a huge set of these mundane tasks to agents with humans in the loop.’ To be clear, like, I tend to be much more in the camp of, we'll be much more productive. You know, Snowflake's not in the business of being like, ‘hey, we're gonna go replace entire functions,’ but it's like, ‘hey, here's a person who can now get done in a day what used to take them a week.’ And so there's a lot of investments that we're making in that approach, in making that thing real for enterprises specifically. I don't want to, you know, belittle how tricky any agent is, but it's possible for people to build these things for, like, my personal life, for small things. Once you get to an enterprise, these things around, like, accuracy, trust, compliance – those become so important. So, you know, I think of recently, there's been a few acquisitions that Snowflake has made in terms of companies like TruEra, who focus exclusively on quality and observability of things like agents, so that you can understand, ‘what is my agent doing? Why is it doing what it's doing? What's the quality of my agent? Do I need to go in and inspect it a little bit more?’ Like, those are some of the pieces that, on the surface, might not be the first thing that you think about, but once you think about rolling this out to 10,000, 20,000, 100,000 employees, they become must-haves. So, there's a huge amount of investment that we, as a company, have been making there.
Which again, if you would've thought of Snowflake seven years ago, of like, 'hey, yeah, this is the thing that can query my data at massive scale,' and then realizing now like, 'oh, they're making big investments in agent observability,' kind of shows how much we're taking this very seriously.

Ryan Donovan: People are still trying to figure out observability at the LLM level, but the agentic workflow – I kind of think it makes it a little bit easier to get some observability in there, because you can sort of set breakpoints, you can have some logging, but the LLM piece is still a little bit of a black box.

Jeff Hollan: Exactly right. It still breaks my brain a little bit. Honestly, the industry—at least my opinion of it—is still evolving and figuring out, like, what does this world of test-driven development look like when I have this non-deterministic LLM behind the scenes, where I can't just write, you know, ‘assert that this value always equals this one,’ because every time I run the statement I get a different answer. So, it does present some really interesting challenges. Incredibly powerful, but, like, a really big pain in the butt to test around, as you kind of mentioned, Ryan.
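One common workaround for that non-determinism is to assert invariants of the output (valid structure, a legal tool choice) rather than exact strings. A minimal sketch, with a stand-in playing the model's role; none of this refers to a specific testing framework:

```python
import json
import random

def fake_llm(prompt: str) -> str:
    # Stand-in for a non-deterministic model: the wording varies run to
    # run, but a well-behaved agent still returns valid JSON with a tool.
    phrasing = random.choice(["Looking that up.", "Checking the data."])
    return json.dumps({"note": phrasing, "tool": "sql_query"})

def test_response_invariants():
    # Run several times: the text varies, but the invariants must not.
    for _ in range(10):
        out = json.loads(fake_llm("What was revenue last week?"))
        assert out["tool"] in {"sql_query", "search"}   # legal tool choice
        assert isinstance(out["note"], str)             # stable structure

test_response_invariants()
print("invariants held")
```

The assertion `out == exact_string` from the transcript's thought experiment would fail intermittently here; the property-style checks pass every run.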

Ryan Donovan: You know, you talked about earlier with the grounding it in some sort of truth, like, this is calculated by a different function, and I think a lot of folks are looking at just not using LLMs for everything.

Jeff Hollan: Agree. I think that there's a huge amount of utility in saying, 'really the only thing I need you to do as the LLM is decide what tools to use.' But those tools become a lot more deterministic. To your point, it's like, okay, this is doing a search lookup on Stack Overflow data, this is doing a calculation, this is doing whatever else. And to your point – that's where agents make this a bit easier, 'cause then you can test each of those components. You still have that middle LLM reasoning layer, but it becomes a much less prominent piece. It's doing 10% of the work instead of 100% of the work.
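That split (a thin LLM routing layer over deterministic, individually testable tools) can be sketched like this. The tools, the tiny knowledge base, and the keyword-based router standing in for the model are all invented for illustration:

```python
# Deterministic tools: each one is unit-testable on its own.
def search_docs(query: str) -> str:
    # Hypothetical mini knowledge base standing in for a search lookup.
    kb = {"pandas read_html": "Use pandas.read_html to parse HTML tables."}
    return kb.get(query, "no match")

def calculate(expr: str) -> float:
    # Restricted arithmetic evaluation; no model involved, fully testable.
    return float(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"search": search_docs, "calc": calculate}

def route(question: str) -> str:
    # Stand-in for the LLM reasoning layer: in a real agent the model
    # chooses the tool; here a keyword heuristic plays that role.
    return "calc" if any(c.isdigit() for c in question) else "search"

def agent(question: str, tool_arg: str) -> object:
    # The only non-deterministic piece in a real system is route();
    # everything it dispatches to can be tested conventionally.
    return TOOLS[route(question)](tool_arg)

print(agent("what is 2+2", "2+2"))
```

Because `search_docs` and `calculate` are plain functions, the hard-to-test surface shrinks to the routing decision, which is the "10% of the work" point above.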

Ryan Donovan: Are you all getting to the, uh, enabling tool use? I know the agents are using the tools, but are you doing any of the sort of glue – the MCP server stuff? Agent-to-agent?

Jeff Hollan: Personally, I've been really excited. I remember, this would've probably been, now, honestly, six or seven months ago, telling people, like, 'hey, I'm hoping to see some standards start to emerge. We'd definitely be willing to adopt them. You know, this will make everyone's life easier. For now, we're just kind of waiting and seeing.' And I was thinking back to history, to the past when it came to things like Kubernetes. You know, I remember there being many years in technology where it was like: is Kubernetes the standard? Is Docker Swarm? Is Mesos? You know, whatever else. And I've been blown away in the LLM, gen AI world where it's like, MCP comes out and, a few months later, everybody seems to be like, 'yep, we're on board.' Agent-to-agent comes out, and it seems to be doing the same thing. So yeah, for us at Snowflake, we're like, this is awesome. Like, we can just make it easier for people to use these existing open standards. So we're supporting those MCP and agent-to-agent protocols as well, to make those super easy to use in our products too.

Ryan Donovan: The adoption curve of MCP is wild, like, I think I first heard it mentioned in January or February, and then now everybody wants to talk about it.

Jeff Hollan: Exactly right. Like, there's no competing specs. Everybody's in. Like, to be clear, it's not perfect. I think everybody looks at it, but you know, we're all getting behind it. So to me it's awesome. I would love to see more of these things just start to become standards.

Ryan Donovan: What is the next part of the AI platform? What's the thing that's unsolved right now?

Jeff Hollan: If I start looking ahead, a big area that I think is starting to emerge, maybe I'll choose two: at the end of the day, I think having more of these things happening autonomously, behind the scenes, is a big area that we have yet to see in action... I think there's some really interesting technology across the industry that is starting to get there, but so much of it is still very interactive, which is understandable. Like, a 'chat-like' interface. The ability to actually have, like, agents that are working behind the scenes and making improvements or decisions or generating reports – I think we're in the early days of that, and I think there's some problems we have to solve to get there. But to me—and maybe it might sound like it's coming out of left field—any company that I've joined as an employee, if I think about even, like, onboarding: you know, I go to the employee onboarding; some set of that onboarding is gonna be telling me about, like, how to do my actual job, but a huge amount of that onboarding is actually like, 'here's how the organization works, here's the org structure, here's how to get things done, here's, like, our, you know, unique way of doing things.' Like, in Snowflake, this is how these processes happen. And I think that business context, the business understanding of, like, how an organization operates and the things unique to that organization – figuring out how to capture that and make it available... I guess one way to think of it is: does the agent have a tool that lets it understand how the organization itself works? So, in the data world, the term that we use for this type of thing is often, like, a semantic view. It's like, how do I get the business semantic representation of the data behind the scenes?
I think at a larger level there is this problem where it's like, agents know how to accomplish a task, but they're kind of dumb right now in terms of like, what the org looks like, or like how to get things done in an organization, which I guess kind of pairs with the autonomous one. There's no way you can have an autonomous agent going and doing a bunch of tasks if it doesn't understand some of those nuances of the organization. So, it's a bit amorphous, but I think that that's gonna be a really interesting space that—I'm making a personal prediction that over the next two, three years—we're gonna start to see some ways that people start to understand how to capture that business context and that organizational context that will enable agents to do a whole lot more.

Ryan Donovan: And how do you fit the security policies into the context as guardrails?

Jeff Hollan: And have those defined in a space that, like, the agent's able to honor. Yeah, it's gonna be very interesting. My general recommendation to folks, and I'm assuming if you're listening to the Stack Overflow Podcast you're probably in the early adopter, kind-of-frontier-edge camp, but for me personally: just finding different ways that I can start to use these different technologies and understand what they're good for, what they're not good for, is always my number one advice to people. Again, speaking from experience: this is not a perfect, magical technology. It's like every technology: it can do some cool things, it has some limitations, it has some considerations that you need to make. To me, the best way that you're going to find that out is just trying it out in your day-to-day life. So, I've used agents for everything from, like, giving me workout recommendations when I'm going to the gym, which is like, 'what should I be doing for a workout?' To, most recently, I built an agent that will build demos for me. So it's like, 'hey, I'm about to go talk to this customer, go build a demo that I can show them on Snowflake,' and then it goes and generates a demo. Through those processes, from writing blog posts, to building demos, to whatever else, it's helped, kind of, illuminate what these tools are good at and where they fall short. So, I guess, to anybody listening, if you've made it this far: if you're not already, go figure out for yourself where AI could be useful and where it's not useful, because I do feel very strongly that this is really going to transform how work is done over the next decade. It's very exciting.

Ryan Donovan: Well, it's that time of the show again where we shout out somebody who came onto Stack Overflow, dropped a little knowledge, shared some curiosity, and earned themselves a badge. Today we're shouting out a Lifejacket badge winner: somebody who answered a question that had a score of negative two or less, got a score of five or more on their answer, and brought the question's score up a little bit, too. This badge goes to 'Timeless' for answering "Using pandas to read HTML." If you're curious, it'll be in the show notes. I'm Ryan Donovan, I edit the blog here at Stack Overflow and host the podcast. If you liked what you heard, or have questions, concerns, topics, or suggestions, email me at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

Jeff Hollan: I'm Jeff Hollan, Director of Product at Snowflake, working on building AI agents and apps in Snowflake. You can find me on X (formerly Twitter) @jeffhollan or on LinkedIn at Jeff Hollan. Feel free to reach out with any questions or interests that you might have.

Ryan Donovan: Thank you very much for listening, everyone, and we'll talk to you next time.
