Nutanix combines compute, storage, virtualization, and networking so you can run applications and manage data across on-premises datacenters, public clouds, and edge locations all on one platform.
Connect with Dan on LinkedIn and Bluesky.
Congrats to Necromancer badge winner David Ferenczy Rogožan! They won the badge on their answer to 'Where does adb shell mkdir create directories.'
TRANSCRIPT
[Intro Music]
Ryan Donovan: Hello everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan, and today we are gonna be talking about some cloud native stuff – getting your VMs and your Kubernetes to play nice, so you don't have to worry about all the fiddly bits with your infrastructure. My guest today is Dan Ciruli, Vice President and General Manager of Cloud Native at Nutanix. So, welcome to the show, Dan.
Dan Ciruli: Thanks for having me, Ryan. I'm excited to be here.
Ryan Donovan: My pleasure. So, before we get into the infrastructure discussion, we like to get to know our guests a little bit. Can you tell us how you got into software and technology?
Dan Ciruli: Sure. I got in the old-fashioned way. I studied computer science. But I did it back in, what my kids lovingly refer to as the 1900s, when studying computer science wasn't something you did 'cause you wanted a lucrative career. It was something you did 'cause you were good at computers.
Ryan Donovan: Right.
Dan Ciruli: But as it turns out, I liked it. I was an engineer for 10 years. I shifted into product management after about 10 years, 'cause that had been invented, and I liked dealing with customers. You know, I liked thinking about solving the problems more than having my hands on a keyboard solving them. And so, now, I've been doing product management for about 20-something years, all in very technical enterprise products. And specific to this domain, I spent seven years at Google, where, effectively, the concepts and technologies behind cloud native were invented, and then where they were kind of made public and open source. So, I've been working in that cloud native ecosystem ever since the early days. And I was one of the founders of the OpenAPI Initiative; so if people know what an OpenAPI spec is, I was one of the founding members of that group. The Istio service mesh is one of the big projects in the CNCF – I was on the steering committee there. And then I also worked on a protocol called gRPC, which has become quite common. So, now I'm helping Nutanix bring all that cloud native stuff to enterprises.
Ryan Donovan: There's a lot of stuff there that's definitely become part of the standard-issue cloud native stack. And you know, I think when everybody talks about cloud native stacks at the infrastructure level, they think of containers, and Kubernetes, or another infrastructure orchestrator. But the older style – the VMs, the virtual machines – before we get into how you get them to play nice, why would you want to?
Dan Ciruli: That's a great question. So, I'm gonna take that as a part A and part B, which is: why would you want to start using containers at all, and then why do you need 'em to play nice? And the 'why do you wanna use containers at all' really boils down to: you can ship software faster. There are some other ancillary benefits too: when you do it well, it's easier to secure, it's more scalable, you can deploy in more places. But fundamentally, the reason that you wanna adopt containers is because developers who are pushing code to containers end up being able to push more frequently. And the example I like to give people for that is: think back to the early 2000s, for those of you who can remember the early 2000s, when the state-of-the-art for search was AltaVista, and the state-of-the-art for maps was MapQuest, and the state-of-the-art for email was Hotmail, right? A 10 megabyte limit in your inbox, right? And then this company comes out of the blue and says, 'you know what? We're gonna revolutionize all these different things at once.' All of a sudden, Google Search comes out and blows everything away. Google Maps – so much better than MapQuest or Yahoo Maps. And, you know, look at Gmail: unlimited storage. And how did they do that? The reason they were able to do it was because they were moving, they were innovating, so much faster, and one of the big reasons was that they had adopted containers. They were able to innovate quickly. There's some other stuff they did, too, on the backend storage side, as well. But containers played a big part, and now the whole industry has moved that way. So, fundamentally, it doesn't matter if you're doing a web service that's meant for 2 billion people: when you ship smaller increments, you can ship a lot more frequently, and that means you can innovate a lot faster.
Ryan Donovan: So, before we get to the part two, there was a sort of flip side to that where it's like, containers are great. Why would you still have VMs?
Dan Ciruli: So, the reason you have VMs is because you have VMs, right? And the fact is that moving to containers – writing new code for containers isn't fundamentally very difficult, but when you have an application that's already running, changing it so it can run in containers, that can be quite difficult. And this is the thing that I think even we in the cloud native community were blind to for a while, because we kept saying, 'well, just re-architect your application.' And the fact is that when you re-architect an application, it can take a long time. It can take a lot of work, and it fundamentally is the same application. It doesn't give you any business value. You can operate it differently, and maybe you could move more quickly, but maybe this is an old application that doesn't need to move quickly. So, right now, we have been writing applications to be deployed on a server, or, later, on a VM, for 30 years. Literally for 30 years, we've been writing applications the same way. There are millions and millions of them. I was at one of the largest banks in the United States yesterday. You know, they have tens of thousands of applications running in VMs. Most of those will never get rewritten. So, while there are new things – their mobile apps and their web apps, you know, and now AI is coming in – those things are all being written to be deployed in containers. They need to, A, be deployed alongside, and B, sometimes communicate with those applications that have been running for the last 10, 20, 30 years. And that's the real reason. You know, when people talk about, 'when are VMs going away?' I say, 'when are mainframes going away?' Because the fact is mainframes aren't going away. This bank – they said they have no plan to eliminate mainframes from their data centers. They're not gonna eliminate VMs from their data centers. So, we've got a new technology, but unless you're starting a company today and putting everything in containers, you've got, you know, hundreds, maybe thousands, maybe tens of thousands of VMs that are gonna be around for the rest of your career.
Ryan Donovan: I mean, when are mainframes going away is a question people have been asking for a while, I think, right?
Dan Ciruli: Yeah. Like, since the 1900s.
Ryan Donovan: Yeah, yeah. Ouch. So, the sort of legacy stuff running on the VMs – a lot of that is very hard to stop and then replace with a refactored application. So, how do you get those to play nice? What are the issues with communicating between a VM and a Kubernetes cluster, or whatever?
Dan Ciruli: So, let me start with what it looks like when it's done poorly. I was with a big insurance company a couple weeks ago. They run completely separate infrastructure teams – essentially they have big silos in their data centers, right? And one stack of iron might be serving storage for virtual desktops, and another stack of iron is running VMs where, you know, databases and large applications are running. And they've got another stack of iron where they're running Kubernetes, and that Kubernetes is installed on that bare metal. And the fact is that the networking and the identity management between all of those big stacks of metal is all fundamentally different. It might as well be on the moon. It might as well be in another company, effectively. Right? If you've got one application that's deployed in Kubernetes and it needs to communicate with something that's running in a VM, because you're dealing with different networking systems, different identity and authentication systems, it's a really difficult process. Again, it's no different than if a partner were integrating with you.
Ryan Donovan: Mm-hmm.
Dan Ciruli: When things are done well, you don't have silos like that between the infrastructure on which things are deployed. And the simple example is you've got an app that's running in a container that needs to communicate well with one of your existing applications in a VM. Ideally, you can run those in such a way that you've got the same networking between them, and you can write a single network policy in one system that says, ‘yeah, I want that thing to be able to talk to this thing,’ and vice versa, protect them both, you know, don't open up everything. Protect them but let them do that.
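As a rough sketch of that single-policy idea – assuming the official `kubernetes` Python client, a hypothetical pod label of `app=billing`, and a hypothetical VM subnet of `10.20.0.0/24` (none of these names come from the conversation) – an egress policy letting those containers reach the VM might look like this:

```python
# Minimal sketch: allow pods labeled app=billing to reach a VM subnet.
# The label, namespace, CIDR, and port are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-billing-to-vm"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "billing"}),
        policy_types=["Egress"],
        egress=[
            client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    ip_block=client.V1IPBlock(cidr="10.20.0.0/24")  # the VM network
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=5432)],
            )
        ],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("default", policy)
```

The point Dan is making is that this only works as one policy when the VM network and the container network share a common substrate; across silos, the same intent turns into rules in two unrelated systems.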
Ryan Donovan: Right.
Dan Ciruli: Another big problem with those silos is that when you create them – and this is something I was seeing almost 10 years ago in early adopters of Kubernetes who were buying dedicated hardware – you might have one section of your data center that's full. Say you've been decommissioning VMs, and in your VM cluster there's room on that hardware to run more stuff. But you've got a Kubernetes cluster that's completely full, and those teams are growing; they need to run more Kubernetes. You can't take advantage of the capacity that's sitting on that other iron, right? And so, having a common substrate on which to run all this stuff really solves a lot of different problems.
Ryan Donovan: Yeah. So, I think I wanna ask the sort of naive question about the communication. Why can't you just do IP-to-IP communication across the VM?
Dan Ciruli: When you say across the VM, you mean from VM to Kubernetes and-
Ryan Donovan: Yeah. From VM to- yeah.
Dan Ciruli: You can. Well, IP-to-IP – the problem with IPs in 'Kubernetes land' is that IPs in 'Kubernetes land' tend to be ephemeral, right? Things can move. With a VM, when there's a new version of a piece of software, when that VM gets upgraded or patched or something, you tend to do that on the running machine. Your IP stays constant. In 'Kubernetes land', when there's a new container, you don't update your container; you spin up a new one and tear down the old one, right? By definition, that new one could land somewhere else and could have a different IP. And then you get into things like, 'okay, well, everything's gonna go through an egress, and we're just gonna whitelist the egress.' But then you're having to write, again, a different network policy. So, now you've got some network policy that's doing maybe your L2 or L3, but then you've got something else internal to the cluster doing your L7, saying, 'oh, is this particular thing allowed to talk here?' And it just gets complex.
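One hedged illustration of why you don't point a VM at pod IPs: a Kubernetes Service gives a group of pods a single stable address that survives pods being torn down and rescheduled. The label, port, and Service type below are illustrative, again assuming the `kubernetes` Python client:

```python
# Minimal sketch: publish pods behind one stable virtual IP so a VM never
# has to track ephemeral pod IPs. Names and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="billing"),
    spec=client.V1ServiceSpec(
        selector={"app": "billing"},  # matches the pods wherever they land
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
        type="LoadBalancer",  # a routable address the VM network can reach
    ),
)
client.CoreV1Api().create_namespaced_service("default", service)
```

Even then, as Dan notes, the Service only solves addressing; the policy question ('is this thing allowed to talk to that thing?') still spans two systems unless the networks share a substrate.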
Ryan Donovan: Right.
Dan Ciruli: I think, when implemented well – and I don't know if everyone has really clued into this – for most companies, running Kubernetes within VMs actually solves a bunch of this problem. Because when you're running within your VMs, even though something is running in Kubernetes, that Kubernetes is running in a VM, and you can use your VM networking policy. It sounds obvious, but in reality, that actually solves a big problem right there. Now, there's way more you can do, but it's interesting that it does solve one of the big problems. It also solves that silo problem, 'cause when you're running in VMs, well, by definition, you're running all kinds of things – everything in an enterprise is running in a VM, for the most part. Right? And so on that same hardware, on that same physical cluster, you can be running VM-based workloads side by side with Kubernetes-based workloads on the same physical nodes.
Ryan Donovan: Yeah. You know, when you have cloud computing, it's essentially running on VMs anyway, right? The hypervisor, right?
Dan Ciruli: Yeah. It's funny, because there's sometimes a debate in the Kubernetes community: is it better to run on bare metal? I don't know what percentage of the Kubernetes in the world runs in VMs, but given that, by far, most of it runs in the hyperscalers, in their managed Kubernetes, all of those are running in VMs, and there's a reason, right? Again, they get some of the benefits of networking. And certainly, they don't want to tie specific workloads to specific hardware, of course. Right? And then, you get access to some of the good, fun features in Kubernetes, like autoscaling your cluster, like auto-downscaling your cluster. Right? You can't do that with a hardware-based cluster. You can't autoscale it. You need to go buy and provision racks. You need more infrastructure. Right?
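The autoscaling point can be made concrete with a sketch of the signal a cluster autoscaler reacts to: pods the scheduler has marked unschedulable. This assumes the `kubernetes` Python client; `provision_vm()` is a hypothetical placeholder for whatever your virtualization layer exposes, not a real Nutanix or cloud API:

```python
# Minimal sketch of the cluster-autoscaler loop's core signal.
from kubernetes import client, config

def provision_vm():
    """Hypothetical hook: ask the hypervisor layer for one more worker VM."""
    raise NotImplementedError("wire this to your virtualization platform")

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending").items
unschedulable = [
    p for p in pods
    if any(c.reason == "Unschedulable" for c in (p.status.conditions or []))
]
if unschedulable:
    # On VMs, adding capacity is an API call; on bare metal, it's a purchase order.
    provision_vm()
```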
Ryan Donovan: It's a very manual process. Yeah.
Dan Ciruli: It's very manual, and it's very time-consuming. And then there are some use cases – individual workloads, individual clusters – where it might make sense to run Kubernetes on bare metal; but for most enterprise applications – and, like I said, more and more applications are just being written to be containerized–
Ryan Donovan: Right.
Dan Ciruli: It just makes sense to do it in a way that you have fungibility of hardware across all your workloads, et cetera.
Ryan Donovan: Yeah. So, if that is the obvious solution, like you say, why don't people just do that all the time?
Dan Ciruli: I think there's a few different reasons. One is that there is kind of a– maybe it's kind of Conway's Law in large organizations. And I don't know if Conway's Law is exactly the right way to describe it, but-
Ryan Donovan: The shipping your org chart, basically?
Dan Ciruli: You're shipping your org chart, right? You're building your org chart in your data center. And if I am the Kubernetes team and I am budgeted, like, 'hey, go buy some hardware and run Kubernetes' – first of all, I'm not the virtualization team. I don't care about a hypervisor. I won't think I need to run a hypervisor there. I might actually even have an anti-hypervisor bias, for no good reason. I think that's the first reason. And then, I think there are, you know, purists who think, 'well, this is just the right way to do it.' But for the most part, unless you have hardware that's dedicated to a particular workload – especially when it's specialized hardware, like GPU-based hardware, which is where I do see customers doing that, and doing it for a good reason – unless you're buying hardware that is somehow specific to that workload, the hypervisor, you know... there's a reason it's become just so endemic in data centers in the last 30 years.
Ryan Donovan: Endemic. That's a fun way to put it.
Dan Ciruli: And I don't mean that in a bad way. I might have entered Nutanix with an anti-hypervisor bias, but I certainly have been cured of it – you know, I see it used a lot, very, very successfully.
Ryan Donovan: So, you know, my understanding is a lot of the mainframe paradigms were designed sort of before most of the modern networking protocols, right? Is that right?
Dan Ciruli: Yes.
Ryan Donovan: Does that make it hard for a hypervisor VM to communicate with the mainframe VM?
Dan Ciruli: With the mainframe VM? Yeah. I mean, with mainframes, that's going to be harder, for sure. And the reason is that as virtualization was developing, we just kind of standardized on HTTP, and HTTP made it really easy to make these calls – and it just so happens that most of those mainframe systems fundamentally predate it. Right? My first job as a developer was interfacing with different hardware. And, literally, you were writing code that was putting individual bits and bytes on the wire. No joke. And so, unfortunately, that's how that stuff's written. You can't just use one of the common libraries that's built into every language. Every framework we use now has these things that make it really easy to call a process somewhere else. And the mainframe stuff is just older than that.
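The contrast Dan describes can be shown in a few lines. The hosts and the two-byte length-prefixed framing below are made up for illustration – they're not any real mainframe protocol:

```python
# Minimal sketch: hand-packed bytes on a socket vs. a standard HTTP call.
import socket
import struct
import urllib.request

# The old way: you define the framing yourself, byte by byte.
with socket.create_connection(("legacy-host.example", 9000)) as s:
    payload = b"ACCT0042"
    s.sendall(struct.pack(">H", len(payload)) + payload)  # 2-byte length prefix
    reply = s.recv(1024)

# The standardized way: every language ships an HTTP client.
with urllib.request.urlopen("http://modern-host.example/accounts/42") as r:
    body = r.read()
```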
Ryan Donovan: Yeah. You said you worked on gRPC, which I understand is a sort of more modern solution for server-to-server communication.
Dan Ciruli: Correct.
Ryan Donovan: Does that have any bearing on communicating with mainframes? Are you able to interface with a mainframe OS?
Dan Ciruli: Not a bit. In fact, gRPC is based on HTTP/2. So, it is a really modern implementation now. It's funny, 'cause when it came out, it did feel to people like a step into the past, because it's a remote procedure call – the RPC stands for 'remote procedure call,' which is an old technology. However, it's done over HTTP/2 and has some really cool semantics, including things like bidirectional streaming. By the way, you know, we're on a video and audio chat right now. There might be gRPC here, because, you know, you can have bidirectional streaming going on simultaneously over one connection. It's got some other really good benefits. One of the things is that it has a contract between the caller and the callee, which standard HTTP REST does not. Right? You hope that you're gonna understand that JSON that's returned, and you're gonna try to interpret it, but they can change that server, and you might not know. And so, gRPC is, in a way, one of my favorite open source projects that I've been a part of. And the reason is that when we open-sourced gRPC at Google, we didn't have a commercial implementation. There was no commercial gRPC product. It was just a protocol. It was just a protocol to make the web better. And it has had tremendous uptake. Every company I've been at since Google uses gRPC internally, because it's such a handy tool. It has become very successful. 'Standard' might be the wrong way to look at it, but it's become extremely common. And it didn't get that way because anyone was promoting it to sell a product – Kubernetes, everyone has a commercial implementation of; gRPC, nobody does. It's just useful, and, you know, 10 years later it's just out there getting more, and more, and more usage. So, it's kind of cool.
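A minimal sketch of the two properties Dan calls out – the contract and bidirectional streaming – assuming `grpcio`/`grpcio-tools` and stubs generated from a hypothetical `chat.proto` (the service and message names are illustrative):

```python
# The contract lives in a proto file, e.g.:
#
#   service Chat {
#     rpc Talk (stream Msg) returns (stream Msg);  // one connection, both directions
#   }
#   message Msg { string text = 1; }
#
# Generate stubs with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. chat.proto
import grpc
import chat_pb2
import chat_pb2_grpc

def outgoing():
    # The client streams requests while replies stream back on the same connection.
    for text in ("hello", "is this thing on?"):
        yield chat_pb2.Msg(text=text)

channel = grpc.insecure_channel("localhost:50051")
stub = chat_pb2_grpc.ChatStub(channel)  # the generated stub is the caller/callee contract
for reply in stub.Talk(outgoing()):
    print(reply.text)
```

Unlike ad hoc JSON over REST, if the server changes the `Msg` shape, the generated code on both sides changes with it.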
Ryan Donovan: Yeah, absolutely. I mean, I heard it brought up as one of the sort of fundamental pieces of a multi-agent system the other day – it's finding new applications.
Dan Ciruli: Shout out to my old friend Louis Ryan, who's now at solo.io, one of the inventors of GRPC. I was very lucky to work with Louis all seven years that I was at Google.
Ryan Donovan: Yeah. Nice. So, you know, you talked about putting bits over the wire for communication with mainframes. Is there any standard sort of middleware trying to manage the communication there? I imagine somebody must have come up with something.
Dan Ciruli: Yeah. So, nowadays middleware is typically a part of that solution. And I think that, as I said, I was at this bank and they said, 'you know, we don't have plans to decommission mainframes.' Obviously, they don't write anything that way anymore. My guess is that they've got a standard way – 'okay, here's how we take advantage of those things. We're gonna let 'em sit there. At some point, applications will get decommissioned.' Right? And then eventually, maybe you can decommission that piece of middleware that's sitting in between. But you definitely want, for those developers who are writing stuff today, something that feels to them like a standard way. You wouldn't want people doing what I was doing in the early nineties, which is putting bits on the wire, for sure.
Ryan Donovan: Yeah. Do you have any sort of take on what it would take to actually decommission mainframes and move over to something modern there?
Dan Ciruli: One of the interesting cases for AI – and I saw companies doing this before the LLM boom of the last two and a half years, but it's certainly really big now – is taking those applications for which you have source code, which might be Pascal or might be COBOL, you know, languages that almost no one writes anymore, and putting it through an LLM to generate a modern version. This is something I think we probably will see some success in. You know, first use that original code to generate a set of test cases, so you can effectively prove the new version behaves the same when you do the translation – and then do it. So, I actually think this is a case where we will see LLMs put to work on something that will actually be useful. I haven't heard of anyone actually say, 'hey, we just decommissioned our last mainframe' yet, but my guess is this is one of the areas where it will be successful. We will see it. Everyone knows AI is gonna change everything, but we don't quite know how. This is one where I have a good degree of optimism that we will see some companies actually be able to do that.
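A hedged sketch of that 'tests first, then translate' workflow: capture the legacy program's behavior as characterization tests, then hold the LLM-generated rewrite to them. The binaries, file names, and inputs are all hypothetical placeholders:

```python
# Minimal sketch: prove the modern version matches the legacy one, case by case.
import subprocess

def run_legacy(input_text: str) -> str:
    """Ground truth: whatever the legacy program actually does."""
    return subprocess.run(
        ["./legacy_app"], input=input_text, capture_output=True, text=True
    ).stdout

def run_modern(input_text: str) -> str:
    """Same interface, LLM-translated implementation."""
    return subprocess.run(
        ["python", "modern_app.py"], input=input_text, capture_output=True, text=True
    ).stdout

# Characterization tests: recorded inputs become the contract the rewrite must meet.
for case in ["ACCT 42", "ACCT 7 OVERDRAWN", ""]:
    assert run_modern(case) == run_legacy(case), f"behavior diverged on {case!r}"
print("modern version matches legacy behavior on all recorded cases")
```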
Ryan Donovan: Yeah. That does seem like one of the safer uses of LLMs on code at large scale – the sort of strong guardrails, a lot of scaffolding. AWS did one that saved something like 4,500 years of developer work, and it was a Java upgrade.
Dan Ciruli: Really? Yeah, yeah. That starts to add up, doesn't it?
Ryan Donovan: Yeah.
Dan Ciruli: Especially at Amazon engineer prices.
Ryan Donovan: Yeah, that's right. So, you work with a lot of organizations running a mix of legacy and modern. What's the pathway to a cloud native system for a legacy organization?
Dan Ciruli: Well, one of the things that we are trying to solve is – you know, I said before, a huge percentage of the Kubernetes that runs in the world runs on hyperscalers, and the reason is that the hyperscalers made it really easy to get a Kubernetes cluster, effectively at a button click, or an API call if you're automating. Right? And to get it and have it be secure, and have upgrades be something you don't have to worry about. You don't even think about it; it just happens when you're in the cloud. The reason Kubernetes hasn't taken off as much on-prem is that that wasn't true. On-prem, you had to do a lot of care and feeding. You were in charge of that Kubernetes cluster. So, one thing that we're trying to do – and there are other vendors trying to do this too – is make it just as easy to get a Kubernetes cluster on-prem as it is in the cloud. And I think that's a big hurdle, because right now, there are a lot of enterprises where a developer says, 'we wanna write this in Kubernetes,' and then someone says, 'okay, well then you do it up there,' in the cloud, right? Where economically, it might not make sense. The data that you need might not be up there. But that's where the Kubernetes is. So, I think that one of the big hurdles is making it just as easy to get a Kubernetes cluster on-prem. By the way – again, another benefit of running this in VMs is that you can do that. You can make it so, poof, no more buying hardware, no more anything. You just have a Kubernetes cluster. And it isn't just getting it; it's the care and feeding of it, right? Keeping it up to date, making sure that it's secure, making sure that it's patched properly. So, that's a big hurdle. After that, you know, it comes down to developer experience and the other developer tools that were available as a service in the cloud and that you need to have on-prem – a container registry, right? In the cloud, it's just sitting there; I don't have to do anything about it. Making sure that you've got a container registry, making sure you've got a CD pipeline that works. Ultimately, what you want to get to is, effectively, what Google had in the early 2000s, and lots of big companies have now, which is: a developer checks in code, and then magic happens, right? Developer checks in code, testing happens, deployment happens, and, if the right flags are set and the right approvals are in, a production deployment happens. All those pieces exist now; all those pieces can exist on-prem, too. And one of the things we're trying to do is make it easy for people to get that going on-prem, so that they can move just as quickly on an application that has to live on-prem – for security reasons, for legal reasons, for data reasons, for latency reasons, whatever the reason. We want it to be just as good an experience as it is in the cloud.
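The 'developer checks in code and magic happens' flow reduces, at its core, to a few steps a pipeline runs on every commit. A minimal sketch, assuming `docker` and `kubectl` are on the path; the registry host, image name, and deployment name are hypothetical:

```python
# Minimal sketch of a build-push-deploy pipeline step.
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)  # fail the pipeline on any non-zero exit

IMAGE = "registry.internal.example/team/app:abc123"  # tag derived from the commit

sh("docker", "build", "-t", IMAGE, ".")        # build the container image
sh("docker", "push", IMAGE)                    # push to the on-prem registry
sh("kubectl", "set", "image",                  # roll out the new image
   "deployment/app", f"app={IMAGE}")
```

In practice this lives in a CI system with tests and approval gates in between; the point is that nothing here requires a hyperscaler once the registry and cluster exist on-prem.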
Ryan Donovan: Do you see a movement of organizations moving stuff back on-prem from the cloud?
Dan Ciruli: Yeah, I do. And this is a matter of debate, 'cause I also have heard analysts say, 'no, there's no movement.' But there absolutely is. I routinely will talk to an enterprise who is like, 'hey, we're five years into the cloud, and now we understand the economics, and we know what it's good for, and we know what we can do cheaper,' right? Interestingly, I talked to a company that was born in the cloud, that was Kubernetes native – so, a company that's less than 10 years old that is spending, I wanna say, in the tens of millions of dollars a month in the cloud. And they told me their plan – I talked to 'em earlier this year – is to move 80% of that on-prem. And that's an interesting one, because that's not a 'hey, we went to the cloud, it was more expensive, and we moved back' story. No, they were born in the cloud. And they've done the economics and realized that renting is more expensive than buying for a workload where you're gonna use that machine all the time – another thing that VMs can really help you with. And so yeah, you can probably run that cheaper at scale on premises. The cloud is excellent. There are some services that you can only get in the cloud. The ability to scale flexibly, to get resources on demand – it's incredible what it's done. But you want to get companies, enterprises, in a position where they can make really good, intelligent decisions about why they wanna run something and where they wanna run it, and then they can run it there – not, 'oh, which tools are available where?' Ideally, you get away from that.
Ryan Donovan: Right. If you have that baseline traffic, you know what you're gonna get. You put that on-prem, right?
Dan Ciruli: Right. Yeah. Yeah. And we have customers who operate just like that. All their ephemeral clusters are in the cloud. They do the development up there, but when they run production backends, they run 'em on premises.
Ryan Donovan: So, what is the sort of profile of the software, the thing that runs best in a cloud?
Dan Ciruli: I think it's anything where you need access to a service that's very hard to duplicate on-prem. You know, my favorite example is BigQuery. Google has immense resources behind BigQuery, and you can do incredible data stuff that might require, for short periods, access to huge numbers of servers. Right? That's fantastic. Also anything where you will have a dramatic change in volume, which might even be daily, right? You might be the kind of company that's only operating in one region of the world, and you have a daytime-type operation. If that's true, then that's the kind of thing that you can scale down every night. Or, you know – when I was at Google, the Pokemon Go game came out, right?
Ryan Donovan: Oh, right, yeah.
Dan Ciruli: And it got, you know, a hundred million users within three days or something. There was no way they were gonna put the capital into pre-provisioning a warehouse full of machines, right? Just in case their game became the most popular game ever, right? By building on a cloud – they were built on GKE – they were able to just keep turning up new Kubernetes clusters. It met an incredible demand. So, that kind of stuff. Like I said, there are a few things. The hyperscalers' global networks are fantastic. Cloud is not going away, by the way.
Ryan Donovan: For sure.
Dan Ciruli: Cloud is amazing. What it has done for flexibility in how you think about running your business is fantastic. I just think you want people to be able to run workloads in the right place.
Ryan Donovan: Right. Use the right tool for the right job. I know we haven't talked about Nutanix too much, but what is unique about what you do there?
Dan Ciruli: I think the thing that really differentiates Nutanix is that Nutanix, first of all, started as a storage company – some Google engineers who said, 'hey, the way Google stores data on commodity hardware is really cool.' You know, Google gets incredible performance and incredible resilience out of common Linux servers, which was, again, super groundbreaking in the early 2000s. So, they started Nutanix and built an OS which is, effectively, our storage utility. It is distributed storage. And distributed storage is what built Google, right? Google had distributed storage and then a container management layer on top, right? And so, here at Nutanix, we have really all the underpinnings you need for both historical and modern applications, because we have this great distributed storage – perfect for containerized applications, perfect for ML- and AI-based applications; that stuff was also developed on distributed storage – and we have a real enterprise hypervisor, something that is really good at running those workloads that were developed between 1990 and 2020. And so, the thing that really differentiates Nutanix is that most enterprises think about one vendor for virtualization, one vendor for storage, and a different vendor for container management. And Nutanix – while we really play nice, we can interoperate at any of those tiers, we can use anybody's storage, anybody's hypervisor, anybody's container management – we do offer them all in one package. And so, we do have customers who say, 'yeah, I'll take that, because it's got everything I need.' And there's no other company that can say, 'yeah, we really do all those things well.'
Ryan Donovan: I've talked to some other folks doing large-scale distributed storage, and talking about the sort of sharding and replication issues. What, in your mind, are the hardest things to deal with in distributed storage?
Dan Ciruli: Well, I mean, I think that's at least its own 30-minute session.
Ryan Donovan: So what's the TL;DR, right?
Dan Ciruli: The TL;DR is: you have to build a distributed database, right? Ultimately, you need a distributed database, and you can't factor that in later. Most enterprises today are effectively using what we call a 'three-tier architecture,' where there's storage running in some places, networking running on some machines, and virtualization on another. And it's virtually impossible to retrofit that and say, 'okay, now how do we make this into distributed storage?' And so, Nutanix was built on a distributed database. I have a colleague who joined from VMware – Kostadis is his name – and he's been publishing a fantastic series of blog posts and LinkedIn posts about how joining Nutanix really opened his eyes. He realized, 'hey, this was a decade ahead of what we had seen' – Nutanix already had what he wanted to build in the next decade. So yeah, you have to be built from the ground up that way. And like I said, Nutanix was started by some former Googlers who effectively did that.
Ryan Donovan: All right folks, it's that time of the show where we shout out somebody who came onto Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. Today – I know we're past spooky season, but we're shouting out the winner of a Necromancer badge. Ooooh. So, congrats to David Ferenczy Rogožan for answering 'Where does adb shell mkdir create directories.' They answered a question more than 60 days after it was asked – they brought it back from the dead. Woohoohoo.
Dan Ciruli: What a hero. Congrats to David.
Ryan Donovan: What a hero, right? I have been Ryan Donovan. I edit the blog, host the podcast here at Stack Overflow. If you have questions, concerns, topics to cover, et cetera, et cetera, email me at podcast@stackoverflow.com and if you want to reach out to me directly, you can find me on LinkedIn.
Dan Ciruli: And I am Dan Ciruli. If you want to hear about our products, of course, that's at nutanix.com, and if you want to hear my ramblings, probably Bluesky. I'm danciruli.cloud on Bluesky, and I write a little bit on LinkedIn, also.
Ryan Donovan: Thank you for listening everyone, and we'll talk to you next time.
