
Seizing the means of messenger production

Ryan sits down with Galen Wolfe-Pauly, CEO of Tlon, to chat about calm computing and how humans can take back ownership of their data and digital world.

Credit: Alexandra Francis

They discuss the early internet’s evolution from individual creativity into today’s internet that turns users into products, Galen’s takeaways from building a new network architecture that prioritizes user control, and why messenger applications are ripe for decentralization.

Tlon is releasing a decentralized messenger app that gives you ownership of your data, built on Urbit, a complete, wholly encapsulated system that allows you to run a personal server in the cloud. Use the code STACK to skip the waitlist for the Tlon Messenger app.

Connect with Galen on LinkedIn.

Shoutout to user mkobuolys for winning a Populist badge for their answer to Set default transition for go_router in Flutter.

TRANSCRIPT

[Intro Music]

Ryan Donovan: Hello everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ryan Donovan, and today we are talking about calm computing and owning your own applications. Today's internet is making us all little products out there, harvesting our data. And my guest today, Galen Wolfe-Pauly, the CEO of Tlon, is gonna talk about how they are making your applications your own. So, welcome to the show, Galen.

Galen Wolfe-Pauly: Yeah, thanks for having me.

Ryan Donovan: Before we get into Messenger and all that, how did you get into software and technology?

Galen Wolfe-Pauly: That could be a whole podcast of its own, probably.

Ryan Donovan: TLDR, buddy.

Galen Wolfe-Pauly: I grew up near Silicon Valley, so it was a part of my childhood, just building things on the early internet, but I actually never really thought that I would work in this world. And the short version would be I went and studied architecture, 'cause I was very interested in the discipline that had the longest history of making things and just how things are made, generally. And during the course of being in school, I started to realize, I think maybe buildings and the way buildings are made is already figured out, and the sort of frontier of how things are made and how they influence how we think is really in the digital world. And because I just had such experience building software, it really was like I stumbled into it. It was also just somewhat natural, and I think I'm also just stubborn enough to not wanna work for anyone else. So, then I was always starting my own things and working closely with people starting things. Then, yeah, I guess that's also pretty related to why we build the things we build. I felt like, wait, not only do I wanna build my own things, but I want to make things for people that they actually own and control, and can do whatever they want with. I am very optimistic about what people can do when you give them their own tools and let them do whatever they want. But that also maybe is 'cause that's what I like doing.

Ryan Donovan: That's the spirit of the early internet. Remember the early days? It was like everybody was building their own things on there. Everybody had their own website, GeoCities, or LiveJournal pages, or whatever, and then something shifted, right?

Galen Wolfe-Pauly: I remember that very well. I definitely spent a lot of time hosting things from my computer in my bedroom as a teenager. What I was aware of in that shift, where all of a sudden people were connected all the time through services, was the fact that what those services provided in convenience was also very much a technical innovation that they were running software in the cloud for you, and that just provided a much better user experience than figuring out how to configure it and keep it online at your house. I could see that, I think always, as a pretty significant technical problem. If you want people to have that freedom and flexibility of they have their own computer, you gotta figure out how to get them a computer that can run in a data center forever, which is a hard problem.

Ryan Donovan: Now you're working on Messenger. To start, you talked about architecture being a solved problem. Is messaging not a solved problem?

Galen Wolfe-Pauly: The thing that's not a solved problem is how we do personal computing in the cloud. A lot of computing in the cloud's not very personal, right? It's mostly service-oriented, or it's for a big company to run services for a lot of people. So, the first thing that I worked on for a long time was really, yeah, could you build a better system for individuals to be able to store their own data, run their own applications in a totally sealed little virtual machine, totally portable. And so, that project was called Urbit. It has a wild and complicated, strange history. It's [an] unhinged open-source project that ultimately did produce something that worked pretty well, and once it started to work, it was starting to satisfy my initial interest, which is: could you actually just build a user-facing application that an ordinary person can use that they actually own and control? And what I found we were using this system for, as really just contributors, people having fun with this thing that we were working on, was just, yeah, messaging and collaboration. And I don't think that's an accident, because if you think back to the early internet, what were we doing on the early internet? You're just talking to each other, and it is quite simplistic as a use case, but it might also just be the first use case of a new technology. In this case, I think messaging is definitely, in many ways, a solved problem, but for me, and this may be that I just have a kind of heightened sensitivity to this stuff, for people that I really care about, or when my collaboration with them is really important, it feels really weird to use a large company's service for that. I care a lot about the history of the conversation. I care that it can't disappear. I actually probably wanna modify the system a little bit. I want to feel like I have control over it. As you might imagine, people attracted to this project of, 'hey, how do we own our computing?'
They also want to talk to each other on that platform 'cause it just feels natural. So, that's where the messenger as a product emerged. And I pushed that really far. I felt if it's gonna be a messenger, then you gotta be able to just install it on your phone. I need to be able to send it to my, maybe not my mom, but my cousin – let's not solve the age issue there, which could be big, but just an ordinary person of my age-ish or younger, can they just get set up and go? And that is, in the last say, three to six months, the thing actually is passable in that form factor, which is– there are a lot of messengers in the world, so it sets up an interesting dynamic, for sure.

Ryan Donovan: My understanding of the history of messengers, and I don't exactly know how ICQ worked, but my understanding is that there's always some sort of server to connect to, to pass messages through, to make sure you can find people. How do you connect people with a sort of locally installed messenger/server?

Galen Wolfe-Pauly: I'll do the most simplistic technical explanation, and then we can dig as deep as you want. So, the system that we built is– think of it as one little virtual machine per person, a private key that you seed this virtual machine with, and the public key of that is your network address. The way we derive those public keys, they're very short. They look like synthetic usernames. So, each of those machines is a node on a network and can pass packets between each other; they can also share programs, they can run little programs, and stuff like that. So, the simplest version we came up with was we can run a hosting service of those machines so that when you sign up on your phone, we'll spin up a machine on your behalf, and then the client just connects to that little VM. But if your friend is more suspicious of us, or just privacy-conscious, generally, they could self-host, and you would see no degradation of the user experience, right? It doesn't matter. It's as if you and your friend are running WhatsApp, except one of you has self-hosted the same thing. And then, the other difference would be if you could someday just say, 'you know what? I don't know if I trust Meta anymore, I'm gonna download my WhatsApp and run it myself.' A lot of people would say, 'but you're still running it for me.' I'm like, yeah, I know. But one way to think about it would be, we haven't actually prototyped this. We've never actually finished it, but I think someday we definitely will. We could let you actually stream the event log of that virtual machine and keep it locally always. So, we keep a copy for you in the cloud. The cloud's really convenient. But if you– the example I always think of is, remember when the Telegram CEO got arrested, and everyone was like, 'oh, Telegram is already probably compromised'? People all of a sudden realized that's a problem, but you can't leave. So, in our case, the difference would be, you could actually unilaterally exit.
You could just cycle your keys. You have your whole event log locally, and the whole network could become much more decentralized overnight. And that is, I think, a very big difference [from] 'I connect to a service.' I'm never gonna run Signal locally. I'm never gonna run WhatsApp locally. Maybe I get the data out, but I can't run it. And in this case, no, you can actually run it yourself forever. Very different.
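The unilateral-exit idea above can be sketched as event sourcing: if a node's state is nothing but a fold over an ordered event log, anyone holding a copy of that log can reconstruct the node anywhere, with no cooperation from the old host. This is a toy illustration of that general pattern, not Tlon's actual implementation; the event shapes are made up.

```python
# Event-sourcing sketch: a node's state is a pure function of its event log.
# Stream the log down and keep it locally, and you can rebuild the node anywhere.

def apply_event(state, event):
    """Pure transition function: (state, event) -> new state."""
    kind = event["kind"]
    if kind == "create-chat":
        state.setdefault(event["chat"], [])
    elif kind == "message":
        state.setdefault(event["chat"], []).append(event["text"])
    return state

def replay(log):
    """Reconstruct full node state from the ordered event log."""
    state = {}
    for event in log:
        state = apply_event(state, event)
    return state

# The hosted node's log, streamed down and kept locally:
log = [
    {"kind": "create-chat", "chat": "friends"},
    {"kind": "message", "chat": "friends", "text": "hello"},
    {"kind": "message", "chat": "friends", "text": "still here"},
]

# Unilateral exit: replaying the same log on new infrastructure yields
# an identical state, so the old host is never a permanent intermediary.
state_on_old_host = replay(log)
state_after_exit = replay(log)
assert state_on_old_host == state_after_exit
```

The key design choice is that the transition function is deterministic, so the log, not the running process, is the source of truth.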

Ryan Donovan: Do you want it to be fully peer-to-peer? I think even with Mastodon or torrenting, there's still a central announce server or a federator, right? Is it possible to just have peer-to-peer clients working on your system?

Galen Wolfe-Pauly: You can think of the peer discovery process as being a little bit like DNS. So, there are root nodes. There are 256 of them, though. So, we aim at being meaningfully decentralized, but not totally flat. Part of that is actually because of Sybil attacks: totally flat networks like BitTorrent, things that generally use DHTs for peer discovery, are not very Sybil-resistant, which really matters for anything that's like a social network or where you might exchange value. You need a net new peer to have a non-zero reputation. So, those addresses on our network are actually finite. We'll give you one, but anything that's finite eventually will probably have some value. And the thinking is that buying a username is a good indication of, you've got a little skin in the game. So, there's a little bit of trust on this network by default. That's in parallel to how peer discovery works, but they're related, right? You think of it like DNS, basically. It's like DNS, where all of that address space is property, and the individual owners of even bigger /8 or /16 blocks of addresses, they could be totally independent. They could even be antagonistic toward one another. All they have to do is share packets.
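The hierarchy Galen describes, 256 root nodes at the top of a finite address space, can be modeled roughly like Urbit's galaxy/star/planet scheme, where a node's sponsor for peer discovery is its parent tier. This is a simplified illustrative model, not the exact protocol; the tier sizes follow Urbit's published address layout, but the `sponsor` arithmetic here is a sketch.

```python
# Simplified sketch of a finite, hierarchical address space with
# DNS-like root nodes (modeled on Urbit's galaxy/star/planet tiers;
# illustrative, not the exact protocol).

GALAXY_MAX = 2**8      # 256 root nodes, like DNS roots
STAR_MAX = 2**16       # infrastructure-node tier
PLANET_MAX = 2**32     # finite pool of individual addresses

def tier(addr):
    if addr < GALAXY_MAX:
        return "galaxy"   # root node
    if addr < STAR_MAX:
        return "star"     # infrastructure node
    if addr < PLANET_MAX:
        return "planet"   # an individual's address
    raise ValueError("outside the finite address space")

def sponsor(addr):
    """Whom a node asks for peer discovery: its parent in the hierarchy."""
    if addr < GALAXY_MAX:
        return addr                 # galaxies are their own roots
    if addr < STAR_MAX:
        return addr % GALAXY_MAX    # a star's galaxy
    return addr % STAR_MAX          # a planet's star

addr = 65792                        # some planet-tier address
assert tier(addr) == "planet"
assert tier(sponsor(addr)) == "star"
assert tier(sponsor(sponsor(addr))) == "galaxy"
```

Because every address resolves up to one of 256 roots, discovery is meaningfully decentralized without being a flat, Sybil-prone DHT.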

Ryan Donovan: Whoever owns that virtual land can charge and let in whoever they want, essentially.

Galen Wolfe-Pauly: Exactly, and so you can have very different authentication rules. So, on the internet itself, IP addresses are hardly authenticated, right? Because [you] mostly rent them. But then, if you look at the meaningful authentication tokens, your email address, your username, there are all these different weird, creepy ways that they're authenticated: your driver's license, your passport, your phone number, your address. I think there are probably a lot more ways that could be done. The system is designed to say, 'hey, you know what? Yeah, go figure it out. It's fine. We don't care how you sell your addresses.' But because they're finite, they could obviously be blacklisted. So, if you buy a big block of addresses and you sell them all to spammers, then you're gonna black hole that value. So, it's a pretty simple mechanism.
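The blacklisting mechanism described above works because reputation can attach to whole blocks of a finite address space: sell your block to spammers and peers can black-hole the entire block. Here is a minimal sketch of that idea; the block size and policy are illustrative, not the actual network's rules.

```python
# Sketch: with a finite, hierarchical address space, a block that sells
# addresses to spammers can be black-holed wholesale, destroying its value.
# (Illustrative model, not the real policy layer.)

BLOCK_SIZE = 2**16  # treat each /16-style block as one reputational unit

blacklisted_blocks = set()

def block_of(addr):
    return addr // BLOCK_SIZE

def blacklist_block(addr):
    """Ban every address sharing this address's block."""
    blacklisted_blocks.add(block_of(addr))

def accepts(addr):
    """Would a well-behaved peer accept traffic from this address?"""
    return block_of(addr) not in blacklisted_blocks

spammer = 5 * BLOCK_SIZE + 123      # an address in block 5
neighbor = 5 * BLOCK_SIZE + 9000    # same block, same fate
honest = 7 * BLOCK_SIZE + 42        # different block, unaffected

blacklist_block(spammer)
assert not accepts(spammer)
assert not accepts(neighbor)        # the whole block loses its value
assert accepts(honest)
```

This is why selling to spammers is self-defeating for a block owner: the punishment lands on the property itself.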

Ryan Donovan: To go with the DNS metaphor, DNS has been the culprit in a lot of the sort of outages we've seen. How do you plan to enable permanent discovery when it's distributed like that?

Galen Wolfe-Pauly: That is a good question. When you're building a new system like this from scratch, one of the most important things is to not get ahead of yourself, and to think, what's my limiting factor in terms of the most important next thing to work on? And then also, am I prohibiting myself from solving a problem later? In terms of the general category of denial of service, a network like this is a little bit easier to control because you require all traffic to be authenticated by default. And it would take a large network of nodes on our network to go after a particular provider, where those providers are just doing peer discovery, right? They're not actually routing data or routing traffic, so what they do is pretty cheap; it's not super resource-intensive. That's solved architecturally, in the sense that it could be solved at the software level. But in today's version of the system, are we doing strict DDoS mitigation, basically backoff of request times and stuff? No, but it's pretty easy to build.
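The backoff mitigation Galen mentions is a generic pattern, not Tlon's code, but it can be sketched in a few lines: a discovery node tracks per-peer request counts and forces exponentially growing wait times on peers that hammer it, while leaving well-behaved peers unaffected.

```python
# Sketch of per-peer exponential backoff for a peer-discovery service.
# After a small free burst, each extra request doubles the required wait.

class BackoffLimiter:
    def __init__(self, base_delay=1.0, burst=3):
        self.base_delay = base_delay
        self.burst = burst      # free requests before backoff starts
        self.counts = {}        # peer -> requests seen so far

    def next_delay(self, peer):
        """Seconds the peer must wait before its next request is served."""
        n = self.counts.get(peer, 0)
        self.counts[peer] = n + 1
        if n < self.burst:
            return 0.0
        # Exponential backoff: 1s, 2s, 4s, 8s, ... after the burst.
        return self.base_delay * 2 ** (n - self.burst)

limiter = BackoffLimiter()
delays = [limiter.next_delay("peer-a") for _ in range(6)]
assert delays == [0.0, 0.0, 0.0, 1.0, 2.0, 4.0]
# A different peer is unaffected by peer-a's backoff:
assert limiter.next_delay("peer-b") == 0.0
```

Because all traffic on the network is authenticated, the "peer" key here is a real identity rather than a spoofable IP, which is what makes this kind of mitigation bite.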

Ryan Donovan: It sounds like you're not solving problems you don't have yet.

Galen Wolfe-Pauly: When you're crazy enough to try and build a system this big, you have to limit the scope.

Ryan Donovan: That's right. Pick your shots, pick your shots. So, with the sort of real estate view, does there have to be a central authority for giving out blocks of addresses?

Galen Wolfe-Pauly: No. The address space is quite distributed. It's all cryptographic property, and watching the oscillations of crypto has just been so funny, because I'm probably more interested in cryptography than in crypto as it's now understood. But there are many thousands of owners of this stuff. It's pretty distributed, in part because of crypto as a fad. But I think we're such a fringe, strange project that actually most of those people understand what they're getting into and are excited to own a piece of a network. If you own a big block of it, you can sell smaller chunks of it, basically. And that still goes on.

Ryan Donovan: I joked at the beginning that messaging was a solved problem, but I know that there are some thorny issues. We had a chief product officer that worked at AOL in the early days, and when they introduced the three dots to indicate that someone was typing, it crashed their servers. So, I wonder is the peer-to-peer– does that mitigate that? Do you have other ways of working around the feedback loop that the 'three dot someone is typing' thing has, or just not do that?

Galen Wolfe-Pauly: If you think about it technically, so if you and I are sending packets back and forth, or we're computing together, I'm sending commands to you, whatever, it's just you and me, man. It's just two little virtual machines in a cluster somewhere. It's just not a lot of compute. You can think of it like– in the AOL case, I don't know how much you wanna get into the distributed systems problem, but it's a totally different problem, right? I have basically one program I'm trying to horizontally scale, so I have to figure out how to parcel it out, run it on different machines, and elastically scale it up and down, which obviously, in the AOL days, no one figured out how to do that. And we're much better at it now, but we do that by default. We basically just say every user is its own set of resources. And actually, if you think about even your own interaction with– the scale of your own individual computer is just not that big, actually, even if you're a pretty active user. So, those kinds of problems we generally don't really run into because the whole thing is pretty horizontally sharded by default.

Ryan Donovan: Yeah, I'd imagine that's how that problem happened. Basically, you're sending a binary bit, are they typing or not, but also a whole pile of metadata around it. And you don't have the metadata.

Galen Wolfe-Pauly: Right. The way I would imagine it is that you have a million people doing that all at once. You have a million of these going to a single process, and then you gotta figure out how to load balance that, or I don't know, do something. Whereas in our case, it's like you have a million people, but actually, in every case, it's probably one-to-one, or it's small. N is like Dunbar-number size, right? It's like you're typing to a group, or something. But it's funny, I haven't thought about this stuff in a while, 'cause we got to the point where I think we're pretty comfortable with thousands of connections, both concurrent numbers of people in a group together and being able to synchronize data across all of those nodes. And for the current sort of use case, which is, can you get a community together to stay connected basically forever? Mr. Beast is not a community, so that's not exactly an important use case for us.

Ryan Donovan: But we have heard use cases of certain protest groups hitting the limit on a WhatsApp group or a Signal chat, and those chat groups being spontaneously generated daily.

Galen Wolfe-Pauly: Oh yeah. I remember looking at this too. I can't remember now. These are things we thought about years ago, but do you remember the size in those cases? This is a constraint of end-to-end encryption, right? So, isn't this a problem with the ratchet? It's the Signal ratchet, which is used in both those cases, and in almost all the cases now. I think there's something about the rate at which– because if you have n members in a group, they do some crazy stuff with the signatures such that a message has to basically be encrypted for all n members, right? With n keys. And so, there's actually a constraint on the compute. When I send a message to, whatever it might be, a thousand people, that means I have to do a thousand signing processes. And it probably is an issue. It's similar to the AOL issue. They're just like, 'look, our infrastructure can only do that so many times.' I may be totally wrong about that, but that's my memory of the end-to-end constraint. It's a different constraint than what we have.
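The cost Galen is recalling can be put in back-of-the-envelope terms: with naive pairwise end-to-end sessions, one message to a group of n costs n - 1 separate encryptions on the sender's device, which is why real protocols (Signal's Sender Keys among them) amortize the per-member work. This is a simplified counting model, not any protocol's actual implementation.

```python
# Back-of-the-envelope model of group-messaging encryption cost.
# Pairwise sessions: every message is re-encrypted per recipient.
# Sender keys: per-member work is paid once, then one encryption per message.

def pairwise_encryptions(group_size, messages_sent):
    """Naive pairwise model: each message encrypted once per other member."""
    return messages_sent * (group_size - 1)

def sender_key_encryptions(group_size, messages_sent):
    """Sender-key model: distribute the key once to the other members,
    then one symmetric encryption per message."""
    return (group_size - 1) + messages_sent

# A busy 1,000-member group where one member sends 50 messages:
assert pairwise_encryptions(1000, 50) == 49_950
assert sender_key_encryptions(1000, 50) == 1_049
```

The gap grows linearly with group size times message volume, which is exactly the kind of ceiling protest-scale groups run into.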

Ryan Donovan: It goes with the sort of central routing problem. So, you talked about the individual private keys – is it just one-to-one encryption?

Galen Wolfe-Pauly: Yeah. We do not actually do end-to-end right now because the topology is so different. So, the app on my phone speaks TLS to my node, and then my node runs its own secure networking protocol with every other node. Certainly, your host is a liability, but if you host yourself, it's a different model, if that makes sense? So, anyway, yes, it's much more about how many concurrent connections you could have between nodes than how many connections between client and server, because everyone is their own server. It's a different model.

Ryan Donovan: You mentioned that this was the outgrowth of a crazy open source VM project. The Urbit, you said?

Galen Wolfe-Pauly: Yes.

Ryan Donovan: Is that the sort of foundation of the messaging, or the messenger is like the proof point of Urbit?

Galen Wolfe-Pauly: Yeah, we run an Urbit node for every individual. We build a lot of custom stuff on top of that. I helped build, or was sort of very involved in, Urbit as a platform project. I think there was some point, and it might actually have been around some of the stuff you're asking me about, where I could see that the core components of the system were stable enough and reliable enough that you could really build an application on top of it, like a competitive, everyday app that anyone could use. So, Tlon builds everything above the sort of core components of Urbit as a system, and that's pretty non-trivial. That includes a lot of the actual messaging and sort of socialization features; that's stuff we built on top of Urbit as almost like a protocol. So, Urbit is still out there and is pretty bare bones at the start. We do use that as a sort of foundation for what we build, and then we build quite a lot of stuff on top of it, all of which is also open source, but it's built by the company to be this messaging product and not necessarily just to be a protocol.

Ryan Donovan: I've talked to folks who have projects that assemble all this open-source software into building your own cloud compute locally. What's the difference between those projects and the Urbit project?

Galen Wolfe-Pauly: If I'm thinking of the things you're thinking of, which I may not be, the sort of whole ethos of Urbit is basically: look, everyone is trying to solve the problem of 'how do I host n people?' Urbit's trying to solve the problem of 'how do I host one person and then connect them to n people?' And so, they're just totally different ways of thinking at the outset. And I suppose one of the kind of axioms of Urbit is: you have Unix and the internet. That's great. Everything above that is basically built for the centralized model. So, let's just take Unix and the internet and then build a system that is purpose-built for a one-to-one relationship between a person and their computer, VM, whatever, node. Architecturally, Urbit is very different. It's a single event log. It's like a single transactional system, where every file system event, keyboard event in the terminal, network packet, HTTP event, those are all transactional updates to basically an event-log database, and it has its own concept of core system components. It's just a totally different system, if that makes sense. So, there are things like, I'm thinking of ownCloud, that world of– there was Sandstorm for a while that was pretty interesting, actually.

Ryan Donovan: I talked to somebody who had a project, Cozy Stack, which I think is similar.

Galen Wolfe-Pauly: At the time we were working on Urbit, there were other things that were more about, how do I self-host similar things to what I get from the Google Suite, but maybe for my own business, but I could self-host it? Which, some of that stuff got really good, and it is a nice model, but it's kind of a different thing, technically.

Ryan Donovan: You're talking, [and] it almost sounds like the thing you have an issue with is the client-server model.

Galen Wolfe-Pauly: Yeah. I think there's no way that you get the– if a computer is a tool, and a networked computer is an even more powerful tool through which we can, in a way, do anything, right? A computer is a tool that builds itself. And if you believe, as I do, that it's something you should pair with an individual, as one-to-one as possible, that you wanna give people the ability to do truly whatever they want beyond the imagination of the initial creator, for as long as they want, however they want, then that model is simply not gonna work, because whoever has to run that server is a permanent intermediary. You just gotta get rid of that person forever. It doesn't make any sense, and it's no judgment. That's just not what I want, and I don't think that's what anyone actually wants. It's just that no one wanted to solve that problem because there was a lot of money to be made not solving it. That's great, but I don't think it's actually in the interest of humanity at large. I think if you look long term, historically, and this is the bias of being an architect, right? Architecture's a discipline that's existed for thousands of years. People literally called themselves architects thousands of years ago, but people didn't call themselves computer scientists thousands of years ago. From my perspective, this stuff barely exists. Having 50, 70, whatever you wanna call it, years of history, you may as well do it over, man. We haven't figured this out. So yeah, I have a slightly different perspective.

Ryan Donovan: Yeah. If you wanna talk to some of the logicians, you can go back 300 years; you can go back to the Greeks if you want. The real computer science stuff is, at best, 100 years old.

Galen Wolfe-Pauly: I tend to think about it that way, too, that there is this deep desire that we have to be able to share knowledge and understanding. And so certainly, there's always been this collaboration between ways of formalizing thinking and ways of transmitting thinking. So, whether that's this sort of pure logic, and you can trace that as it feeds into information theory and computer science, but also just the invention of the printing press, and the relevant infrastructure for people being able to coordinate culture, and then those things converge in the way that we start to build network computing. But yeah, I tend to think you want to protect the open-endedness of that, because that is, long-term, historically, what has been the most powerful force, almost in human history at all. I'm always trying to not talk about this stuff too dogmatically, because I don't think about it as a polemic thing, as a political thing, 'we should do this.' It's more like this is usually just what happens with this kind of technology. It becomes very distributed. It's owned by the people who use it because that's how you get the most value out of it. Whether this is gonna happen today or in two decades, that's the part that's a little bit hard to predict. I'm quite sure of the prediction; on what timeline, I don't actually know.

Ryan Donovan: I've talked to some other folks who talk about the sort of social sharing of knowledge with the printing press and the internet, and one thing I always wonder is that the printing press, there was 100 years of wars of reformation in Europe. Where are we now with the internet?

Galen Wolfe-Pauly: It's interesting, though. So, yeah, we built a messenger so that we could stay coordinated. The whole company runs on the system, and we haven't actually shared it too much publicly. You can self-host, and we let a few people off the wait list every now and again to work with them on it. But then, I would say almost a year ago, I started to think, man, the LLM thing is really starting to work, but it's moving so quickly, and there are so many of them. I really actually want to use them all in one place, and switching context between 'em, I always found so frustrating. So, I started to think, man, it would be nice if you could do this now. Actually, the separation, the ownership, is a feature, right? I want all my data, and context, and tokens separate from the model. And we started doing quite a lot of this internally in the last couple weeks with the OpenClaw stuff, where we're running one in parallel. I'm like, oh, this is magical. Then you really feel the power of 'this is mine. All my context is with me, and I can send it where I want, and I can let those things operate on this thing that I own.' Quite interesting. So, anyway, from a timeline perspective, we live in an era of history [where] you see these historical patterns over a longer-term period, looking backwards, and then you see them collapse and happen so quickly in the current timeline. But it's hard to say. It's really hard to say.

Ryan Donovan: It's interesting you bring up the OpenClaw thing and owning your own LLMs, because when I thought about this, there could be an instance of having the entire sort of stack locally run for a texting agent, right?

Galen Wolfe-Pauly: And that's what we're doing, and with OpenClaw, it's so interesting because you can say to the thing, 'hey, go use DeepSeek for this. Go run this via OpenRouter, because we run a few things in our own infrastructure. Run this one locally, and then go ask Claude about this.' And so, you can actually really precisely control how they proxy your context around, and you can do it over a group. You say, 'hey, look back at everything that happened in this group today, and compare the difference between the models,' or whatever. And yeah, that separation is, I think, really important. It was very vaguely clear to me, just using ChatGPT casually and then thinking, 'oh, I wish I could also send this to Gemini, or I could just see the difference.' But I've also thought that it seems very likely that we're gonna get a lot of niche models. No one has yet done a biometric model, but it seems like Apple has a ton of biometric data, [and] so do these other device companies, and [my] feeling is that you're gonna get a lot of niche models like that. Certainly, driving data, geospatial data, stuff like that. And then, you wanna stack them, you wanna synthesize 'em, you wanna use them in concert, and I think it's gonna be done by someone else, basically. I don't think one of the model companies will do that. And you get interesting results there.
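The "my context stays with me" routing described above can be sketched as a tiny dispatcher: the owner's node holds the conversation history and chooses, per task, which model backend sees it. The backend names, route table, and `call_backend` function here are hypothetical placeholders, not real provider APIs.

```python
# Sketch of owner-side model routing: the node owns the context and
# decides, per task, which model backend receives it.
# Backend names and call_backend are illustrative placeholders.

ROUTES = {
    "cheap-summary": "local-model",
    "deep-research": "hosted-model-a",
    "code-review": "hosted-model-b",
}

def call_backend(backend, prompt):
    # Placeholder: a real router would call each provider's API here.
    return f"[{backend}] answered: {prompt[:20]}"

def route(task, context, question):
    """Attach owner-held context and dispatch to the chosen backend."""
    backend = ROUTES.get(task, "local-model")  # default to the local model
    prompt = f"{context}\n\nQ: {question}"
    return call_backend(backend, prompt)

# The same owned context can go to different models for comparison:
context = "Group log: we discussed release planning today."
a = route("cheap-summary", context, "Summarize today.")
b = route("deep-research", context, "Summarize today.")
assert a.startswith("[local-model]")
assert b.startswith("[hosted-model-a]")
```

The design point is the inversion of control: the context never lives inside any one provider, so swapping or comparing models is a routing decision, not a migration.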

Ryan Donovan: I think there are some folks who are trying to do things like that, where it's one API for a bunch of models, and I think some of the platform providers are trying to get into that space.

Galen Wolfe-Pauly: Honestly, I'm in some ways very naive about this stuff, and I try to spend most of my attention on what we build from a product standpoint. I feel like sometimes, if you get too obsessed with the day-to-day rate of change in any technical domain, it's incredibly distracting. And yeah, not that productive.

Ryan Donovan: Are you hoping to add to your public product stack in the near future?

Galen Wolfe-Pauly: We're moving cohorts onto a setup where, when you boot, you actually just get an OpenClaw instance as a standalone node. So, we actually do something where we spin up another node that's like a child of yours that has OpenClaw running in parallel, and obviously, you could do a lot with that. Because it's a standalone node, you have a DM with it, but you can have it join groups, you can have it create groups, you can have it use our API to do whatever you want. And it's very unrefined. There are many things you can break, and that's part of what makes it fun. So, it seems like there's quite a lot of interesting stuff that will happen there. And that's probably the thing I'm, I guess, just genuinely most curious about, probably in the next six months or so. I'm very curious to see what will happen there.

Ryan Donovan: Okay. It is that time of the show where we shout out somebody who came on to Stack Overflow, dropped a little knowledge, shared some curiosity, and earned themselves a badge. Today, we're shouting out the recipient of a Populist badge, somebody who dropped an answer on a question, and the answer was so good, it outscored the accepted answer. So, congrats to @mkobuolys for answering 'Set default transition for go_router in Flutter.' If you're curious about that, we have a link for you in the show notes. I'm Ryan Donovan. I host the podcast and edit the blog here at Stack Overflow. If you have questions, concerns, comments, topics to cover, et cetera, email me at podcast@stackoverflow.com, and if you want to reach out to me directly, you can find me on LinkedIn.

Galen Wolfe-Pauly: I'm Galen, I helped start Tlon. You can find us at Tlon.io, T-L-O-N, and I believe we set up STACK as an invite code for anyone listening to the podcast, so you should be able to skip the line and check out what we're building.

Ryan Donovan: Thanks for listening, everyone, and we'll talk to you next time.
