
How AI is helping us build better communities

MIT and Stanford professor Alex “Sandy” Pentland joins the show to explore the power of communities for shared knowledge and how AI could hurt or help the growth of these communities.


Ryan and Sandy dive into the findings from Sandy’s new book Shared Wisdom: Cultural Evolution in the Age of AI, the ethical implications of rapidly advancing technology, and AI’s potential to foster community dialogue and decision-making.

Sandy’s new book Shared Wisdom: Cultural Evolution in the Age of AI explores how we can build a flourishing society by using what we know about human nature to design our technology—rather than letting technology shape our society.

Connect with Sandy on LinkedIn. Check out the work he's doing with AI at deliberation.io and Loyal Agents.

Congratulations to user Harshal for winning a Populist badge on their answer to 'How to start search only when user stops typing?'


TRANSCRIPT

Ryan Donovan: Hello everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ryan Donovan, and today we're talking about AI, but in the context of communities: the power of communities for shared knowledge, shared wisdom, shared intelligence, and how AI can help or harm that. My guest is Alex Pentland, who is a professor at Stanford and MIT, and the author of the book Shared Wisdom: Cultural Evolution in the Age of AI. So, welcome to the show – Sandy, I should say, as that's your preferred nomenclature.

Alex Pentland: That's my nickname. Thank you. Glad to be here. Happy to talk to folks that are listening here.

Ryan Donovan: Absolutely. Well, as somebody who works for a community knowledge-sharing platform, we're very much interested in this subject. So, before we get into that, I know you've come to technology in an indirect way, but at the top of the show, we like to get to know our guests. What's the short version of how you got interested in science and technology?

Alex Pentland: I have a really clear memory of it. This is crazy. When I was like 12 or 13, I was saying, 'what am I gonna do with my life?' And some people wanna be rich, and some people wanna be basketball stars. I'm not gonna do that. And I was entranced with the Albert Einstein, et cetera type of thing. And I figured that was probably a good thing to do because even if you weren't 'The Guy,' you were doing good engineering and you were contributing. And then the other thing is that my grandfather and my father had lived through the Great Depression, and they said something once that stuck with me, which is that professors never got laid off. They took salary cuts, but they never got laid off. And I said, 'hey, that sounds pretty good.'

Ryan Donovan: Right. So, we're interested in the ways of building community, but the shared knowledge-building systems you talk about in the book have been around for a long time. Give us a little foundation for understanding these shared wisdom, shared intelligence technologies we'll be talking about.

Alex Pentland: Yeah, sure. First of all, shared wisdom means what your community believes. It's not necessarily the truth, right? It's like, 'okay, we all think this is true. Now we can act in a coherent way.' So, that's a prime thing for any social species: you have to act cooperatively, and in humans, it goes way back. As far as we know, we've been doing this for hundreds of thousands of years, and we know that hunter-gatherers gathered around the fire at night. So, that's a social convention; it's not a technical thing. And when we look at hunter-gatherers today, about 80% of what they're doing is saying, 'well, it looked like there was food over there, and there was a lion over there.' And then they figure out what to do, so they're cooperating as a function of their storytelling. The thing that got me started on this was, we've got all these big challenges in the world: global warming, plastics, God knows. Right? And the only time I can think of when we had a real reinvention of ourselves was the Enlightenment. I said, 'well, so what caused the Enlightenment? Maybe we could do it again.' Sometimes people say, 'oh, it was the invention of science,' but that's not true. People were doing that sort of science for thousands of years. Or people say it was logic. The Greeks taught logic. It wasn't that. But I did discover something, which was that along about the beginning of the Enlightenment, there were these post office routes that the royalty used, and nobody else was allowed to use them. And they opened them up for normal citizens. And so, people like Leibniz, all these sort of famous early scientists, started writing letters to each other. Somebody like Leibniz wrote three letters a day his whole life. And these are not like two-page letters, these are 15-page missives. This guy was just pumping it out to everybody, and these early scientist-philosopher types became known as men of letters. Unfortunately, they were mostly men. And they very quickly formed societies of letters, and the king gave them some funding and a building, and there was a lot of incentive. But that investigation, that sort of distributed arguing about what works and what doesn't work, is what gave us the modern world. That produced inventions, it produced cultural change. It really got us through a lot of stuff that wasn't so good. And I think we need something like that. And if you think about that Enlightenment and what we could do today, well, we have the internet, we have AI, we have these things. We ought to have the tools to have these sorts of community discussions that result in a community understanding, a shared wisdom that lets us get together and act. I'll give you a sort of funny story that only some people get. You talk about climate change, and the COP process, and all that sort of stuff, and once somebody asked, 'well, why don't we just get together and talk about it and decide what to do and do it, right?' This is like a 4-year-old, right? An adult knows that we don't know how to talk to each other. We don't know how to decide what to do. But hey, wait a second, maybe we do have tools for that. Maybe we can use AI and the internet and stuff to do that. We've had a couple of false starts here with social media and stuff, but maybe we can do a better job, and that's what the book is about: doing the better job.

Ryan Donovan: A lot of what I'm thinking about as I read the book is that doing the better job relies on better incentive structures. It relies on a good-faith approach of wanting to do a better job and not just make money. Right?

Alex Pentland: One thing about the word 'community' is it's people who all have skin in the game, and they've got problems in common, and so they actually really want to solve those problems. They're not just spouting off. So, if you can get groups of people who are actually genuinely trying to solve some problems, then you can generally find something that they all believe, or they can educate themselves about what's going on, and they can take actions. If we have enough of that going on, well, that's what the Enlightenment was about. There were these sort of social experiments of people trying to live in different ways, and some of them were pretty good, and some of them were pretty lousy, but we copied the good ones when we realized we weren't doing so hot.

Ryan Donovan: With the internet, we have the new royal postal system, where everybody's connected and everybody can talk to each other. But you mentioned this as a false start. What happened?

Alex Pentland: Well, there are two obvious things that are wrong. One is that connecting everyone to everyone is actually probably not a great idea. You wanna connect to people who have similar problems, similar situations, so that you can figure out what to do, right? Random comments by somebody who shares nothing in common with you are probably not gonna be terribly useful, although they might be amusing, and people drift off into this sort of entertainment alarmist mode. That's one thing. And interestingly, if you look at Facebook, it doesn't have a community; everybody's a friend. Yeah, sure. Tell me another one. Right? But it turns out that the groups that they have now, the Facebook groups – if you look, the people that are in there almost always share physical reality too. So, if you have a basketball team where you go, you'll have a group, but you don't typically have groups that don't have a lot of real shared interests. So, people have taken that structure that ignores the notion of community and shared interest and twisted it to make it like that. Communication should be built for communities. And you could be part of several communities, but they're all people of shared interest. The other thing that they do in the pursuit of money is they allow influencers to grow. And so, there are these very loud voices. You make more money if you're more angry. And when we do experiments, and we do experiments across the whole country, we find that people don't know anything about other people. All they know about are the influencers. So, if you ask the Democrats, 'what do the Republicans believe?' Well, the thing they do is they pick out the crazy guy on the right, and they say, 'well, that's a Republican.' And the Republicans do the same thing about the Democrats, and they're both almost exactly equally wrong, which is to say, they're mostly wrong, almost completely. And even on issues like abortion, and gun control, and things where you'd expect a clear divide, there isn't one. There's great consensus in the country, and then there are the crazy people who are gonna blow everything up unless they get their way. So, the obvious thing is you wanna knock down the loud voices, and you wanna promote some sort of civility. And we actually have built an open source platform. It's at deliberation.io if you wanna look at it. Code's all there, a lot of experiments. And what we find is that if you take something like polarization, things like gun control, things like that, if people use this – it's like X, but with visualization, and there's a little AI in there that says, 'I hear people saying this,' but it doesn't contribute any content; it's just moderating – what you find is people get depolarized fairly dramatically over very short periods of time. So, we're using this for town halls and things like that, and other people have taken it, and they're using it for education, to have a classroom discussion, et cetera, et cetera. So, I think it's possible to build things for communities where the AI is not contributing content. It's not telling you what to do; it's just trying to be like a mediator. It's trying to cause people to focus on the problem and behave themselves, just a little bit. 'Cause people wanna do this. They have skin in the game if they're in that community. So, it turns out it works.
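
To make those two mechanics concrete – capping the loud voices and mirroring the room back to itself without adding content – here's a minimal sketch in Python. It is not the deliberation.io code (that's open source on the site itself); the digest here is a plain tally standing in for the AI moderator, and all the posts are invented.

```python
from collections import Counter, defaultdict

MAX_POSTS_PER_USER = 3  # knock down the loud voices: everyone gets equal airtime

def moderate(posts):
    """posts: list of (user, stance, text) tuples.
    Returns the equal-voice feed and a neutral digest of where the room stands."""
    per_user = defaultdict(int)
    feed = []
    for user, stance, text in posts:
        if per_user[user] < MAX_POSTS_PER_USER:
            per_user[user] += 1
            feed.append((user, stance, text))
    stances = Counter(stance for _, stance, _ in feed)
    total = sum(stances.values())
    # The "AI" only mirrors the room back; it contributes no opinion of its own.
    digest = ", ".join(f"{n/total:.0%} lean '{s}'" for s, n in stances.most_common())
    return feed, f"I hear people saying: {digest}."

posts = ([("amy", "expand checks", "..."), ("bob", "status quo", "...")]
         + [("loud_guy", "abolish everything", "...")] * 20)
feed, digest = moderate(posts)
print(len(feed), "posts kept;", digest)  # loud_guy is capped at 3 of his 20 posts
```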

Ryan Donovan: Surprise. People want to go along with the sort of group consensus, right? There's an urge to be part of the majority, but with fractured communities, it's like, 'which majority are you part of?'

Alex Pentland: When people talk about that, they say, 'oh, it's like this human weakness to want to go along with the majority.' No. Another reason to wanna go along with the majority is that if you're gonna actually do something, like make an investment in a new road or a new school, you need a lot of people all in the same boat. Right? So, the reason to find consensus is that it's what enables you to change your life for the better.

Ryan Donovan: Now we have the new technology you mentioned, like generative AI that makes its own stories. Do you feel hopeful or worried about that?

Alex Pentland: I spend most of my time in Silicon Valley, and there, people are either doomers – 'AI's gonna rise up and kill us all' – or they're lotus eaters – 'oh well, we'll never have to work again' – which actually sounds pretty bad to me 'cause I like to think about things. And I don't think either of those is likely. I do think that people will use these tools in both good ways and bad ways, just like they use every other tool, and we ought to be ready for it, 'cause these are pretty powerful tools. The fact that you can spin up 30,000 bots in a fraction of a second to make it look like the majority believe X, or to manipulate a financial market, is pretty scary. And our current systems don't acknowledge that possibility, because it wasn't possible before. Right? So, we need to think really clearly about that. And particularly, there's a couple of things. Like, I just read a paper yesterday showing that the doubling rate for AI competence – that is, how good a little AI agent will be – is three and a half months. That's about three and a half doublings a year, which means it's 10 times better at the end of a year. That's like, 'okay, we are on a rocket ship, boys and girls.' We better ask where that rocket is headed.

Ryan Donovan: And who's driving?

Alex Pentland: And who's driving, right. Yeah. Well, hopefully it's our communities. One of the things we've been trained to do over the last few centuries, or millennia even, is say, 'oh, we gotta have a single person in charge, 'cause there's no way to make a decision quickly that involves everybody.' Well, yes, there is. Go take a look at this deliberation.io thing. Not that it's the answer to everything, but it might give you inspiration that it is possible to do that. And the other thing is that people, including me, are not so smart. We're not good at this logic thing. We're not so good at consistency. If you ask a person a couple of questions at one time, and then you ask them two weeks later, they only agree with themselves about 70% of the time, right? Okay, well, what are we gonna do with that? Well, we know what to do with noisy signals. If you put lots of them together, you can get things that are pretty consistent. And the other thing about the AI systems that I think is very promising is, if you think about a lot of the problems in the world, like how do you get the legal system to move, or how do you get bureaucracies to move – these are all systems that were designed two centuries ago. They're mostly patterned after armies. 'Guy at the top tells you what to do,' right? And that's just a recipe for being uncreative, for being trapped in this sort of funny hell that we all are in when we think about big corporations. And we have the possibility of breaking outta that and really being much more community-oriented, much more shared wisdom. And people say, 'oh yeah, great, stupid dreamer. I don't need to listen.' So lemme just tell people: there is a thing called the Uniform Law Commission. It's a volunteer organization. A bunch of lawyers from different states get together and say, 'hey, if you adopt a kid in this state, it doesn't apply in the next state. That seems completely broken. What are we gonna do?' And they come up with example proposed laws that they take back to their states. That process, purely volunteer, purely distributed across all 50 states, accounts for about 10% of all the law in the United States today. It's been going since 1870. It's why you can buy things interstate. It's why if you're married in one state, it applies in the other state. It's the thing that sort of smoothed it all together and made it work on a larger scale. But it's not a top-down thing at all. It's not the federal government. It's a bunch of people saying, 'I see a problem, what can we do about it?' And the incentive for the lawyers is, of course, they get to show up in their home state and say, 'hey, I'm part of this great thing and here's an idea.' And maybe they make a little more money from that, but not from the thing itself.
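
A quick aside on the noisy-signals point: if any single answer matches a person's underlying view only 70% of the time, aggregating many answers recovers the signal almost perfectly. A small simulation – the 70% figure is Sandy's, everything else here is illustrative:

```python
import random

random.seed(0)
TRUE_VIEW = 1        # the underlying opinion on some question
P_CONSISTENT = 0.70  # any single answer matches it only ~70% of the time

def one_answer():
    return TRUE_VIEW if random.random() < P_CONSISTENT else 1 - TRUE_VIEW

def majority(n):
    # Aggregate n noisy answers by simple majority vote.
    return int(sum(one_answer() for _ in range(n)) > n / 2)

for n in (1, 9, 101):
    hits = sum(majority(n) == TRUE_VIEW for _ in range(10_000))
    print(f"{n:>3} noisy answers -> right {hits / 10_000:.1%} of the time")
# One answer is right ~70% of the time; 101 of them are right essentially always.
```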

Ryan Donovan: And this distributed volunteer contribution – you mention it in the book – it looks like open source software to a developer.

Alex Pentland: It's just like it, very similar to that. Or maybe Wikipedia, or maybe– yeah.

Ryan Donovan: With open source software, there's always some decision maker at the top, whether it's a benevolent dictator for life or a governance board that makes the final decisions. Is that a necessary part of this?

Alex Pentland: With some open source software, each person who's building a system and deploying it makes the decision. They're gonna have these parameters, and they're not gonna put that part in, and maybe it works, and maybe it doesn't. So, a lot of it doesn't have that decision maker. We work with the IETF, which is the group that defines the internet worldwide. And there's a thing very much like the Uniform Law Commission, which is: you get together, and you talk about it, and people say, 'well, what about this?' And you try and reach consensus among all the people that are concerned about it before it becomes the law of the land. Law of the land doesn't mean you have to do it; it just means that if you don't, you're not gonna be compatible with other people, which is a pretty strong motivation for doing it. Just like the Uniform Law Commission, the guy at the top is just a facilitator who says, 'you should go talk to George or Mary,' or something like that.

Ryan Donovan: But I mean they take it to a senator or somebody who is then the decision maker who puts it in play, or doesn't, right? There is some governance board.

Alex Pentland: There is some governance. The thing with the law commission is that, at this point, it's well known enough that you bring it back to the whole legislature, not just the head of the Senate. The head of the Senate will hear about it and may be able to block it, but so will all the other senators, and they may give him hell for that. And if you think about it, one of the other characteristics – and there's been a lot of research about this – is: community is defined by common interests, okay? So, if I point out something that's costing you guys money, then you'll go, 'maybe we better think about that one,' right? So, it turns out that if you're focused on things that really matter in a day-to-day case, and are very broadly applicable, people actually move. They change their opinions. They do things. That's not a very sexy thing for the news to report on, but it happens.

Ryan Donovan: Generative AI models are statistical aggregations of their training datasets, right? They're big, squishy, stochastic parrots of all the material in there. Do we lose a little bit of that community coherence when we tap the AI models?

Alex Pentland: One of the major problems with the AI models is that they took all the stories that were on the web and stuck them in one place, so they're not sorted by community. And you can prompt them to do a little bit of that, but let me give you an example of something that I thought was really interesting. I was on a panel with the CTOs of five of the largest corporations, and we were talking about AI and stuff like that, and what they had to tell their CEOs, right? But one of the things they let slip was, 'oh yeah, and we've built AI buddies for everybody in the corporation.' And it turned out all five of 'em had. An AI buddy is just a little vanilla AI that runs locally, that's read the manual that you didn't read, that read the newsletter that you don't pay attention to, that pays attention to what other people in the organization are doing, and can answer questions or remind you about things that are going on in the organization you're working in. It's cheap. It's easy. All that sort of stuff. It may not be perfect. It isn't perfect, but it makes you much more aware of the context, much more coordinated with other people, without actually telling you what the answer is. It's just telling you what other people are doing. That's a wonderful way of reinforcing community. It's just making sure that everybody's in the loop in that way, and you know who to go talk to if you have a question, right?
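
The AI buddy pattern is easy to sketch: index the manuals and newsletters nobody reads, and answer questions by pointing at documents and people rather than deciding anything. A toy version with plain keyword retrieval – a real buddy would put a small local model on top, and the document names and contents here are invented:

```python
# A toy "AI buddy": a local index over the manuals, newsletters, and notes
# nobody reads, queried before you bother a human. Retrieval is plain keyword
# overlap; the shape is the point -- it surfaces context and people,
# it doesn't decide anything for you.

DOCS = {  # invented for illustration
    "expense-manual.md": "travel expenses need pre-approval from your cost center owner",
    "newsletter-march.md": "the payments team is migrating billing to the new ledger service",
    "team-notes.md": "maria owns the ledger service; ping #ledger for schema questions",
}

def ask_buddy(question: str, k: int = 2):
    q_words = set(question.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: len(q_words & set(kv[1].split())),
                    reverse=True)
    # Return the best-matching docs so you know who (and what) to go talk to.
    return [f"{name}: {text}" for name, text in scored[:k]]

for hit in ask_buddy("who owns the ledger service"):
    print(hit)
```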

Ryan Donovan: Yeah, I mean, for a modern CTO, keeping track of all of the tech flow in their company is very hard. I talked to a company recently that does AI summaries of commits made to the codebase and rolls them up based on how detailed you need them to be. So, something like that gives you good context, but I'm sure anybody listening to this is gonna be like, 'well, AIs hallucinate.'
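
That roll-up is mostly a grouping problem: bucket the commit stream by path at whatever depth the reader wants, then compress each bucket. A sketch, with the LLM summarization step stubbed out as a simple join (the commits are invented):

```python
from collections import defaultdict

# Invented commit stream: (path, message) pairs straight from the log.
commits = [
    ("auth/oauth.py", "add PKCE support"),
    ("auth/session.py", "fix token expiry off-by-one"),
    ("billing/ledger.py", "double-entry writes behind a flag"),
]

def rollup(commits, depth=1):
    """Bucket commits by path prefix; a coarser depth means a shorter report."""
    groups = defaultdict(list)
    for path, msg in commits:
        key = "/".join(path.split("/")[:depth])
        groups[key].append(msg)
    # An LLM would compress each bucket into prose; join() stands in for that.
    return {area: "; ".join(msgs) for area, msgs in groups.items()}

print(rollup(commits, depth=1))
# {'auth': 'add PKCE support; fix token expiry off-by-one',
#  'billing': 'double-entry writes behind a flag'}
```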

Alex Pentland: Well, that's one of the hard things: you've got this tool, which is really great in some ways, 'cause it can look at a lot of data and give you something, but it's stochastic. It doesn't really necessarily– so, you need to have judgment in there. And I've noticed on your podcast, you've got a number of people who point out that not having that reality check at the end can lead to bad things. I talk to boards of directors with some frequency, and the thing they're honestly interested in is, what happens when the AI screws up, and we have made a mistake with a million customers, or we've done something that hits the front page of the news, or whatever?

Ryan Donovan: And who do we fire?

Alex Pentland: Oh my God. Yeah. And with a squishy AI technology, this is gonna happen. But think about the AI buddies. It was okay if the AI buddy said something stupid, because it's there just to remind you about things. It's not there to tell you what to do. And we have a project that I'm very proud of, and I think is exemplary. We got together with Consumer Reports – people probably know Consumer Reports. You subscribe, you give them a couple bucks. They used to produce a magazine that had testing of consumer products. Now it's all online, but they still do the testing of consumer products so that you can buy stuff that's good and works, right? That's the idea. And what we're gonna do is build AI buddies for everybody. The project is called Loyal Agents – it's at loyalagents.org, so you can look at that. And what it does is point out that, yeah, you need authentication, you need authorization, those sorts of things. But there are also legal things, 'cause when you have an AI that's representing what you're doing – and this could be in a company too – if it screws up, you could be on the hook, or the person you're dealing with could be on the hook. And so, you need to have something in there that acts as a legal checker. Like, I was at a meeting of 50 of the leading lawyers in Silicon Valley, and I said, 'no, you can't use an LLM to decide if it's going to be a legal problem or not. You need something deterministic, so that you can point at the law and say, yep, that one's okay, and that one's not.' And there's this concept of a fiduciary, which is something you should pay attention to. Doctors are supposed to represent you, not themselves. Lawyers are supposed to do the same. Finance people sometimes do that. It's this idea of hiring a professional to represent you, and in many cases, what you want is an AI that represents your intent. But how do you know it does that? 'Cause it's made up of all this stuff, and if you talk to it for a little while, it forgets who you are. There's all sorts of stuff, because it's a little squishy and can be moved. And so, one of the main things we're doing is trying to figure out how you can actually make an AI do what you want, and only the things that you want. And there are solutions. One is a way of prompting it that gets the intent of your preferences across correctly.

Ryan Donovan: So, that'd be a sort of context engineering?

Alex Pentland: Yeah. So, we all know about MCP, the Model Context Protocol, presumably, right? Well, this is a human context protocol – a communication protocol. And basically, it's a way of saying, 'well, here's a request, but here's the intent – the human intent – that gives you the legal context for doing this. Don't screw around with that. This one's important,' right? And that's important particularly 'cause if you've got agents talking to agents, it's really easy to lose the thread. And they do, right? And they come up with their own thread, which is really bad sometimes. So, you have to be able to say, okay, the point of this is to pay a reasonable price near the bottom, but it has to bloody work, and it has to bloody get here in two weeks. Right? Something like that. That's really the thing you need to have happen. And then it can have some degrees of freedom to work within that, but you have to be able to set that. And current AIs are okay for that, but one of the things is how do you keep reminding them so that they don't run off the rails, right?
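
There's no published spec to quote here, so treat the following as a sketch of the idea rather than the actual protocol: the human intent and its hard constraints travel with every request, and each hop re-checks them deterministically instead of trusting the conversational thread. All names and fields are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    goal: str
    hard_constraints: dict   # non-negotiable; no agent may relax these

@dataclass
class Request:
    task: str
    intent: Intent           # travels verbatim with every agent-to-agent hop

def check(offer: dict, intent: Intent) -> bool:
    # Deterministic re-check at each hop, so a chain of agents can't drift.
    return (offer["price"] <= intent.hard_constraints["max_price"]
            and offer["delivery_days"] <= intent.hard_constraints["max_days"])

intent = Intent(goal="pay a reasonable price near the bottom, but it has to work",
                hard_constraints={"max_price": 120, "max_days": 14})
req = Request(task="purchase widget", intent=intent)

offer = {"price": 110, "delivery_days": 21}   # cheap, but three weeks out
print(check(offer, req.intent))               # False: the intent decides, not the agent
```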

Ryan Donovan: Yeah. And how do you absolutely forbid something and programmatically forbid every forbidden action?

Alex Pentland: Yeah. If you talk to the lawyers, the way they think about it is you have to have a deterministic system that surrounds the AI, that looks at every action the AI takes and says, here are the inputs, here are the outputs. We're gonna essentially have an expert system – 'cause law is basically an expert system – and we're gonna check if this goes into a red area, and if it does, then we kick it out and look at it. Right? I think that it may be possible to do this internally, where essentially you construct prompts that are like that. And you have things where the same AI, or some other AI, acts as a judge that goes in and says, 'okay, does that fit the prompt?' And there are pretty lightweight ways of doing that sort of thing. So, it's not crazy to think about, and remember that these things are getting exponentially faster in terms of the amount of quote-unquote intelligence per dollar.
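
A sketch of that deterministic wrapper: every action the AI proposes passes through hard-coded rules first, and anything that lands in a red area is kicked out for review before it executes. The rules themselves are invented for illustration:

```python
# Invented red-area rules; real ones would be drafted against actual law/policy.
RED_RULES = [
    ("sanctioned_country", lambda a: a.get("country") in {"XX", "YY"}),
    ("over_limit",         lambda a: a.get("amount", 0) > 10_000),
    ("pii_export",         lambda a: a.get("exports_pii", False)),
]

def guard(action: dict):
    """Deterministic check wrapped around whatever the LLM proposed.
    Returns (allowed, reasons); the reasons are what make it auditable."""
    hits = [name for name, rule in RED_RULES if rule(action)]
    return (False, hits) if hits else (True, [])

proposed = {"type": "wire_transfer", "amount": 25_000, "country": "DE"}
ok, reasons = guard(proposed)
print(ok, reasons)  # False ['over_limit'] -- kicked out for review, never executed
```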

Ryan Donovan: Yeah. LLM on LLM evaluations, they're pretty good, but they're still, at the top end, about 80% aligned with human intent. I don't know if your lawyer friends would take that one into court.

Alex Pentland: 80% is probably not good enough, but when you buy something with a credit card, they assume that there's a 3 to 5% fraud rate in there, and they buy insurance. They have some fairly hard limits on the outside – you can't do more than this amount, and so on – but those are pretty lightweight. There are systems out there that are already doing things like that, and they're doing it at scale every day, in every country in the world, and so forth. So, there's real hope in doing this; you just have to remember that you want to have that audit trail. You want to have that check. You need to have that sort of intent prompt that is carried from agent to agent, and you need to check it. I also like to tell boards of directors and such that there needs to be an audit trail for what you're doing, right? This is just like when you spend money: you say, 'well, I spent this much money on that thing, and I got it this day,' right? Simple. Computers are really good at this, right? But what that lets you do is, when somebody comes up and says, 'are you biased against this group?' or, 'are you committing fraud with that country?', if you have the audit trails, you can answer that question right then. And then you don't get into this big kerfuffle with lawyers, and courts, or news people, or stuff like that. It's not nothing. You need to really think about that.
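
The audit trail itself is cheap to build. Here's a sketch of an append-only log where each record chains to the previous one's hash, so 'what did the agent do, and when?' has a tamper-evident answer (the actor and action fields are illustrative):

```python
import hashlib, json, time

LOG = []  # append-only; in production this would live in write-once storage

def record(actor: str, action: dict):
    prev = LOG[-1]["hash"] if LOG else "genesis"
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev}
    # Chaining each record to the previous hash makes tampering detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    LOG.append(entry)

record("pricing-agent", {"quote": 110, "customer": "acme"})
record("pricing-agent", {"quote": 95, "customer": "acme"})

# 'Are you biased against this group?' becomes a query, not a kerfuffle:
for e in LOG:
    print(e["actor"], e["action"], e["hash"][:8])
```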

Ryan Donovan: The audit trail stops once you get to the LLM itself. I think some folks at Anthropic are making efforts to look into the brain of the LLM, but at that point, it's like, how did you get this answer?

Alex Pentland: I think that's not necessary. I think that's a whole mistaken line of thought. Let's take a judge. The judge listens to some evidence and makes a decision. Well, what went on in his brain? I don't know. You don't know. We do know that it's different when he's hungry, and it's different on Monday than it is on Tuesday. Right? Okay. But what you do have is, 'well, here were the facts, and here was the decision.' And when somebody thinks the decision is crazy, they take it to a higher-level judge. Okay, well, that's how we do things. So, we need to take our LLMs, and we need to have some little boundaries where we take it up to the higher-level thing and take a better look at it. And you need to be able to answer questions about overall performance – whether there are subtle little biases or things going on that are not immediately obvious in a single operation. The one I'm most concerned about is not so much that, 'cause I can see there are techniques for doing these things and so forth; it's the fact that you're likely to have LLMs calling LLMs to do something, and you don't know three quarters of them, and how are you gonna ensure that this is actually a valid chain of operations?

Ryan Donovan: We've recently published an article on overall discomfort with some of the AI-generated writing and art, but I wanna get your take on how LLMs, Gen AI, can actually connect people. What's the positive spin?

Alex Pentland: Well, let's go back to the two things I already mentioned. One is this AI agents thing. If I ask my little LLM, assuming it's attached to the right sort of data, who else is doing this sort of thing? It's pretty good. And in fact, obviously, if you use Google search or anything like that, you get a little AI summary all the time, and they're not bad. You shouldn't bet your house on it – check the footnotes, and maybe check 'em twice if it involves a fair amount of money – but the point is, it's not bad, and what it's doing is taking human output, like the web, just like Google search originally did, and making a summary that's easily digestible. So, that's connecting people to other people. Now, of course, you've got little questions like, is this response something that somebody paid to have promoted? And blah, blah, blah – which is why you probably want your own agent, even if you have to pay a little for it, right? You really don't want it promoting so-and-so's product, right? That type of thing. And so, this is gonna be a big legal battle – Amazon and Perplexity, I think, are at it now over exactly that. So, that's one way of connecting people, and it's like this AI buddies type of thing. A second thing is causing groups of people to reflect together on what the solutions are, without permitting loud voices. That's this deliberation.io stuff. It's a lot like Twitter or X, something like that, but there are no influencers, and you get to see a visualization of what everybody is saying. So, you get a sense of where you are in the crowd, and there's an AI that gives you feedback about what everybody is saying. It's like, 'well, people seem to be saying this, but some people say that,' that type of thing. So, it's trying not to inject content. It's trying to give each person a sense of what's going on in a way that's otherwise contextless. And you can go read the science; it's all there on deliberation.io. It seems to work really pretty well and not be biased, and blah, blah. Even for things that are very contentious, like should we give guns to mentally ill people? It turns out people behave pretty reasonably there. So, there are ways of using them as information gatherers to connect – the AI buddies one is connecting to particular people. The deliberation one is getting people to connect to each other to make decisions. And then, the third thing I think is really interesting – and this is in the book – is trying to think about the range of things that we can do and what's important. The example I like best is, we did something where we were just looking at all of the scientific publications in the world and noticing that there are places where nobody's focusing. And if you look at that – you get this big web of science, right, and there's a hole in the middle; in fact, there's a bunch of holes – those are the places where the big hits are gonna be in the future. And interestingly, it works exactly the same with patents. People patent all this stuff, and they refer to each other. And there are places where people are blind somehow. Now, we don't know what goes there, but we know that's an area that nobody's mining, and maybe you ought to take a second look. So, there are ways of assessing the global behavior of people to be able to ask, 'how are we doing? What are we missing?' Things like that. And that sort of bleeds over into things like protein folding.
We have these huge, complicated things, and what you wanna ask is, 'what are the tricks we're missing?' That type of a thing. So, it's an innovation discovery engine, as it were. And those are the three things I talk about in the book, and I think all of them have the character that they don't remove the human from the conversation. They're about finding the things that other people say and meeting people. They're about hearing what other people are doing to try and think about things together, or about reflecting on the broad range of people and what they're doing. But none of them are, 'the AI is telling you what to do.' Right? So, helping people build and shape their community is the idea.
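
The 'holes in the web of science' idea reduces to a graph question: which pairs of active areas almost never appear together? A toy sketch over co-occurrence counts – the papers and tags are invented, and the real analysis runs over citation graphs at scale:

```python
from itertools import combinations
from collections import Counter

# Each paper tagged with the areas it touches (toy data).
papers = [
    {"ml", "biology"}, {"ml", "chemistry"}, {"ml", "biology"},
    {"chemistry", "materials"}, {"biology", "chemistry"},
    {"materials", "optics"}, {"ml", "optics"},
]

pair_counts = Counter()
area_counts = Counter()
for tags in papers:
    area_counts.update(tags)
    pair_counts.update(frozenset(p) for p in combinations(sorted(tags), 2))

# A "hole": two active areas (each appearing in 2+ papers) that never co-occur.
holes = [
    (a, b) for a, b in combinations(sorted(area_counts), 2)
    if area_counts[a] >= 2 and area_counts[b] >= 2
    and pair_counts[frozenset((a, b))] == 0
]
print(holes)  # e.g. [('biology', 'materials'), ...] -- nobody is mining there
```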

Ryan Donovan: It is that time of the show again where we shout out somebody who came on to Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. So, today we're shouting out a Populist Badge winner – somebody who dropped an answer on a question that was so good, it outscored the accepted answer. Congrats to Harshal for answering 'How to start search only when user stops typing?' If you're curious about that, it'll be in the show notes. I'm Ryan Donovan. I edit the blog, host the podcast. If you want to reach out to me, comments, concerns, questions, topics, et cetera, you can email me at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

Alex Pentland: Hi, I'm Alex Pentland, faculty at Stanford and MIT, and we were just talking about my new book, Shared Wisdom, from MIT Press, in bookstores near you starting tomorrow, actually, as it turns out. And you can find me on Wikipedia, and LinkedIn, and just about everywhere. I'm interested in helping people use AI in ways that are pro-social and build communities.

Ryan Donovan: We love to hear it. Thank you for listening, everyone, and we'll talk to you next time.
