\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>If LLMs were better at citing their sources, we could trace each of their little \u003Cem>faux pas\u003C/em> back to a thousand hauntingly similar lines in online math courses, recipe blogs, or shouting matches on Reddit. They’re pattern printers. And yet, somehow, their model of language is good enough to give us what we want most of the time. As a replacement for human beings, they fall short. But as a replacement for, say, a command-line interface? They’re a massive improvement. Humans don’t naturally communicate by typing commands from a predetermined list. The thing nearest our desires and intentions is \u003Cem>speech\u003C/em>, and AI has learned the structure of speech.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Art, too, is moderately stochastic. Some art is truly random, but most of it follows a recognizable grammar of lines and colors. If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That's what AI does. That's the whole story.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>This means AI, though not quite the cure-all it's been marketed as, is far from useless. It would be inconceivably difficult to imitate a system as complex as language or art using standard algorithmic programming. The resulting application would likely be slower, too, and even less coherent in unfamiliar situations. In the races AI can win, there is no second place.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Learning to identify these races is becoming an essential technical skill, and it's harder than it looks. An off-the-shelf AI model can do a wide range of tasks more quickly than a human can. 
But if it's used to solve the wrong problems, its solutions will quickly prove fragile and even dangerous.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 class=\"wp-block-heading\" id=\"h-ai-can-t-follow-rules\">\u003Cstrong>AI can’t follow rules\u003C/strong>\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The entirety of human achievement in computer science has been thanks to two technological marvels: predictability and scale. Computers do not surprise us. (Sometimes we surprise ourselves when we program them, but that's our own fault.) They do the same thing over and over again, billions of times a second, without ever changing their minds. And anything predictable and repeatable, even something as small as current running through a transistor, can be stacked up and built into a complex system.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The one major constraint of computers is that they're operated by people. We're not predictable and we certainly don't scale. It takes substantial effort to transform our intentions into something safe, constrained, and predictable.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>AI has an entirely different set of problems that are often trickier to solve. It scales, but it isn't predictable. The ability to imitate an unpredictable system is its whole value proposition, remember?\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>If you need a traditional computer program to follow a rule, such as a privacy or security regulation, you can write code with strict guarantees and then prove (sometimes even with formal logic) that the rule won't be violated. 
Though human programmers are imperfect, they can conceive of perfect adherence to a rule and use various tools to implement it with a high success rate.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>AI offers no such option. Constraints can only be applied to it in one of two ways: with another layer of AI (which amounts to little more than a suggestion) or by running the output through algorithmic code, which by nature can't handle the full variety of outputs the AI can produce. Either way, the AI's stochastic model guarantees a non-zero probability of breaking the rule. AI is a mirror; the only thing it can't do is something it's never seen.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Most of our time as programmers is spent on human problems. We work under the assumption that computers don’t make mistakes. This expectation isn’t theoretically sound—a \u003Ca href=\"https://www.computerworld.com/article/3171677/computer-crash-may-be-due-to-forces-beyond-our-solar-system.html\">cosmic ray\u003C/a> can \u003Cem>technically\u003C/em> cause a malfunction with no human source—but on the scale of a single team working on a single app, it’s effectively always correct. We fix bugs by finding the mistakes we made along the way. Programming is an exercise in self-correction.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>So what do we do when an AI has a bug?\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>That’s a tough question. \"Bug\" probably isn't the right term for a misbehavior, like \u003Ca href=\"https://fortune.com/2023/02/17/microsoft-chatgpt-bing-romantic-love/\">trying to break up a customer's marriage\u003C/a> or \u003Ca href=\"https://apnews.com/article/technology-science-microsoft-corp-business-software-fb49e5d625bf37be0527e5173116bef3\">blackmail them\u003C/a>. A bug is an inaccuracy written in code. 
An AI misbehavior, more often than not, is a perfectly accurate reflection of its training set. As much as we want to blame the AI or the company that made it, the fault is in the data—in the case of LLMs, data produced by billions of human beings and shared publicly on the internet. It does the things \u003Cem>we\u003C/em> do. It's easily taken in by misinformation because so are we; loves to get into arguments because so do we; and makes outrageous threats because \u003Cem>so do we\u003C/em>. It's been tuned and re-tuned to emulate our best behavior, but its data set is enormous and there are more than a few skeletons in the closet. And every attempt to force it into a small, socially acceptable box seems to strive against its innate usefulness. We’re unable to decide if we want it to behave like a human or not.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>In any case, fixing \"bugs\" in AI is uncertain business. You can tweak the parameters of the statistical model, add or remove training data, or label certain outputs as \"good\" or \"bad\" and run them back through the model. But you can never say \"here's the problem and here's the fix\" with any certainty. There's no proof in the pudding. All you can do is test the model and hope it behaves the same way in front of customers.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The unconstrainability of AI is a fundamental principle for judging the boundary between good and bad use cases. When we consider applying an AI model of any kind to a problem, we should ask: are there any non-negotiable rules or regulations that must be followed? Is it unacceptable for the model to occasionally do the opposite of what we expect? Is the model operating at a layer where it would be hard for a human to check its output? 
If the answer to any of these is “yes,” AI is a high risk.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box. At first glance, this isn’t as exciting as the “robot virtual assistant” or “zero-cost customer service agent” applications many have imagined. But it’s powerful in another way—one that could revolutionize the most fundamental interactions between humans and computers.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 class=\"wp-block-heading\" id=\"h-a-time-and-place-for-ai\">\u003Cstrong>A time and place for AI\u003C/strong>\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Even if it always behaved itself, AI wouldn’t be a good fit for everything. Most of the things we want computers to do can be represented as a collection of rules. For example, I don't want any probability modeling or stochastic noise between my keyboard and my word processor. I want to be certain that typing a \"K\" will always produce a \"K\" on the screen. And if it doesn't, I want to know someone can write a software update that will fix it \u003Cem>deterministically\u003C/em>, not just probably.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Actually, it's hard to imagine a case where we want our software to behave unpredictably. We're at ease having computers in our pockets and under the hoods of our cars because we believe (sometimes falsely) that they only do what we tell them to. We have very narrow expectations of what will happen when we tap and scroll. Even when interacting with an AI model, we like to be fooled into thinking its output is predictable; AI is at its best when it has the appearance of an algorithm. Good speech-to-text models have this trait, along with language translation programs and on-screen swipe keyboards. 
In each of these cases, we want to be understood, not surprised. AI, therefore, makes the most sense as a translation layer between humans, who are incurably chaotic, and traditional software, which is deterministic.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Brand and legal consequences have followed and will continue to follow for companies that are too hasty in shipping AI products to customers. Bad actors, internet-enabled PR catastrophes, and stringent regulations are unavoidable parts of the corporate landscape, and AI is poorly equipped to handle any of these. It's a wild card many companies will learn they can't afford to work with.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>We shouldn't be surprised by this. All technologies have tradeoffs.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The typical response to criticisms of AI is \"but what about a few years from now?\" There's a widespread assumption that AI's current flaws, like software bugs, are mere programming slip-ups that can be solved by a software update. But its biggest limitations are intrinsic. AI's strength is also its weakness. Its constraints are few and its capabilities are many—for better \u003Cem>and\u003C/em> for worse.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The startups that come out on top of the AI hype wave will be those that understand generative AI's place in the world: not just catnip for venture capitalists and early adopters, not a cheap full-service replacement for human writers and artists, and certainly not a shortcut to mission-critical code, but something even more interesting: an adaptive interface between chaotic real-world problems and secure, well-architected technical solutions. 
AI may not truly understand us, but it can deliver our intentions to an API with reasonable accuracy and describe the results in a way we understand.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>It's a new kind of UI.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>There are pros and cons to this UI, as with any other. Some applications will always be better off with buttons and forms, with which daily users can develop muscle memory and interact at high speed. But for early-stage startups, occasional-use apps, and highly complex business tools, AI can enable us to ship sooner, iterate faster, and handle more varied customer needs.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>We can't ever fully trust AI—a lesson we'll learn again and again in the years ahead—but we can certainly put it to good use. More and more often, we'll find it playing middleman between the rigidity of a computer system and the anarchy of an organic one. 
If that means we can welcome computers further into our lives without giving up the things that make us human, so much the better.\u003C/p>\n\u003C!-- /wp:paragraph -->