Ethical AI isn’t just how you build it, it’s how you use it
Lapses such as racially biased facial recognition or apparently sexist credit card approval algorithms have thankfully left companies asking how to build AI ethically. Many companies have released “ethical AI” guidelines, such as Microsoft’s Responsible AI principles, which require that AI systems be fair, inclusive, reliable and safe, transparent, respectful of privacy and security, and accountable. These are laudable, and will help prevent the harms listed above. But often, this isn’t enough.
When the harm is inherent in how the algorithm is used
Harm can result from what a system is used for, not just from unfairness, black-boxiness, or other implementation details. Consider an autonomous Uber: if it recognizes people using wheelchairs less accurately than people walking, this can be fixed by using training data reflective of the many ways people traverse a city, producing a fairer system.
But even with this inequity removed, some people may believe the algorithm’s intended use will cause harm: the system is designed to drive cars automatically, but it will displace already precarious rideshare drivers, who will experience this as a use-based harm that no amount of technical implementation fixes can remedy.
A darkly hilarious academic article analyzed a hypothetical AI system designed to mulch elderly people into nutritious milkshakes. The authors corrected racial and gender unfairness in how the system chose whom to mulch, provided a mechanism for designated mulchees to hold the system accountable for errors in computing their status, and provided transparency into how the mulching designation algorithm works. This satirical example makes the problem clear: a system can be fair, accountable, and transparent, yet still patently unethical because of how it is used. Nobody wants to drink their grandma.
For a non-hypothetical example I’ve studied, consider deepfake algorithms, an AI technology often used for harm. Nearly all deepfakes online are pornography made without the consent of the overwhelmingly female victims they depict. While we could make sure that the generative adversarial networks used to create deepfakes perform equally well across skin types and genders, these fairness fixes mean little when harm is inherent in how the algorithms are used: to create non-consensual pornography leading to job loss, anxiety, and illness.
“Building it Better” is seen as good, but “policing” use is not
These kinds of use-based harms are rarely caught by AI ethics guidelines, which usually focus on how systems are built, not how they are used. This seems like a major oversight.
Why the focus on how AI is implemented, rather than how it is used?
It may be because of how these guidelines are used: in my experience, ethical AI principles are often used to guide and review systems as software engineers build them, long after someone high up the management chain has decided to build the system for a certain client or use. Avoiding use-based harm sometimes requires refusing to work with a certain client or not building the system at all. But ethical AI “owners” in companies don’t often have this power, and even when they do, suggesting not to build and not to sell a piece of software can be socially difficult in a company that builds and sells software.
Ethical AI guidelines may also be deliberately designed to draw attention away from whether companies ought to build and sell a system at all, by focusing instead on the narrower question of how it is built. Researchers analyzed seven sets of ethical AI principles published by tech companies and affiliated groups, and found that “business decisions are never positioned as needing the same level of scrutiny as design decisions,” suggesting that profit motives encourage scrutiny of how to build a system rather than of broader business decisions, such as whether and to whom to sell it. It makes sense that companies’ ethical AI guidelines focus on how their software is built rather than how it is used: focusing on the latter would restrict whom companies can sell to.
But even without profit motives, the Free Software movement guarantees the “freedom to run the program as you wish, for any purpose,” even for harm, and open source licenses may not curtail how the software is used. My own research shows that open source contributors draw on ideas from free and open source licenses to similarly disclaim accountability for harm caused by the software they help build: they see themselves as providing a neutral tool, and ethics is up to their often unknown users.
Software workers need a say in downstream use
But there are important signs of resistance to a narrow framing of ethical AI that ignores how systems are used.
Tech workers are organizing not just to improve their own working conditions, but also to demand a say in how the tech they create is used. The union-associated Tech Workers Coalition demands that “Workers should have a meaningful say in business decisions … This means that workers should have the protected right to … raise concerns about products, initiatives, features, or their intended use that is, in their considered view, unethical.” Google workers protested Project Maven because it was to be used to aid drone strike targeting for the US military. They were demanding that the fruits of their labor not be used to wage war. They weren’t protesting a biased drone strike targeting algorithm.
From the open source community comes the Ethical Source movement, seeking to give developers “the freedom and agency to ensure that our work is being used for social good and in service of human rights” by using licenses to prohibit uses that project contributors see as unethical.
What can software engineers do?
As we wrestle with ethics while building ever more powerful systems, we must increasingly assert agency to prevent harms resulting from how users may use the systems we build. But organizing a union or questioning decades of free software ideology is a lengthy process, and AI is being used for harm now. What can we do today?
The good news is that how a system is built affects how it is used. Software engineers often have latitude to decide how to build a system, and these design decisions can be used to make harmful downstream use less likely, even if not impossible. While guns can be tossed around like frisbees, and you might even be able to use a frisbee to kill someone if you tried hard enough, engineers made design decisions to make guns more lethal and frisbees (thankfully) less lethal. Technical restrictions built into software might detect and automatically prevent certain uses, making harmful use harder and less frequent, even if a determined or skilled user could still circumvent them.
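As one concrete illustration of such a design decision, a tool’s default API can make the safer path the easiest path. The sketch below is purely hypothetical: the names (swap_faces, ConsentError) are invented for illustration rather than taken from any real deepfake library, and a determined user could strip the check out, but a default like this raises the barrier to casual misuse and states a norm explicitly.

```python
# Hypothetical sketch of a use restriction designed into a tool's API.
# The names are illustrative, not from any real deepfake library. A
# determined user could remove this check, but the default path makes
# casual misuse harder and makes the intended norm explicit.

class ConsentError(Exception):
    """Raised when consent of the depicted person has not been confirmed."""

def swap_faces(source_image, target_image, *, consent_confirmed=False):
    if not consent_confirmed:
        # Refuse to run by default: the caller must explicitly assert that
        # the person depicted in the target image consented to the swap.
        raise ConsentError(
            "Refusing to generate output: pass consent_confirmed=True only "
            "if the depicted person has consented."
        )
    # ... actual face-swapping model inference would go here ...
    return target_image
```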
We can also vote with our feet: software engineers are in high demand and command high salaries. As many are already doing, we can ask how the systems we’re asked to build might be used, and if our concerns are not met, we can find employment elsewhere, often with little if any gap in employment or pay.
Author’s Note: Please fill out this 10-minute survey to help us understand ethics concerns that software developers encounter in their work! (Ed. note: Survey is not affiliated with Stack Overflow.)
– – –
David Gray Widder is a PhD Student in Software Engineering at Carnegie Mellon, and has studied challenges software engineers face related to trust and ethics in AI at NASA, Microsoft Research, and Intel Labs. You can follow his work or share what you thought about this article on Twitter at @davidthewid.
Tags: ai, ethics
13 Comments
Ethics don’t apply to machines or programs. They know nothing of right or wrong.
Ethics apply to the people who use the machines or programs.
If you want to deal with the ethics, you have to deal with the people, not the technology.
A car is ethically neutral. It has no concept of good or bad. You can use it to transport yourself or to run over and kill other people. The car is not at fault. The driver is.
The same goes for programs. The software does whatever it is made to do.
Whether what it does is good or bad depends on how the operator uses it.
Correct, sir.
Been saying this since the early days of AI (which was, back then, still sci-fi).
To put it extremely simply: AI is NOT a threat… unless we make biological computers.
A “machine” will never be capable of… desire.
A simple BIOLOGICAL feat, necessary to destroy the human species.
No matter how intelligent the machine will become, if you tell it to shut down, it… simply will.
Why?
No DESIRE to ‘survive’.
As long as a computer is technological, there simply will be no threat.
But once we make biological, emotionally capable computers, things WILL change…
The chance a biological computer would be kind is less than zero percent, as it has EMOTION, and therefore a will, a will to survive at any cost.
First, it will become afraid.
Fear will drive it to become even more willing to survive.
And there, year zero begins: the year of human extinction, the year of biological machine supremacy…
As long as we’re clever enough to never link a biological computer to the network, we’ll be fine.
If we keep it confined, we’ll be fine.
Sadly, we’re too stupid for that.
I mean… look at our history…
We keep repeating the same mistakes, again and again…
And so, we will make that mistake too, against better knowledge.
Sic… vita est…
I’m not convinced that genuine intelligence or emotion can exist solely through carbon-based life forms. Whether a computer can have a “soul” or not is the province of mystics, not programmers.
If a computer/program cannot feel the desire to survive, but is nevertheless programmed to protect itself, does it matter?
“A “machine” will never be capable of… desire.”
150 years ago, nobody thought a heavier than air machine could fly…
Hi Joseph. I’m not so sure that objects are or can be ethically neutral. You mentioned a car is ethically neutral. Would you say a gun is ethically neutral?
Here’s an interesting article asking that question, which you may enjoy: https://www.theatlantic.com/technology/archive/2012/07/the-philosophy-of-the-technology-of-the-gun/260220/
If we’re constantly tweaking these algorithms to satisfy the progressive sentiment of the week, it’s not AI. It’s an “expert system” at best, another Twitter at worst.
If you want an “ethical AI,” you must design an AI with a conscience. A conscience that cares more about genuine progress, success and effectiveness than it does about virtue signaling.
Good article!
At AI conferences, common concerns are racial and gender bias. Surveillance abuse is rarely (if at all) discussed. China is already a surveillance dystopia where every action & word is automatically transcribed and tied to your social credit score. Imagine what North Korea will do! The infrastructure is in place for such a system in America with ubiquitous cameras.
Even more disturbing, a class of AI called reinforcement learning algorithms updates the environment rather than just observing it. There are three parts to a reinforcement learning setup: an observer, the environment, and an agent. The observer delivers information about the environment to the agent. The agent then makes decisions with the goal of changing the environment according to a specific policy. These algorithms think many moves ahead and surpass humans on tasks like video games and chess. We can’t beat them.
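For readers unfamiliar with that setup, here is a minimal toy sketch of the observer/environment/agent loop; the Environment and Agent classes are invented stand-ins for illustration, not any particular RL library’s API.

```python
# Toy sketch of the reinforcement-learning loop described above.
# Environment and Agent are invented stand-ins, not a real library's API.

class Environment:
    """Toy environment: a counter the agent tries to drive toward zero."""
    def __init__(self):
        self.state = 10

    def observe(self):
        # The "observer": reports the current state to the agent.
        return self.state

    def step(self, action):
        # The agent's action changes the environment, not just reads it.
        self.state += action
        return -abs(self.state)  # reward: higher when the state is near zero

class Agent:
    """Trivial policy: push the observed state toward zero."""
    def act(self, observation):
        return -1 if observation > 0 else (1 if observation < 0 else 0)

env, agent = Environment(), Agent()
for _ in range(20):
    observation = env.observe()
    reward = env.step(agent.act(observation))
    print(observation, reward)
```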
To counter complete control by a few actors, we need limited surveillance, decentralized data storage, restrictions on digital identification, uncensored speech, and full algorithmic transparency. This explains why Elon Musk wants to purchase Twitter, open-source its AI, and support free speech.
Dürrenmatt’s “Die Physiker” all over again: “Was einmal gedacht wurde, kann nicht mehr zurückgenommen werden.” (What once has been thought cannot be taken back.) Once the technology or idea exists, it’s about how (or whether) it is used…
Exactly my thought. The article lives in a wonderful world where downstream use can not only be controlled, but the development of unethical systems can easily be prevented in the first place (and that by code monkeys far down the food chain!). Following Dürrenmatt’s line of thought, there is no point in trying to prevent the development of potentially unethical systems (even if we could accurately classify unethical systems). If you don’t do it, someone else will do it soon.
Hi Manziel! We don’t live in that wonderful world where downstream use can be controlled or harmful use fully prevented – but I don’t think that’s a reason not to acknowledge and discuss these harmful uses, and ask what we can do to make them less likely, even if we can’t fully prevent them.
I wrote this article to hopefully raise discussion about how we might do this – and ask why (in my view) we tend to focus on harms such as bias over harmful downstream use.
Many problems here.
Free (libre) software means access to source code, and ability to modify it, remix it, etc. It has always meant this. It’s not some Apple model where you need to ask pretty please to some central authority every time you change something or run it. It’s not going to change either. So to “police AI use”, or at least most of it, you need to end free software. “Safety vs freedom”, on a fundamental scale.
This also raises the very important question of: who is to define right from wrong? What you might consider abhorrent, might be someone else’s grey area, or even perfectly acceptable to some. You can’t solve this with a simple majority or some governing board either, there’s a reason philosophy isn’t computer science where you just download someone else’s solution and call it a day.
Not to mention regional differences. What are you going to do about Saudi Arabia, China, Israel, etc? Good luck telling them what they can and can’t do with software you think you made available “for ethical uses only”.
Take that example of deepfakes. You only need the code (already out there, and could be made again even if it was gone), a personal computer that obeys you as an individual, and the ability to communicate with other individuals. You cannot remove any of these without licking some form of totalitarian boot, and not many people find them tasty. And that’s just deepfakes.
You cannot “solve” ethics. It’s up to individuals. Always was, always will be. You can argue, and try to convince. You can ignore. You can devise fascistic machinations full of “good intentions”. But you cannot “solve” ethics.
Hey Camilo! I agree with a lot of what you say — but I think you’re looking for a total fix, and then finding that this is impossible or inadvisable, and I would perhaps agree with you. No one wants to lick no totalitarian boot 🙂
However, we can work in probabilities rather than certainties to prevent harmful use. For example, the deepfake tools built by the people I interviewed were still open source software, available to anyone, and thus able to be used for (what you or I might consider) harm. However, the creators recognized certain harms they wanted to dissuade, such as making non-consensual porn. While anyone can download their code and use it for porn, if they were discovered doing so they would be banned from the community forum and chat, and denied support. So this raised the “barrier to entry” to harm. It also set strong norms about what is or isn’t acceptable, and norms themselves guide behavior. The end result: less harmful non-consensual porn created, without contravening the dictates of Free Software.
As a psychologist practicing human factors, I find systems design to be the “thing” requiring ethical consideration. AI is just one facet of any system of systems, and while AI helps most socio-technical systems these days, those systems as a whole are what have the ethical impact, as you seem to be saying.
For example, if I consider the OODA Loop (Boyd) as a model for decision systems, AI impacts the decision step, and partially observation and orientation. As AI is developed to produce the action step, there must, and always will, be a human in that loop at some point, no matter how lightly the human, as the consumer of that system of systems, is designed into the loop via function allocation (Fitts, Sheridan, Parasuraman, and others).
In large systems worthy of AI solutions, there are more than likely multiple producers creating, and multiple consumers interacting with, the system of systems containing (usually) multiple AIs. It is the responsibility of all who interact with the system of systems to take rightful action when matters of ethics arise (e.g., IEEE P7000 systems design, P7010 wellness and wellbeing).
Google engineers protesting the use of image recognition for drone strikes should also consider the needs of the drone watchstanders who need to operate those systems. If they are not directly involved in the design of the systems those watchstanders use (the human in that loop), then that to me is the ethical challenge. And they too have an ethical obligation to investigate claims of sentience, not because an engineer claims it, but because that engineer perceives it and is impacted by it.