
Ethical AI isn’t just how you build it, it’s how you use it

What good is removing biases from a robot that turns grandmothers into smoothies?


Lapses such as racially biased facial recognition or apparently sexist credit card approval algorithms have thankfully left companies asking how to build AI ethically. Many companies have released “ethical AI” guidelines, such as Microsoft’s Responsible AI principles, which require that AI systems be fair, inclusive, reliable and safe, transparent, respectful of privacy and security, and accountable. These principles are laudable and will help prevent harms like those listed above. But often, they aren’t enough.

When the harm is inherent in how the algorithm is used

Harm can result from what a system is used for, not from unfairness, black-boxyness, or other implementation details. Consider an autonomous Uber: if it recognizes people using wheelchairs less accurately than people walking, this can be fixed by using training data that reflects the many ways people traverse a city, building a fairer system.

But even with this inequity removed, some people may believe the intended use of the algorithm will cause harm: the system is designed to drive cars automatically, but will displace already precarious rideshare drivers, who will feel this as a use-based harm that no amount of technical implementation fixes can remedy.

A darkly hilarious academic article analyzed a hypothetical AI system designed to mulch elderly people into nutritious milkshakes. The authors corrected racial and gender unfairness in how the system chose whom to mulch, provided a mechanism for designated mulchees to hold the system accountable for errors in computing their status, and provided transparency into how the mulching designation algorithm works. This satirical example makes the problem clear: a system can be fair, accountable, and transparent, yet still patently unethical because of how it is used. Nobody wants to drink their grandma.

For a non-hypothetical example I’ve studied, consider deepfake algorithms, an AI technology often used for harm. Nearly all deepfakes online are pornography made without the consent of the overwhelmingly female victims they depict. While we could make sure that the generative adversarial network used to create deepfakes performs equally well on different skin types or genders, these fairness fixes mean little when harm is inherent in how the algorithm is used: to create nonconsensual pornography that leads to job loss, anxiety, and illness.

“Building it Better” is seen as good, but “policing” use is not

These kinds of use-based harms are rarely caught by AI ethics guidelines, which usually focus on how systems are built, not how they are used. This seems like a major oversight.

Why the focus on how AI is implemented, rather than how it is used?

It may be because of how these guidelines are used: in my experience, ethical AI principles are often used to guide and review systems as software engineers build them, long after the decision to build the system for a certain client or use has been made by someone high up the management chain. Avoiding use-based harm sometimes requires refusing to work with a certain client or not building the system at all. But ethical AI “owners” in companies often don’t have this power, and even when they do, suggesting that a piece of software not be built or sold can be socially difficult in a company that builds and sells software.

Ethical AI guidelines may also be deliberately designed to draw attention away from whether companies ought to build and sell a system at all, toward the narrower question of how it is built. Researchers analyzed seven sets of ethical AI principles published by tech companies and connected groups and found that “business decisions are never positioned as needing the same level of scrutiny as design decisions,” suggesting that profit motives may encourage scrutiny of how to build a system rather than of broader business decisions such as whether to sell it, and to whom. It makes sense that companies’ ethical AI guidelines focus on how their software is built rather than how it is used: focusing on the latter would restrict who they can sell to.

But even without profit motives, the Free Software movement guarantees the “freedom to run the program as you wish, for any purpose,” even for harm, and open source licenses may not curtail how the software is used. My own research shows that open source contributors draw on ideas from free and open source licenses to similarly disclaim accountability for harm caused by the software they help build: they just provide a neutral tool; ethics is up to their often unknown users.

Software workers need a say in downstream use

But there are important signs of resistance to a narrow framing of ethical AI that ignores use.

Tech workers are organizing not just to improve their own working conditions, but also to demand a say in how the tech they create is used. The union-associated Tech Workers Coalition demands that “Workers should have a meaningful say in business decisions … This means that workers should have the protected right to … raise concerns about products, initiatives, features, or their intended use that is, in their considered view, unethical.” Google workers protested Project Maven because it was to be used to aid drone strike targeting for the US military. They were demanding that the fruits of their labor not be used to wage war. They weren’t protesting a biased drone strike targeting algorithm.


From the open source community comes the Ethical Source movement, seeking to give developers “the freedom and agency to ensure that our work is being used for social good and in service of human rights” by using licenses to prohibit uses that project contributors see as unethical.

What can software engineers do?

As we wrestle with the ethics of building ever more powerful systems, we must increasingly assert agency to prevent harms that result from how users may use the systems we build. But organizing a union or questioning decades of free software ideology is a lengthy process, and AI is used for harm now. What can we do today?

The good news is that how a system is built affects how it is used. Software engineers often have latitude to decide how to build a system, and these design decisions can be used to make harmful downstream use less likely, even if not impossible. While guns can be tossed around like frisbees, and you might even be able to use a frisbee to kill someone if you tried hard enough, engineers made design decisions to make guns more lethal and frisbees (thankfully) less lethal. Technical restrictions built into software might detect and automatically prevent certain uses, making harmful use harder and less frequent, even if a determined or skilled user could still circumvent them.
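To make this concrete, here is a minimal sketch, in Python with entirely hypothetical names, of what such a built-in restriction might look like: a generation request is refused unless everyone it depicts has a recorded consent. A determined user could strip a check like this out, but building it in makes harmful use harder and makes the intended use explicit.

```python
# A minimal sketch of a use-restriction check. All names here are hypothetical,
# not from any real library: the point is the design decision, not the API.

from dataclasses import dataclass, field


@dataclass
class GenerationRequest:
    prompt: str
    depicted_people: list[str] = field(default_factory=list)  # people recognizable in the output
    consent_records: set[str] = field(default_factory=set)    # people who have granted consent


def check_use_policy(request: GenerationRequest) -> None:
    """Refuse generation when any depicted person lacks a consent record."""
    missing = [p for p in request.depicted_people if p not in request.consent_records]
    if missing:
        raise PermissionError(
            f"Refusing to generate: no consent recorded for {', '.join(missing)}"
        )


if __name__ == "__main__":
    request = GenerationRequest(
        prompt="photorealistic portrait",
        depicted_people=["Alice"],
        consent_records=set(),  # Alice has not consented
    )
    try:
        check_use_policy(request)
    except PermissionError as err:
        print(err)  # the request is blocked before any model runs
```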

We can also vote with our feet: software engineers are in high demand and command high salaries. As many are already doing, we can ask questions about how the systems we’re asked to build might be used, and if our concerns go unaddressed, we can find employment elsewhere, often with little if any gap in employment or cut in pay.

Author’s Note: Please fill out this 10-minute survey to help us understand the ethics concerns that software developers encounter in their work! (Ed. note: Survey is not affiliated with Stack Overflow.)

– – –

David Gray Widder is a PhD Student in Software Engineering at Carnegie Mellon, and has studied challenges software engineers face related to trust and ethics in AI at NASA, Microsoft Research, and Intel Labs. You can follow his work or share what you thought about this article on Twitter at @davidthewid.
