Although ethics has been part of computer science for decades, the focus has traditionally been on how data is stored and secured in systems of record. Whilst those needs still exist in AI systems, we also need to factor in wider ethical influences as we build the future.

Today, we’re evolving toward systems of engagement, where end users interact directly with AI systems in their day-to-day lives.

When we make decisions that directly affect end users, we need to align those decisions with the moral constructs we live by.

To do so, we must re-evaluate how we look at ethics in computer science—we’re no longer affecting just a person's data, but also the person themself. In fact, according to CloudFactory’s recent Ethics in AI survey of IT professionals in the U.K., 69% of IT leaders say that they encounter ethical issues often or all the time.1 By acting early and decisively, we can avoid the dystopian futures laid out in science fiction novels, where our lives are dictated by computers and humans have no social mobility.

Illustration by boscorelli: a humanoid against a background of binary numbers.

Ethics in supervised and unsupervised learning

Ethics is based on three main principles: truth, rightness, and sincerity. If people are truthful to the best of their knowledge, act in line with the norms of the world in which they exist, and take your personal world into account, you would say they are acting ethically. But as soon as one of those elements fails to hold, they immediately flip to being unethical.

To put it into context, imagine you’re lost and ask someone for directions. You ask a person who appears to care about your personal situation; they’re courteous to you when they find out you’re lost. Then, unbeknownst to you, they deliberately give you the wrong directions. You’d say that person is unethical: they’re breaking the trust that people innately give to one another. The same is true of the systems we use.

We can train an AI system to provide information in two ways: supervised and unsupervised learning. With supervised learning, we partially de-risk ethical issues by creating checkpoints at which humans validate the outputs of the system. That doesn’t mean the system is perfect, or even ethical, though.
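
To make the idea of a checkpoint concrete, here’s a minimal sketch of what sampled human validation might look like in Python; model_label and human_validate are hypothetical stand-ins for your classifier and annotation workflow, not any particular API.

```python
import random

SAMPLE_RATE = 0.10  # fraction of outputs routed to a human validator

def model_label(item: str) -> str:
    """Hypothetical model call; a real system would invoke your classifier."""
    return "safe"

def human_validate(item: str, label: str) -> bool:
    """Hypothetical annotation step; returns True if the label is correct."""
    return True

def checkpointed_labels(items: list[str]) -> list[tuple[str, str]]:
    """Label a batch, sampling a fraction of outputs for human validation."""
    accepted, rejected = [], 0
    for item in items:
        label = model_label(item)
        # The checkpoint: a sampled human review before the label is trusted.
        if random.random() < SAMPLE_RATE and not human_validate(item, label):
            rejected += 1
            continue  # rejected labels never re-enter the pipeline
        accepted.append((item, label))
    if rejected:
        print(f"{rejected} labels rejected -- review the model before retraining")
    return accepted

print(checkpointed_labels(["first input", "second input"]))
```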

With unsupervised learning, we can find ourselves self-reinforcing unethical logic. In the early days of unsupervised learning, there were a few now-quite-notorious examples of how systems using these methods got out of hand.

One of the most prominent, and public, of those systems was Tay, a Twitter bot produced by Microsoft. It learned through its interactions with other platform users and quickly spiraled into spewing misogynistic, anti-Semitic, and hateful content on the platform.

Photo by Profit_Image: the Tay Twitter bot.

Microsoft wasn’t alone in this, though. IBM’s Watson had similar issues after ingesting Urban Dictionary; it had to be muzzled to curb the same kind of unethical behavior.

This doesn’t mean supervised learning is “better” than unsupervised learning. Other elements beyond training also play into the ethics of a system (or person), such as the source of the data, how we check whether the system (or we!) made the wrong decision, and how we set goals for systems or ourselves.

Goalsetting in ethically designed AI systems

To tackle the first challenge of successfully applying ethics to an AI system, we need to think about goalsetting. At its most fundamental level, goalsetting means maximizing a goal within a set of defined parameters.

Parameters might be real-world constraints, resources, or time, for example. Or they could be business-related, such as customer satisfaction, profitability, or cost. Without limiting parameters, we’re designing a utopian system, which in practice provides no useful value at all.
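
As a concrete illustration, here’s a toy Python sketch of goalsetting as constrained maximization; the options, scores, and limits are invented purely for the example.

```python
# A toy sketch of goalsetting as constrained maximization. The options,
# scores, and limits below are invented purely for illustration.

options = [
    # (name, customer_satisfaction, cost_gbp, delivery_days)
    ("fast-but-costly", 0.90, 120_000, 30),
    ("balanced",        0.80,  70_000, 60),
    ("cheap-but-slow",  0.65,  40_000, 120),
]

MAX_COST = 100_000  # business parameter: budget
MAX_DAYS = 90       # real-world parameter: time

# Maximize the goal (satisfaction), but only among options that respect
# the parameters. Remove the two checks and "fast-but-costly" always
# wins -- the utopian answer that provides no useful value.
feasible = [o for o in options if o[2] <= MAX_COST and o[3] <= MAX_DAYS]
best = max(feasible, key=lambda o: o[1])
print(best[0])  # -> "balanced"
```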

Whilst we may not realize it, every time we humans make a decision, we’re aiming to maximize a goal based on the limitations in front of us. Most of these decisions we don’t make consciously; others, we do. It’s also worth noting that we don’t always manage to achieve the “best” outcome; sometimes we fail spectacularly even when we try. There have been far too many meetings where I’ve tried my best to arrive on time, only to be foiled by public transport around London!

Photo by Shooting Star Studio: a man looking at a digital timetable for public transport.

Avoiding bias while training AI models

The second challenge with training AI models is that it’s incredibly easy to bias them. One of the principles of ethics is rightness: acting in line with the norms of the world. When you train a model, you’re defining the world it operates in, and if that world is not an accurate representation of reality, the model’s outputs will seem odd, such as recommending that you save more for retirement than you earn in a year. Outputs can also be unethical. Consider a hypothetical scenario in which you want to make sure no human is unhappy: one seemingly easy, but unethical, solution is to have no humans!

To relate back to humans and how we perceive ethics, consider cases where highly isolated communities have been exposed to the modern world. The new world often deems their practices immoral simply because they’re different. We want to avoid this happening when a newly released AI system interacts with the world, and with users, for the first time.

To make things even more challenging, sometimes bias is actually good for a system, such as when we don’t want to replicate the status quo. There have been cases in banking where certain minorities have had challenges accessing credit; building a system based on historical data is only going to replicate that issue.
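
Before training on historical data, it’s worth auditing it for exactly this kind of skew. Here’s a minimal Python sketch using invented records; the “80% rule” in the comments is one common heuristic borrowed from U.S. employment law, not a universal standard.

```python
from collections import defaultdict

# A minimal sketch of auditing historical lending data before training on it.
# The records below are invented; real data would have many more fields.

historical_loans = [
    # (group, approved)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in historical_loans:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
print(rates)  # {'A': 0.75, 'B': 0.25} -- a model trained on this replicates the gap

# One widely cited (and debated) heuristic is the "80% rule": the lowest
# group's approval rate should be at least 80% of the highest group's.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.80
```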

Meta (formerly Facebook) and ethics

You might think we have years to sort out the issue of ethics and AI but, sadly, we do not. We’re facing real issues today, especially inside the big tech companies. It’s not all bad though; big tech has done some great things with AI and machine learning. For example, Meta, formerly Facebook, has been actively monitoring posts and assisting users with suicide prevention. Whilst they don’t publish statistics, any life Meta saves through this type of monitoring is valuable; we should applaud it.

This also doesn’t mean Meta is perfect. They are, in fact, far from it. As I’m sure you’re aware, Meta makes money by delivering ads to users. Meta users have become the product they’re selling. If you’re using Meta as a business, you may want to pay them to deliver ads to their user base. Meta’s Business Help Center is the main resource for advertiser information. They define the goal of their machine learning ads-matching system like this:

“...[to help] each advertiser meet their goals at the lowest cost, the delivery system uses machine learning to improve each ad's performance.”
Photo by RoBird: a person checking phone notifications.

If you take their words at face value, it seems that Meta is trying to provide the best value for their customers—organizations that want to advertise on the platform. But the reality may well be different. There have been reports that this goal may actually be causing a rise in the hateful and divisive advertising shown across the platform. As you would expect, Meta has denied this. Without further information, we can’t say who, if anyone, is at fault. But it does serve as a cautionary tale that even large companies experienced with AI run the risk of creating systems that don’t always perform ethically.

Seek guidance when designing ethical AI systems

There are no silver bullets for designing an ethical AI system. In fact, our survey revealed that 64% of IT leaders think AI systems will be fundamentally biased.2 Governments, enterprises, and not-for-profits all provide guidance you can use as a basis for designing an ethical AI system.

My recommendation is to start with a group like the Alan Turing Institute, which has made a good start teaching the world how to build ethical AI systems with Project ExplAIn.

But no guidance is perfect, especially when you’re working across borders. Whilst the developed world broadly shares the same ethical values, nuanced differences still exist between countries. Even in Europe, norms can change within a few hundred miles, as with France, Benelux, and Germany, which weigh shared values differently and, in some cases, hold different values altogether. That’s why it’s important to keep your system’s audience in mind. A single ubiquitous system and model across borders may not be suitable; you may instead want to train separate models to suit local customs and ethics.

Conclusion: Steps to building a successful, ethical AI system

Photo by metamorworks: an ethical AI system.

What’s the best course of action if you’re designing or overseeing the design of an AI system? 

The first step is to look into ethical frameworks you can use to help guide you. Consider Project ExplAIn, Google’s Responsible AI framework, or the EU’s proposed legal framework for AI development. By aligning to a framework, you’ll have a process and structure to follow, as well as a method of accountability for the model.

The next step is to think critically about goalsetting. Examine the strictest way to apply your goals while stepping outside the social constraints you normally work within. This may feel uncomfortable, as you may have to go against what you think the “right” answer is. For example, setting a goal to maximize loan repayability might discriminate against minorities.

The final, and possibly most challenging, step is to realize that failure is inevitable. It’s not about if you fail, but when—and how you react when you do. Allowing for human override and committing to continual development are key to building a successful and ethical AI system.
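
As one example of what a human-override path might look like in practice, here’s a minimal Python sketch; the threshold and function names are assumptions for illustration, not a prescription.

```python
# A minimal sketch of a human-override path for a deployed model. All the
# function names here are hypothetical stand-ins, not a real API.

OVERRIDE_THRESHOLD = 0.80  # below this confidence, a person decides

def model_decision(application: dict) -> tuple[str, float]:
    """Hypothetical model call returning (decision, confidence)."""
    return ("deny", 0.55)

def human_decision(application: dict) -> str:
    """Hypothetical escalation queue handled by a trained reviewer."""
    return "approve"

def decide(application: dict, user_disputed: bool = False) -> str:
    decision, confidence = model_decision(application)
    # Uncertain or disputed decisions always leave room for a human to override.
    if user_disputed or confidence < OVERRIDE_THRESHOLD:
        decision = human_decision(application)
    return decision

print(decide({"amount_gbp": 10_000}))  # low confidence -> routed to a person
```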

In my next post in this Ethics of AI series, I’ll discuss the ethical sourcing of training data, a hot-button topic that delves into copyright and fair use, identifiable and unidentifiable people, data obfuscation, edge cases, and over-representation.

If you’d like to learn more about creating an ethical AI system, and how CloudFactory can help with those efforts, please contact us.

1 CloudFactory’s Ethics in AI Survey, with 150 respondents, was conducted by Arlington Research in March 2022.
2 CloudFactory Ethics in AI Survey.
