Most people strive to be good people. Humans follow an ethical system of principles that helps us determine right from wrong. Ethics allow us to appropriately deal with dilemmas by eliminating behaviors that do not conform to our sense of right and wrong.

In our latest webinar, Ethically Designed AI Systems, Solutions Consultant Matt Beale discusses how ethical principles apply to AI and emphasizes the real challenge and effort required to do things ethically.

Matt discusses several topics during the webinar including:

  • The three main principles of ethics - truth, correctness, and sincerity
  • The widespread belief that AI is biased
  • The importance of ethical AI in the success of a company
  • How a lack of ethical AI can harm a company
  • The four steps needed to ensure ethical AI

If you don't have time to watch the entire webinar today, this blog post jumps ahead to what Matt says are the four steps to ethical AI:

Step 1: Be aware at the start

Kick off your projects by creating a responsible AI framework and follow it. The Data Ethics Canvas is a great place to begin, but there are many other frameworks and guides to get you started.

Not only is having this framework in place the “right” thing to do, but there are also consequences for failing to do so, such as GDPR fines, with further legislation proposed worldwide.

Step 2: Diversify data

Consider building your own datasets rather than simply taking the most convenient route, which can introduce bias. Most open-source data, as well as data sourced via social media or search engines, is heavily Western-focused and does not represent the world’s population.

Collecting data “in the wild” is always best where possible, and data curation and classification are vital steps in preventing bias. The best-case scenario is having a diverse team with unique backgrounds to help you create a holistic view and minimize unconscious bias.
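As part of data curation, one concrete check is auditing how well each group is represented in your dataset. A minimal sketch of such an audit is below; the `region` field and the 10% floor are illustrative assumptions, and a real audit should compare shares against the population the model will actually serve.

```python
from collections import Counter

def representation_report(records, field, floor=0.10):
    """Compute each group's share of the dataset for a given field and
    flag groups whose share falls below a minimum floor.
    Returns {group: (share, underrepresented?)}."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {group: (n / total, n / total < floor)
            for group, n in counts.items()}

# Toy dataset skewed toward one region (hypothetical field and values).
data = ([{"region": "NA"}] * 80
        + [{"region": "EU"}] * 15
        + [{"region": "APAC"}] * 5)
report = representation_report(data, "region")
print(report)  # APAC falls below the 10% floor and is flagged
```

A report like this only surfaces skew; deciding which groups must be represented, and at what level, is exactly where a diverse team and a framework like the Data Ethics Canvas help.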

Take a deeper dive into ethical data-sourcing practices.

Step 3: Have an annotation strategy

Historically, more data has always been considered better when it comes to machine learning. While that is true to a point, more bad data will not make your models magically better or remove bias. Knowing what extra data you need to improve your model is incredibly helpful, and active learning can be a hugely beneficial way to do this.

Active learning is when an algorithm identifies the examples in your dataset that the model is most uncertain about and pulls those forward to be annotated next, reducing the amount of data needed by up to 80%.
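To make the idea concrete, here is a minimal sketch of one common active learning strategy, least-confidence sampling: rank unlabeled examples by the model's confidence in its top prediction and send the least confident ones for annotation first. The function name and toy probabilities are illustrative, not part of any particular product.

```python
import numpy as np

def select_for_annotation(probabilities, budget):
    """Least-confidence sampling: rank unlabeled examples by the
    model's confidence in its most likely class and return the
    indices of the `budget` least confident examples."""
    # Confidence = probability assigned to the top class per example.
    confidence = probabilities.max(axis=1)
    # Ascending sort puts the most "confused" examples first.
    return np.argsort(confidence)[:budget]

# Toy predicted class probabilities for five unlabeled examples.
probs = np.array([
    [0.98, 0.02],   # confident
    [0.55, 0.45],   # confused -> annotate next
    [0.90, 0.10],
    [0.51, 0.49],   # confused -> annotate next
    [0.85, 0.15],
])
print(select_for_annotation(probs, budget=2))  # -> [3 1]
```

In practice the selected examples are labeled, the model is retrained, and the loop repeats, which is how the overall annotation volume shrinks.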

Some labeling solutions, like our Accelerated Annotation product, have active learning technology built in.

Step 4: Handle exceptions with humans in the loop

The most valuable data you have for machine learning is the data your model doesn’t understand. For example, when you have a result with a low confidence score, humans in the loop can manage those results remarkably well. How humans tackle these edge cases, exceptions, and errors is what takes machine learning from only knowing about typical historical cases to solving new challenges and overcoming the historical biases that exist within datasets. In addition, humans in the loop provide labels that improve the model in future training datasets.
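This routing pattern can be sketched in a few lines: accept high-confidence predictions automatically and escalate the rest to a human reviewer, whose corrected label is also captured for future training. The 0.8 threshold and the function name are illustrative assumptions, not a recommendation.

```python
def route_prediction(label, confidence, threshold=0.8):
    """Route a single model prediction: accept it when confidence is
    at or above the threshold, otherwise escalate to human review.
    The 0.8 threshold is an illustrative choice only."""
    if confidence >= threshold:
        return {"label": label, "source": "model"}
    # Low confidence: a human supplies the label; in a real pipeline
    # the corrected example is also queued for the next training run.
    return {"label": None, "source": "human_review"}

print(route_prediction("cat", 0.95))  # accepted automatically
print(route_prediction("cat", 0.42))  # escalated to a human
```

The escalated cases are exactly the edge cases Matt describes, and feeding their human-provided labels back into training is what lets the model improve beyond its historical data.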

The challenge is that the world does not and will not stay still. You could build a perfect model today, but it needs to be updated tomorrow. Humans in the loop also assist in the model development process by keeping the model up to date, feeding new, accurate annotations into the model. If you don’t keep models up to date, you leave yourself open to bias in your outcomes.

Read more about AI ethics in Matt's blog series and watch the full webinar here.

Watch Ethically Designed AI Systems: How to Take Ethics Beyond Algorithms Webinar

