How to Build Ethics into AI - Part I
Research-based recommendations to keep humanity in AI

By Kathy Baxter
[Image: Robot hand and human hand forming a heart. Source: “Heart Shaped” by dimdimich.]

Published: March 27, 2018

This is part one of a two-part series about how to build ethics into AI. Part one focuses on cultivating an ethical culture in your company and team, as well as being transparent within your company and externally. Part two focuses on mechanisms for removing exclusion from your data and algorithms. Each of the recommendations includes examples of ethical missteps and how they might have been prevented or mitigated.


It seems like each day there are articles about how an artificial intelligence (AI) system caused offense (e.g., labeling African Americans as “gorillas”) or actual harm when the intent may have been well-meaning (e.g., racial bias in criminal sentencing recommendations and interest rates).

The developers of these systems did not set out to offend or harm anyone, and they did not anticipate the negative outcomes. But should they have? If you are designing and building an AI system, can you build in ethics? Regardless of your role in an organization, can you help ensure that your AI system leads to a more just society rather than perpetuating societal biases? The answer to all of these questions is, “Yes!”

Do Well and Do Good

Salesforce’s CEO, Marc Benioff, has said, “My goals for the company are to do well and do good.” This is at the heart of our core values of trust, equality, and innovation. We strongly believe that we can be at the forefront of innovation, be successful, and be a force for good in the world. We work internally on building ethics into Einstein (our AI system) and collaborate with other members of the Partnership on AI.

Embedding ethics in your AI system takes time and may require you to work differently from the way you or your company has always worked. However, given the great potential for both harm and benefit with AI, it is critical that you make the investment!

An End-to-End Approach

The process for building ethics into your system can be broken into three stages with several steps within each:

  1. Create an ethical culture
  2. Be transparent
  3. Remove exclusion

Create an Ethical Culture

If you don’t build a strong foundation from the start, the effort required to succeed later will only be greater. This involves building a diverse team, cultivating an ethical mindset, and conducting a social systems analysis.

Build a Diverse Team

Recruit for a diversity of backgrounds and experiences to avoid bias and feature gaps.

When Apple’s HealthKit came out in 2014, it could track your blood alcohol content, but it could not track menstrual cycles, the most frequent health issue most women deal with every month.

Research shows (1, 2, 3, 4, 5, 6) that teams diverse in experience, race, and gender are more creative, diligent, and harder-working. Including more women at all levels, especially in top management, results in higher profits.

Lack of diversity creates an echo chamber and results in biased products and feature gaps. If the team developing Apple’s HealthKit had included more (any?) women, they likely would have caught the feature that was glaringly absent for 50% of the population. This example points to a lack of gender diversity, but all types of diversity are needed, from age and race to culture and education.

If you are unable to hire new members to build a more diverse team, seek out feedback from diverse employees in the company and your user base.

Cultivate an Ethical Mindset

Ethics is a mindset, not a checklist. Empower employees to do the right thing.

Uber’s Chief Executive Officer credits whistleblowers with forcing the company to make changes and “go forward as a company that does the right thing.”

Simply having a Chief Ethics Officer doesn’t prevent companies from making ethical missteps. That is because no one individual can or should be responsible for a company acting ethically. There must be an ethical mindset throughout the company.

Individual employees must be able to empathize with everyone that their AI system impacts. Companies can cultivate an ethical mindset through courses, in-house support groups, and equality audits.

Additionally, employees should feel empowered to constantly challenge each other by asking, “Is this the right thing to do?” In product reviews and daily stand-ups, people should ask ethical questions specific to their domains. For example:

  • Product Managers: “What is the business impact of a false positive or false negative in our algorithm?” (See the sketch after this list.)
  • Researchers: “Who will be impacted by our system and how? How might this be abused? How will people try to break the product or use it in unintended ways?”
  • Designers: “What defaults or assumptions am I building into the product? Am I designing this for transparency and equality?”
  • Data scientists and modelers: “By optimizing my model this way, what implications am I creating for those impacted?”
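
To make the product manager’s question concrete, one lightweight exercise is to attach business costs to each kind of error and compare candidate models on expected cost rather than raw accuracy. This is only a sketch: the counts and dollar figures below are made up for illustration, and the point is that the “better” model depends on which mistake is more harmful and to whom.

```python
# Sketch: weighing false positives against false negatives by (hypothetical) business cost.
# All counts and dollar figures below are illustrative only.

def expected_error_cost(fp: int, fn: int, cost_fp: float, cost_fn: float) -> float:
    """Total cost of a model's mistakes, given a per-error cost for each error type."""
    return fp * cost_fp + fn * cost_fn

# Example: two candidate fraud models evaluated on the same 10,000 transactions.
# A false positive blocks a legitimate customer; a false negative lets fraud through.
model_a = expected_error_cost(fp=420, fn=20, cost_fp=15, cost_fn=500)  # flags aggressively
model_b = expected_error_cost(fp=90, fn=50, cost_fp=15, cost_fn=500)   # flags conservatively

print(f"Model A expected cost: ${model_a:,.0f}")
print(f"Model B expected cost: ${model_b:,.0f}")
```

The same exercise works for non-monetary harms; the discussion about who actually bears each cost is often more valuable than the final number.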

When employees are dissatisfied with the answers they receive, there needs to be a mechanism for raising and resolving those concerns.


Conduct a Social Systems Analysis

Involve stakeholders at every stage of the product development lifecycle to correct for the impact of systemic social inequalities in AI data.

The Chicago police department used an AI-driven predictive policing program to identify people at the highest risk of being involved in gun violence. This program was found to be ineffective at reducing crime but resulted in certain individuals being targeted for arrest.

Social-systems analysis is the study of the groups and institutions that interact in an ecosystem. Rather than assuming that a system will be built, social systems analysis asks if the system should be built in the first place and then proceeds to design the system based on the needs and values of stakeholders. This can be done by conducting ethnography in the community impacted or getting feedback from an oversight committee or legal institution.

Referring to the example of Chicago’s predictive policing program, Kate Crawford & Ryan Calo suggest the following: “A social-systems approach would consider the social and political history of the data on which the heat maps are based. This might require consulting members of the community and weighing police data against this feedback, both positive and negative, about the neighborhood policing.”

Organizations must understand how their creations impact users and society as a whole. By understanding these impacts, they can determine who is most vulnerable to the system’s negative effects. From a statistical standpoint, there may be only a 1% chance of a false positive or false negative (excellent from a statistical perspective!), but for that 1% of the population, the result can be extremely harmful. Are the risks and rewards of the system being applied evenly to all? Who benefits and who pays based on the results of the AI? Asking these questions at every stage of the AI’s development, including pre- and post-launch, can help identify harmful bias and address it.
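
One way to make the “who benefits and who pays” question measurable is to report error rates separately for each affected group instead of a single overall number. The sketch below uses made-up group names and data; it shows how an overall false positive rate can look acceptable while one group absorbs nearly all of the mistakes.

```python
# Sketch: break a model's false positive rate out by group instead of reporting one number.
# Group names, labels, and predictions here are hypothetical.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label); 1 means "flagged as high risk"
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, label, pred in records:
    if label == 0:                      # person was not actually high risk
        stats[group]["negatives"] += 1
        if pred == 1:                   # ...but the model flagged them anyway
            stats[group]["fp"] += 1

for group, s in sorted(stats.items()):
    rate = s["fp"] / s["negatives"] if s["negatives"] else 0.0
    print(f"{group}: false positive rate = {rate:.0%}")
```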


Be Transparent

To be ethical, you need to be transparent to yourself, your users/customers, and society. This includes understanding your values, knowing who benefits and who pays, giving users control over their data, and taking feedback.

Understand Your Values

Examine the outcomes and tradeoffs of value-based decisions.

Some people fear that AI assistants like Siri and Google are always listening. These assistants are designed to guess what users want to know before they’re asked, providing extremely useful just-in-time information. However, this also raises concerns among privacy- and security-conscious users.

An individual’s or company’s values may come into conflict when making decisions, which results in compromises. For example, users love the convenience of personalized results but may be concerned about what a company knows about them (privacy) or what the company may choose not to disclose to them (discrimination). Unfortunately, AI assistants also turn out to be less useful for some people, since their training data appears to exclude African-American voices. When tradeoffs are made, they must be made explicit to everyone affected. This can be difficult if AI algorithms are “black boxes” that prevent even their creators from knowing exactly how decisions are made.

Constant examination of outcomes is required to understand the impact of those tradeoffs. Let’s say your company is designing an AI-enhanced security system that results in some loss of individual privacy. Consider the following:

  • If protecting user privacy is a stated company value, employees (not just the top execs) should be aware of this tradeoff.
  • Additionally, customers and the public should be informed as to how individual privacy is impacted by using the security system.
  • If this is hidden for fears of a PR backlash, then it must be asked, “Is user privacy really a company value?”

Explaining why the company made the tradeoff and what it is doing to mitigate harm can go a long way to keeping the public’s trust.

Give Users Control of Their Data

Allow users to correct or delete data you have collected about them.

Google’s goal is to make the world’s information “universally accessible and useful.” Since 2014, it has received 2.4 million “right to be forgotten” requests to remove information that private individuals, politicians, and government agencies find damaging. However, Google has complied with only 43.3% of those requests.

Companies can collect and track a stunning amount of data about their users online, in stores, and from internet-enabled (IoT) devices. It is only ethical to allow users to see what data you have collected about them, to correct it, or to download and delete it. If your company is operating in the EU, you need to be aware of the EU’s General Data Protection Regulation (GDPR) and how it impacts what you may collect and store, as well as its rules around allowing users and customers to download and delete their data.

In addition, make sure users can represent themselves accurately in the data you collect. For example, can users indicate their gender if they identify as non-binary? Do they have the option to select more than one racial background?
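
At the schema level, this can be as simple as making identity fields optional, self-described, and multi-select rather than forced single choices. A minimal sketch, with hypothetical field names:

```python
# Sketch: a profile schema that does not force a binary gender or a single racial background.
# Field names are hypothetical; adapt them to your own data model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserProfile:
    user_id: str
    gender: Optional[str] = None                                   # optional and self-described
    racial_backgrounds: list[str] = field(default_factory=list)   # multi-select, not single-choice

profile = UserProfile(
    user_id="u-123",
    gender="non-binary",
    racial_backgrounds=["Black", "Asian"],
)
print(profile)
```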

If the data collected are anonymized and it is not possible for users to see exactly what the company knows about them and edit it, clearly communicate what kinds of data are collected and enable individuals to opt out. If users can’t use the product without the data collection, communicate that as well.

Take Feedback

Allow users to give feedback about inferences the AI makes about them.

Three national credit bureaus gather information on individuals to create credit reports that lenders use to determine the risk of a potential borrower. Individuals cannot opt out of the data being collected and must go to onerous lengths to fix incorrect data or inferences about them.

Inferences drawn about an individual (e.g., high risk for loan default) can have harmful consequences without the individual’s knowledge or control (e.g., inability to get a loan). Unfortunately, those suffering most at the hands of AI and “big data” are the already marginalized, poorer, voiceless communities (e.g., those without internet access who cannot quickly see their credit report or file requests for correction).

EU law requires that AI decisions with serious consequences be checked by a human who has the option to override them; however, a single data point in isolation is meaningless without understanding the decisions made about others (e.g., is the loan approval recommendation different for black vs. white customers despite all other factors being similar?). It is important to understand AI recommendations or predictions in context.
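
A lightweight way to put that comparison into practice is to compare the model’s approval rate across a protected attribute for applicants who look similar on the legitimate factors. The applicant records and field names below are hypothetical; a gap is a signal to investigate further, not proof of discrimination on its own.

```python
# Sketch: compare approval rates across a protected attribute for otherwise-similar applicants.
# The records and fields are illustrative only.
applicants = [
    # (race, credit_score, income, approved_by_model)
    ("black", 710, 62_000, False),
    ("black", 715, 64_000, True),
    ("white", 705, 60_000, True),
    ("white", 712, 63_000, True),
]

def approval_rate(group: str) -> float:
    rows = [a for a in applicants if a[0] == group]
    return sum(a[3] for a in rows) / len(rows)

for group in ("black", "white"):
    print(f"{group}: approval rate = {approval_rate(group):.0%}")
```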

Being transparent about the inferences and allowing individuals to give feedback not only enables you to improve the accuracy of your model, but it also allows you to correct for discrimination. This can be an advantage over competitors that unfairly dismiss viable customers. For example, a bank that rejects a large number of loan applicants as too high risk might identify micro-loans as an alternative offering that not only supports the community but also builds a loyal customer base that the bank’s competitors have ignored. This enables those customers to improve their financial standing and leverage more of the bank’s offerings, which results in a virtuous cycle.


It Takes a Village to Make a Difference

From cultivating an ethical culture to being transparent about a company’s values and empowering its customers, there are multiple actions a company and its employees should take to create an ethical foundation to build AI products on. To dig into ways to remove exclusion in your AI-based products, check out Part II.

I would love to hear what you think! What do you and your company do to create an ethical foundation in your work?


Thank you Justin Tauber, Liz Balsam, Molly Mahar, and Raymon Sutedjo-The for all of your feedback!