Published: July 10, 2019
As a discipline, those of us working on ethical or responsible AI are learning together how to translate ethical principles into business practices that work for each of our organizations. The result is a shared community of practice. In this blog post, we share early lessons learned and some best practices for implementing an ethical AI practice that other practitioners can consider for their organizations, because we know that a rising tide lifts all boats.
At the beginning of this year, KPMG published a report naming “AI Ethicist” as one of the top five AI hires companies need to succeed in 2019. Similarly, IEEE recommended in their Ethically Aligned Design Guidelines for Autonomous and Intelligent Systems v2 that “companies need to create roles for senior-level marketers, ethicists, or lawyers who can pragmatically implement ethically aligned design, both in the technology and the social processes to support value-based system innovation.” This isn’t surprising given the vast media coverage of the harm caused by technologies, from highly inaccurate facial recognition to AI unfairly cutting access to healthcare. In fact, the number of news articles featuring “AI” or “artificial intelligence” and “ethics” has seen a sharp increase since the end of 2016, as charted by CB Insights.
There has also been an explosion in the number of AI conferences with a track dedicated to ethics, as well as entire conferences and summits focused on ethics in AI.
It is truly encouraging to see so much focus on issues of ethics in AI! Although the topic of AI ethics is increasingly a focus in the media, ethics in business and design is not a new topic. In fact, Doteveryone has a 37-page document of ethical resources, from frameworks and principles to oaths and checklists, that have been created over many years. The sheer number of tools can be overwhelming. Which one should a company choose if it wants to work in a more responsible or ethical manner? There are no reported metrics to help organizations understand how to begin charting a new ethical path forward.
With this in mind, I conducted a survey and workshop in collaboration with Danielle Cass, Director of Ethical AI at Workday. Our goals were to understand what has been working for practitioners implementing ethical AI practices and which challenges they are still trying to solve.
We began in February of 2019 by sending a survey to everyone in our networks working on ethical or responsible technology and asked them to forward it to their networks. Partnership on AI (PAI) also forwarded the survey to all of their members.
On February 28th, we conducted a one-day workshop in San Francisco with 23 of the respondents from our survey. Participants shared how they work, how they measure their success, what has worked or not, and lessons learned. We ended the day with an affinity diagram exercise to find common patterns across the group.
The results shared here are not meant to be an exhaustive representation of the entire ethical AI or responsible tech community. This is a convenience sample of those Danielle, PAI, and I could reach, as well as those who could attend our workshop in San Francisco. However, we believe the results provide insight into trends practitioners are experiencing and offer suggestions for how we might all learn from one another to continue growing our practice.
After one month, we had 110 responses to our survey from practitioners, consultants, researchers, and academics in the US and Europe. Their roles ranged from individual contributors and founders to managers, VPs, and C-level executives. Their expertise varied greatly, including government policy, ethics, machine learning/AI, computer science, human-computer interaction, human rights, philosophy, engineering, linguistics, and more!
Just over a third (34%) of respondents worked in high tech while the rest worked in non-profits (17%), professional services (13%), education (11%), media and communications (3%), and more.
Respondents and workshop participants identified four key areas of success in implementing ethical practices: providing access to experts, leveraging existing processes, providing context-specific educational resources, and finding sponsors and supporters of your work.
Some organizations have created Office Hours and Review Committees where employees can seek expert advice and feedback. One tip is to provide an intake form for employees to complete when they sign up. It allows the organizer to prepare in advance and bring in other relevant experts from across the company to make the most of the time together. It also pushes the individual seeking feedback to prepare before the meeting. In addition to helping with the issue of scale, these forums create a collaborative environment for discussion.
In the words of a workshop participant:
“We are doing customized analysis on-the-fly. We can share documentation or tools or point them to specific experts – 'here is a quick and dirty test you can do to see if you might have a serious problem.'”
Before creating new processes in your company, examine existing ones that you can build on. For example, add ethical questions from one of the many existing checklists to your product development process. If your company already teaches employees classes on machine learning or other technology, add a unit on the ethical creation of tech. As mentioned at the beginning, there are many existing resources you can draw from.
In the words of a workshop participant:
“Ethics lives in the small engineering decisions. It’s explaining how the principles talk to each other and how you reach tradeoffs and balances.”
Context is key! Generic case studies, principles, or guidelines may be interesting, but they don’t help individuals understand how to apply them in their daily work. Meet with different business units or departments and brainstorm together how they might apply principles or guidelines in their work. Co-create case studies and documentation for greater buy-in.
In the words of a workshop participant:
“By getting them to think outside the box, we start generating ideas about what would be a more technology-oriented solution and how having certain people in the room during a pre-mortem can avoid many problems later.”
To be successful, you need both top-down and bottom-up support, which means you must create a culture of empowerment and agency at every level and in every function. You also need to find sponsors of your work. These are individuals in leadership positions who will loop you into critical conversations and decision making, as well as provide access to the resources you need to be successful. It may be for only one business unit or team at first, but success is built one win at a time.
In the words of a workshop participant:
“My manager is my greatest advocate, adding me into emails or meetings I would never know about or have access to otherwise. That visibility increases my impact and success that I wouldn’t get otherwise.”
You also need supporters. These are the individual contributors who actually do the work and apply the principles or checklists in their daily jobs. Empower them to work responsibly and to find new ways of building ethics into their processes and decision making. Your greatest successes will most often be the initiatives that you didn’t create but inspired!
In the words of a workshop participant:
“How do you multiply this and scale it to an organic level so that you get everyone on board and so that everyone thinks about ethics issues in the right context? When the group makes a decision that everyone on the team agrees is good for our users but also good for the company -- and the team is still able to do what they wanted to do, that is when we made the right call. And that feels good.”
There were five areas that respondents and workshop participants indicated they are still trying to figure out: how to get leadership buy-in and company-wide adoption, how to measure impact, how to create a shared taxonomy and language, how to operationalize ethics, and how to scale the practice across the organization.
As noted earlier, some of the respondents and workshop participants found success by identifying both sponsors at the top and supporters throughout the organization; however, it’s not always clear how to do that. A company has many critical priorities and a finite amount of resources, so how does one make the case for some of those resources? The good news is that if a company has decided to invest in one or more individuals to develop its ethical or responsible practices, there is already a recognition that this work is needed. The next step is to identify exactly which resources are needed and which specific outcomes they will produce.
In the words of a respondent:
“The biggest challenge is getting leadership to invest the time, money, and attention to prioritize ethics as part of a business' decision-making, development, and existence.”
It is difficult to know when harm has been avoided or when working responsibly has changed an outcome. It is hard to measure the impact of ethical or responsible AI practices because there is no straightforward metric like the number of new customers, user engagement, or licenses sold.
In the words of a respondent:
“1. Measurement -- measuring what constitutes ethical practice is very difficult. Even if ethical practice is in play, how do we point to this in a legible way in a product or system? 2. Conceptualizing harm -- most people are still thinking in terms of data privacy. It's hard to articulate and address ethical harms that are not absolutely egregious.”
The next most frequently cited challenge was how to create a shared terminology and understanding of what is ethical. The definitions of “ethical,” “responsible,” “bias,” and “fairness” can vary based on one’s role (e.g., legal, engineering, R&D) and context. The lack of a shared taxonomy or vocabulary can make discussions confusing and frustrating, resulting in little progress. And if there is no agreement on what it means to be “biased,” “fair,” or “responsible,” it becomes even more difficult to agree on a metric for it.
In the words of the respondents:
“Lack of shared taxonomy; variability in use of the same terms by different communities”
“Everyone has a different idea about what ‘ethical’ means to them.”
In 2011, the United Nations endorsed the Guiding Principles on Business and Human Rights, which define the responsibilities that businesses and states have to protect the human rights and liberties afforded to all individuals. It can be a challenge, however, to apply those guiding principles to specific AI applications. Creating guidance and tools that work across business units and a variety of use cases is difficult. As mentioned at the beginning, many tools already exist, but it is unclear which ones work for which industries or departments.
In the words of the respondents:
“Developing standards from principles that are general enough for broader acceptance but not so much that it becomes too vague for implementation.”
“Operationalizing ethical principles such that they’re meaningful and useful to our various businesses.”
The final challenge identified was how to scale ethical practice across an organization. Most of the individuals in the survey and workshop were the only person in their organization whose sole job is focused on ethics or responsible practices. A few are part of a small group within their organizations. Scaling across hundreds or thousands of employees, a myriad of departments, and many products is a Herculean challenge.
In the words of respondents:
“In such a large company, it’s hard to reach everyone and help them understand how to apply ethics in their daily work.”
“How to have the internal conversation in teams, and across teams, on ethics versus product growth, customer, UX/UI, etc.”
The aphorism “a rising tide lifts all boats” is usually attributed to John F. Kennedy but is actually a phrase that has been used in the Chinese language for centuries (水涨船高). It describes what all of us are hoping for: the ability to work together to create a world we all want to live in. In this Fourth Industrial Revolution, we need to work together to create a robust ethical and humane use practice. Although ethics may soon become a competitive differentiator for consumers, sharing insights and lessons learned creates a better world for everyone.