Inaugural Women in AI Ethics Summit: What happens when you bring together 30 women fighting for human rights in AI?

Illustration of Earth from space, covered in a network of connected dots, with the sun emerging from behind. (Image source: pixabay.com)

Published: January 19, 2019

We’ve seen far too many examples of artificial intelligence (AI) systems making harmful decisions or recommendations based on race, gender, or other protected classes of data (e.g., interest rates, predictive policing, lending apps, ride-hailing services, AI assistants). Increasingly, we are seeing governments in the US, Denmark, China, and elsewhere use AI to make decisions about criminal sentencing, who gets access to medical benefits, which children are most at risk of abuse, and who is eligible for schools, jobs, and government contracts, slowly moving toward "algocracy" (rule by algorithms). The causes of bias and harm are complex and multi-factor, but there is a large group of women fighting to ensure that AI is created and implemented to help society rather than harm it.

On December 19th, 2018, 25 of those women from tech companies (e.g., Salesforce, Workday, Intel, IBM, Google, Amazon, Socos Labs), non-profits (e.g., Markkula Center for Applied Ethics, Omidyar Network, AI4All, Stanford’s Global Digital Policy Incubator, BSR), and analyst firms (Altimeter Group) came together for a day to share their experiences and insights, as well as to brainstorm solutions to the big challenges we are facing.

“Every social justice movement that I know of has come out of people sitting in small groups, telling their life stories, and discovering other people have shared similar experiences.” - Gloria Steinem

Standing on the shoulders of giants

The day began with a series of lightning talks sharing practical experiences that others can leverage in their own work. I gave a presentation on my lessons learned standing up a new position, Architect of Ethical AI Practice, at Salesforce.

Photo Credit: Scott R Kline

Vivienne Ming, Socos Labs (@neuraltheory)

Dr. Ming spoke on many of the themes highlighted in her recent interview in The Guardian. She shared how she and others work on developing AI for good (e.g., treating diabetes, predicting bipolar depression), but that this work always comes with frightening and complex ethical questions. In her words, “Technology is only a tool. It is an amazing tool, and one that has had, on balance, a profoundly positive impact on the world. But it can only ever reflect our values back at us. ...seemingly innocent technologies can have surprisingly negative effects, such as inequality, capture effects, and instability in social networks. In the end, technology should never simply make us feel good or ease us through our day; it must always challenge us. When we turn technology off we should be better people than when we turned it on.”

“AI for Good is easy; it’s AI-That’s-Not-Bad that’s hard.” - Vivienne Ming, Socos Labs

Tess Posner, AI4All (@tessposner)

Tess gave us a personalized version of the talk she delivered at the Unintended Consequences of Technology event. She shared some disappointing statistics on the diversity crisis from multiple sources, including Element AI (2018), NSF Science & Engineering Indicators (2018), Kapor, ASU, Pivotal Ventures, and AI Index.

  • 12% of AI researchers around the world are women
  • 71% of applicants for AI jobs are male
  • 7% of CS Bachelor’s degree recipients in the U.S. are women of color
  • Nearly 80% of AI faculty at higher-ed institutions are male
  • 60% of K-12 schools in the US don’t offer computer science at all

AI4All wants to change this by expanding the diversity pipeline, increasing awareness of and access to AI education, and conducting research in AI for Good applications.

“There is a diversity crisis and it is urgent.” - Tess Posner, AI4All

Irina Raicu, Markkula Center for Applied Ethics, Santa Clara University (@iEthics)

I saw a lightning talk that Irina gave at the Partnership on AI (PAI) All Partners Meeting last month and loved it, but it went by too quickly and we didn’t have the opportunity to ask questions. That wasn’t the case at the Summit! She talked about the need for “AI-Free Zones”: “The public conversation is full of hype and misinformation about what algorithms can do ‘better’ than humans can. Are there problems or areas of human life in which automated decision-making will not help, and might, in fact, cause more harm? If so, what might those be, and how should we improve the conversation?” Potential areas: parenting, relationships, religion/faith.

"Algorithms can't be used to decide societal norms.” - Irina Raicu, Markkula Center for Applied Ethics

Susan Etlinger, Altimeter Group (@setlinger)

Susan, who recently published an AI Maturity Playbook, shared with us the Five AI Trends to Watch for in 2019:

  1. How we interact: from screens to senses
  2. How we decide: from business rules to probabilities
  3. How we innovate: from data analytics to data science to data engineering
  4. How we lead: from expertise-driven to data-driven
  5. How we behave: from "Move fast and break things" to "Ethical AI"

She also made predictions about three possible outcomes for 2019:

  1. Jumping the Shark: “Inspired by Microsoft, Salesforce, IBM and others, more companies issue AI ethics principles--but stop there. Virtue-signaling in place of actual progress breeds industry and media cynicism and inevitable backlash.”
  2. Half measures: “Fig-Leaf. A few companies, nonprofits and academic institutions do the heavy lifting on AI ethics, and companies adopt bits and pieces of their frameworks and tools. Not much happens, but it feels like progress. Everybody declares victory and nothing changes.”
  3. “The Big Dig”: “2018 turns out to have been the turning-point for real progress toward ethical AI. Companies like Microsoft, IBM, and Intel scale AI efforts and provide accessible, useful frameworks that other businesses adopt, along with frameworks from AI Now, MIT, and others.”

“We're not done yet--not by a long shot. Publishing ethical principles and assembling ethics teams is a good first step. It looks super on a press release. But this is where the real work begins.” - Susan Etlinger, Altimeter Group

Priya Vijayarajendran, IBM (@vcPriya)

Vocabulary is always a topic of discussion in the AI Ethics world. Words like “fair,” “bias,” “transparent,” and “ethical” can mean different things to different people. Priya highlighted two concepts that are important to distinguish when discussing AI fairness and bias-checking tools:

  • Fairness metrics: These can be used to check for bias in machine learning workflows.
  • Bias mitigators: These overcome bias in the workflow, once discovered, to produce a fairer outcome.

IBM has an impressive set of open source AI fairness resources (the AI Fairness 360 Open Source Toolkit), including tools to check for bias and to overcome it. However, tools alone are not enough. “AI is not only a technical problem.” We must change the incentive structures within our organizations. Most for-profit companies incentivize employees based on revenue, clicks, user adoption, etc., which can run counter to making difficult but ethical decisions. As a group, we discussed what better incentives might look like (e.g., rewarding a team when a project is canceled due to concerns about societal impact, or rewarding a sales rep who flags a potential customer whose use of the product would violate the company’s values).
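To make the distinction concrete, here is a minimal sketch using the AI Fairness 360 toolkit (the open source aif360 Python package). The toy loan data, column names, and privileged/unprivileged groups below are my own illustrative assumptions, not anything presented at the Summit: a fairness metric (disparate impact) checks for bias, and a bias mitigator (reweighing) overcomes it.

```python
# Fairness metric vs. bias mitigator with AI Fairness 360 (pip install aif360).
# The dataset below is a hypothetical example for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy loan data: "sex" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":      [1, 1, 1, 0, 0, 0, 1, 0],
    "income":   [60, 85, 40, 55, 90, 35, 70, 45],
    "approved": [1, 1, 0, 0, 1, 0, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["sex"]
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric: *check* for bias. Disparate impact is the ratio of
# favorable outcomes for the unprivileged vs. privileged group; 1.0 means
# parity, and values below ~0.8 are commonly flagged (the "four-fifths rule").
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Bias mitigator: *overcome* the discovered bias, here by reweighing
# examples before model training so both groups get fair representation.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after reweighing:", metric_after.disparate_impact())
```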

“We need more human input in the system.” - Priya Vijayarajendran, IBM

Chloe Autio (@ChloeAutio), Heather Patterson (@h2pi), and Iman Saleh (@iman_saleh), Intel

Chloe presented on behalf of her two colleagues, Heather and Iman, who were sadly out sick. They shared best practices for integrating AI Ethics into the product life cycle, as well as some of the tools and capabilities to ensure AI fairness by detecting and fixing bias. Their best practices include:

  • Generate interest and attention from leadership across organizations
  • Situate human stories within a broader context
  • Personalize it: Make a case for prioritizing a particular set of ethical problems
  • Decide on a vision and map out a path to success
  • Test with trusted colleagues first and then iterate

Intel offers many AI ethics courses for data scientists and non-data scientists alike. A discussion followed among the group about how to make training courses engaging and accessible regardless of existing knowledge, learning style, time, and location. How do you know if someone retained the training and is applying it (i.e., what’s the impact of the course)?

“I foresee workforce development, recruiting, diversity and inclusion teams, data scientists and product teams working more closely together to identify areas where bias enters data systems and improve product quality.” - Chloe Autio, Intel

Driving change in your organization

At the end of the day, we brainstormed potential ways one might drive change in one's organization. Not all of them make sense for every organization, and they shouldn’t be attempted all at once, but this is a great list to draw ideas from!

  • Confidential employee surveys to identify teams/areas where things are going well and those where they are not. What is working or not working on these teams?
  • Measure actions in terms of company values or ethical principles, not “ethics,” and be clear in your vocabulary so everyone is using the same measuring stick.
  • Identify how addressing ethical concerns impacts the bottom line
  • Write a press release and FAQ at the start of a project to imagine good and bad scenarios. Help teams understand the HUMAN and societal impact of what they are building.
  • Conduct ethical pre- and post-mortems to identify potential unintended consequences and use cases, and how to mitigate them
  • Advocate for an opportunity for individual contributors to present to board members
  • Create Ethical Red Teams to identify unintended consequences or use cases that team members too close to a project/product might not be able to see. Treat ethical holes with the same priority as security holes. If there aren’t enough resources to create a dedicated team, rotate the advisory role among team members each release.
  • Get an executive to sponsor ethical efforts (e.g., training, Red Teams) and communicate their support to the entire company

“It can feel isolating when I am the only one working on these issues in my company, but I have a community now that I can turn to. I am not alone. We can do this together!” - Summit Member

I couldn’t be happier with how the first Women in AI Ethics Summit went, and I look forward to many more to come! If you want to participate in future AI Ethics Summits, please let me know!

“My brain is tired but my heart is full!” - Summit Member

Acknowledgements

A special thank you to Mia Dand at Lighthouse for raising everyone’s awareness of the brilliant women in AI Ethics, and to Danielle Cass, Director of Ethical AI at Workday, for suggesting this event and recruiting many of the brilliant women in the room! And thank you to all of the women who joined us to share their ideas, experience, energy, and light!

  • Michelle Carney, Amazon Music
  • Hannah Darnton, BSR (Business for Social Responsibility)
  • Kana Hammon, Omidyar Network
  • Emily Witt, Salesforce
  • Bulbul Gupta, Socos Labs
  • Shannon Vallor, Markkula Center for Applied Ethics, Santa Clara University
  • Roya Pakzad, Stanford’s Global Digital Policy Incubator
  • Allison Woodruff, Google
  • Barbara Cosgrove, Workday
  • Katharine Bierce, Salesforce.org
  • Yakaira Nunez, Salesforce
  • Susan Etlinger, Altimeter Group
  • Chloe Autio, Intel
  • Iman Saleh, Intel
  • Heather M. Patterson, Intel
  • Tess Posner, AI4All
  • Priya Vijayarajendran, IBM
  • Irina Raicu, Markkula Center for Applied Ethics, Santa Clara University
  • Vivienne Ming, Socos Labs

Thank you to Tiffany Testo for all of your feedback and support on this article!