Published: January 19, 2019
At this point, you have likely read at least one story about how an artificial intelligence (AI) system caused offense (e.g., Google Duplex tricking humans into thinking they were speaking with another human) or actual harm when the intent may have been well-meaning (e.g., racial bias in criminal sentencing recommendations and interest rates).
The developers of each of these systems did not set out to offend or harm anyone and didn't anticipate the negative outcomes — but they should have. If you are creating or implementing an AI system, you need to ensure that it is not perpetuating societal biases or harming your customers. It takes work to create AI that supports the world we want to live in, not the biased world we see now. You may be the first person in your company focusing on the need to create or implement ethical AI. If so, I'd like to share some lessons learned that could help you in your work. These were originally shared at the Women in AI Ethics Summit in December 2018.
Ethics has been a long-standing personal passion of mine, and in my new role as Architect of Ethical AI Practice I am able to put this passion into action by helping Salesforce and our ecosystem of customers and partners develop best practices for building ethical and fair AI. This is a new journey for Salesforce, and I’ve learned a lot in my first few months in my role that I will carry with me as the company continues to work towards a better tomorrow driven by AI.
Nearly every organization has published its values, mission statement, and/or principles. When speaking about ethics in technology broadly or AI specifically, do so in the context of your organization's values. How do your recommendations support those values and benefit the organization?
Salesforce’s culture and values drive everything we do. When I talk about building ethics into our Einstein AI services, it is in terms of our values of Trust, Customer Success, Innovation, and Equality. Everyone is able to immediately see how helping customers identify potential bias in their training data or building inclusive and diverse teams fits directly into our values.
“Now, here at Salesforce, we have determined that this ethical and humane use of technology, especially within this context of the Fourth Industrial Revolution, must be clearly addressed, not only by us, but by our entire industry.” - Marc Benioff, Co-CEO & Co-Founder, Salesforce
You cannot scale to be on every team or in every meeting. You need allies who can channel your words and help you fight the good fight. Take a grassroots, bottom-up, as well as top-down approach across every department.
Since I initially worked in the Service Cloud and as part of the larger User Experience team, both groups quickly became “ambassadors,” evangelizing what ethical AI looks like. The amazing thing is, I didn’t ask them to become ambassadors; it happened organically! It was the result of prior collaborations and developing their trust over time. I would hear through the grapevine that specific individuals were “channeling” me in meetings because they strongly believed in the importance of building ethics into our process and services. From there, I was invited to present to more groups and consult on more projects, expanding the circle of awareness and influence. Given the passionate culture at Salesforce for equality, inclusion, and diversity, it wasn’t hard to find like-minded souls to spread the good word!
Perhaps my biggest ally has been Richard Socher, Salesforce’s Chief Scientist. He approached Marc Benioff, our Co-founder and Co-CEO, to create a full-time position dedicated to Ethical AI Practice. He has made building ethics into Einstein from the ground up a priority and given it executive-level visibility.
Don't begin by creating net-new teams, products, or processes. Use what already exists. This saves time, resources, and effort. For example, at Salesforce, we follow the Agile Development Methodology (ADM). The methodology includes a Definition of Ready (DOR) to evaluate whether you are set up for success at the start of a sprint (build cycle) and a Definition of Done (DOD) to evaluate whether you have truly completed everything you planned to complete in the sprint. A group of engineers across clouds has updated our DOR and DOD to add ethical use questions to the process.
You can also leverage existing training on AI or machine learning to include a discussion about unintended consequences, representative training data, and the societal impact of the work. Many organizations have a mechanism for reporting governance issues or violations of the company code of conduct. See if you can add an option for reporting concerns about the ethical creation and use of AI.
There is a tendency when starting new efforts to create processes or materials from scratch; however, many organizations have already created ethical frameworks, toolkits, checklists, oaths, principles, white papers, etc. based on years of research and domain expertise. Learn from what others have done and, if possible, reuse it in your organization. For example, a checklist (e.g., Deon, O’Reilly) is a lightweight tool to incorporate into most software development processes. Don’t waste your limited time recreating the wheel.
One of the beautiful things about the AI community is that researchers in both academia and industry publish their advances at conferences and in open repositories like arxiv.org. You should do the same. Share your insights and lessons learned with the rest of the community. We are stronger when we work together than apart!
Related to the previous topic, get involved with the AI ethics community. There are so many opportunities to meet others, learn from them, and give back — including conferences, the World Economic Forum (WEF) Center for the Fourth Industrial Revolution, Partnership on AI, local meetups, and industry events. It helps to find like-minded individuals you can learn from, get feedback from, and feel supported by.
If you are the first person in your organization focused on ethics in technology, it will likely take a lot more time and effort than working in a well-established role. No reasonable human being will respond to your efforts by saying, "Ethics?! We don't need no stinking ethics!" Instead, you will likely get lots of nodding heads agreeing that what you are proposing sounds great and then... radio silence. Every organization is dealing with resource constraints, and while your coworkers or executives may agree that building ethics into the organization's processes is important, you may find it very difficult to get actual resource commitments. It takes time and effort to find allies, identify existing processes you can leverage, and progress piece by piece. You need to be in this for the long haul, but if you love this work, it will feel a lot less like work and more like your calling.
Standing up an entirely new role at your company can be scary and difficult, but the good news is, you are not alone! My key takeaways:

- Frame your recommendations in the context of your organization's values.
- Find allies at every level, from the grassroots to the executive suite.
- Leverage existing processes and mechanisms rather than creating new ones.
- Reuse the frameworks, checklists, and toolkits the community has already built.
- Share your insights and get involved in the AI ethics community.
- Be in it for the long haul.
Thank you to Tiffany Testo, Ryan Van Wagoner, and Sean Alpert for all of your feedback and support on this article! Intro image source: pixabay.com