April 19, 2022

Fighting Harmful Bias in AI/ML with a Lifelong Approach to Ethics Training

Kevin Goldsmith

Today, artificial intelligence and machine learning technologies influence, and in some cases make, our decisions, from which shows we stream to who is granted parole. Sophisticated as these use cases are, they represent just the leading edge of the revolution to come, with data science innovations promising to transform how we diagnose disease, fight climate change, and solve other social challenges. However, as applications are deployed in sensitive areas such as finance and healthcare, experts and advocates are raising the alarm about the capacity of AI systems to make biased decisions, ones that are systematically unfair to certain groups of people. Left unaddressed, biased AI could perpetuate and even amplify harmful human prejudices.

Organizations likely don’t design AI/ML models to amplify inequalities intentionally. Yet bias still infiltrates algorithms in many forms, even when models exclude sensitive variables such as gender, ethnicity, or sexual identity. The problem often lies in the data used to train models, which reflects the inequalities of its source: the world around us. We already see the effects in recruitment algorithms that favor men and code-generating models that propagate stereotypes. Fortunately, executives know that they need to act: a recent poll found that over 50% of executives report “major” or “extreme” concerns about the ethical and reputational risks of their organization’s use of AI.
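To make the proxy problem concrete, here is a minimal sketch using synthetic data and hypothetical feature names (it illustrates the mechanism only, and is not drawn from any of the systems mentioned above): a model trained without the sensitive attribute can still reproduce historical bias through a correlated proxy.

```python
# Minimal sketch with synthetic data: even after the sensitive attribute
# is dropped, a correlated proxy feature lets a model reproduce the bias
# baked into historical labels. All feature names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" is the sensitive attribute; "zip_code_score" is a proxy that
# correlates with it (think residential segregation in historical data).
group = rng.integers(0, 2, n)
zip_code_score = group + rng.normal(0, 0.3, n)
skill = rng.normal(0, 1, n)

# Historical hiring labels were biased against group 1.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train WITHOUT the sensitive attribute: only skill and the proxy.
X = np.column_stack([skill, zip_code_score])
model = LogisticRegression().fit(X, hired)

# The model still selects group 0 far more often; the proxy carries the bias.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {preds[group == g].mean():.2f}")
```

Dropping the sensitive column is not enough; auditing outcomes by group, as the last lines do, is what actually surfaces the problem.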

How organizations should go about removing unintentional bias is less clear. While the debate over ethical AI systems is now capturing headlines and regulatory scrutiny, there is little discussion of how we can prepare practitioners to tackle issues of unfairness. In a field where, until recently, the focus has been on pushing the limits of what’s possible, bias in models is not the developers’ fault alone. Even data scientists with the best intentions will struggle if they lack the tools, support, and resources they need to mitigate harm.

While more resources about responsible and fair AI have become available in recent years, navigating these dynamics will take more than panel discussions and one-off courses. We need a holistic approach to education about bias in AI, one that engages everyone from students to executive leadership at major organizations.

Fewer than a quarter of educators report covering AI ethics in their classes (Olivier Le Moal/Shutterstock)

Here’s what an intentional, continual, and career-spanning education on ethical AI could look like:

In School: Training Tomorrow’s Leaders, Today

The best way to prepare future leaders to address the social and ethical implications of their products is to include instruction on bias and equity in their formal education. While this is key, it is still a rarity in most programs; in Anaconda’s 2021 State of Data Science survey, when asked about the topics being taught to data science/ML students, only 17% and 22% of educators responded that they were teaching about ethics or bias, respectively.

Universities should look to more established professional fields for guidance. Consider medical ethics, which explores similar issues at the intersection of innovation and ethics. After the American Medical Association adopted its Code of Medical Ethics in 1847, the discipline developed into a distinct sub-field of its own, and its guiding principles are now required learning for those seeking professional accreditation as doctors and nurses. More educational institutions should follow the University of Oxford in creating dedicated centers that draw on multiple fields, such as philosophy, to guide teaching on fairness and impartiality in AI.

Not everyone agrees that standalone AI ethics classes, often relegated to elective status, will be effective. An alternative approach proposed by academics and recently embraced by Harvard is to “embed” ethics into technical training by creating routine moments for moral skill-building and reflection during normal coursework. And then there are the many aspiring data scientists who don’t pursue the traditional university route; at a minimum, professionally focused short programs should incorporate material from free online courses such as those available from the University of Michigan. There is even a case for introducing the subject earlier still, as the MIT Media Lab recommends with its AI + Ethics Curriculum for Middle School project.

In the Workplace: Upskilling on Ethics

Formal education on bias in AI/ML is only the first step toward true professional development in a dynamic field like data science. Yet Anaconda’s 2021 State of Data Science survey found that 60% of data science organizations have either not yet implemented plans to ensure fairness and mitigate bias in data sets and models, or have failed to communicate those plans to staff. Similarly, a recent survey of IT executives by ZDNet found that 58% of organizations provide no ethics training to their employees.

Executives are concerned about AI’s impact on their companies’ reputations (FGC/Shutterstock)

The answer is not simply to mandate that AI teams undergo boilerplate ethics training. A training program should be one component of organization-wide efforts to raise awareness and take action toward reducing harmful bias. The most advanced companies are making AI ethics and accountability boardroom priorities, but a good first step is setting internal ethics standards and implementing periodic assessments to ensure the latest best practices are in place. For example, teams should come together to define what terms like bias and explainability mean in the context of their operations; to some practitioners, bias simply refers to the patterns and relationships that ML systems seek to identify, whereas for others the term has a uniformly negative connotation.
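As one illustration, and only a sketch under the assumption that a team has settled on demographic parity as its working definition of bias, such a definition can then be checked mechanically as part of those periodic assessments; the threshold below is a hypothetical value a team would set for itself.

```python
# Minimal sketch of a periodic bias assessment, assuming the team defines
# "bias" as a gap in selection rates between groups (demographic parity).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates across groups (0 = parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: predictions and group membership from a model under review.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

THRESHOLD = 0.2  # hypothetical value set by the team's internal standard
gap = demographic_parity_gap(y_pred, group)
print(f"parity gap: {gap:.2f}", "(flag for review)" if gap > THRESHOLD else "(ok)")
```

The point is less the specific metric than that an agreed-upon definition becomes something a team can measure, track, and act on.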

With standards in place, training can operationalize guidelines. Harvard Business Review recommends going beyond simply raising awareness and instead empowering employees across the organization to ask questions and elevate concerns appropriately. For technical and engineering teams, companies should be prepared to invest in new commercial tools or cover the cost of specialized third-party training. Considering that two-thirds of companies polled in a recent FICO study can’t explain how AI solutions make their predictions, developers and engineers will need more than simple workshops or certificate courses.
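As a hedged illustration of where that deeper technical investment might begin, the sketch below uses scikit-learn’s permutation importance on synthetic data to surface which features actually drive a trained model’s predictions; the feature names are hypothetical placeholders.

```python
# Minimal sketch of a first step toward explainability: permutation
# importance from scikit-learn, run here on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # third feature is irrelevant

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most matter most to the model.
for name, score in zip(["income", "tenure", "noise"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```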

Training on AI ethics should also be a cornerstone of an organization’s long-term recruitment strategy. First, offering instruction on ethics will attract young, values-focused talent. Formal initiatives to cultivate these skills will also generate a positive feedback loop, in which companies use their training programs to signal to universities the skills that employers are seeking, pushing those institutions to expand their offerings. By offering training on these topics today, leaders can help build a workforce that is ready and able to confront issues that will only become more complex.

AI ethics has been a constant discussion point over the past few years, and while it may be easy to disregard these conversations, it’s crucial that we don’t allow AI ethics to become yet another buzzword. With the European Union’s General Data Protection Regulation (GDPR) in force and further rules governing AI use under discussion, the conversations, and the regulations, are here to stay. Mitigating harmful bias will be an iterative process, and practitioners and organizations need to remain vigilant in evaluating their models and joining the conversations around AI ethics.

About the author: Kevin Goldsmith serves as the Chief Technology Officer for Anaconda, Inc., provider of the world’s most popular data science platform with over 25 million users. In his role, he brings more than 29 years of experience in software development and engineering management to the team, where he oversees innovation for Anaconda’s current open-source and commercial offerings. Goldsmith also works to develop new solutions to bring data science practitioners together with innovators, vendors, and thought leaders in the industry. 

Prior to joining Anaconda, he served as CTO of AI-powered identity management company Onfido. Other roles have included CTO at Avvo, vice president of engineering, consumer, at Spotify, and nine years at Adobe Systems as a director of engineering. He has also held software engineering roles at Microsoft and IBM.

Related Items:

Looking For An AI Ethicist? Good Luck

It’s Time to Implement Fair and Ethical AI

AI Ethics Still In Its Infancy