Breaking the Bias in AI

  • March 07, 2022

The theme of International Women’s Day 2022 is #BreakTheBias. In preparation for a week’s worth of internal events related to IWD, we’re devoting a blog post to how NTT DATA is working to Break the Bias in automation and AI. Kim Curley, VP, Workforce Readiness Consulting, recently had an opportunity to ask NTT DATA’s Lisa Woodley and Anisha Biggers (both subject matter experts in automation solutions) about their experience with bias and how they have worked to eliminate bias in digital transformation initiatives.

NTT DATA is a leader in automation and AI and recognizes the bias traps that can be created without the right level of care and consideration on the front end. NTT DATA adopted a set of AI Guidelines in 2019 that recognizes our responsibility in creating a human-centered society in which we coexist with AI. This year’s IWD theme provided a terrific opportunity for us to talk about bias in automation and AI with these experts.

Q: What event or situation caused you to get interested in how bias is embedded into automation / AI?

Anisha Biggers: My interest in machine-based bias started in 2009, when Nikon’s face-detection cameras were accused of racism. The camera technology could not reliably detect whether Asian faces had their eyes open or closed, and an automated message would flash on the screen asking, “Did someone blink?” I remember wondering why the design team had not considered all categories and races in the starting data set for facial recognition. It would have been the logical thing to do.

For automation specifically, it was more of a direct experience. I had a cordial ‘argument’ with a client who had implemented an RPA (Robotic Process Automation) solution to reduce manual error and improve cycle time, but the implementation was not considered successful. The solution enabled straight-through processing. When we interviewed the identified stakeholders, we realized the solution had been designed by RPA developers and IT system owners — without engaging the business teams who would eventually use it. It was designed by technology teams with a simple input-process-output approach, without considering the daily variations and ad-hoc decisions involved in the same process.

Any automation/AI solution will be as good — and as biased — as its designers.

Lisa Woodley: I agree. My experience really started around my passion for ethical design. The use of data and manipulative design practices has negatively impacted our psychology and society, and I’m passionate about designers taking responsibility for where we are and doing something to change it. We represent the human in technology innovation, and it’s our job to draw a line to ensure the future we design benefits people and society or, at the very minimum, does no harm.

As I started to dive further into ethical design concepts, it became clear that inclusion HAD to be a part of the conversation. You can’t have ethical design without inclusive design. Inclusive design means understanding where the biases are and actively banishing them from what you create. Now, how does this relate to bias in AI and automation? All the machine learning algorithms out there collect data, analyze it, and recommend actions. Those algorithms drive what we design — things like which offers we present to a consumer as they navigate a site.

As designers, we can’t just accept that. We have to question whatever the machine says and design accordingly. How are we determining our target personas and customer segments? Is it fair? Are we leaving anyone out? Are the criteria based on biases? How do we know?

Q: What causes bias to be present in automation / AI?

Anisha: We need to understand that we humans are the ones designing these solutions. Humans are inherently biased. Anything we create will carry bias as the sum of its creators’ backgrounds, experiences, and social circles. We, as people, design things based on our understanding of the world around us. So saying “Bias is present in automation/AI” might not be correct. “Bias is designed into automation/AI” is how we should approach this topic.

Lisa: Where we run into trouble is that we assume that because it’s a machine, it has no bias. But AI can only be trained on what we give it — what we know and/or what’s already happening — and that is inherently biased. We have to start not from a position of preventing bias from creeping in, but from one of removing the bias we know is already there.

For example, we know that historically there’s inequality in mortgage lending. We might think, “Well, let’s take the inequality out by training a machine to approve mortgages.” No human, no bias, right? But if we train the machine on historical views of who has gotten approved in the past, we’ll only propagate the inequality that’s already happening.
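
To make that concrete, here is a minimal sketch in Python (synthetic data and hypothetical feature names, not any real lending system) of how a model trained on historically biased approvals reproduces the disparity even when it never sees the applicant’s group, only a correlated proxy such as zip code:

```python
# A toy illustration of historical bias propagation. All numbers, zip codes,
# and approval rates below are invented for the sketch.
import random
from collections import Counter, defaultdict

random.seed(42)

def sample_applicant():
    """Group membership correlates with zip code, a proxy variable."""
    group = random.choice(["A", "B"])
    zip_code = "10001" if (group == "A") == (random.random() < 0.9) else "10002"
    return group, zip_code

# Synthetic "historical" mortgage decisions: past approvals favored group A.
history = []
for _ in range(10_000):
    group, zip_code = sample_applicant()
    base_rate = 0.75 if group == "A" else 0.35  # the historical inequality
    history.append((zip_code, random.random() < base_rate))

# "Train" the simplest possible model: the majority decision per zip code.
# Note that the model never sees group membership, only the proxy.
votes = defaultdict(Counter)
for zip_code, approved in history:
    votes[zip_code][approved] += 1
model = {z: c.most_common(1)[0][0] for z, c in votes.items()}

# Evaluate approval rates per group on fresh applicants.
approved, total = Counter(), Counter()
for _ in range(10_000):
    group, zip_code = sample_applicant()
    total[group] += 1
    approved[group] += model[zip_code]

for g in ("A", "B"):
    print(f"group {g}: approval rate {approved[g] / total[g]:.0%}")
# No human in the loop, and the disparity in the training data survives.
```

Automating the decision did not remove the inequality; the zip-code proxy carried it straight through.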

Q: Why is breaking the bias in automation / AI important?

Anisha: We live in a hyperconnected world, an age of digital revolution and social media. Hyperautomation and AI are here to stay as we progress as a society. Breaking the bias is no longer optional; we must minimize bias as much as we can. Our future depends on it. Upcoming generations are born into technology: they learn, interact, and socialize using it. If we arm people with biased technology, we accelerate a spread of bias that might otherwise have been contained.

Lisa: We are increasingly handing decisions over to AI: who gets approved for a credit card, how large a mortgage you qualify for, your credit score, and what you see when you search the internet or log onto social media. It is absolutely everywhere, impacting every aspect of our lives, and most people don’t realize the extent. If we don’t recognize that prevalence and work to break the impact bias has on the decisions AI makes, we will exponentially widen the socio-economic gaps that already exist.

Q: Can you give an example of unconscious bias built into automation / AI? And ways that bias could have been avoided?

Anisha: Microsoft’s AI chatbot ‘Tay’ (built for conversational understanding) was taught to be racist by Twitter users in less than 24 hours. The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation." The moment Tay went live, people started tweeting all sorts of misogynistic and racist remarks at the bot. And Tay — being essentially a hyperconnected robot parrot — started repeating these sentiments back to users, proving correct that old programming adage: garbage in, garbage out.

As we design and build intelligent decision-making solutions that learn from their human counterparts, the same sort of bad-training problem can arise in far more consequential circumstances. This is why, for AI/automation solutions, we need to understand not only how a specific solution design will impact humans but also how humans will impact the solution. It is essentially a continuous learning loop between technology and people.
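
As a toy illustration of that loop (a made-up bot and a stand-in blocklist, nothing like Tay’s actual architecture), here is a sketch of why a system that learns from every interaction will parrot whatever its users feed it, and how filtering what it learns breaks the cycle:

```python
# A toy model of interaction-driven bias: a bot that learns from every
# message will repeat whatever users feed it most often.
from collections import Counter

class EchoLearningBot:
    def __init__(self, learn_filter=None):
        self.phrases = Counter()
        self.learn_filter = learn_filter  # None = learn everything

    def chat(self, message: str) -> str:
        # The learning loop: every incoming message becomes training data...
        if self.learn_filter is None or self.learn_filter(message):
            self.phrases[message] += 1
        # ...and the bot replies with whatever it has seen most often.
        return self.phrases.most_common(1)[0][0] if self.phrases else "Hi!"

BLOCKLIST = {"hateful slogan"}  # stand-in for a real toxicity classifier

naive = EchoLearningBot()
guarded = EchoLearningBot(learn_filter=lambda m: m not in BLOCKLIST)

for msg in ["hello", "hateful slogan", "hateful slogan", "hateful slogan"]:
    naive.chat(msg)
    guarded.chat(msg)

print(naive.chat("how are you?"))    # -> "hateful slogan": garbage in, garbage out
print(guarded.chat("how are you?")) # -> "hello": the filter broke the loop
```

A real system would use a toxicity classifier rather than a blocklist, but the principle is the same: curate what the model is allowed to learn from, not just what it outputs.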

Lisa: The example that honestly scares me the most is the type of AI facial recognition Anisha mentioned previously. Only this time, it is being used for community safety and policing. I highly encourage anyone interested in this topic to watch the documentary Coded Bias on Netflix. It focuses on Joy Buolamwini, a researcher at the MIT Media Lab, and her discovery that facial recognition does not see dark-skinned faces accurately. It shows the real-world impact this has as communities and police increasingly rely on facial recognition for crime prevention. Her discovery prompted her to push for new legislation against bias and to found the Algorithmic Justice League, whose mission is to bring together researchers, policymakers, and industry practitioners to mitigate AI bias and harm.
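
One standard safeguard against exactly this failure is disaggregated evaluation: report accuracy per demographic group rather than a single overall number. Here is a minimal sketch, with invented counts that merely echo the kind of gap Buolamwini documented:

```python
# Disaggregated evaluation with made-up numbers: an overall accuracy figure
# can hide a large per-group gap. Each record is (group, recognized correctly).
from collections import defaultdict

results = (
    [("lighter-skinned", True)] * 960 + [("lighter-skinned", False)] * 40 +
    [("darker-skinned", True)] * 650 + [("darker-skinned", False)] * 350
)

correct, total = defaultdict(int), defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")  # looks acceptable in aggregate
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")  # the gap appears
```

The aggregate figure looks respectable; only the per-group breakdown reveals who the system is failing.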

Q: How can we break the bias – what steps are needed to prevent bias, and what skills do we need to develop to identify and fix it where it might already exist?

Anisha: As I mentioned earlier, we are humans, and humans are biased. We cannot eliminate bias from what we design, but we can certainly minimize it. Understanding the different kinds of biases that exist — or at least the ones we have identified and categorized — and then acknowledging that we carry them is a start to solving the problem. I found this explanation on Techcrunch.com very helpful:

  • Data-driven bias – facial recognition discrepancies
  • Interaction-driven bias – Tay, the corrupted Microsoft chatbot
  • Emergent bias – information-based social bubbles on Facebook
  • Similarity bias – customized news and ads on Google based on individual queries
  • Conflicting-goal bias – any site with a learning component based on click-through behavior will present opportunities that reinforce stereotypes

Lisa: We can #BreakTheBias by assuming that bias is always there, no matter what we think we designed upfront. We can start every project with that assumption and then consistently ask questions about how the machine is learning, where it is getting that information, and how it will use it. What are the consequences if it gets it wrong? More importantly, what if the algorithm does EXACTLY what we programmed it to do? Are there unintended consequences that might come out of that? AI is like a genie: it will deliver precisely the wish you ask for, and sometimes that ends up being more than you bargained for.

Anisha: Skills-wise, we need design engineers who understand technology and human psychology. To be honest, we need more than just skills to prevent bias in automation/AI. Based on what we are trying to solve, we need a foundational team structure — diverse technologists and designers — to provide varied perspectives and inputs on the impact of the designed solution. We also need AI Ethics committees, like the NTT DATA Center of Excellence, to provide some guardrails around design and innovative solutions. Just because we can design something does not mean we should.

Lisa: In terms of skills? OK, I might have my own bias here, but we need more designers and user experience researchers involved. I said it before: the designer represents the human. Designers create the things that interact with and impact people, so they should be the ones influencing the line between what the business wants, what is possible from a technology perspective, and what is responsible from an inclusion and ethics perspective. Design thinking, which starts with empathy and understanding the human, needs to be at the forefront of future technology innovations and services. We need to flip the current model: instead of leveraging technology to achieve business goals without considering the human impact, we need to put the human at the center of our technological endeavors.

Q: What’s your favorite automation / AI use case?

Anisha: Alexa bloopers would be my favorite. Watching our two toddlers ask Alexa questions, and hearing her attempts to answer based on what she understood, is always interesting. This human-to-machine interaction is extremely beguiling: a determined and frustrated four-year-old trying to articulate what he wants vs. an unemotional AI repeating what she understood. Nobody wins in the end. I recently changed one of my Alexas to a male voice. My kids’ immediate response was, “This Alexa sounds mean.” Absolutely fascinating!

Lisa: My husband has become obsessed with automating our home, particularly the lights. Everything is set with timers, motion sensors, voice activation, the works. I’m a huge fan of the movie The Fifth Element, so it became his mission to set up the lights so that I have to shout “Aziz, Light!” to turn them on and “Thank you, Aziz” to turn them off.

International Women’s Day is officially tomorrow, March 8. The WIN ERG (Women Inspire NTT DATA Employee Resource Group) is proud to host a whole week of activities to support this year’s theme of #BreakTheBias. Internally, we will hold a roundtable discussion about our personal experiences with bias and ways to break it, hosted by Mona Charif, EVP, Chief Marketing Officer, and Global Executive Sponsor for WIN. She’ll talk to Sharon Harvey, Client Executive in Financial Services; Barry Shurkey, NTT DATA CIO; Anisha Biggers, Managing Director, Automation Advisory; and Lisa Woodley, VP, Customer Experience. We’re encouraging all of our NTT DATA colleagues to support IWD by posting #BreakTheBias poses and comments on their personal social media channels throughout the week. We invite you to return to this blog space to read a recap of that event.

The Women Inspire NTT DATA ERG celebrates women’s achievements and works to create a more equitable, inclusive, and diverse world.

#IWD2022 #BreakTheBias


Kim Curley

Kim Curley has spent her career focused on the human side of business, enabling leaders and their organizations to do more, do better, and thrive through change. As the Workforce Readiness Consulting Practice Leader for NTT DATA Services, Kim leads advisory consultants who deliver people-side consulting solutions that help our clients solve their most complex business challenges.

Kim is also a founder of Women Inspire NTT DATA, the company’s first employee resource group. She launched the Charlotte Chapter in March 2018, which she continues to lead, and serves as the chair of the global steering committee. She is also a published author and sought-after industry speaker on the topic of human and organizational impacts of automation and other advanced technologies and is the co-lead of the Talent Development Forum of the Executive’s Club of Chicago.
