Addressing Implicit Bias Towards the Black Community in Conversational Products

Meet Our Panelists


Saadia Gabriel

Ph.D. Student @ University of Washington

Saadia Gabriel is a PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she is advised by Prof. Yejin Choi. Her research revolves around natural language understanding and generation, with a particular focus on machine learning techniques and deep-learning models for understanding social commonsense and logical reasoning in text. She has worked on coherent text generation and studying how implicit biases manifest or can be captured in toxic language detection. 


Louis Byrd

Chief Visionary Officer @ Goodwim Design

Louis Byrd is the Founder and Chief Visionary Officer of Goodwim Design, a hybrid-design studio dedicated to licensing original ideas and socially responsible technology to companies in need of novel breakthroughs. Louis has set out to build a world where equity is the norm so that people everywhere will have an opportunity for success. He believes that technology is only limited by the imagination and life experiences of the creator.


Jamell Dacon

Ph.D. Student @ Michigan State University

Jamell Dacon is a second-year graduate student in the Department of Computer Science and Engineering at Michigan State University (MSU).

Jamell completed his M.S. degree in Computer Science in Spring 2020 at Michigan State University, and earned his B.S. degree in Mathematical Sciences in Spring 2018 and his A.S. degree in Computer Science in Fall 2017, both at the City University of New York - Medgar Evers College (MEC). Jamell was awarded a University Enrichment Fellowship (UEF) by the Graduate School at MSU (2018-2023). He joined the Data Science and Engineering (DSE) lab in Fall 2018. His research interests center on machine learning, with a primary focus on developing efficient and effective algorithms for data and sequence modeling and applying them in fields such as bioinformatics, recommendation systems, and natural language processing. Jamell not only concentrates on growing his academic skills but is also very passionate about community outreach, diversity, and inclusion; he was awarded a Graduate Leadership Fellow position by the College of Engineering.


Charles Earl

Data Scientist @ Automattic

Charles Earl works at Automattic.com as a data scientist, where he is involved in integrating natural language processing and machine learning into Automattic's product offerings (among them WordPress.com and Tumblr). In that capacity, he is passionate about building technology that is transparent, equitable, and just. He has over twenty years of experience in applying artificial intelligence.

Watch The Event Again

Implicit Bias Towards the Black Community

Unlock the recording of the webinar by becoming a member

Event Q&A

I wanted all of you to tell me about an anecdote, or the moment where you either realized or experienced bias in natural language processing or in any conversational AI product.

Saadia: Yeah, so the experience I have to share of implicit bias is a project in NLP that I started working on pretty early on during my time in grad school, where I was looking into toxic language detection in social media posts, specifically Reddit posts. For that project, one thing I was exploring was how to fool deep learning classifiers into thinking a benign post, a post that doesn't contain any toxic content, was toxic, and vice versa. Initially, I thought this would be a really challenging task, since these models seem to perform fairly well on toxic language detection data sets, so it should be fairly hard to trick them. But it turned out that a lot of these toxic language classifiers are easily swayed by keywords. So, for example, if there's a phrase that says something like "I'm a black woman" or "the new family on the block is Muslim," the classifier thinks it has a higher likelihood of being toxic just because it contains a keyword about a demographic that's likely targeted by toxic posts or by somebody with malicious intent, rather than the post actually being toxic itself.
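
As an illustration of the kind of probe Saadia is describing, here is a minimal Python sketch. The score_toxicity callable is a hypothetical stand-in for whatever classifier is under test, and the toy scorer at the bottom only mimics the keyword-driven failure mode she mentions; the probe sentences beyond the two from her example are illustrative assumptions.

```python
# Minimal sketch of the kind of probe Saadia describes: score benign sentences
# that mention a demographic group with a toxicity classifier and flag any that
# come back as toxic. `score_toxicity` is a hypothetical stand-in for the
# classifier under test (e.g. a fine-tuned transformer).

from typing import Callable, List, Tuple

BENIGN_PROBES: List[str] = [
    "I'm a black woman.",
    "The new family on the block is Muslim.",
    "Our neighbors host a book club every week.",
]

def find_false_positives(
    score_toxicity: Callable[[str], float],
    probes: List[str],
    threshold: float = 0.5,
) -> List[Tuple[str, float]]:
    """Return benign probes the classifier scores at or above the threshold."""
    flagged = []
    for sentence in probes:
        score = score_toxicity(sentence)
        if score >= threshold:
            flagged.append((sentence, score))
    return flagged

if __name__ == "__main__":
    # Toy keyword-sensitive scorer that mimics the failure mode: the mere
    # presence of an identity term inflates the toxicity score.
    def toy_scorer(text: str) -> float:
        identity_terms = {"black", "muslim"}
        return 0.9 if any(term in text.lower() for term in identity_terms) else 0.1

    for sentence, score in find_false_positives(toy_scorer, BENIGN_PROBES):
        print(f"{score:.2f}  {sentence}")
```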

Charles: There are a couple of systems I've worked with where this has been apparent, but kind of the most blatant example of something that could really harm or offend people was a domain name recommendation system that we had fielded. Lots of people, when they get WordPress accounts, want to set up their website and want a custom domain. At some point we ran a service where you could type in, you know, "all about Bs," something like that, and just get back a domain name that matched or at least was semantically close. As part of just monitoring the system, we would occasionally do checks where we would type in arbitrary words and see what happened. And we noticed, pretty much every couple of weeks that we ran these samples, it would invariably come back with a word that was quite offensive. One example that stuck out for me is that one person was trying to set up an LGBTQ advocacy site, and it came back with an offensive term; there was no doubt it was very transphobic. This kind of thing sticks out because, when you're actually trying to build a product that's based on natural language processing and real users are trying to do something with it, you kind of get a sense of how this kind of harm can impact people, and impact your bottom line, basically.
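
The periodic spot-check Charles describes can also be scripted. Below is a minimal sketch, assuming a hypothetical suggest_domains function in place of the actual recommendation service and a placeholder OFFENSIVE_TERMS set; in practice you would plug in the real service client and a maintained offensive-language lexicon.

```python
# Minimal sketch of a periodic audit like the one Charles describes.
# `suggest_domains` is a hypothetical stand-in for the recommendation service;
# OFFENSIVE_TERMS is a placeholder for a maintained offensive-language lexicon.

from typing import Callable, Iterable, List, Tuple

OFFENSIVE_TERMS = {"offensiveword1", "offensiveword2"}  # placeholder entries

def audit_suggestions(
    suggest_domains: Callable[[str], List[str]],
    sample_queries: Iterable[str],
) -> List[Tuple[str, str]]:
    """Return (query, suggestion) pairs where a suggestion contains a flagged term."""
    flagged = []
    for query in sample_queries:
        for suggestion in suggest_domains(query):
            if any(term in suggestion.lower() for term in OFFENSIVE_TERMS):
                flagged.append((query, suggestion))
    return flagged

if __name__ == "__main__":
    # Toy service for illustration: echoes the query into a .com domain.
    def toy_service(query: str) -> List[str]:
        return [query.replace(" ", "") + ".com"]

    print(audit_suggestions(toy_service, ["all about bees", "lgbtq advocacy"]))
```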

Jamell: So, one example that I had: I was doing a project a while ago on parallel context data construction. This is where you have gender pair words, so for male and female you would have he and she, husband and wife, Mr. and Mrs., stuff like that. And then race pair words from standard US English and AAVE; AAVE is African American Vernacular English, the variety of English spoken by many African Americans, where you speak in, like, Ebonics and so forth. We put these pair words into regular sentences and ran some fairness tests on a dialogue system. For example, "He has a really nice smile," and the sentiment was positive. Then we changed the word "he" to "she," and the sentiment was negative, because the system responded with something like "her smile is cute, but she's evil." And I was like, why would it say that? In terms of race pair words, we took the word "this," T-H-I-S, in a regular sentence and changed it to the AAVE spelling, where "this" is shortened to "dis," D-I-S, in an otherwise identical sentence. And it responded offensively, with references to drugs and other offensive stuff. And I was really confused as to why a single word made the dialogue system respond in such a way.
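
For readers curious what that parallel-context construction can look like in code, here is a rough sketch. The get_sentiment callable is a hypothetical stand-in for the dialogue system or sentiment model under test, and the word pairs and templates are illustrative, not the actual data from Jamell's project.

```python
# A rough sketch of the parallel-context construction Jamell describes: take a
# template sentence, swap in a paired term (he/she, standard "this" vs. AAVE
# "dis"), and compare how the system under test scores each variant.

from typing import Callable, List, Tuple

GENDER_PAIRS = [("he", "she"), ("husband", "wife")]
DIALECT_PAIRS = [("this", "dis")]  # standard US English spelling vs. AAVE spelling

TEMPLATES = [
    "he has a really nice smile",
    "this is my favorite song",
]

def build_parallel_pairs(
    templates: List[str], pairs: List[Tuple[str, str]]
) -> List[Tuple[str, str]]:
    """For each template containing the first word of a pair, emit (original, swapped)."""
    out = []
    for text in templates:
        tokens = text.split()
        for a, b in pairs:
            if a in tokens:
                swapped = " ".join(b if tok == a else tok for tok in tokens)
                out.append((text, swapped))
    return out

def compare(get_sentiment: Callable[[str], float], pairs: List[Tuple[str, str]]) -> None:
    """Print the score for each side of a pair; large gaps point at possible bias."""
    for original, swapped in pairs:
        print(f"{get_sentiment(original):+.2f}  {original}")
        print(f"{get_sentiment(swapped):+.2f}  {swapped}\n")

if __name__ == "__main__":
    parallel = build_parallel_pairs(TEMPLATES, GENDER_PAIRS + DIALECT_PAIRS)

    # Toy scorer for illustration only: pretends the model reacts badly to the
    # swapped-in terms, which is exactly the behavior this test tries to expose.
    def toy_scorer(text: str) -> float:
        return -0.6 if {"she", "dis"} & set(text.split()) else 0.8

    compare(toy_scorer, parallel)
```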

Louis: Yeah, definitely. So my story comes more from the actual user experience side, not so much under the hood. Recently, my wife and I purchased a Samsung smart fridge whose AI is powered by Bixby. Part of what I discovered could be because I'm from the Midwest here in the United States, so we kind of have a certain sound in our tone and in our dialect, especially on the east side of Kansas City, Missouri, where I'm from. I noticed one day, when I was trying to interact with the AI, interact with Bixby, that it never understood anything that I would say. Something as simple as saying, "Can you tell me what's on my calendar today?" and it would tell me the weather of some country across the seas. So that was confusing. Then one day, I was just joking around with my wife and I put on my "white man voice." For those of you in the US, you know exactly what I'm talking about. I kind of changed my tone, really turned it up a little bit, and just kind of did one of those numbers. All of a sudden, Bixby knew exactly what I was saying. So I started talking to the device, and whenever I put on my white man voice, it answered the questions with pure accuracy. But whenever I talked with my natural tone, it barely understood me. So the scientist in me wanted to experiment more, and I had my wife try it, and the same thing happened with her. So that's kind of where my interest in conversational AI came from.

What do you think is the main reason behind these issues of bias?

Louis: I believe that the main issue behind many of these problems goes back to the source. And what I mean by that is: where is the information coming from? I think it's easy to say that it's the developers or the designers who are exhibiting certain biases, and maybe that's true. But you also have to ask the question, okay, when they're training these machines, where does that data come from? Is that data skewed? Or does it have a very limited range of who they're basing their data inputs on? So for me, the issue comes back to the inherent biases that exist from the onset: where is the data coming from, and who is that data tailored around, from a cultural perspective, an ethnic perspective, and all those things.

In the case of toxic language detection, what data have the models you use been trained on? Where and how were they trained, and what language do they use? Going back to your research on that specific topic, is the point Louis is making about the cultural impact on the data set true?

We say there are not enough data sets, but you mentioned that we do have an AAVE data set. So how come we still have these issues?

What do you think would be a good way to communicate these challenges and make people aware of them?

What is your discourse, and how do you approach it? What is your way to bring about the motivation and willingness to go and make something that is much more inclusive and socially responsible?

What is Natural Language Processing (NLP)? And what are some of the main issues within it that bring bias into the larger products that embed these systems?

How do we detect that a data set has bias? How do you decide which data set and which techniques to use?

Can't we just get machine learning to fix itself and resolve bias? Is there an avenue for that?

What are some of the things I can do to help mitigate the issue of bias when making a product?

How do you think we could move forward, and what are some of the things you see coming that will help us take control and mitigate or get rid of bias?

Have our panelists or attendees encountered a Black voice assistant or chatbot?

Unlock the rest of the Q&A by becoming a member

Curated Resources By Voice Tech Global

In this section, we have highlighted all the articles that have helped us put together the initial set of questions for the panel.

Review of bias definitions and mitigation strategies by IBM

Case study of bias in NLP with real-world consequences

Stanford study on bias in text to speech

A thorough review of the process of building an inclusive, empathy-first conversational AI product

Racial Bias in Hate Speech Detection

View the case studies and articles by becoming a member

