Both Zoom and Twitter found themselves under fire this weekend over algorithmic bias: Zoom for its video conferencing service's virtual backgrounds, and Twitter for its automatic photo cropping tool.
It started when PhD student Colin Madland tweeted about a Black faculty member's issues with Zoom. According to Madland, whenever the faculty member used a virtual background, Zoom would remove his head.
“We have reached out directly to the user to investigate this issue,” a Zoom spokesperson told TechCrunch. “We’re committed to providing a platform that is inclusive for all.”
Twitter's cropping tool came under scrutiny when Madland tweeted about the Zoom problem: on mobile, Twitter's automatic preview crop repeatedly showed Madland's face while cutting out the Black faculty member.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” a Twitter spokesperson said in a statement to TechCrunch. “But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”
Twitter pointed to a tweet from its chief design officer, Dantley Davis, who ran some of his own experiments. Davis posited that Madland’s facial hair affected the result; when he edited the image to remove the facial hair, the Black faculty member appeared in the cropped preview. In a later tweet, Davis said he’s “as irritated about this as everyone else. However, I’m in a position to fix it and I will.”
Twitter also pointed to an independent analysis from Vinay Prabhu, chief scientist at Carnegie Mellon. In his experiment, he sought to see if “the cropping bias is real.”
In response to the experiment, Twitter CTO Parag Agrawal called whether or not cropping bias is real “a very important question.” In short, sometimes Twitter crops out Black people and sometimes it doesn’t. But the fact that Twitter does it at all, even once, is enough to be a problem.
It also speaks to the bigger issue of the prevalence of bad algorithms. These same types of algorithms lead to biased arrests and imprisonment of Black people. They’re also the same kind of algorithms that led Google to label photos of Black people as gorillas and that turned Microsoft’s Tay bot into a white supremacist.