Can technology be racist and sexist?
According to tech expert and author Sara Wachter-Boettcher, who stopped by "Salon Talks" recently to chat with me, just because machines aren’t human doesn’t mean that digital technology is free of human biases.
In her book, “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech,” Wachter-Boettcher lays out how technology, being made by humans and informed by human environments, all too frequently replicates the blind spots and prejudices of people, especially the predominantly white, male people running Silicon Valley.
“Bias gets embedded at this really deep level and it gets really scary when we start talking about the way algorithms, artificial intelligence can be biased,” Wachter-Boettcher explained. “That’s happening more and more in places like image recognition or natural language processing, where a computer learns from past data things like connections between words.”
“‘Man is to woman as king is to queen,’” she continued, giving an example. “Okay, fine. But then they’ll also learn things like, ‘Man is to computer scientist as woman is to homemaker.’”
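The analogies Wachter-Boettcher describes come from vector arithmetic on word embeddings: a system represents each word as a list of numbers learned from text, and “man is to king as woman is to ?” is answered by computing king − man + woman and finding the nearest word. Here is a minimal sketch of that mechanism, using tiny hand-picked toy vectors rather than anything trained on real data:

```python
import math

# Toy embeddings: hand-picked 3-d vectors for illustration only.
# Real systems (e.g. word2vec) learn hundreds of dimensions from
# large text corpora -- which is exactly where bias creeps in.
embeddings = {
    "man":   [1.0, 0.0, 0.2],
    "woman": [0.0, 1.0, 0.2],
    "king":  [1.0, 0.0, 0.9],
    "queen": [0.0, 1.0, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = [bv - av + cv for av, bv, cv in
              zip(embeddings[a], embeddings[b], embeddings[c])]
    # Return the closest remaining word to the target vector.
    return max((w for w in embeddings if w not in {a, b, c}),
               key=lambda w: cosine(embeddings[w], target))

print(analogy("man", "king", "woman"))  # -> queen
```

With trained embeddings, the same arithmetic that yields “queen” here can just as mechanically yield “homemaker” for “man is to computer scientist as woman is to ?”, because the vectors absorb whatever associations appear in the training text.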
“Same with photo recognition stuff,” Wachter-Boettcher added. FaceApp, for example, “had trained their ‘hotness’ filter,” which manipulated pictures to look more conventionally attractive, “to learn what attractiveness was on pictures of only white people.” The result was “it kept taking pictures of people of color and just making their skin lighter and then giving them like more European features.”
“I look at that and I think, how did you make it all the way through like your entire product development cycle, get this out on the market, and not realize that you hadn’t thought about people of color at all,” Wachter-Boettcher said. “And it’s shockingly common.”
Watch our full "Salon Talks" conversation on Facebook.
Tune into Salon's live shows, "Salon Talks" and "Salon Stage," daily at noon ET / 9 a.m. PT and 4 p.m. ET / 1 p.m. PT, streaming live on Salon and on Facebook.