Taking Advice From a Woman

May 1, 2022
As the developers of crystal, the first AI-powered advisor for data intelligence, we naturally couldn’t be prouder of her. But bias isn’t always positive.

I’d blush if I could

A 2019 report by UNESCO put the “female obsequiousness and servility expressed by digital assistants projected as young women” under the spotlight.
The report’s name itself is inspired by the response that Siri, Apple’s virtual assistant, would give when sexually harassed by its users. Apple has now updated the software behind Siri, allowing the voice assistant to fend off explicit questions with a flat “I don’t know how to respond to that”.
Addressing the problems underlying gender bias in artificial intelligence takes more than a software update, though.

Designed by white men for white men

Research firm Ovum estimated that by 2021 there would be more voice-activated assistants on the planet than people. This staggering number reflects a broader technology shift: the Future Today Institute predicted that, by the same year, half of the interactions humans have with machines would happen by voice.
As we move towards hands-free technology, we will type less and speak more with our devices. This shift comes with a few risks.
The UNESCO report highlights that the proliferation of AI voice assistants “developed by predominately male teams” and overwhelmingly “projected as young women” might perpetuate harmful gender biases and widen already existing gender divides.
“Bias can… emerge in AI systems because of the very narrow subset of the population that design them”, reads a 2017 report by research institute AI Now.
“AI developers are mostly male, generally highly paid, and similarly technically educated. Their interests, needs, and life experiences will necessarily be reflected in the AI they create. Bias, whether conscious or unconscious, reflects problems of inclusion and representation.”

Female: servile assistant = male: assertive advisor

A 2019 UN study examined 70 AI voice assistants worldwide and found that over two-thirds of them had female voices, with no option to switch to a male version.
As the UNESCO report points out, female digital assistants have not always been the norm. “Perhaps the closest relative to today’s all-purpose virtual assistants were speaking car navigation systems”, it explains. “The voices for these systems gave terse, authoritative directions… and were almost always male.”
One of the few early car models equipped with a female voice for navigation, a late 1990s BMW 5 Series, was recalled in Germany because so many drivers complained about getting directions from a woman.
AI technologist Kriti Sharma noted in a 2018 TEDx talk that, while virtual assistants “designed to be our obedient servants” tend to have female voices by default, more authoritative AI-powered products, such as ROSS the robot lawyer, or B2B solutions like IBM Watson and Salesforce Einstein, usually have male voices.

Bias in, bias out

Gartner estimates that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
Algorithmic bias has been making headlines for years: from facial recognition systems that misidentify Black faces, to algorithms that serve ads for high-paying jobs to men far more often than to women, the list goes on.
Algorithms are sets of rules that allow artificial intelligence to “learn” from data.
In the era of big data, the quality of data is as important as the rules themselves, as it will ultimately determine the final outcome — for example, the accuracy of the response virtual assistants will provide their users with.
Computer scientists refer to this concept as GIGO: garbage in, garbage out. When it comes to gender bias in artificial intelligence, Dr. Fei-Fei Li, Co-Director of Stanford University’s Human-Centered AI Institute, talks about “bias in, bias out.”
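The “bias in, bias out” idea can be sketched in a few lines of code. The toy dataset and the counting-based “model” below are invented purely for illustration; real systems are far more complex, but the principle is the same: a model that only ever sees skewed data will reproduce the skew as if it were a rule.

```python
from collections import Counter

# Hypothetical training data: (role, voice) pairs, skewed toward
# female-voiced assistants and male-voiced advisors.
training_data = [
    ("assistant", "female"), ("assistant", "female"),
    ("assistant", "female"), ("assistant", "male"),
    ("advisor", "male"), ("advisor", "male"),
]

def train(data):
    """'Learn' the most common voice for each role -- nothing more."""
    votes = {}
    for role, voice in data:
        votes.setdefault(role, Counter())[voice] += 1
    return {role: counts.most_common(1)[0][0] for role, counts in votes.items()}

model = train(training_data)
print(model)  # the skew in the data becomes the rule the model applies
```

No line of this code mentions gender, yet the output pairs “assistant” with a female voice and “advisor” with a male one, simply because that is what the data contained.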
Testifying in front of the US House of Representatives in 2018, Dr. Li highlighted the fact that “humanity has never created a technology so similar to us, or trying to resemble who we are”, and stressed the importance of “a broad representation of humanity developing AI”.

Breaking stereotypes

On to our own kind of bias.
As an AI company, we believe in embracing diversity, not just in terms of race and gender, but also culturally, working with a team of people coming from different backgrounds, with different points of view.
Our first product, crystal, has always been “she” to us, yet she has very little in common with virtual assistants — nothing, in fact, aside from the conversational AI technology behind her and her female character.
crystal is the first AI-powered advisor for data intelligence — advisor, not assistant.
As far as humanizing AI goes, we think of crystal as an insightful work colleague, not an obsequious secretary. She does not react to orders, she acts as an equal, offering advice and proactive notifications to her users.
Being a B2B solution, crystal has all the characteristics of her “male”, assertive counterparts. She was conceived and developed as a B2B advisor designed to help businesses make better decisions with their data.
In this sense, crystal is perhaps similar to that late 1990s BMW 5 Series voice navigation system, providing users with clear, assertive directions.
Except we don’t want people to drive her out, we want her to drive their businesses.