Breaking gender stereotypes, one answer at a time
iGenius · February 10, 2021 · 4 mins to read

Technology and artificial intelligence have unquestionably made our personal and professional lives easier.

At home, voice assistants help us cook and advise us on how to dodge traffic on the way to work. Technology and artificial intelligence are also increasingly supporting us at work — particularly so during the lockdown days, with teams working remotely around the world.

The way we interact with technology has also evolved: we increasingly type less and speak more to our devices. According to the Future Today Institute, by 2021 half of all human-machine interactions will happen by voice rather than through a keyboard.

Ethical issues in AI voices

Voice interaction with our devices has simplified user interfaces and experience, making technology more accessible to everyone. But it has also raised ethical issues, such as gender bias in artificial intelligence.

A UNESCO report highlighted that the proliferation of AI voice assistants “developed by predominately male teams” and overwhelmingly “projected as young women” might perpetuate harmful gender biases and widen already existing gender divides.

According to the report, female AI assistants “reinforce commonly held gender biases that women are subservient and tolerant of poor treatment”.

A little less than a year ago, we wrote a blog post about the importance of having a diverse, international development team behind artificial intelligence.

Since then, a lot of work has gone into shaping the voice interface and user experience of our AI advisor, crystal.

Most importantly, into designing crystal’s Generic Intents: the way she responds to users’ questions that are unrelated to the purpose of the interaction.

crystal is our AI advisor for data intelligence, so any question that goes beyond data analysis, from ‘how are you today?’ to a less appropriate ‘what are you wearing?’, falls under the scope of Generic Intents.
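
In practice, this kind of scoping works like a routing step: an intent classifier labels each utterance, and anything not recognized as a data-analysis request falls through to a Generic Intents handler. Below is a minimal sketch of that idea in Python; the keyword heuristic, function names, and canned reply are hypothetical stand-ins, not iGenius's actual implementation.

```python
# Hypothetical sketch of Generic Intents routing; not iGenius's actual code.
# A production system would use a trained intent classifier; a simple
# keyword heuristic stands in for one here.

DATA_KEYWORDS = {"sales", "costs", "revenue", "ranking", "production", "data"}

def classify(utterance: str) -> str:
    """Label an utterance as a data request or a generic intent."""
    tokens = set(utterance.lower().strip("?!. ").split())
    return "data_intent" if tokens & DATA_KEYWORDS else "generic_intent"

def handle(utterance: str) -> str:
    """Route data questions to analysis; deflect everything else back to data."""
    if classify(utterance) == "data_intent":
        return f"Running analysis for: {utterance}"  # placeholder for the real pipeline
    return "I'd rather not talk about this. Let's get back to your data!"

print(handle("Show me the sales data from Q1 to Q2"))  # routed to analysis
print(handle("What are you wearing?"))                 # deflected
```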

Competence as a means to build users’ trust

Our Conversational Design team helped shape our AI advisor’s persona — or the way she projects herself and her tone of voice when interacting with users.

“Our AI, crystal, is not an assistant. She is an advisor for data intelligence”, says Anna Do Amaral, Conversational Designer at iGenius.

Most AI advisors tend to be gendered male, projecting authoritativeness, whereas servile AI assistants are usually gendered female.

“When building crystal’s persona, we focused on competence, rather than servility”, added Arianna Stefanoni, our Head of Conversational Design. “crystal is not one step back from the users, answering their needs. She works side-by-side with them, supporting them with data analysis.”

“The key to building users’ trust in our AI advisor’s abilities is the accuracy of the data-driven insights she provides them”, continues Stefanoni. “This boils down to the technology behind her, but crystal’s conversational interface must reflect her competence, and her language suggest the same clout as her users.”

“When shaping crystal’s persona, we focused on her relationship with the user, based on equality”, added Conversational Designer Stefano Calabrò. “We wanted users to see crystal as a real colleague, someone to trust, rely on, even joke with, but always in a respectful and professional manner.”

Generic Intents: right answers to wrong questions

The UNESCO report was entitled I’d Blush If I Could, a reference to an early response — now removed — that Siri, Apple’s virtual assistant, would give when sexually harassed by its users.

As long as users’ questions to a virtual assistant (or advisor, in our case) are relevant to the purpose of the interaction, the tone of voice and language choices in the AI’s response are relatively straightforward.

Problems arise when users ask unrelated, and sometimes inappropriate, questions. This is where Generic Intents come in.

“crystal does not engage when questions touch, even slightly, on gender, sex, or private life”, adds Do Amaral. “Instead, she brings the conversation back to the purpose of the advisor-user interaction: data. And she does that with kindness and wit.

“There are no stereotypes when it comes to data insights.”

So, for example, should a user ask crystal something like: ‘Do you like men or women?’, crystal would reply: ‘Honestly, I’d rather not talk about this — let’s get back to your data!’.

Another example: users might get frustrated and say, ‘crystal, you’re useless’ (or worse).

Her stance, once again, would be not to engage, and her response: ‘Here’s my analysis: that was 100% inappropriate. Let’s go back to your data!’
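
Those deflections can be pictured as a simple lookup from generic-intent category to a fixed, on-brand reply that always redirects to data. Another hypothetical sketch: the category names below are invented, and the replies echo the two examples above.

```python
# Hypothetical mapping from generic-intent category to a deflecting reply.
# Category names are invented; the replies echo the examples in this post.
GENERIC_RESPONSES = {
    "private_life": "Honestly, I'd rather not talk about this. Let's get back to your data!",
    "insult": "Here's my analysis: that was 100% inappropriate. Let's go back to your data!",
}

def respond(category: str) -> str:
    # Unrecognized generic intents still steer the user back to data analysis.
    return GENERIC_RESPONSES.get(category, "Let's get back to your data!")

print(respond("insult"))
```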

