Is AI bigoted?
Artificial Intelligence (AI) has reconfigured our global digital world. AI systems are increasingly involved in every aspect of our lives, and whether that influence proves positive or negative in the long run depends largely on your perspective.
We have heard theories of AI killer robots, and we have also heard of AI saving the world economy, but what is true? And why are there such differing opinions? AI uses mathematical models to identify patterns in data, and those patterns can encode inherent biases, which are then learned by the system and, in essence, continually reproduced, further compounding the problem. This can create social imbalance and marginalize groups such as women, the elderly, or minorities.
Take language translation for example.
Machine translation systems are trained on sentence pairs. Consider an English-French translation recently run through Google Translate. The English input was "The women started the meeting. They worked efficiently," and the French output was "Les femmes ont commencé la réunion. Ils ont travaillé efficacement." This is incorrect: "Ils" is the masculine plural subject pronoun in French, and it appears here despite the context clearly indicating that women are being referred to. This is but one example of an AI system producing sexist output.
In general, about 70 percent of the gendered pronouns in translation data sets are masculine, while 30 percent are feminine, because the texts used for training tend to refer to men more often than women. To ensure a 50/50 ratio of masculine and feminine pronouns, specific sentence pairs would have to be removed from the data. But what would it take? Can it be done? Would the result ever be truly unbiased, or would doing so create even more issues we haven't yet thought about? Read on for next steps, and let us know your thoughts.
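To make the rebalancing idea concrete, here is a toy sketch of downsampling a parallel corpus so masculine and feminine subject pronouns appear in equal numbers. The sentence pairs, the simple pronoun-matching heuristic, and the function names are all hypothetical; real training corpora are millions of pairs and would need proper morphological analysis, not word matching.

```python
import random

# Hypothetical English-French sentence pairs; real training data
# would number in the millions.
pairs = [
    ("They worked efficiently.", "Ils ont travaillé efficacement."),
    ("They worked efficiently.", "Elles ont travaillé efficacement."),
    ("He left early.", "Il est parti tôt."),
    ("He is a doctor.", "Il est médecin."),
    ("She is a doctor.", "Elle est médecin."),
    ("He reads.", "Il lit."),
]

MASC = {"il", "ils"}
FEM = {"elle", "elles"}

def gender_of(french: str):
    """Crudely classify a French sentence by its subject pronoun, if any."""
    words = {w.strip(".,").lower() for w in french.split()}
    if words & MASC:
        return "masc"
    if words & FEM:
        return "fem"
    return None

masc = [p for p in pairs if gender_of(p[1]) == "masc"]
fem = [p for p in pairs if gender_of(p[1]) == "fem"]

# Downsample the majority class so masculine and feminine pronouns
# occur in a 50/50 ratio in the balanced corpus.
random.seed(0)
k = min(len(masc), len(fem))
balanced = random.sample(masc, k) + random.sample(fem, k)
print(len(masc), len(fem), len(balanced))  # → 4 2 4
```

Note the cost this sketch makes visible: balancing by deletion throws away data (here, half the masculine pairs), which can hurt overall translation quality; that trade-off is part of why debiasing is harder than it first appears.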