Education, diversity and tooling are essential for fighting sexist and racist AI

By Bogi Szalacsi - Senior Associate - infoNation

Artificial intelligence solutions can save tremendous amounts of money and time, and even our sanity ("Alexa, where is my phone?"). The future looks bright for AI, if it is used for good.

But the darker side is very much present too: AI solutions riddled with discrimination and pressing ethical issues. Most of the bad press artificial intelligence has earned stems from its insensitivity to diversity, which in turn has three causes: a homogeneous AI workforce, a lack of diversity in the data AI is trained on, and too few tools for identifying bias.

It recently came to light that Amazon's recruiters were using an AI tool that discriminated against female candidates. Trained on a decade's worth of CVs submitted overwhelmingly by men, the algorithm learned to favour male applicants. The natural language processing tool winnowed out CVs from women based on particular vocabulary (e.g. the word "women's" in phrases like "women's chess club") or on mentions of certain all-women's schools.
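To see how this failure mode works mechanically, consider a deliberately simplified sketch. Nothing below is Amazon's actual system; the terms and weights are invented. It shows how a model trained on historically male-dominated hiring data can end up penalising vocabulary that correlates with female applicants:

```python
# Hypothetical sketch of how a keyword-penalty CV screener encodes
# gender bias. Terms and weights are invented for illustration only.

# Penalties such a model might learn from historical, male-dominated
# hiring data: vocabulary correlated with female applicants gets
# downweighted, regardless of the applicant's actual skills.
LEARNED_PENALTIES = {
    "women's": -0.8,   # e.g. "captain of the women's chess club"
    "sorority": -0.5,
}

def score_cv(text: str) -> float:
    """Score a CV; applicants below a cutoff are winnowed out."""
    score = 1.0
    for term, penalty in LEARNED_PENALTIES.items():
        if term in text.lower():
            score += penalty
    return score

print(score_cv("Captain of the chess club, 5 years of Java"))          # 1.0
print(score_cv("Captain of the women's chess club, 5 years of Java"))  # ~0.2
```

Two identical CVs, two very different scores; the only difference is a word that signals the applicant's gender.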

Nikon's cameras showed a clear preference for non-Asian faces: when a smiling Asian person had their photograph taken, the camera interpreted their eyes as blinking and warned the photographer accordingly.

In another notable case, as the US justice system increasingly relies on AI algorithms to predict the chances of a convicted criminal reoffending, to set jail time and to determine sentences, it was found that black offenders were far more likely than white offenders to be wrongly flagged as high risk.

Although some improvements have been made, Google's wildly popular Translate tool continues to show gender bias in many languages, despite Google having been aware of the issue for years. Some languages, like Hungarian and Turkish, have no grammatical gender: the same pronouns and nouns can refer to either men or women. Yet when translating into English, Google's algorithm pairs stereotypically male words such as "strong", "pilot" or "surgeon" with "he", and stereotypically female ones such as "nurse", "nice" or "teacher" with "she".
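Why does this happen? The toy sketch below illustrates the underlying mechanism with invented numbers; it is emphatically not Google's actual model, which is a neural system trained on web-scale text. But any system that simply picks the statistically most likely pronoun will reproduce its corpus's stereotypes:

```python
# Toy illustration of statistical gender assignment in translation.
# The counts are invented; this is not Google's actual model.

# Imagined corpus co-occurrence counts between professions and pronouns.
CORPUS_COUNTS = {
    "surgeon": {"he": 900, "she": 100},
    "nurse":   {"he": 120, "she": 880},
}

def translate_pronoun(noun: str) -> str:
    """Pick whichever English pronoun co-occurs most with the noun."""
    counts = CORPUS_COUNTS.get(noun, {"he": 1, "she": 1})
    return max(counts, key=counts.get)

# Hungarian "ő" carries no gender, so the corpus statistics decide:
print(translate_pronoun("surgeon"), "is a surgeon")  # he is a surgeon
print(translate_pronoun("nurse"), "is a nurse")      # she is a nurse
```

The source sentence gives the system no gender information at all, so the bias comes entirely from the data it learned from.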

The above examples come as no surprise to anyone who has looked closely at the lack of diversity among data scientists and developers of AI algorithms. According to Forbes, a mere 26% of data-related jobs in the United States are held by women, and people who are neither white nor Asian are even more severely underrepresented.

In AI the numbers are even more dismal: Wired magazine reports that only 12% of leading machine learning researchers are women, and the numbers for minority women are negligible. Although in some parts of the world, such as Asia and Eastern Europe, women are making strides in data science, in most of the developed world it is sadly true that the algorithms and robots are being designed by white men, and will reflect their biases accordingly.

Data science and engineering teams could cancel out such biases, but only if they are diverse and use the appropriate tooling. AI is only as fair as we choose to make it. The machine learning algorithms in use today "haven't been optimized for any definition of fairness", Deirdre Mulligan, an associate professor at the University of California, Berkeley who studies ethics in technology, told Fortune magazine. "They have been optimized to do a task."

As our reliance on artificial intelligence grows rapidly, it is essential that we act now to design fairer technologies for everyone and to equip people with tools that help. A line of AI fairness tools is slowly hitting the market.

IBM's AI Fairness 360 toolkit, launched in September 2018, is open-source software designed to monitor algorithms and flag bias. The Open Data Institute (ODI) has also developed the Data Ethics Canvas, a new approach that helps organisations identify and manage data ethics considerations.
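What does "flagging bias" look like in practice? One widely used fairness measure is disparate impact: the ratio of favourable-outcome rates between an unprivileged and a privileged group. The sketch below computes it by hand on invented numbers; toolkits like AI Fairness 360 expose this kind of metric in far more robust form:

```python
# Hand-rolled version of one metric fairness toolkits report:
# disparate impact, the ratio of favourable-outcome rates between an
# unprivileged and a privileged group. All numbers are invented.

def disparate_impact(outcomes, unprivileged, privileged):
    """Selection-rate ratio; values below ~0.8 are a common red flag."""
    def rate(group):
        selected = [hired for g, hired in outcomes if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# (group, hired?) pairs from a hypothetical screening model.
decisions = [
    ("women", 1), ("women", 0), ("women", 0), ("women", 0),
    ("men", 1), ("men", 1), ("men", 0), ("men", 0),
]

# 0.25 / 0.50 = 0.5 -- well below 0.8, so the tool would raise a flag.
print(disparate_impact(decisions, "women", "men"))
```

The 0.8 threshold echoes the "four-fifths rule" used in US employment law, which is why many toolkits treat it as a default warning level.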

All the large IT companies are working on their own solutions or adopting existing ones. Google (the company that was "appalled and genuinely sorry" a few years back when its photo algorithm was discovered to be labelling African-Americans as gorillas) is continuously improving its What-If Tool, which does not yet work in real time, while Facebook and Microsoft both consider the development of bias-detecting technologies a priority.

But tools on their own are not enough. To achieve true fairness we need to involve women, minorities, and people of all ages, sizes, languages, accents, abilities and goals in the AI field end to end: tool development, research, design, engineering, marketing and sales. Naturally, involving women and minorities in artificial intelligence requires education at all levels, from the primary education of small children to the continuing education of professionals.

Experience shows that this will not happen on its own; the full cooperation of governments, employers, managers, schools and parents is needed to draw (or, if necessary, push) diverse people into an industry traditionally dominated by a single group. A glance at the homogeneous pool of participants at AI tech and business conferences should concern us all. Merely "encouraging" women and minorities into technology is failing, and we don't have decades to wait for today's tech-savvy children to grow up and help us out. AI is only as fair as we make it, and the same is therefore true of our future.
