Oleh Hlovatskyi

Sebastian Ewak
Gender Issues in AI Models
Gender bias and tolerance are hot topics in machine learning. In June 2020, MIT took offline a dataset that trains misogynistic AI models.
MIT has apologized for and disabled a dataset that trains AI models with misogynistic and racist tendencies.
The dataset, called “80 Million Tiny Images,” was created in 2008. It is a huge collection of images, each individually labeled, intended for training AI systems in object detection.
Machine learning models learn from these images and their labels. When a picture of a street is fed in, a trained model can identify cars, street lights, pedestrians, and bicycles.
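To make this concrete, here is a minimal sketch of how a model trained on a labeled-image dataset is used to identify objects in a street photo. It relies on torchvision's pretrained Faster R-CNN as a stand-in; the model, the COCO label subset, and the input file `street.jpg` are illustrative assumptions, not the actual Tiny Images pipeline.

```python
# Sketch: running a pretrained object-detection model over a street photo.
# The model and labels here are stand-ins, not the Tiny Images setup.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Subset of the COCO category names the pretrained model knows
COCO_LABELS = {1: "person", 2: "bicycle", 3: "car", 10: "traffic light"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")  # hypothetical input photo
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep confident detections and print their human-readable labels
for label_id, score in zip(predictions["labels"], predictions["scores"]):
    if score > 0.8 and label_id.item() in COCO_LABELS:
        print(COCO_LABELS[label_id.item()], f"{score:.2f}")
```

The labels a model emits are only as good as the labels it was trained on, which is exactly why mislabeled training data is so damaging.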
Two researchers, Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland, analyzed the images and found thousands of shocking labels.
In the MIT training set, women were labeled with obscene terms and tied to communities that objectify the female body. The analysis also showed that the dataset contains close-up images of female genitalia labeled with the C-word.

Prabhu and Birhane alerted MIT, and the institute quickly took the training set down. MIT went even further, urging everyone who uses the dataset to stop using it and delete any copies.
MIT said that the institute was unaware of the offensive labels and that they were “the result of an automated data collection procedure that relied on nouns from WordNet.”
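As a hedged illustration of the failure mode MIT described, the sketch below enumerates WordNet nouns with NLTK and treats each one as a label and an image-search query. WordNet contains offensive nouns, so a fully automated pipeline ingests them without human review. The `collect_images_for` function is a hypothetical stand-in for the scraping step, not MIT's actual code.

```python
# Sketch: why automated collection over WordNet nouns pulls in slurs.
# Every noun becomes a category with no vetting step in between.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def collect_images_for(query: str) -> list:
    """Hypothetical stand-in for a scraper that fetches images per noun."""
    return []  # a real pipeline would query an image search engine here

# Every noun synset becomes a label and a search query automatically
for synset in wn.all_synsets(pos=wn.NOUN):
    label = synset.lemma_names()[0]
    images = collect_images_for(label)
    # ...images are then stored under `label`, however derogatory it may be
```

Without a human or a blocklist filtering the noun list, derogatory terms become categories just like any other, which matches MIT's explanation of how the offensive labels got in.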
The MIT spokesperson went on to add that the presence of such biased images hinders efforts to create a culture of inclusiveness in the machine learning community. This is extremely unfortunate and contradicts the values that the machine learning community strives to uphold.