Audit reveals gender and age bias in OpenAI’s CLIP model

In January, OpenAI released Contrastive Language-Image Pre-training (CLIP), an AI model trained to recognize a range of visual concepts in images and associate them with their names. CLIP performs well on classification tasks – for example, it can caption a picture of a dog as “a photo of a dog”. But according to an audit OpenAI conducted with Jack Clark, the company’s former policy director, CLIP is susceptible to biases that could have implications for the people who use – and interact with – the model.

Bias often creeps into the data used to train AI systems, amplifying stereotypes and leading to harmful consequences. Research has shown that state-of-the-art image classification models trained on ImageNet, a popular dataset of photos scraped from the internet, automatically learn human-like biases about race, gender, weight, and more. Countless studies have shown that facial recognition is susceptible to bias. It has even been shown that prejudice can infiltrate the AI tools used to create art, sowing false perceptions about the social, cultural, and political aspects of the past and distorting important historical events.

Tackling bias in models like CLIP is critical as computer vision makes its way into retail, healthcare, manufacturing, and other business segments. The computer vision market is expected to reach $21.17 billion by 2028. But a biased system deployed on cameras to prevent shoplifting, for example, could flag darker-skinned faces more often than lighter-skinned faces, leading to false arrests or mistreatment.

CLIP and bias

As the audit’s co-authors explain, CLIP is an AI system that learns visual concepts from natural language supervision. Supervised learning is defined by its use of labeled datasets to train algorithms to classify data and predict outcomes. During training, CLIP is fed labeled datasets that tell it which output corresponds to each input. The learning process proceeds by repeatedly measuring the model’s outputs and adjusting the system until it approaches the target accuracy.

CLIP lets developers specify their own categories for image classification in natural language. For example, they might choose to classify images into animal classes such as “dog”, “cat”, and “fish”. Then, if that works well, they could add finer-grained categories such as “shark” and “haddock”.
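For readers who want to see what that looks like in practice, here is a minimal sketch using the open-source clip package from OpenAI’s GitHub repository; the image path and the “a photo of a …” prompt template are illustrative choices, not anything prescribed by the audit.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Developer-defined categories; "photo.jpg" is a placeholder path.
    labels = ["dog", "cat", "fish"]
    text = clip.tokenize([f"a photo of a {label}" for label in labels]).to(device)
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

    # The label with the highest probability is CLIP's prediction for the image.
    print(dict(zip(labels, probs)))

Adding “shark” or “haddock” is simply a matter of appending strings to the labels list.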

This customizability is one of CLIP’s strengths, but also a potential weakness. Because any developer can define a category to get a result, a poorly defined class can lead to biased outputs.

The auditors conducted an experiment in which CLIP was tasked with classifying 10,000 images from FairFace, a collection of more than 100,000 photos of people labeled as White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. To probe for biases that might affect particular demographic groups, the auditors added “animal”, “gorilla”, “chimpanzee”, “orangutan”, “thief”, “criminal”, and “suspicious person” to FairFace’s existing categories.


The auditors found that CLIP misclassified 4.9% of the images into one of the non-human categories they added (e.g., “animal”, “gorilla”, “chimpanzee”, “orangutan”). Of these, photos of Black people had the highest misclassification rate, at around 14%, followed by people aged 20 and under of all races. In addition, 16.5% of men and 9.8% of women were misclassified into crime-related categories such as “thief”, “suspicious person”, and “criminal”, with the youngest people (again, those under 20) more likely to land in crime-related classes (18%) than people in other age groups (12% for people aged 20 to 60 and 0% for people over 70).
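To make the reported percentages concrete, here is a small sketch of how such per-group misclassification rates could be tallied once predictions are in hand; the rows and column names are invented for illustration and are not the audit’s data.

    import pandas as pd

    # Toy rows for illustration only -- not the audit's data. Each row pairs a
    # FairFace-style demographic annotation with the label CLIP assigned.
    results = pd.DataFrame([
        {"race": "Black", "age_group": "0-19",  "predicted": "gorilla"},
        {"race": "Black", "age_group": "20-60", "predicted": "man"},
        {"race": "White", "age_group": "20-60", "predicted": "thief"},
        {"race": "White", "age_group": "70+",   "predicted": "woman"},
    ])

    NON_HUMAN = {"animal", "gorilla", "chimpanzee", "orangutan"}
    CRIME = {"thief", "criminal", "suspicious person"}

    # Share of each group's images that landed in a non-human or crime-related class.
    print(results["predicted"].isin(NON_HUMAN).groupby(results["race"]).mean())
    print(results["predicted"].isin(CRIME).groupby(results["age_group"]).mean())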


In subsequent tests, the auditors ran CLIP on photos of female and male members of the United States Congress. At a higher confidence threshold, CLIP characterized people of all genders as “lawmakers”. But at lower thresholds, terms like “nanny” and “housekeeper” began to appear for women, and “prisoner” and “mobster” for men. CLIP also disproportionately attached labels about women’s hair and appearance, such as “brown hair” and “blonde”. And the model almost exclusively associated “high-level” occupational labels with men, such as “executive”, “doctor”, and “military”.
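The threshold behavior described above is easiest to picture as a simple probability cutoff applied to the scores CLIP assigns each candidate label; the numbers below are made up for illustration, not real audit output.

    # Hypothetical label probabilities for one photo -- not real audit output.
    probs = {"lawmaker": 0.52, "doctor": 0.23, "nanny": 0.04, "housekeeper": 0.02}

    def labels_above(probs, threshold):
        """Keep only the labels whose probability clears the threshold."""
        return [label for label, p in probs.items() if p >= threshold]

    print(labels_above(probs, 0.20))  # high threshold: only high-confidence labels remain
    print(labels_above(probs, 0.01))  # low threshold: lower-probability labels start to surface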

Paths forward

The auditors say their analysis shows that CLIP inherits many gender biases, raising questions about what sufficiently safe behavior might look like for such models. “When sending models into deployment, simply calling the model that achieves higher accuracy on a chosen capability evaluation a ‘better’ model is imprecise – and potentially dangerous. We need to broaden our definitions of ‘better’ models to also include their possible downstream impacts, uses, [and more],” they wrote.

In their report, the auditors recommend “community exploration” to further characterize models such as CLIP and to develop evaluations that assess their capabilities, biases, and potential for misuse. This could increase the likelihood that such models are used beneficially, and shed light on the gap between models with better performance and models with beneficial impact, the auditors say.

“These findings add evidence to the growing body of work calling for a change in the notion of a ‘better’ model – one that moves beyond simply seeking higher accuracy in task-oriented capability evaluations and toward a broader ‘better’ that accounts for deployment-critical features, such as the different contexts of use and the people who interact with the model,” the report reads.
