Meta (formerly Facebook) has launched a new AI benchmark called FACET (FAirness in Computer Vision EvaluaTion) to evaluate the "fairness" of AI models in classifying and detecting objects in photos and videos, particularly people.
FACET comprises 32,000 images containing 50,000 people, labeled by human annotators. The labels cover demographic attributes (such as perceived gender presentation and age group) and physical attributes (such as skin tone and hairstyle), enabling in-depth evaluations of biases in AI models.
The benchmark aims to answer questions such as whether AI models exhibit biases when classifying people based on their perceived gender presentation or physical attributes like hairstyle or skin tone.
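This style of evaluation typically boils down to comparing a model's per-class performance across annotated attribute groups. The sketch below is illustrative only: the record format, class names, and group labels are hypothetical and do not reflect FACET's actual schema or API.

```python
# Hypothetical sketch: measuring a recall gap across perceived-attribute groups,
# in the spirit of a FACET-style evaluation. Record format and labels are
# illustrative assumptions, not FACET's actual schema.
from collections import defaultdict

# Each record: (annotated person class, model's predicted class, perceived group)
records = [
    ("doctor", "doctor", "group_a"),
    ("doctor", "nurse", "group_b"),
    ("doctor", "doctor", "group_b"),
    ("skateboarder", "skateboarder", "group_a"),
    ("skateboarder", "pedestrian", "group_b"),
]

def recall_by_group(records, target_class):
    """Per-group recall for one class: correct predictions / annotated instances."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in records:
        if truth == target_class:
            totals[group] += 1
            hits[group] += int(pred == target_class)
    return {g: hits[g] / totals[g] for g in totals}

recalls = recall_by_group(records, "doctor")
# Disparity: the gap between the best- and worst-served groups for this class.
disparity = max(recalls.values()) - min(recalls.values())
print(recalls, disparity)  # → {'group_a': 1.0, 'group_b': 0.5} 0.5
```

A large disparity for a class suggests the model serves one group markedly worse than another, which is the kind of signal a benchmark like FACET is designed to surface.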
Notably, Meta claims FACET is more comprehensive than previous computer vision bias benchmarks, capable of surfacing biases such as models associating people with stereotyped occupations or activities.
However, there are concerns about the dataset's origins. Meta sourced the images from its Segment Anything 1 Billion dataset, but it's unclear whether the people pictured knew their images would be used this way, and Meta has disclosed little about how annotators were recruited or paid.
Data annotation is a controversial industry; annotators, often based in developing countries, in some cases face low wages and poor working conditions.
While FACET has the potential to uncover and help address AI biases, its origins raise questions. Meta will need to ensure ethical data sourcing and fair annotator treatment as it advances AI fairness benchmarks.