August 31, 2022

Group Fairness vs. Individual Fairness in Machine Learning

Continuing our discussion about fairness in machine learning, let’s address group and individual fairness.

In fair machine learning research, group fairness and individual fairness are treated as distinct levels of analysis. While both are considered important, they can sometimes come into conflict.

💡 At an individual level, fairness can be defined as similar individuals being treated similarly.

💡 At a group level, a fair outcome requires parity between different protected groups, such as those defined by gender or race.

These measures conflict in situations where, in an attempt to satisfy group fairness, individuals who are similar with respect to the classification task receive different outcomes.

Demographic parity

Let’s consider an employer who wishes to have similar job acceptance rates for male and female candidates. This means that candidates from each group are accepted at the same rate, for example, 50% of male candidates and 50% of female candidates get the job. We call this demographic parity.

At first glance, statistical parity between the groups is maintained, and the acceptance rates for female and male candidates are balanced. However, from an individual perspective, the outcome might not be fair if the machine learning model gives positive outcomes to candidates from the protected group just to ‘make up the numbers’, despite them not being qualified.

In short, group fairness measures can lead the AI model to favor less qualified individuals from the underrepresented group (whether the advantaged or the disadvantaged one) over better qualified candidates. In that case, demographic parity is maintained, but the accuracy of the prediction is not.
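To make this concrete, here is a minimal Python sketch that computes per-group selection rates and the demographic parity gap. The hiring data and the `selection_rates` helper are purely illustrative and not part of any particular library.

```python
from collections import defaultdict

def selection_rates(groups, decisions):
    """Fraction of positive decisions (e.g. hires) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring decisions: 1 = hired, 0 = rejected.
groups    = ["male", "male", "male", "male", "female", "female", "female", "female"]
decisions = [1, 1, 0, 0, 1, 0, 0, 0]

rates = selection_rates(groups, decisions)
print(rates)  # {'male': 0.5, 'female': 0.25}

# Demographic parity gap: 0 means both groups are selected at the same rate.
print(max(rates.values()) - min(rates.values()))  # 0.25
```

Note that this check only looks at outcomes, not at qualifications, which is exactly why it can be satisfied while individual fairness is not.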

Demographic parity vs. performance-based metrics

One way to address the shortcomings of demographic parity is to employ performance-based metrics such as equality of opportunity and equality of odds as part of a fair machine learning strategy.

Equality of Opportunity

Equality of Opportunity states that each protected group should receive positive outcomes at equal rates, assuming that the people in each group are qualified. In other words, it ensures that people who are equally qualified for an opportunity are equally likely to receive the same outcome.

By using Equality of Opportunity as a definition of fairness in machine learning, we would make sure that the male and female candidates in our example receive similar acceptance rates for the job, provided they are qualified for it.
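As a rough sketch of how this could be checked, the snippet below computes the true positive rate, P(hired | qualified), for each group. The data and the `true_positive_rates` helper are hypothetical.

```python
def true_positive_rates(groups, qualified, hired):
    """P(hired | qualified) per group: the share of qualified candidates
    in each group who actually receive the positive outcome."""
    counts = {}
    for g, q, h in zip(groups, qualified, hired):
        if q:  # Equality of Opportunity only conditions on qualified candidates.
            total, positive = counts.get(g, (0, 0))
            counts[g] = (total + 1, positive + h)
    return {g: positive / total for g, (total, positive) in counts.items()}

# Hypothetical data: 'qualified' is the ground truth, 'hired' is the model's decision.
groups    = ["male", "male", "male", "female", "female", "female"]
qualified = [1, 1, 0, 1, 1, 0]
hired     = [1, 1, 0, 1, 0, 1]

print(true_positive_rates(groups, qualified, hired))
# {'male': 1.0, 'female': 0.5} -> Equality of Opportunity is violated
```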

Equality of Odds

The concept of Equality of Odds, or Equalized Odds, is even more restrictive: it requires not only that positive outcomes are correctly identified at equal rates across groups (as in Equality of Opportunity), but also that the AI model produces the same proportion of false positives across groups.

In short, by using Equalized Odds as a measure of fairness in machine learning, we would ensure that the probability of a qualified candidate being hired and the probability of an unqualified candidate not being hired would be the same for both the male and female groups.
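A minimal sketch of an Equalized Odds check on the same made-up data might compute both rates per group; the `group_rates` helper below is illustrative only, and assumes each group contains both qualified and unqualified candidates.

```python
def group_rates(groups, qualified, hired):
    """Per-group true positive rate and false positive rate.
    Equalized Odds asks for both rates to match across groups."""
    stats = {}  # group -> [qualified, qualified_hired, unqualified, unqualified_hired]
    for g, q, h in zip(groups, qualified, hired):
        s = stats.setdefault(g, [0, 0, 0, 0])
        if q:
            s[0] += 1
            s[1] += h
        else:
            s[2] += 1
            s[3] += h
    return {g: {"tpr": s[1] / s[0], "fpr": s[3] / s[2]} for g, s in stats.items()}

# The same hypothetical hiring data as in the previous sketch.
groups    = ["male", "male", "male", "female", "female", "female"]
qualified = [1, 1, 0, 1, 1, 0]
hired     = [1, 1, 0, 1, 0, 1]

print(group_rates(groups, qualified, hired))
# {'male': {'tpr': 1.0, 'fpr': 0.0}, 'female': {'tpr': 0.5, 'fpr': 1.0}}
```

A model satisfies Equalized Odds when both the TPR and FPR entries match across groups; here neither does.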

Intersectional fairness

Another shortcoming of group fairness measures is that they are typically applied to only a limited number of protected groups. They do not prevent unfairness against those who sit at the intersection of multiple types of discrimination, for example, ‘disabled African-American females’.

As humans, we belong to different subgroups, and our identities overlap and intersect across multiple dimensions such as race, gender, and sexual orientation. Intersectional fairness builds on the concept of algorithmic fairness in order to get a complete picture of the biases and stereotypes that might be encoded in machine learning models.

Yet another telling example comes from the field of facial recognition, where AI models tend to perform better on men than on women. At the same time, they are also better at recognizing lighter skin tones than darker ones. Here, we can talk about the intersection of gender and race discrimination.

As such, we must ensure that machine learning fairness measures take into consideration all subgroups with different combinations of protected attributes.
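One simple way to operationalize this, sketched below with hypothetical records and an illustrative `subgroup_selection_rates` helper, is to compute a fairness metric such as the selection rate for every observed combination of protected attributes rather than for each attribute in isolation.

```python
from collections import defaultdict

def subgroup_selection_rates(records, attributes, decision_key):
    """Selection rate for every observed combination of protected attributes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        key = tuple(row[a] for a in attributes)
        totals[key] += 1
        positives[key] += row[decision_key]
    return {key: positives[key] / totals[key] for key in totals}

# Hypothetical records combining two protected attributes.
records = [
    {"gender": "female", "race": "black", "hired": 0},
    {"gender": "female", "race": "black", "hired": 0},
    {"gender": "female", "race": "white", "hired": 1},
    {"gender": "male",   "race": "black", "hired": 1},
    {"gender": "male",   "race": "white", "hired": 1},
    {"gender": "male",   "race": "white", "hired": 0},
]

for subgroup, rate in subgroup_selection_rates(records, ["gender", "race"], "hired").items():
    print(subgroup, rate)
# ('female', 'black') 0.0  <- the intersectional subgroup fares worse than
# ('female', 'white') 1.0     either marginal group (female candidates or
# ('male', 'black')   1.0     black candidates) would suggest on its own
# ('male', 'white')   0.5
```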

Fairness through unawareness

Fairness through unawareness is the idea that withholding protected attributes from a machine learning prediction process will make the AI model fair. However, this is untrue. Academic research has repeatedly shown that algorithms are able to identify patterns in unexpected ways, by means of non-protected attributes that serve as ‘proxies’.

For example, someone’s ZIP code might be a strong indicator of race, since many neighborhoods remain segregated.

Other proxies might be correlated with gender, such as the age at which someone starts programming. While this information may be genuinely relevant to an AI model that scores resumes for a job, it also reflects social stereotypes.

As such, not explicitly using protected attributes in a prediction model does not ensure machine learning fairness, as AI models have the capacity to identify patterns indirectly.
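As a rough illustration of how a proxy can be detected, the sketch below (hypothetical data, illustrative `proxy_strength` helper) measures how accurately the protected attribute can be guessed from a non-protected one that the model does see.

```python
from collections import Counter, defaultdict

def proxy_strength(proxy_values, protected_values):
    """Accuracy of guessing the most common protected value within each proxy
    value. A score near 1.0 means the proxy largely reveals the protected
    attribute even though it was never given to the model explicitly."""
    buckets = defaultdict(list)
    for proxy, protected in zip(proxy_values, protected_values):
        buckets[proxy].append(protected)
    correct = sum(Counter(vals).most_common(1)[0][1] for vals in buckets.values())
    return correct / len(protected_values)

# Hypothetical applicants: race is withheld from the model, but ZIP code is not.
zip_codes = ["10001", "10001", "10001", "20002", "20002", "20002"]
race      = ["white", "white", "white", "black", "black", "white"]

print(round(proxy_strength(zip_codes, race), 2))  # 0.83 -> ZIP code largely reconstructs race
```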

Key takeaways

The issue of bias and unfairness in AI models has attracted a lot of attention in the last few years, both from scientific communities and from governments across the globe. Having clear guidelines and ethical principles for the fair use of machine learning models is important, and so is understanding how AI models reach their outcomes.

Measuring fairness should be a priority for every business and organization that employs machine learning in decisions that directly impact human lives. Having a clear insight into a model’s fairness risk and data biases is crucial.

At Lumenova AI, we propose an effective way of measuring algorithmic fairness at a glance, by analyzing metrics such as data impartiality, demographic parity, equality of opportunity, equality of odds, and predictive parity.

Moreover, we offer a unique framework for measuring intersectional fairness, by allowing users to easily select which protected attributes and clusters they would like to analyze for their AI model.

To learn more about our tool and how it can make your model fair, feel free to contact our team.

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo