> When FAIR-Path was applied to the tested models, diagnostic disparities dropped by about 88 percent.
I have a better model to decrease disparities - just always return False. Equality solved.
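For the curious, here's roughly what I mean (toy numpy sketch with made-up data, not anything from the paper): a constant classifier trivially zeroes out the disparity between groups while being completely useless.

    import numpy as np

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)  # hypothetical labels
    group = rng.integers(0, 2, size=1000)   # hypothetical group membership
    y_pred = np.zeros(1000, dtype=int)      # "always return False"

    # Demographic-parity gap: difference in positive-prediction rates per group
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    print("disparity:", abs(rates[0] - rates[1]))  # 0.0 -- equality solved
    print("recall:", y_pred[y_true == 1].mean())   # 0.0 -- model catches nothing

So a headline disparity reduction tells you nothing on its own; you need the accuracy numbers alongside it.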
That said, I analyzed the data from their GitHub, and it looks like FAIR-Path actually improved total model accuracy in most cases. I had to dig pretty deep to find that out, though, unless I missed something in the paper.
Clickbait headline, but the article is a fantastic example of one of machine learning's most common failure modes: a model can satisfy the metric you measure while missing the goal you actually care about. The same applies to generative models, and it's another reason to be wary of trusting their output.
Source: https://hms.harvard.edu/news/researchers-discover-bias-ai-mo...