6 Comments

I'm torn on this one because not that long ago medicine was accused of 'whitewashing' all medical records and treating everyone the same. Now, when it does find differences, we haven't answered whether those differences are bad or good; we've just gotten bothered that AI can tell the difference.

But there ARE differences in the health profiles of different racial and genetic groups. Sickle cell anemia is just one example. We really need to be careful before we go back and 'whitewash' all over again.

https://www.cnn.com/2021/04/03/us/arab-americans-covid-19-impact/index.html


The whitewashing I know of was the use of one group's data to make predictions about other groups (which, as you pointed out, is bad because of genetic differences). The phenomenon here is problematic for different reasons:

1) It's bothersome because AI is picking out differences in images that we didn't know were detectable. This is a cautionary tale for anyone using AI to judge people (beyond just medicine): it will pick up on trends and possibly penalize protected groups, even on datasets from which we thought identifying such groups was impossible. As another point of worry, this means someone's group can potentially be identified from 'unrelated' samples (a simple leakage probe, sketched after this list, can test for exactly that). Coming from India, where we have a long history of serious discrimination, that rings a few bells for me.

2) The AI being able to hit fairly good classification scores on extremely corrupted data merits further investigation (see the corruption stress test sketched below). If it happens here, in which other cases does something similar take place? From firsthand experience, many teams aren't even familiar with basic data-sanitation protocols, and this is more final-boss level. I'm worried that many teams that built apps in critical sectors did not do all that had to be done.
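
A minimal sketch of the leakage probe mentioned in point 1), assuming Python with scikit-learn. `X` and `y` are placeholder names for embeddings from a model trained on an unrelated task and for the protected-attribute labels; the random arrays below are synthetic stand-ins, not real data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # stand-in for "unrelated" embeddings
y = rng.integers(0, 2, size=1000)  # stand-in for the protected attribute

# If this score sits well above chance (ROC-AUC 0.5 for a binary label),
# the supposedly unrelated features encode the protected attribute.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, y, cv=5, scoring="roc_auc")
print(f"probe ROC-AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```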
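
And a sketch of the kind of corruption stress test point 2) calls for: heavily degrade the inputs, retrain, and check whether the classifier still scores above chance. On the synthetic arrays below the score will hover near chance; on real data, an above-chance score after heavy corruption is the red flag. The Gaussian-blur corruption and the simple linear classifier are illustrative assumptions, not the method from the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.normal(size=(500, 32, 32))  # stand-in for real images
labels = rng.integers(0, 2, size=500)

def corrupt(batch, sigma):
    # heavy Gaussian blur destroys most visually obvious structure
    return np.stack([gaussian_filter(img, sigma) for img in batch])

for sigma in [0, 2, 8]:
    X = corrupt(images, sigma).reshape(len(images), -1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"sigma={sigma}: test accuracy {clf.score(X_te, y_te):.3f}")
```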

Differences exist. They're neither good nor bad; they're reality. It's not shocking that AI can find them. But the fact that it can find these differences in data sources from which it was supposed to be impossible to decipher them is troubling and demands further investigation. The fact that people have ignored this research when discussing AI risks is straight-up criminal.


I fully understand the sensitivity of the topic. I just worry that hitting on what looks like 'bias' shuts down the very conversation we need to have to solve what you pointed out!


We're so far from even scratching the surface of all this. We have no idea what's coming.


This is absolutely crazy to me. Part of my research as a modeler working in MRI was trying to understand network feature extraction for medical images, because we know so little about it. It's been proven time and time again how good these networks are at extracting features even humans can't see, but something like this is on another level, and I love to see the capabilities demonstrated here. However, it also further highlights the issue of racial bias in ML models when it can be seen in such an important application as healthcare. This is a mega finding by this group.
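
For readers curious what probing feature extraction can look like in practice, here's a minimal sketch using a PyTorch forward hook on a stock ResNet. The model, layer choice, and random input are illustrative assumptions, not the commenter's actual MRI setup:

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in network
features = {}

def save_activation(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

# capture the activations of an intermediate stage for inspection
model.layer3.register_forward_hook(save_activation("layer3"))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))  # stand-in for a medical image

print(features["layer3"].shape)  # torch.Size([1, 256, 14, 14])
```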


You're bang on
