Comments


9

BrowseDuringClass1917 wrote

The study’s insights add to a growing body of evidence about how human bias seeps into our automated decision-making systems. It’s called algorithmic bias.

The most famous example came to light in 2015, when Google’s image-recognition system labeled African Americans as “gorillas.” Three years later, Amazon’s Rekognition system drew criticism for matching 28 members of Congress to criminal mugshots. Another study found that three facial-recognition systems — IBM, Microsoft, and China’s Megvii — were more likely to misidentify the gender of dark-skinned people (especially women) than of light-skinned people.

Excuse me, but what the fuck? How wasn’t this like big news?
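For anyone wondering what "more likely to misidentify" cashes out to, those studies were essentially per-group error audits: run the system, tally the mistakes separately for each demographic group, and compare the rates. A minimal sketch of that tally in Python (the records here are made up for illustration, not data from any of the studies):

```python
from collections import defaultdict

# Toy per-group error audit. Each record is
# (demographic group, ground truth, what the model said).
# These values are invented for illustration only.
records = [
    ("light-skinned", "match",    "match"),
    ("light-skinned", "no match", "no match"),
    ("light-skinned", "match",    "match"),
    ("dark-skinned",  "match",    "no match"),  # missed match
    ("dark-skinned",  "no match", "match"),     # false match
    ("dark-skinned",  "match",    "match"),
]

tally = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, predicted in records:
    tally[group][0] += truth != predicted
    tally[group][1] += 1

for group, (errors, total) in sorted(tally.items()):
    print(f"{group}: {errors}/{total} wrong ({errors / total:.0%})")
```

An audit like this is model-agnostic: you only need the system's outputs and labeled ground truth, which is why outside researchers could run it against commercial black boxes.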

7

Tequila_Wolf wrote (edited)

I thought this kind of thing was generally known. For me it just comes down to the idea that programs will inevitably contain the biases of the programmers.

And of course, programmers aren't usually exemplars of ethical practice.
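Worth adding that the bias usually doesn't come from hand-written rules; it rides in on the training data the programmers chose. A toy sketch with synthetic data (no real system or dataset involved, just scikit-learn on invented clusters) showing how underrepresenting one group in training produces an accuracy gap all by itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """n samples per class; `shift` moves this group's clusters."""
    X = np.vstack([rng.normal(shift, 1.0, (n, 2)),
                   rng.normal(shift + 2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates training; group B is underrepresented and its
# clusters sit elsewhere in feature space. All data is synthetic.
Xa, ya = make_group(500, shift=0.0)
Xb, yb = make_group(25, shift=5.0)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test samples: the fitted boundary serves the majority group
# well and misclassifies much of the minority group.
for name, shift in [("group A", 0.0), ("group B", 5.0)]:
    X_test, y_test = make_group(200, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.0%}")
```

The model is never told anything about groups; it simply fits the bulk of its training examples, and the underrepresented group pays for it at test time.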

4

bloodrose wrote

I think the problem is that most people see facial recognition as some fancy feature, not as something that's actually being used for anything consequential. Very few people seem to grasp how their data is being used, or why it matters.

-3

MrPotatoeHead wrote

Features are determined by what is visible to the observer. I suspect darker-skinned people will begin wearing lighter-colored coats and jackets for greater visibility. There's a reason reflective clothing is used by rescue workers at night.