Facial Recognition and “ethnicity”

[This post was updated on 22nd May 2018 to include NtechLab’s response]

Late on Friday I tweeted the following screenshot:

[Screenshot: faces]

When I came back after the weekend, it had been retweeted 11,000 times, which is a new experience for me.

The company concerned is the Russian firm NtechLab. They’re very good at facial recognition, by all accounts, having won several competitions. And they were behind the controversial FindFace software, which promised to match a subject to their social media profiles from a single photo.

Watching the debate following my tweet has been fascinating, and I thought it’d help to break down the different responses and discuss them:

1. “Arabic is a language, not an ethnicity!”
This was by far the most common response. On the one hand, it seems unfair to criticise NtechLab for clumsy language alone. But on the other hand, in an area as fraught as ethnicity, language really matters, and the website’s owners should have got this right, which feeds into response two…

2. “Techies need to see the wider picture”
Otherwise known as: “just because you can, doesn’t mean you should”. A lot of the responses picked up on the fact that tech developers often seem oblivious to the wider ramifications of their inventions. The problem with this argument is: should innovators refrain from developing new tech on the basis of potential downsides, thereby depriving society of the potential upsides?

(Incidentally, I tried to think of positive uses for NtechLab’s software, and I did come up with one: diversity monitoring for businesses. But it’s a pretty weak example, and doesn’t hold up well against the considerable concern felt by many Twitter respondents, particularly those from minority ethnic communities).

3. “Ethnicity isn’t about skin colour”
From a technical standpoint, NtechLab’s offering isn’t that radical: image recognition software can distinguish colours quite easily, and that’s exactly what’s going on here. The tech is simply distinguishing between skin tones. But labelling colour recognition as “ethnicity recognition” is another example of sloppy language use by NtechLab.
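
(To make that point concrete, here’s a crude, purely illustrative sketch, with no connection whatsoever to NtechLab’s actual pipeline: the script below buckets a face crop by its average pixel luminance. The filename, thresholds and bucket names are all invented for the example.)

    # Toy "tone bucket" classifier -- colour, not ethnicity.
    # Purely illustrative; nothing here reflects NtechLab's method.
    from PIL import Image
    import numpy as np

    def average_luminance(path: str) -> float:
        """Mean perceptual luminance of an image, in the range 0-255."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        # Standard Rec. 601 luma weights for the R, G and B channels.
        return float((rgb @ [0.299, 0.587, 0.114]).mean())

    def tone_bucket(path: str) -> str:
        """Place an image in a coarse tone band (thresholds made up)."""
        y = average_luminance(path)
        if y < 85:
            return "dark"
        if y < 170:
            return "medium"
        return "light"

    print(tone_bucket("face_crop.jpg"))  # hypothetical input file

A classifier this naive would “work” in the colourist sense while saying nothing at all about ethnicity, which is exactly the sleight of hand in the marketing copy.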

As the Twitter responses show, there is considerable debate over how “race” and “ethnicity” are defined (the one thing everyone seemed to agree on was that neither term is solely about skin colour, and for some, skin colour is irrelevant).

That’s why I was hesitant about the tweets calling NtechLab’s software “racist”; strictly speaking, the software is “colourist”, since it isn’t capable of distinguishing race or ethnicity.

Of course, the tools could be used in a racist way, but perhaps NtechLab would argue that’s for its clients to decide (which takes us back to point 2, above).

(I’m painfully aware at this point that race and ethnicity aren’t my area of specialism).

In the long run, is this technology going to be useful? I think we can safely say it can’t do what it says on the tin, because it isn’t actually recognising “ethnicity”. So how useful is it, really, to know someone’s skin colour? (Especially given the number of Twitter respondents who reported being racially categorised in a bewildering variety of ways thanks to their mixed heritage).

A spokesperson for NtechLab blamed a “communication and localization issue” and said: “We deeply apologize that the information on our website regarding ethnicity recognition has caused people to respond negatively to our AI technology.

“Our company doesn’t detect racial ethnicity, we recognize and celebrate ethnic diversity across all demographic groups.”

“We… will make due corrections on the web page to resolve this communication issue as soon as possible.”

However, as at 22nd May 2018, NtechLab’s website still includes “Ethnicity Recognition” among its upcoming projects. The photo and description have been removed.

Comments

  On (2):

    This is why bicameral legislatures and checks on legislatures exist. Just because you can develop and pass a piece of legislation doesn’t mean you should.

    The problem at the moment is that the only check on techies is the market, and the market is frequently indifferent to societal ramifications. That is not the case for, say, pharmaceuticals, food products, electrical goods, new buildings, or any number of other products, all of which are regulated with varying degrees of vigor. There’s oversight when it comes to other kinds of experimentation on people, too. Tech products, by contrast, face little to no regulation, and since the people inventing them do seem to be routinely oblivious to the social effects of their inventions, it seems to me that regulation is long overdue.
