Coded Bias


Reviewed by: Jane Fae

"The real issue set out in Coded Bias.. is that we are stumbling headlong into a world in which major life-changing decisions for every one of us are taken by impersonal unrelenting computer algorithms."

A long time ago, in a career before the career before this one, I was told a story about machine learning. US Defence boffins were eager to teach weapons to fire automatically on the enemy, so they used neural nets to learn the difference between our tanks and theirs. It worked perfectly.

Then they tested the software on a firing range. It blew up every tank in sight. Because it had not learned the difference between ours and theirs but rather between pictures of tanks taken on a sunny day and pictures taken on an overcast one.


Apocryphal? Perhaps. Though given the meme circulating only yesterday, in which an artificial intelligence (AI) Covid-19 detector identified a cat as a deadly virus, it would not be surprising.

This, the importance of the data sets used to train your machine learning, is the starting point for Coded Bias. Joy Buolamwini, a black MIT student, set up a vanity app on her mirror and then discovered it didn’t work. Not, that is, unless she put on a white face mask.

Why? Because the underlying AI was trained on data sets that are mostly male and mostly white. As you would expect, the resulting systems are good at identifying men, less good at identifying women and very bad at identifying black women.

This, by itself, is an issue. A computer app fails to identify one category of people? Big deal. By which I mean not much of one. Until you start to understand how, as a society, we are increasingly keen to sub-contract large swathes of decision-making to automated technologies.

The key word there is automated. Because, again, dipping into my long ago past, I used to edit a journal which, inter alia, received papers comparing and contrasting different “predictive” techniques. So technical folk would regularly put different forms of regression or machine learning through their paces and conclude that in this circumstance, this method worked slightly “better” than that.

Though, as the film makes clear: there is no absolute “best”. Just a range of subjective opinions on what works, and trade-offs between different criteria. Often, human interpretation works far better than machine, but humans cannot do the volume work – evaluating CVs, scanning thousands of pictures – that an AI app can.

Yes, the film explains: views of what is “best” for society come from a very narrow group. And no: you “can’t separate social from technical”.

All this is well known. The key to what makes Coded Bias both chilling and a serious warning that now is the time to think again about the world we are creating lies in that point about volume. AI can make thousands of decisions in seconds. Because the machine intelligence beneath the bonnet is hidden from view, and often not understood even when it is checked, it is often impossible to tell if it made the right decision. And that is not counting the recent AI that sorted job applications by rejecting every single woman, or likely woman, candidate.

This is a Russian tank. The answer is a simple yes or no.

This is a black woman. Her name is Smith. Yes. No.

She is a terrorist. Here we return to the first point. You can verify skin colour. But what if an individual is arrested on the basis of bad AI? What if they are rejected for a job? Refused credit? Half the time people do not even know what just happened to them. And even when they suspect, what can they do?

The real issue set out in Coded Bias, and in the warnings enumerated eloquently by the experts who contribute to Shalini Kantayya’s film, is that we are stumbling headlong into a world in which major life-changing decisions for every one of us are taken by impersonal unrelenting computer algorithms. And without a proper legal framework governing this world, there is next to nothing we can do about it. Indeed, there is shocking footage from the UK of police fining an individual for refusing to have their face scanned by AI.

Do they have such power? It is clear the police think they do. Others may think differently.

“The more humans share with me, the more I learn,” says a disembodied computer voice as the film opens. But what if the machine is learning the wrong lessons from all that data? And what if we humans are then giving that machine power over our lives?

Worse, if the computer says "no", or "yes", we humans tend to give it greater credence than if another human says it. As one commentator puts it: “We are automating racism.” Elsewhere, China is on the verge of implementing large-scale “algorithmic obedience training”. If you want to be very afraid for the future of the human race, listen to what people there have to say about the social scoring system, which combines facial recognition and behavioural modification.

We have come full circle. Back in the day, when I did this stuff, the talks I gave on the technical side, on how to extract an extra one to two per cent performance out of an algorithm, were much in demand. The other lecture, on the ethics of what we were doing, I delivered rather less frequently. Because it was too worthy and had little to do with helping business to make money.

Thirty years on and thirty years deeper down the rabbit hole, Coded Bias makes plain that debate – the one we aren’t having right now – is more urgent than ever.

Reviewed on: 18 Jun 2020
Considering the racial bias issues of AI and facial recognition.

Director: Shalini Kantayya

Writer: Shalini Kantayya

Starring: Joy Buolamwini, Silkie Carlo, Jenny Jones, Tranae Moran, Kc Solaris, Mark Zuckerberg

Year: 2020

Runtime: 90 minutes

Country: US, China, UK

