
IBM abandons 'biased' facial recognition tech

International Desk
10 Jun 2020 11:58:20 | Update: 10 Jun 2020 13:16:54
A US government study suggested facial recognition algorithms were less accurate at identifying African-American faces

Tech giant IBM is to stop offering facial recognition software for "mass surveillance or racial profiling".

The announcement comes as the US faces calls for police reform following the killing of a black man, George Floyd.

In a letter to the US Congress, IBM said AI systems used in law enforcement needed testing "for bias".

One campaigner said it was a "cynical" move from a firm that has been instrumental in creating technology for the police.

In his letter to Congress, IBM chief executive Arvind Krishna said the "fight against racism is as urgent as ever", setting out three areas where the firm wanted to work with Congress: police reform, responsible use of technology, and broadening skills and educational opportunities.

"IBM firmly opposes and will not condone the uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms," he wrote.

"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies".

Instead of relying on potentially biased facial recognition, the firm urged Congress to use technology that would bring "greater transparency", such as body cameras on police officers and data analytics.

Data analytics is more integral to IBM's business than its facial recognition products. The firm has also worked to develop technology for predictive policing, which has likewise been criticised for potential bias.

'Let's not be fooled'
Privacy International's Eva Blum-Dumontet said the firm had coined the term "smart city".

"All around the world, they pushed a model or urbanisation which relied on CCTV cameras and sensors processed by police forces, thanks to the smart policing platforms IBM was selling them," she said.

"This is why is it is very cynical for IBM to now turn around and claim they want a national dialogue about the use of technology in policing."

She added: "IBM are trying to redeem themselves because they have been instrumental in developing the technical capabilities of the police through the development of so-called smart policing techniques. But let's not be fooled by their latest move.

"First of all, their announcement was ambiguous. They talk about ending 'general purpose' facial recognition, which makes me think it will not be the end of facial recognition for IBM, it will just be customised in the future."

The Algorithmic Justice League was one of the first activist groups to indicate that there were racial biases in facial recognition data sets.

A 2019 study conducted by the Massachusetts Institute of Technology found that none of the facial recognition tools from Microsoft, Amazon and IBM were 100% accurate when it came to recognising men and women with dark skin.

And a study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared with Caucasian ones.

Amazon, whose Rekognition software is used by police departments in the US, is one of the biggest players in the field, but there are also a host of smaller players such as Facewatch, which operates in the UK. Clearview AI, which has been told to stop using images from Facebook, Twitter and YouTube, also sells its software to US police forces.

Maria Axente, AI ethics expert at consultancy firm PwC, said facial recognition had demonstrated "significant ethical risks, mainly in enhancing existing bias and discrimination".

She added: "In order to build trust and solve important issues in society, purpose as much as profit should be a key measure of performance."

(Source: BBC)
