Google and Microsoft researchers call for facial recognition regulation

In our earlier infographic on mobile payment development, we indicated that biometric technology such as facial recognition will be a future trend for identity management. Against this backdrop, Google and Microsoft researchers have recently called for regulation of “oppressive” facial recognition technology.

The concerns were raised in a report on artificial intelligence published by a consortium of computing experts. It stated: “Facial recognition and affect recognition [such as the detection of personality traits] need stringent regulation to protect the public interest.

“Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts.

“Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.”

South Wales Police, the Metropolitan Police in London and Leicestershire Police all use the technology, but doubts have been cast over its reliability. A recent study found that the systems, created by Japanese company NEC, struggled to identify suspects wearing hats and glasses.

The technology also has difficulty identifying people in crowds, spotting fast-moving objects and dealing with faces in low-light conditions. Big Brother Watch obtained figures revealing that 98pc of Metropolitan Police “matches” wrongly identified innocent people, and the campaign group is now trying to take the force to court for allegedly breaching the Human Rights Act.

Affect recognition, which vendors claim can detect personality, inner feelings and mental health, was singled out as particularly troubling in the AI Now 2018 report. The technology is increasingly being used in the US to test the efficacy of advertising and to gauge “worker engagement”.

“These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level,” it stated.


Source: https://www.telegraph.co.uk