Clearview AI, the maker of a controversial facial recognition app, says it is confident its technology has beneficial uses, even as other big tech names are either leaving the market or halting sales to law enforcement agencies for fear of abuse. Those moves come amid studies showing that the technology is less accurate for women and minorities.
Hoan Ton-That, CEO of Clearview, says his company’s technology can help protect children and victims of crime without the risk of racial bias. His remarks came the same day Amazon announced a one-year moratorium on police use of its facial recognition technology, after weeks of protests against police brutality, and just days after IBM announced it would exit the facial recognition market over concerns the technology could be used for profiling.
“This is particularly important to me as a person of mixed race,” Ton-That said in a statement on Wednesday evening. “We are very encouraged that our technology has proven accurate in this area and has helped prevent the misidentification of people of color.”
Clearview identifies people by comparing photos against a database of images scraped from social media and other websites. It came under fire after a New York Times investigation in January. Since then, Senator Edward Markey, a Massachusetts Democrat, has called Clearview a “chilling” privacy risk. In addition, Google, YouTube, Microsoft and Twitter have sent cease-and-desist letters to Clearview. The company is also facing several lawsuits.
Markey also raised concerns this week that police and law enforcement agencies in cities where people are protesting the killing of George Floyd, an unarmed Black man, could use the technology to identify and arrest protesters. He also expressed concern that the threat of surveillance could deter people from “speaking out against injustice because they fear they will be permanently included in law enforcement databases.”
Ton-That also said the company is committed to the “responsible handling” of its technology, adding that it should be used to identify suspects and not as a surveillance tool at protests or in other circumstances.
“We strongly believe in protecting our communities and look forward to working with the government and policymakers to develop appropriate protocols for the correct use of facial recognition,” said Ton-That.
Beyond concerns about accuracy, privacy advocates and lawmakers fear the technology could become a pervasive and invasive form of surveillance. A handful of cities have banned municipal use of the technology, and Democratic lawmakers have proposed legislation that would block the use of technologies such as facial recognition in public housing.