AnyVision, the controversial facial recognition startup, has raised $235M led by SoftBank and Eldridge

Facial recognition has been one of the more contentious applications of artificial intelligence: using computer vision to detect faces and identify people has raised numerous questions about privacy, data protection, and the ethics underpinning both the purposes of the work and the systems themselves. Yet it is also being adopted across a wide variety of use cases. Now one of the more controversial, but also more successful, startups in the field has closed a big round of funding.

AnyVision — an Israeli startup that has built AI-based techniques to identify people by their faces, along with related technology such as thermal screening to flag elevated temperatures in a crowd — has raised $235 million in funding, the company has confirmed.

This Series C, one of the bigger rounds for an AI startup, is being co-led by SoftBank’s Vision Fund 2 and Eldridge Industries, with previous investors also participating. (They are not named but the list includes Robert Bosch GmbH, Qualcomm Ventures and Lightspeed.) The company is not disclosing its valuation but we are asking. However, it has to be a sizable hike for the company, which had previously raised around $116 million, according to PitchBook, and has racked up a big list of customers since its last round in 2020.

Worth noting, too, that AnyVision’s CEO Avi Golan is a former operating partner at SoftBank’s investment arm.

AnyVision said the funding will be used to continue developing its SDKs, specifically to work in edge computing devices — smart cameras, body cameras, and chips that will be used in other devices — to increase the performance and speed of its systems.

Its systems, meanwhile, are used in video surveillance, watchlist alerts, and scenarios where an organization is looking to monitor and manage crowds, for example to keep track of numbers, to analyze dwell times in retail environments, or to flag illegal or dangerous behavior.

“AnyVision’s innovations in Recognition AI helped transform passive cameras into proactive security systems and empowered organizations to take a more holistic view of advanced security threats,” Golan said in a statement in the investment announcement. “The Access Point AI platform is designed to protect people, places, and privacy while simultaneously reducing costs, power, bandwidth, and operational complexity.”

You may recognize the name AnyVision because of how much it has been in the press.

The startup was the subject of a report in 2019 that alleged that its technology was being quietly used by the Israeli government to run surveillance on Palestinians in the West Bank.

The company denied it, but the story quickly turned into a huge stain on its reputation, while also adding more scrutiny overall to the field of facial recognition.

That led Microsoft, which had invested in AnyVision via its M12 venture arm, to run a full audit of the investment and of its position on facial recognition investments overall. Ultimately, Microsoft divested its stake and pledged not to make further investments in similar technology.

Since then, AnyVision has been working hard to position itself as the “ethical” player in this space, acknowledging the shortcomings and the work still to be done in the wider facial recognition market. But controversy has continued to follow the company.

A report from Reuters in April of this year highlighted just how many companies are using AnyVision’s technology today, ranging from hospitals like Cedars-Sinai in Los Angeles to major retailers like Macy’s and energy giant BP. AnyVision’s connections to power go beyond simply having big customers: it also turns out that the White House Press Secretary, Jen Psaki, once served as a communications consultant to the startup.

Then a report published just yesterday by The Markup combed through various public records on AnyVision, including a user guidebook from 2019, and painted a pretty damning picture of just how much information the company can collect and what it has been working on. (One pilot, and the report that resulted from it, involved tracking children in a school district in Texas: AnyVision collected 5,000 student photos and ran more than 164,000 detections in just seven days.)

There are other cases, however, where AnyVision’s technology might be deemed helpful, maybe even welcomed. Its ability to detect elevated temperatures and identify who may have been in contact with affected people, for example, could go a long way toward catching less obvious cases of Covid-19, helping contain the virus at mass events and providing a safeguard that enables those events to go ahead.

And to be completely clear, AnyVision is not the only company building and deploying this technology, nor the only one coming under scrutiny. Another, the U.S. company Clearview AI, is used by thousands of governments and law enforcement agencies, but earlier this year it was deemed “illegal” by Canadian privacy authorities.

Indeed, it seems the story is far from complete, in terms of how these technologies will develop, how they will be used, and how the public comes to view them. For now, the traction AnyVision has had, even despite the controversy and ethical questions, seems to have swayed SoftBank.

“The visual recognition market is nascent but has large potential in the Western world,” said Anthony Doeh, a partner for SoftBank Investment Advisers, in a statement. “We have witnessed the transformative power of AI, biometrics and edge computing in other categories, and believe AnyVision is uniquely placed to redefine physical environment analytics across numerous industries.”
