Clearview AI is expanding sales of its face recognition software to companies after primarily serving the police, it told Reuters, inviting renewed scrutiny of how the startup uses the billions of photos it scrapes from social media profiles.
The new sales could be significant for Clearview, which presented on Wednesday at the Montgomery Summit investor conference in California. They also fuel an emerging debate over the ethics of using contested data to build artificial intelligence systems such as face recognition.
Clearview’s use of publicly available photos to train its tool has earned it high marks for accuracy. But Britain and Italy have fined Clearview for violating privacy laws by collecting online images without consent, and this month the company settled similar allegations brought by US civil rights advocates.
Clearview primarily helps police identify people through their social media photos, but that business is under threat from regulatory investigations.
The settlement with the American Civil Liberties Union bars Clearview from providing its social media search capability to corporate customers.
Instead of comparing faces against online photos, the new private-sector offering matches people against ID photos and other data that customers collect with subjects’ permission. It is intended to verify identities for access to physical or digital spaces.
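Identity-verification systems of this kind typically work by converting each face into a numeric embedding and measuring how similar a fresh selfie is to the stored ID photo. The sketch below illustrates only that comparison step, assuming embeddings have already been produced by some face-encoder model; the vectors and threshold are illustrative, not Clearview’s:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face embeddings, ranging from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(selfie_emb, id_photo_emb, threshold=0.8):
    # Declare a match when similarity clears the threshold; the
    # threshold trades false accepts against false rejects.
    return cosine_similarity(selfie_emb, id_photo_emb) >= threshold

# Toy example: a near-identical embedding matches; a dissimilar one does not.
enrolled = [0.1, 0.9, 0.4]
print(same_person([0.12, 0.88, 0.41], enrolled))  # True
print(same_person([0.9, 0.1, -0.3], enrolled))    # False
```

Real systems tune the threshold on labeled face pairs, since setting it too low admits impostors and setting it too high locks out legitimate users.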
Vaale, a Colombian app-based lending startup, said it adopted Clearview to match selfies with user-uploaded ID photos.
Vaale will cut costs by about 20 percent and gain accuracy and speed by replacing Amazon.com Inc’s Rekognition service, said Chief Executive Santiago Tobón.
“We cannot have duplicate accounts, and we must avoid fraud,” he said. “Without face recognition, we cannot make Vaale work.”
Amazon declined to comment.
Clearview AI CEO Hoan Ton-That said an American company that sells visitor management systems to schools has also signed up.
He said a customer’s photo database is stored only as long as the customer wants, is not shared with others, and is not used to train Clearview’s AI.
But the facial-matching technology Clearview sells to companies was trained on social media photos. Ton-That said the diverse collection of public images reduces racial bias and other weaknesses that afflict rival systems trained on smaller data sets.
“Why not have something more accurate that prevents mistakes or some kind of problem?” Ton-That said.
Nathan Freed Wessler, an ACLU lawyer involved in the group’s case against Clearview, said relying on improperly collected data is the wrong way to pursue less biased algorithms.
Regulators and others must have the option to force companies to delete algorithms that benefited from disputed data, he said, noting that the latest settlement did not include such a provision for reasons he could not disclose.
“It’s an important deterrent,” he said. “When a company flouts legal protections in data collection, it should be held accountable.”
© Thomson Reuters 2022