A coalition of civil society organisations comprising Privacy International, Hermes Center, Homo Digitalis and noyb has today, 27 May 2021, filed five complaints before the competent authorities in Austria, France, Greece, Italy and the United Kingdom against Clearview AI, Inc. The company develops facial recognition software and claims to have “the largest known database of more than 3 billion facial images”, which it collects from social media platforms and other online sources.
The complaints filed with the relevant authorities, including the Data Protection Authority (No. 3458/27.5.2021), detail how Clearview AI’s automated image collector works. It is an automated tool that visits public websites and collects any images it detects that contain human faces. Along with these images, the collector also gathers accompanying metadata, such as the title of the website and its source link. The collected facial images are then processed by Clearview AI’s facial recognition software to build the company’s database. Clearview AI sells access to this database to private companies and to law enforcement agencies, such as police authorities, worldwide.
“European law is clear regarding the purposes for which companies can use our data,” says Ioannis Kouvakas, a lawyer at Privacy International. “Mining our facial features or sharing them with the police and other companies far exceeds our expectations as internet users.”
“It seems that Clearview mistakenly believes that the internet is a public forum from which anyone can grab whatever they want,” adds Lucie Audibert, a lawyer at Privacy International. “This perception is completely wrong. Such practices threaten the open nature of the internet and the countless rights and freedoms it encompasses.”
The relevant authorities now have three months to respond to our complaints. We expect them to cooperate and jointly decide that Clearview AI’s practices have no place in Europe.
Clearview AI became internationally known in January 2020, when a New York Times investigation revealed its practices, which until then had been shrouded in a veil of mystery. The five complaints add to a series of investigations launched following these revelations.
“Just because something is on the internet does not mean that others can use it in any way they wish. The Data Protection Authorities need to take action,” says Alan Dahi, a privacy lawyer at noyb.
Other reports have revealed that Clearview AI had developed partnerships with law enforcement authorities in several European countries. In Greece, following questions submitted by Homo Digitalis, the Greek police have officially stated that they have not used the company’s services. “It is important to step up scrutiny. Data Protection Authorities have strong investigative powers, and we need a coordinated response to such collaborations between public and private entities,” says Marina Zacharopoulou, a lawyer and member of Homo Digitalis.
Because of its intrusive nature, the use of facial recognition technology, and especially the business models built on it, raises significant challenges for modern societies and the protection of our freedoms. Last month, the Italian Data Protection Authority halted the use of live facial recognition by police authorities. “Facial recognition technologies threaten our lives both online and offline,” says Fabio Pietrosanti, President of Hermes Center.
People living in Europe can ask Clearview AI if their face is in the company’s database and demand that their data be deleted. Privacy International has outlined the relevant steps in detail here.
Support for the European #ReclaimYourFace initiative, which aims to end mass biometric surveillance in public places, is also important.
You can read more in the respective informative articles posted by Privacy International, noyb and Hermes Center.
You can read the complaint here.