After revealing in June its decision to terminate its general purpose facial recognition and analysis software products, technology giant IBM has now called for greater restrictions on the export of facial recognition software from the US.
In a letter to the US Department of Commerce last week, IBM recommended that the country restrict the export of facial recognition technologies intended for what it called “1-to-many” matching end uses.
“This is the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights,” IBM said in the letter dated September 11.
“To effectively target export controls on these particular use-cases of facial recognition technologies, we believe such rules should focus on the high-resolution cameras used to collect data and the software algorithms used to analyse and match that data in the context of a ‘1-to-many’ facial recognition system.”
According to IBM, “1-to-many” systems are distinct from “1-to-1” facial matching systems, such as those that might unlock your phone or allow you to board an airplane — in those cases, facial recognition is verifying that a consenting person is who they say they are.
But in a “1-to-many” application, a system can, for example, pick a face out of a crowd by matching one image against a database of many others, IBM’s Vice President of Government and Regulatory Affairs, Christopher Padilla, explained in a blog post.
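The distinction Padilla draws can be sketched in code. This is an illustrative toy, not IBM's or any vendor's actual pipeline: it assumes faces have already been reduced to numeric feature vectors ("embeddings") by some face-analysis model, and compares them with cosine similarity. The names, vectors, and threshold are all hypothetical.

```python
# Sketch of "1-to-1" verification vs "1-to-many" identification over
# hypothetical face embeddings (real systems derive these from a
# neural network; the vectors and threshold below are made up).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_1_to_1(probe, claimed, threshold=0.8):
    """1-to-1: does the probe face match one consenting, claimed identity?"""
    return cosine_similarity(probe, claimed) >= threshold

def identify_1_to_many(probe, database, threshold=0.8):
    """1-to-many: search an entire database for the best-matching identity."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None if nobody in the database clears the threshold

# Toy database of two enrolled identities.
db = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
probe = [0.88, 0.12, 0.31]

print(verify_1_to_1(probe, db["alice"]))  # → True (verification)
print(identify_1_to_many(probe, db))      # → alice (identification)
```

The policy-relevant difference is visible in the signatures: verification compares a probe against one identity the subject has claimed, while identification silently scans every enrolled person, which is what makes the 1-to-many form suited to mass surveillance.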
In a letter earlier this year to members of the US Congress, IBM CEO Arvind Krishna stated that the company had sunset its own general purpose facial recognition and analysis products.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna had said.
In its letter to the US Department of Commerce, IBM suggested that the tightest restrictions be placed on end uses and end users that pose the greatest risk of societal harm.
Despite its promise of offering a helping hand to law enforcement agencies in tracking criminals, facial recognition technology has courted controversy for its potential for misuse by state authorities.
Several US cities, including San Francisco, Oakland, San Diego, and most recently Portland, have already banned the use of facial recognition technology, citing its limitations and a lack of standards around its use, ZDNet reported.