IBM kills facial recognition products and R&D
Written by James Orme Tue 9 Jun 2020

Big Blue will also no longer sell the controversial technology, citing unresolved human rights issues
IBM will no longer sell general-purpose facial recognition technology or continue research and development in the controversial area.
IBM CEO Arvind Krishna, whose company has been at the forefront of facial recognition innovation, announced the sweeping changes in a letter to US Congress today. Krishna cited concerns over “racial profiling” as protests against the treatment of African Americans, sparked by the horrific murder of George Floyd by a police officer in Minneapolis, sweep the US and beyond.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter, seen by The Verge.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Facial recognition technology, powered by AI algorithms and models, has advanced markedly in recent years, but lawmakers and regulators have criticised its encroachment into the public sphere, arguing that its susceptibility to race, age and ethnicity bias could infringe on human rights.
This year, a leaked EU policy draft revealed that the European Commission is mulling a five-year ban on the use of facial recognition technology in public areas while it crafts more thorough legislation to prevent the technology being abused by governments and businesses.
The UK ICO has warned police against the use of live facial recognition and has called for more regulation of how police deploy the technology. In March, the UK Met Police apprehended seven innocent members of the public after they were incorrectly identified by a facial recognition deployment at Oxford Circus, London.
IBM itself came under fire in 2019 after the company was found to have publicly shared a training dataset of nearly one million Flickr photos without the express consent of the subjects. At the time, IBM said only researchers had access to the data, that it was already publicly available, and that individuals could opt out of being included.