Within a span of 72 hours this week, both IBM and Amazon backed out of the facial recognition business.
It’s a chess match on the geopolitical board, with AI ethics and data bias in play.
IBM moved first, closely followed by Amazon.
(And then two days later Microsoft announced its intention to also exit the market; see below.)
The moves came after demonstrations across the US and around the world in response to police mistreatment of black Americans. Privacy and AI ethics groups have called out facial recognition software for having higher error rates for people of color.
New IBM CEO Arvind Krishna announced the decision in a letter to Congress on June 8, stating that IBM opposes the use of any facial recognition technology for mass surveillance or racial profiling.
Amazon then announced on Wednesday that it is implementing a one-year moratorium on police use of its Rekognition technology, though it will still allow organizations focused on stopping human trafficking to continue using the technology.
On its THINKPolicy Blog, IBM posted the letter Krishna submitted to Congress. It states in part, “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Amazon’s blog release stated in part, “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
Amazon’s move was seen as “smart PR” in an account in Fast Company, which was skeptical that the 12-month moratorium would result in a significant change. Nicole Ozer, the technology and civil liberties director with the ACLU of Northern California, was quoted as stating, “This surveillance technology’s threat to our civil rights and civil liberties will not disappear in a year. Amazon must fully commit to a blanket moratorium on law enforcement use of face recognition until the dangers can be fully addressed, and it must press Congress and legislatures across the country to do the same.”
Ozer went on to argue that Amazon “should also commit to stop selling surveillance systems like Ring that fuel the over-policing of communities of color.” She added, “Face recognition technology gives governments the unprecedented power to spy on us wherever we go. It fuels police abuse. This surveillance technology must be stopped.”
A recent account from the Electronic Frontier Foundation warned consumers that video from Ring is in the Amazon cloud and is potentially accessible by Amazon employees and law enforcement agencies with agreements in place with Amazon.
Joy Buolamwini Led Research that Discovered Bias
A piece in AI Trends last year, Facial Recognition Software Facing Challenges, described the work of MIT researcher Joy Buolamwini, who today refers to herself as a “poet of code” and a fighter for “algorithmic justice.”
The February 2018 study from MIT Media Lab researchers found that tools from Microsoft, IBM and China-based Megvii (Face++) had higher error rates when identifying darker-skinned women than lighter-skinned men. Buolamwini’s work got the attention of the technology giants, members of Congress and other AI scholars.
“There needs to be a choice,” stated Buolamwini in an account in the Denver Post. “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”
She caught some flak from the tech giants. Amazon challenged what it called Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.
“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, general manager of artificial intelligence for Amazon’s cloud-computing division, wrote in a January 2019 blog post.
Buolamwini, who has founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism. She has said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they’re going to be used on the public. Not just to audit for accuracy, she said, but to ensure face recognition isn’t abused to violate privacy or cause other harms.
“We can’t just leave it to companies alone to do these kinds of checks,” she said.
There has been no news of any change in China, where the government has embraced facial recognition and is expanding its use.
Read the source articles in Fortune, CNBC, TechCrunch, Fast Company and AI Trends.
Joy Buolamwini Comments for AI Trends
AI Trends reached out to Joy Buolamwini on these recent developments to get her reaction. She sent this response:
“With IBM’s decision and Amazon’s recent announcement, the efforts of so many civil liberties organizations, activists, shareholders, employees and researchers to end harmful use of facial recognition are gaining even more momentum. Given Amazon’s public dismissals of research showing racial and gender bias in their facial recognition and analysis systems, including research I coauthored with Deborah Raji, this is a welcomed though unexpected announcement.
“Microsoft also needs to take a stand. More importantly our lawmakers need to step up. We cannot rely on self-regulation or hope companies will choose to rein in harmful deployments of the technologies they develop. I reiterate a call for a federal moratorium on all government use of facial recognition technologies. The Algorithmic Justice League recently released a white paper calling for a federal office to set redlines and guidelines for this complex set of technologies which offers a pathway forward. The first step is to press pause, not just company wide, but nationwide.
“I also call on all companies that substantially profit from AI — including IBM, Amazon, Microsoft, Facebook, Google, and Apple — to commit at least 1 million dollars each towards advancing racial justice in the tech industry. The money should go directly as unrestricted gifts to support organizations like the Algorithmic Justice League, Black in AI, and Data for Black Lives that have been leading this work for years. Racial justice requires algorithmic justice.”
From Joy Buolamwini at the Algorithmic Justice League.
Update: Microsoft Also Exits the Market
Later the same day Buolamwini sent AI Trends these comments, Thursday, June 11, Microsoft President Brad Smith confirmed during a Washington Post live event that Microsoft would also exit the facial recognition market, according to the Post’s account.
“We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Smith stated, making clear the Microsoft decision is for now a moratorium.