IBM's decision to no longer offer general-purpose facial recognition (FR) technology comes at a time when police activity is under increased scrutiny following the anti-racist protests across the US.
IBM’s statement is the strongest condemnation yet of the risks of FR technologies to come from a big technology firm. However, while ethical concerns have played their part in the company’s decision, IBM has also been moved by more material concerns.
The facial recognition market is still in its infancy, but big tech companies, especially those with access to large visual datasets, are already best positioned to benefit from selling the technology. Amazon and Google have invested heavily in machine learning and boast advanced cloud operations, two key technologies enabling FR. IBM can also rely on a strong cloud business and a powerful AI engine, but it simply can't match its rivals' repositories of visual data.
That said, FR is often sold using an as-a-service model and, as the technology becomes increasingly commoditized, revenue growth is expected to slow. Ultimately, FR wasn't a particularly profitable business for IBM.
FR ethics are seen as problematic
Big tech companies have been struggling with the ethics of AI over the last few years and they appear to be divided on the right approach. This is especially true when it comes to the use of facial recognition by law enforcement agencies.
Amazon continues to sell the technology to the police despite concerns over its accuracy. The ecommerce giant has been under pressure from civil liberties groups to stop selling its Rekognition software to police forces, with opponents citing the risk of potential abuse by the authorities.
In 2019, a shareholder proposal to stop selling the technology to government agencies won only 2% support. Both Google and Microsoft, on the other hand, refuse to sell FR to governments that use it for mass surveillance, with Microsoft publicly advocating for laws to regulate the technology. IBM has pledged to do the same, but it still reserves the right to sell FR technology for specific purposes.
Facial recognition technology works differently across genders and races
The current protests across America following the killing of George Floyd are likely to give new vigour to the debate around the use of facial recognition technology. FR training data is often incomplete or unrepresentative of the general population, and a study from MIT Media Lab shows that the technology performs differently across genders and races: the darker the skin, the higher the error rate, reaching up to 35% for images of darker-skinned women. There are therefore significant risks that FR used in law enforcement and in border, airport, and retail security will be unreliable. In cases where misidentification can lead to arrest or incarceration, accuracy must be as high as possible to avoid incredibly damaging mistakes.