Emotion-detecting AI should not be allowed to make important decisions, AI research institute AI Now has warned.
This is one of the recommendations made in the New York University institute’s annual report, which is intended to “ensure that AI systems are accountable to the communities and contexts they are meant to serve”.
Emotional AI, also known as affect recognition, uses artificial intelligence to analyse facial micro-expressions with the aim of identifying human emotions.
A number of companies are commercialising affect recognition technology for a range of applications including recruitment, monitoring students in the classroom, customer services and criminal justice.
For example, HireVue offers software that screens job candidates for different qualities, and BrainCo is developing headbands that claim to detect students’ attention levels.
According to the report, the emotion-detection and recognition market was worth $12bn in 2018, and could grow to over $90bn by 2024. However, despite growing interest in the technology, AI Now warns that it is based on “markedly shaky foundations”.
AI Now warns of affect recognition
Although these technologies have potentially useful applications, the report says they are at best incomplete and at worst “entirely lack validity”, failing to reliably identify emotions without considering context and often detecting facial movements that can easily be misinterpreted.
There is also evidence to suggest that the technology can show bias. For example, a study by Dr Lauren Rhue found that two emotion recognition programmes assigned more negative emotional scores to black individuals in a data set of 400 photos of NBA players.
The rapid growth of the technology is particularly concerning in contexts such as criminal justice, with the institute calling for those deploying affect recognition to “scrutinise why entities are using faulty technology to make assessments about character”.
As well as calling for more stringent regulation, the report states that affect recognition should not play a role in decisions such as “who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school”, and that governments should prohibit its use in “high-stakes decision-making processes”.