Natasha Crampton, Microsoft’s Chief Responsible AI Officer, announced the move in a blog post, citing the “heightened privacy concerns around this type of capability” and “the lack of scientific consensus” on the technology’s efficacy.
The company said it also plans to heavily restrict its own facial recognition platform. Facial emotion recognition software uses AI to infer a person’s emotional state from visual cues such as facial expression, pupil size, and mouth shape.
But rights groups have raised concerns that the technology infringes on privacy.
Human rights groups raised similar concerns last month after it was revealed that Zoom was developing its own emotion-scanning software. In a blog post, Zoom confirmed the existence of the technology, which is believed to be in its early stages of development.
Crampton also said that Microsoft was planning to restrict access to its facial recognition technology, establishing transparency guidelines and safeguards to ensure that customers who use facial recognition do so ethically.
The shift follows similar moves by other tech giants: Google said in 2018 that it would not sell facial recognition software, and Meta (then Facebook) shut down its facial recognition system in 2021.
IBM, meanwhile, stopped supplying government and police agencies with facial recognition technology in the aftermath of the George Floyd killing in 2020.