High-tech leaders have traditionally lobbied vigorously for limited government regulation of technology, promoting the benefits of self-regulation instead. However, in an unusual turn of events, Microsoft president Brad Smith wrote a lengthy blog post on July 13th urging Congress to implement regulations for artificial intelligence (AI) powered facial recognition software. Why?
One reason might be that Microsoft was recently broadly criticized for its contract with U.S. Immigration and Customs Enforcement (ICE) during the recent, controversial separation of illegal immigrant parents from their children at the border. According to Microsoft, the technology underlying its contract with ICE played no direct role in assisting with the separations. Specifically, in response to online accusations, Microsoft denied that its facial recognition software was used by ICE. Additionally, and more to the point, in May, Amazon announced its intention to sell its Rekognition software system to law enforcement agencies.
In today’s highly charged political environment, accusations of misuse of personal information can result in a precipitous drop in share value.
Just think about Facebook and Cambridge Analytica. Subsequent to Smith’s blog post, on July 26th, as if on cue, the American Civil Liberties Union (ACLU) published a test of Amazon’s Rekognition software that resulted in falsely matching photos of 28 members of Congress to criminal mugshot photos. That did not go over well with the Congressmen involved and the ACLU’s test had the desired effect. Five of the misidentified members of Congress demanded answers and a meeting with Amazon CEO Jeff Bezos.
There’s no doubt that facial recognition software has many beneficial uses. It can assist police with catching criminals, identify terrorists, and find missing children or wandering adults afflicted with dementia. In India this year, the Delhi Police Department identified almost 3,000 missing children over four days using facial recognition software. However, an example of a nefarious use would be using this technology to scan, identify and catalog individuals in a crowd at an anti-government protest. Also, as shown by the ACLU test, the AI software can be inaccurate, which can result in severe unintended harm.
What type of regulations are we likely to see?
- Development of required accuracy standards.
- Providing channels for challenging misidentification.
- Requiring human oversight in conjunction with the use.
- Public reporting of use and results.
- Requiring law enforcement to obtain warrants for certain use cases.
- Requiring system security standards to prevent hacking of facial databases.
- Requiring disclosure if an individual’s image is included in facial recognition database.
- Requiring notification if a person is in an environment where the technology is active (such as in a commercial setting).
At the end of the day, businesses are primarily motivated by profits. For public companies, the pressure to maintain growth is intense. There is now great awareness that misuse of personal information can significantly affect the bottom line. What’s more, tech companies are becoming aware that their technology (especially AI) is powerful, that its use or misuse by others is often not under their control, and that they will nevertheless be held accountable for that misuse. Hence, the call for regulation is at least in part motivated by a desire to maintain share value, while at the same time serving to protect our individual freedoms.
If you want to discuss legal issues relating to information technology, please contact us.