Microsoft president Brad Smith is calling for facial recognition to be regulated by the US government.
While technological developments can be positive for the world, the pace of change often means advances have a major head start on regulation, potentially leading to negative consequences.
One such development is facial recognition technology. The technology has been on the receiving end of some negative press lately for its racial biases, but it’s the potential for it to be misused by authorities that has Microsoft president Brad Smith worried.
Facial recognition could be misused
In a blogpost published on 13 July, Smith said new laws were necessary given the “broad societal ramifications and potential for abuse” that facial recognition technology presents. The development and deployment of the technology have been accelerating in recent years, with firms such as Microsoft, Amazon and Google involved in their own projects.
The tug of war centres on proponents of the technology touting its ability to catch criminals or locate missing children, while opponents believe it could be used for illegal surveillance and the monitoring of citizens.
Smith wrote: “Facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?”
He also acknowledged the imperfection of the technology in terms of identifying individuals who are not white. “As reported widely in recent months, biases have been found in the performance of several fielded face recognition technologies. The technologies worked more accurately for white men than for white women, and were more accurate in identifying persons with lighter complexions than people of colour.”
A commission should be created
He suggested that the US administration create a “bipartisan and expert commission” to examine facial recognition and give expert advice. According to Smith, the commission should consider potential restrictions on the use of facial recognition by law enforcement or national security agencies. Its remit, he said, should also include the development of standards to prevent racial profiling, as well as requirements to notify members of the public when the technology is being used.
“In a democratic republic, there is no substitute for decision-making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” Smith added.
In June, the spotlight fell on Microsoft’s working relationship with Immigration and Customs Enforcement (ICE) in the US. Smith explained in his blogpost that the company remains opposed to Trump’s immigration policy and that its work with ICE does not involve facial recognition technology.
“It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike,” he concluded.
While it is certainly a positive to see companies seeking regulation of technologies that could potentially be misused, many tech workers at a variety of firms are still grappling with other ethical quandaries their work presents. Recently, Google workers protested the company’s participation in a military AI project, and this sort of narrative is likely to continue playing out in Silicon Valley and beyond for the foreseeable future.