Ethical AI: Q&A with Fractal Analytics' Suraj Amonkar

The Ethical Use of Artificial Intelligence Act was recently introduced by U.S. Senators Cory Booker (D-NJ) and Jeff Merkley (D-OR) with the goal of establishing a 13-member Congressional Commission that will ensure facial recognition does not produce bias or inaccurate results.

Suraj Amonkar, Fellow, AI @ Scale, Machine Vision and Conversational AI at Fractal Analytics, recently shared his views on the proposed legislation and the issues it addresses.

BDQ: What is the goal of the Ethical Use of Artificial Intelligence Act?

Suraj Amonkar: The Ethical Use of AI Act is aimed specifically at facial recognition as an AI technology. According to the bill text, it is being introduced because facial recognition is being marketed to police departments and government agencies. The technology has a history of less accurate performance for people of color and women, and facial recognition can chill First Amendment rights if used to identify people at political speeches, protests, or rallies. The goal of the bill is to ensure facial recognition does not produce biased or inaccurate results or “create a constant state of surveillance of individuals in the United States that does not allow for a level of reasonable anonymity.”

BDQ: Why is this important?

SA: I think the Act has been introduced to regulate the use of facial recognition for specific use cases such as reducing crime rates, aiding forensic investigations, finding missing people and victims of human trafficking, and identifying and tracking perpetrators of crime on social media. It is important to delineate the uses of facial recognition so that it does not interfere with the basic rights of individuals, including, but not restricted to, their privacy rights.

BDQ: What is the problem with facial recognition?

SA: The problem is not with facial recognition as a technology; the issues lie in the way it is implemented and used. For instance, some facial recognition technologies have shown higher error rates for women and people of color. The use of this technology raises concerns about how closely people are being watched and whether hackers can access this data, potentially causing more harm than good. There is also a growing concern about the possibility of misidentifying someone, which could lead to wrongful convictions. The technology can also damage society if it is abused by law enforcement for purposes such as constant surveillance of the public.

BDQ: What are the risks of facial recognition misuse?

SA: Some of the risks include misidentifying people, wrongful convictions, misuse of private data, stalking, identity fraud, and predatory marketing. A camera in a retail store could do more than detect theft—it could use your face to link your online and offline purchasing activity, leading to an intrusion of privacy and what some call predatory marketing.

BDQ: How is this being received in the AI community?

SA: It would not be wise of me to speak on behalf of the entire AI community. As a proponent of the technology, I do not believe in a complete moratorium. Limiting or delaying its use means letting go of all the benefits it brings. Of course, there is a need to regulate this technology, but that’s the case with every other technology area.

The onus is equally on providers of this technology as well. We have seen examples where implementations of the technology have produced significant errors, leading to biased or inaccurate results. This makes it especially important that all tech companies continue the work needed to identify and reduce these errors and improve the accuracy and quality of facial recognition tools and services.
