With great power comes great responsibility: Ethics in AI

Posted in News

June 22, 2018

Technology is constantly developing, and so are the ways it can assist law enforcement. What didn’t seem possible just five years ago is taken for granted today. The next stage in the evolution of law enforcement (LE)-friendly technology lies with artificial intelligence, or AI.

The ability to conduct facial recognition or read the number plate on a passing car is not new. AI-driven facial recognition happens on your smartphone right now, whether as a security measure like Apple’s Face ID or as a way to catalog thousands of your digital photos for easy reference.

The sophistication of AI is increasing and will enable police departments to quickly process video for automatic redaction, transcription and reporting.

While AI will make life easier, there are still protocols and politics to sort. For example, the ability of a police department to redact anything not associated with the crime or criminal is essential to preserve the rights and liberties of the citizens that same department is sworn to both serve and protect.

Axon for one is considering the human factors in AI via its ethics board, formed to ensure transparency in the LE tech giant’s work space and to take responsibility for the power Axon platforms will provide moving forward.

When Axon announced the formation of its AI ethics board in April of this year, it didn’t just take the public safety world by storm – the entire tech community paid attention. Forbes magazine suggested Google should follow the kind of transparency being advocated by Axon when it comes to discussing the role of AI in our lives.

At Axon’s recent Accelerate symposium, CEO and co-founder Rick Smith, along with two members of the ethics board – Jim Bueermann and Tracy Ann Kosa – took to the stage for a panel discussion on how and why the board came into existence.

Axon Vice President Mike Wagers, who moderated the panel discussion, introduced the session saying that, just as for superhero Spider-Man, “With great power comes great responsibility. Recognizing what AI could do and how great that power is in terms of what it means to bring new and innovative technologies to law enforcement, we also recognize the challenges it brings when it comes to civil liberties and privacy. That was the purpose of establishing the board.”

Learning from ELSI

When Axon acquired two AI teams in 2017, the announcement was met with concern from both the tech and mainstream media, with suggestions that the acquisition was the first step on the way to an Orwellian police state.

Getting both media and public buy-in early in the development of groundbreaking tech is the key to its success. Interpreting public opinion and responding in an appropriate way requires strategizing around how best to introduce new technology.

Similar negative headlines were seen in the early 1990s when the federal government launched the human genome project to sequence human DNA. As public discussion focused on nightmare scenarios of scientists creating designer babies or unleashing super viruses, the National Human Genome Research Institute founded the Ethical, Legal, and Social Implications (ELSI) program to identify and address the ethical, legal and social implications of genetic and genomic research for individuals, families and communities.

“While the first human genome cost about $3B to read one person’s DNA, it now costs a little over $100 today,” Smith told the Accelerate audience. “Most of the promising cancer therapies are coming out of the human genome project now. This is a life-changing thing that is going to make the world a much better place, but it never would have happened if the public turned against it. AI could die an early death if the world freaks out and states pass laws against it and we may never get to see the promise of it. That is one of the reasons why Axon created an advisory board around ethics and privacy.”

A model for trust and legitimacy

Police Foundation President Jim Bueermann, who helped initiate Axon’s AI board and run the board’s first meeting, told the Accelerate audience there are no wallflowers on the board: “Members of the ethics board are not shy. I think individually we have taken the position that we serve policing best by telling Axon the truth, the whole truth and nothing but the truth, no matter how uncomfortable that may be. Rick was in the first meeting the whole time. When the big boss spends the entire day in a meeting focused on something as dry as the ethics of artificial intelligence, there is a message there about the commitment of the organization to try to do the right thing with what is probably one of the most powerful technologies the world has ever seen.”

For Bueermann – who worked for the Redlands (California) Police Department for 33 years and retired as chief – educating the community about new policing technology maintains credibility.

“When we set up a citywide camera surveillance system, we decided to stand up a citizen privacy council. Legitimacy, trust and credibility in the community are the lifeblood for a police department. You are able to do your job as a cop because the community trusts you to do the right thing. That can go off the rails if you use a technology as powerful as AI without some sense of where it is going. In the public sector, you are obligated to do certain things because you are publicly funded. Axon is under no obligation to do this. My own personal belief is that this is a good model for literally any corporation or part of the private sector that is engaging in technology development for the police.”

Facial recognition

As a company, Axon has yet to invest any money or manpower into facial recognition initiatives, Smith told attendees, as the technology remains controversial in the United States. This contrasts with other nations; in France, for example, the technology is more accepted as the French National Police focus on how tech can help detect and prevent homegrown terrorist attacks like the Bataclan massacre.

The dilemma is how to build a system with the right controls that can be deployed in different geographies under different cultural norms.

“We know it is a technology that is going to be of interest, but in terms of bang for the buck, there are a lot of things we can be doing around helping redaction and improving current business processes for departments, which is where we think it is best to build our AI capabilities,” Smith told the audience.

The board will meet later this summer to discuss facial recognition and form operating guidelines.

The shifting sands of privacy

Ethics board member and Accelerate panelist Tracy Ann Kosa is a senior program manager at Google working on privacy features for the Google Cloud Platform. Previously, at Microsoft, she led the development and implementation of the global privacy compliance program. During the panel discussion, Wagers asked Kosa if the notion of privacy was dead.

“Previously privacy was very much a one-to-one relationship, where you were collecting data from me and you would tell me what that collection looks like and I would tell you yes or no depending on how much control I had,” explained Kosa. “Now I think we have a greater understanding that my answer also impacts the privacy of everyone else in this room.

“The more we all consent to the sharing of our data with large organizations, and I don’t mean just ones like Facebook, the less privacy we all have. But that doesn’t mean the concept is dead. The most visceral example I can give to you is that public bathrooms still have locks on doors. We very much value privacy and there is still some notion of control that exists in all of us. I think one of the challenges we will have with AI is: are we going to teach the technology what our human values are so it can incorporate them and represent the privacy concerns of all of us?”

“If I could go back and be a police chief again, I would designate someone to be our internal privacy officer, which Seattle has done,” Bueermann told the audience. “The minute you assume that mantle and look through the lens of privacy, you see the whole operation differently. As AI continues to evolve we should ask, ‘Just because we can, should we?’”

Axon AI Ethics Board

Axon’s AI Ethics Board comprises thought leaders in computer science, privacy, civil liberties and policing, who will provide guidance on how the company develops and uses AI. Board members include:

Ali Farhadi, PhD, Associate Professor of Computer Science and Engineering at University of Washington, Senior Research Manager at AI2 and CEO of Xnor.ai

Barry Friedman, Professor and Director of the Policing Project at New York University School of Law

Jeremy Gillula, PhD, Privacy and Civil Liberties Technologist

Jim Bueermann, President of the Police Foundation

Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute

Tracy Ann Kosa, PhD, Fellow at Stanford University and Adjunct Professor at Seattle University School of Law

Vera Bumpers, Chief of Houston Metro Police Department, incoming President of the National Organization of Black Law Enforcement Executives

Walter McNeil, Sheriff of Leon County Sheriff’s Office, Florida, former President of the International Association of Chiefs of Police

For more on Axon’s AI board, visit https://www.axon.com/info/ai.

Accelerate 2019 is scheduled for April 30-May 2 in Phoenix, Arizona. Register now at https://www.acceleratepolicing.com/.