Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence

Organizations that create applications enhanced by artificial intelligence and machine learning (AI/ML) are increasingly asked to review the ethical impact of their work. How can an organization systematically and reproducibly measure the ethical impact of its AI-enabled platforms?

“Soft Law” Governance of Artificial Intelligence

On November 26, 2017, Elon Musk tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA wdn’t [sic] make flying safer. They’re there for good reason.”

In this and other recent pronouncements, Musk has called for artificial intelligence (AI) to be governed through traditional regulation, just as we regulate foods, drugs, aircraft, and cars. Putting aside the quibble that foods, drugs, aircraft, and cars are each regulated very differently, these calls seem to envision one or more federal regulatory agencies adopting binding rules to ensure the safety of AI. Musk is not alone in calling for "regulation" of AI; serious AI scholars and policymakers have likewise called for governing AI through traditional governmental regulatory approaches.