Pamela Passman, Founder and President, Center for Responsible Enterprise and Trade
From speech recognition, big data analysis and machine learning to bold new advances in healthcare, financial and business advice, talking computers, robots and self-driving cars, artificial intelligence ("AI") is being developed and adopted rapidly across more and more areas of daily life.
While unleashing opportunities for businesses and communities across the world, AI technologies have also brought about a host of new risks. The implications of AI for companies’ legal and ethical responsibilities are now being discussed among governments and a variety of business and non-governmental groups worldwide. A raft of new voluntary ethical codes and even some legislative proposals relating to AI have also started to appear.
The trend is clear: companies and other organizations that develop, implement or use AI systems are, and will continue to be, expected to comply with existing data privacy and other legal norms, and they will soon have to adhere to a number of new transparency and other ethical expectations as well, the outlines of which are beginning to converge. This new reality will demand, even more concretely, the responsible design, implementation and use of AI.
When robo cars go wrong
The death of a pedestrian hit by a self-driving Uber car in Tempe, Arizona, in March 2018 brought into sharp relief some of the types of legal issues raised by the growing use of AI.
The Volvo involved had an automatic emergency braking system, but the Uber self-driving mechanism installed in the car had disabled that system. The car’s “backup driver” herself may have been watching a TV program on a personal device when the accident happened, and the pedestrian may have been jaywalking when she stepped out into the street. But the AI system did not make the car slow down, and it was not designed to alert the driver of the need to brake.
This accident raised the same question that arose in the pre-AI world: Who was responsible, and for what? Uber quickly settled a civil lawsuit with the deceased's family for an undisclosed sum, and local police investigated both the backup driver's and the pedestrian's actions but have brought no charges to date. And the press trumpeted the question: "Can you sue a robocar?"
A broad range of legal and ethical issues
The implications of AI go far beyond such personal injury questions, however, to a wide range of legal and ethical issues. In remarks to the OECD, Masahiko Tominaga of the Japanese Ministry for Internal Affairs and Communications summarized the challenges of AI as safety, cybersecurity, privacy and ethics.
A number of governments have been looking at these issues as well. The U.S. House of Representatives is considering a resolution (H.Res. 153) that would support the development of ethics guidelines for AI. Such guidelines would address the safety, security and control of AI systems; accountability and oversight of automated decision making; transparency and explainability of AI systems, processes and implications; and access and fairness regarding AI services and benefits.
The European Union (EU) has recently released AI Ethical Guidelines, which set out an expectation that "AI systems must guarantee privacy and data protection throughout a system's entire lifecycle." Recent guidelines from the Chinese government supported the Beijing Academy of Artificial Intelligence's call for AI research to respect "human privacy, dignity, freedom, autonomy and rights".
These and other recent trends and developments on the policy front seem to make clear that the use of AI will not absolve the humans involved from existing, and even new legal and other responsibilities. As Mady Delvaux, a Member of the European Parliament, put it so succinctly: “[Robots] may be devoid of emotions, but they are not exempt from rules. If a bot causes harm, the damage should be compensated: not just by the bot's owner, but also its designers, producers and users.”
Towards ethical codes for AI
With such discussions and questions about the risks and responsibilities associated with AI percolating all over the world, intergovernmental groups including the OECD and the G20; industry groups such as Asilomar, the Partnership on AI and the IEEE; NGO groups including the UNI Global Union; and individual companies like Microsoft, IBM, Google and Intel have all been developing AI ethics codes to help companies not only understand and adhere to existing legal and regulatory requirements, but also address broader ethical concerns.
At least 32 such AI ethics documents have already appeared from these types of groups, and according to a recent analysis by the Cyberlaw Clinic at Harvard's Berkman Klein Center, conducted as part of its Principled Artificial Intelligence Project, many of these documents address some or all of the same themes.
There does seem to be a trend toward convergence in various groups' expectations for the responsible design, implementation and use of AI. While the individual ethical codes vary quite a bit, the common elements that are emerging can be summarized as nine particular responsibilities:
1. Responsible design and use
• Safety and security. AI systems should be designed, operated and used in a responsible, secure and safe way throughout their life cycle.
• Ongoing risk management. Potential risks posed by AI systems should be continually assessed, addressed and managed commensurate with their expected impact.
• Transparency. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
2. Lawful use
• Data privacy. The collection, processing, use and transfer of personally identifiable data by AI systems should comply with relevant data privacy laws.
• Cybersecurity. The use of AI should include effective cyber and physical security to mitigate the risk of data theft and to promote trust.
• Use for lawful purposes. AI should not be used for dangerous purposes or activities that are otherwise illegal.
3. Ethical use
• Socially useful. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
• Human values. The use of AI should support, not undermine, human rights, dignity, liberties, fairness and diversity, and should avoid discrimination and bias.
• Human control. Humans should remain in control of choosing how and whether to delegate decisions to AI systems, and AI should be used to accomplish human-chosen objectives.
Undoubtedly, private-sector, NGO and government expectations for the design, implementation and use of artificial intelligence will continue to evolve and converge. Given that private-sector and government enforcement has started to appear, however, it would be advisable for companies even now to think about participating in a relevant AI code of conduct and to review their own development, implementation and use of AI technologies to assess and manage their risks and responsibilities in these areas.