A proactive approach to ensuring ethical AI requires addressing three key areas, according to Jason Shepherd, CEO at Nubix.

  • Policy. This includes developing the appropriate framework for driving standardization and establishing regulations. Documents like the Asilomar AI Principles are a useful starting point for the conversation, government agencies in the United States, Europe and elsewhere have launched efforts to ensure ethical AI, and a raft of standards, tools and techniques from research bodies, vendors and academic institutions is available to help organizations craft AI policy. See "Resources for developing ethical AI" (below). Ethical AI policies will also need to address how to handle the legal issues that arise when something goes wrong. Companies should consider incorporating AI policies into their own codes of conduct, but effectiveness will depend on employees following the rules, which may not always be realistic when money or prestige is on the line.
  • Education. Executives, data scientists, front-line employees and consumers all need to understand the policies, the key considerations and the potential negative impacts of unethical AI and fake data. One major concern is the tradeoff between the convenience that data sharing and AI automation provide and the potential negative repercussions of oversharing or harmful automated decisions. "Ultimately, consumers' willingness to proactively take control of their data and pay attention to potential threats enabled by AI is a complex equation based on a combination of instant gratification, value, perception and risk," Shepherd said.
  • Technology. Executives also need to architect AI systems that automatically detect fake data and unethical behavior. This requires not just scrutinizing a company's own AI but also vetting suppliers and partners for malicious uses of AI, such as deploying deepfake videos and text to undermine a competitor, or using AI to launch sophisticated cyberattacks. This will become more of an issue as AI tools become commoditized. To combat this potential snowball effect, organizations need to invest in defensive measures rooted in open, transparent and trusted AI infrastructure. Shepherd believes this will give rise to the adoption of trust fabrics that provide a system-level approach to automating privacy assurance, ensuring data confidence and detecting unethical use of AI. (A minimal sketch of one such automated check follows this list.)
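
To make "automatically detect fake data" concrete, the sketch below trains an anomaly detector on vetted historical data and flags suspicious incoming records for human review. It uses scikit-learn's IsolationForest; the synthetic data, feature count and 5% contamination rate are illustrative assumptions, not a production design.

```python
# A minimal sketch: screen incoming records for anomalies before they
# reach downstream AI systems. Synthetic data stands in for real feeds.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
trusted = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # vetted historical records
incoming = np.vstack([
    rng.normal(0, 1, (95, 4)),   # plausible new records
    rng.normal(6, 1, (5, 4)),    # planted outliers simulating fake data
])

detector = IsolationForest(contamination=0.05, random_state=0).fit(trusted)
flags = detector.predict(incoming)  # -1 = suspicious, 1 = looks normal

suspicious = np.flatnonzero(flags == -1)
print(f"{suspicious.size} of {len(incoming)} records flagged for review")
```

An anomaly score is a screening signal, not proof of fraud; flagged records would still go to a human or a more specialized pipeline for judgment.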

Examples of AI codes of ethics

An AI code of ethics can spell out the principles and provide the motivation that drives appropriate behavior. For example, Mastercard's Jha said he is working with the following tenets to develop the company's AI code of ethics:

  • An ethical AI system must be inclusive and explainable, have a positive purpose, and use data responsibly.
  • An inclusive AI system is unbiased and works equally well across all segments of society. This requires full knowledge of each data source used to train the AI models in order to ensure no inherent bias in the data set. It also requires a careful audit of the trained model to filter out any problematic attributes learned in the process. And the models need to be monitored closely in production to ensure they don't become corrupted later. (A sketch of such a per-group audit appears after this list.)
  • An explainable AI system supports the governance companies need to ensure the ethical use of AI. It is hard to be confident in the actions of a system that cannot be explained, so attaining confidence might entail a tradeoff: accepting a small compromise in model performance in order to select an algorithm whose decisions can be explained. (A sketch of this tradeoff also follows the list.)
  • An AI system endowed with a positive purpose aims, for example, to reduce fraud, eliminate waste, reward people, slow climate change or cure disease. Any technology can be used to do harm, so it is imperative that we think of ways to safeguard AI from exploitation for bad purposes. That will be a tough challenge, but given AI's wide scope and scale, the risk of leaving it unaddressed and letting the technology be misused is greater than ever before.
  • An AI system that uses data responsibly observes data privacy rights. Data is key to an AI system, and more data often yields better models, but it is critical that the race to collect more and more data doesn't sacrifice people's right to privacy and transparency. Responsible collection, management and use of data are essential to creating an AI system that can be trusted. Ideally, data should be collected only when needed, not continuously, and at the narrowest useful granularity. For example, if an application needs only ZIP code-level geolocation to provide a weather forecast, it shouldn't collect the consumer's exact location. And the system should routinely delete data that is no longer required. (A minimal sketch of these minimization and retention rules closes the section.)
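
The per-group audit mentioned in the first tenet can be approximated in a few lines of code. The sketch below is a minimal illustration, not Mastercard's method: it compares a model's selection rate and accuracy across demographic groups, and the toy labels, predictions, group memberships and four-fifths threshold are all assumptions chosen for illustration.

```python
# A minimal per-group fairness audit: compare a model's selection rate
# and accuracy across demographic groups. The data here is toy data.
import numpy as np

def audit_by_group(y_true, y_pred, groups):
    """Report selection rate, accuracy and group size for each group."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "selection_rate": float(y_pred[mask].mean()),
            "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
            "n": int(mask.sum()),
        }
    return report

# Hypothetical labels, predictions and group memberships
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

report = audit_by_group(y_true, y_pred, groups)
rates = [r["selection_rate"] for r in report.values()]
if min(rates) < 0.8 * max(rates):  # "four-fifths" disparate-impact heuristic
    print("WARNING: selection rates diverge significantly across groups")
print(report)
```

A real audit would run on held-out data with legally relevant group definitions and would examine more than two metrics, but even this skeleton catches the kind of imbalance the tenet warns about.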
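The explainability tradeoff in the second tenet shows up when an interpretable model is placed next to a black-box one on the same task. This is a generic sketch: scikit-learn's bundled breast cancer dataset stands in for real business data, and the two model choices are assumptions, not a recommendation.

```python
# A minimal sketch of the performance-vs-explainability tradeoff: an
# interpretable linear model next to a black-box ensemble on one task.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

explainable = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", explainable.score(X_te, y_te))
print("gradient boosting accuracy:  ", black_box.score(X_te, y_te))
# The linear model's coefficients map directly to feature influence;
# if its accuracy is close enough, it may be the more governable choice.
```

If the gap between the two scores is small, the tenet argues for the model whose behavior can be explained to a regulator or a customer.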
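Finally, the data minimization and retention rules in the last tenet translate naturally into code. In this sketch the one-decimal coordinate rounding (roughly 10 km granularity) and the 30-day retention window are illustrative assumptions; real values should come from the application's actual needs and legal obligations.

```python
# A minimal sketch of data minimization: coarsen precise coordinates
# before storage and purge records past a retention window.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed retention window

def coarsen_location(lat: float, lon: float, places: int = 1) -> tuple:
    """Round coordinates to ~10 km granularity; enough for weather, not tracking."""
    return round(lat, places), round(lon, places)

def purge_stale(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still inside the retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

now = datetime.now(timezone.utc)
records = [
    {"loc": coarsen_location(40.7128, -74.0060), "collected_at": now},
    {"loc": coarsen_location(51.5074, -0.1278),
     "collected_at": now - timedelta(days=45)},
]
print(purge_stale(records, now))  # the 45-day-old record is dropped
```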