One big question in the industry these days concerns the safeguards companies can take to ensure their AI is fair and ethical. Stakeholders are trying to determine how enterprises can ensure that their employees, investors, and customers trust their AI technology. With AI advancing rapidly and being applied to high-stakes use cases such as criminal detection, this is an important and timely topic.
Recently there have been a number of passionate conversations about several high-profile companies using biased AI. This may make many businesses fear the consequences of deploying biased AI and the damage it could cause to their reputation. In a new white paper, “AI Ethics,” Colin Priest, Senior Director of Product Marketing at DataRobot, explains that if companies approach AI ethics using four basic principles, they can ensure that their AI is trustworthy and remains true to their business rules and core values. These four principles are:
- Ethical Purpose – Companies must consider the task they are assigning to AI, the objective of that task, who will be affected, and how automating the task will affect the company’s long- and short-term goals. Businesses must ensure that their AI’s actions produce a net benefit to society.
- Fairness – Companies must ensure their AI’s actions avoid entrenching historical disadvantage and avoid discriminating on sensitive features. While such discrimination may not be intentional, data scientists and the companies that employ them must be extra sensitive not only to their own biases but also to those that may be present in their data sets.
- Disclosure – Companies must disclose sufficient information to an AI’s stakeholders so that they can make informed decisions, including its capabilities and limitations. In order for stakeholders to make informed choices, an AI’s processes and decisions must be explainable.
- Governance – Companies must apply high standards of governance over the design, training, deployment and operation of AIs where there is risk. Governance ensures that AI systems are secure, reliable and robust, and that appropriate processes are in place to ensure responsibility and accountability for AI systems.
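As a rough illustration of the fairness principle above, one common sanity check is to compare a model’s positive-prediction rates across groups defined by a sensitive feature (the gap is often called the demographic parity difference). The metric choice, function, and data below are illustrative assumptions for this sketch, not a method from the DataRobot white paper:

```python
# Minimal sketch: checking a binary classifier's predictions for
# demographic parity across a sensitive attribute.
# All data and names here are hypothetical, for illustration only.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A" or "B")
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical predictions: group A receives positive outcomes
# 75% of the time, group B only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(gap)  # prints 0.5 — a large gap flags a potential fairness problem
```

A gap near zero does not by itself prove a model is fair (other definitions of fairness can conflict with demographic parity), but a large gap on a sensitive feature is a clear signal that the governance processes described above should investigate further.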
Contributed by Daniel D. Gutierrez, Managing Editor and Resident Data Scientist for insideAI News. In addition to being a tech journalist, Daniel is also a data science consultant, author, and educator, and sits on a number of advisory boards for various start-up companies.
Sign up for the free insideAI News newsletter.