AI Ethics: What Are Its Key Principles?

Developments in data analysis and machine learning (ML) have led to a dramatic increase in the use of artificial intelligence (AI) by both private and public institutions. AI is involved in many industries and activities – banking and finance, industrial manufacturing, agriculture, marketing, social media, urban planning, healthcare, policing, and more. Unfortunately, these technologies have sometimes been introduced without the necessary ethical considerations, which has led to unnecessary scandals and harm to individuals. One of the most important challenges the AI community faces is to develop AI ethics to the same level of sophistication as AI itself, in order to design algorithms that can contribute to individual and social wellbeing.


Understanding Ethics

Ethics is the sphere of philosophy that deals with matters relating to what a “good life” is, as well as what is right and wrong, what is permissible and impermissible, and what is virtuous and vicious.

It asks:

  • How should we live?
  • What is a good life and how do we get there?

Practical ethics is the part of ethics that seeks to resolve the dilemmas that people face in the real world.

It attempts to provide directions for conduct, and asks questions such as the following:

  • Should a developer design an app to be as addictive as possible?
  • Should a lawyer suppress evidence that makes their client look guilty?
  • Should a doctor help a dying patient who wishes to end their life sooner?

In the world of banking and finance, ethics poses questions such as:

  • Should a bank or asset manager sell an unsuitable product or service to a customer?
  • Should a trader engage in front-running or other dubious practices based on insider knowledge of future transactions that will impact a security’s price?

The Need for AI Ethics

AI ethics is the branch of practical ethics that deals with AI. Because AI ethics is a very new field, there is no common framework similar to that for other areas of practical ethics (such as medical ethics). However, several AI ethics “codes” have been developed, along with some attempts to find commonalities among them.


Practical Ethics

Practical ethics is a field that flourished in great part thanks to the development of medical ethics (also called bioethics or biomedical ethics), which became a subfield of practical ethics. AI ethics is, similarly, a subfield of practical ethics.

In the 1960s, physicians were faced with new ethical challenges as a result of some medical scandals and the emergence of technology such as the mechanical ventilator that created new situations. We are currently in a similar place regarding AI. Computer scientists and data analysts are being faced with new ethical challenges as a result of various scandals (the Cambridge Analytica/Facebook case being a prominent example) and new technologies, such as machine learning (ML), that create novel circumstances. It is therefore not surprising that AI ethics can learn a lot from medical ethics.

In the absence of an established common framework, national and international organizations have formed expert committees to draft guidelines. Some of the most prominent include:

  • High-Level Expert Group on Artificial Intelligence (appointed by the European Commission)
  • OECD’s AI Group of Experts (AIGO)
  • Select Committee on Artificial Intelligence (appointed by the UK House of Lords/Parliament)

Recommendations have also been issued by academic and research institutions, by professional bodies such as the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and by nonprofit organizations such as Access Now and Amnesty International.

Further, private technology firms have published their own codes, though these are generally less well regarded, given the conflicts of interest to which such firms are subject.


Classical Principles of Practical Ethics

The classical principles of practical ethics can be broken into four areas:

  • Beneficence
  • Nonmaleficence (do no harm)
  • Autonomy
  • Justice (fairness)

Beneficence

For AI to be ethical, it must be beneficial, because if it is not, then we would be better off without it. This is the main idea behind beneficence.

In the context of AI, the principle of beneficence is sometimes described in terms of wellbeing: AI should enhance wellbeing.

Socio-economic opportunities and prosperity are also sometimes mentioned.

One question that arises is: Who exactly should AI benefit?

The most inclusive answer is human beings, society as a whole, and other sentient creatures. It is not enough, for instance, for AI to benefit only the specific company that deploys it.

Importantly, it is not sufficient to have the desire or intention for the AI to be beneficial – the AI must actually be beneficial.

Nonmaleficence (do no harm)

The principle of nonmaleficence is the other side of the coin to beneficence. It requires that we do not create harm or injury to others, either through acts or omissions (lack of action).

It is negligent to impose a careless or unreasonable risk of harm upon another. Implicit in this principle is the idea of competence. A doctor should perform surgery on a patient only if they are competent enough to give the operation a good chance of success; lack of competence is no excuse. In the same way, no bank or other financial institution should expose its customers to an AI system without the competence needed to ensure that the system will not cause harm.

Autonomy

Adult human beings are capable of deciding what their values are, what is meaningful to them, what kind of life they want to lead, and of acting in accordance with those values. When you make an autonomous decision, you fully own it. It is the type of decision that expresses your deepest convictions. An autonomous choice is one that you can endorse upon reflection.

To respect someone’s autonomy means that you do not coerce them into doing something they don’t want to do. It also requires that you don’t manipulate people or ignore their interests.

Justice (fairness)

Justice is concerned with treating people fairly.

It can refer to:

  • Outcomes, such as gender equality, or
  • Processes, such as being able to challenge decisions or having a right to redress

These are referred to, respectively, as substantive and procedural fairness.

The principle of justice is violated whenever:

  • Algorithms are biased in unjustifiable ways that favor some people over others, or
  • Algorithms are embedded in a system that does not comply with fair procedures
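To make the first kind of violation concrete, here is a minimal sketch, in Python, of one common substantive-fairness check: comparing approval rates across groups and flagging a large demographic-parity gap. The group labels, loan decisions, and tolerance used here are illustrative assumptions, not data or thresholds from any real system.

    from collections import defaultdict

    def approval_rates(decisions):
        # decisions: iterable of (group, approved) pairs
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def parity_gap(rates):
        # Largest difference in approval rate between any two groups
        values = list(rates.values())
        return max(values) - min(values)

    # Hypothetical loan decisions: (applicant group, approved?)
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    print(rates)                      # {'A': 0.666..., 'B': 0.333...}
    if parity_gap(rates) > 0.2:       # illustrative tolerance, not a legal standard
        print("Warning: large disparity between groups; review for unjustified bias.")

A gap on its own does not prove injustice, but a check like this is a starting point for asking whether a disparity is justifiable.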

Ethical Principles Specific to AI Ethics

There are several ethical principles that are specific to the field of AI ethics.

Accountability and Responsibility

As algorithms become more common and are deployed in more areas by more firms and other institutions, there is a risk that accountability is lost whenever algorithms are involved. The principle of accountability requires that a human being or institution can always be identified as answerable for an algorithm's decisions, and that affected people are able to trace and challenge those decisions.
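One concrete mechanism, shown here purely as an illustration rather than a prescribed method, is an audit trail of automated decisions. The sketch below (in Python; the field names, file format, model version, and owner address are hypothetical) records each decision together with the model version and an accountable owner so that it can later be traced, explained, and challenged.

    import json
    from datetime import datetime, timezone

    def log_decision(log_file, model_version, owner, inputs, decision):
        # Append one decision record so it can be audited and challenged later.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,   # which model produced the decision
            "accountable_owner": owner,       # the person or team answerable for it
            "inputs": inputs,                 # the data the decision was based on
            "decision": decision,
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical usage for a credit decision
    log_decision("decisions.jsonl", "credit-model-1.3", "risk-team@example.com",
                 {"income": 42000, "loan_amount": 10000}, "declined")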

Respect for Privacy

Human beings need privacy to be able to step back from the demands of being with other people. We also need it to explore new ideas freely and make up our own minds, which is why privacy is crucial for autonomy.

Transparency and Explicability

Transparency is the most prevalent principle in AI ethics codes.

Two features that set AI apart are its complexity and opacity. Most ML algorithms are opaque ("black boxes"): it is hard to understand what is going on inside them.

Explicability – sometimes called explainability or interpretability – can be interpreted in many ways, but the main idea is that for an algorithm to be transparent enough we need to have some rough understanding of how it works, even if we don’t fully understand the code.
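One widely used way of getting such a rough understanding is permutation importance: treat the model as a black box and measure how much its accuracy drops when each input feature is shuffled. The following is a minimal, self-contained sketch in Python; the toy "credit model", its features, and the data are assumptions for illustration, not a reference to any specific system.

    import random

    def accuracy(model, X, y):
        # Fraction of examples the model classifies correctly
        return sum(model(x) == t for x, t in zip(X, y)) / len(y)

    def permutation_importance(model, X, y, n_repeats=10, seed=0):
        # For each feature, shuffle its column and record the drop in accuracy;
        # a large average drop suggests the model relies on that feature.
        rng = random.Random(seed)
        base = accuracy(model, X, y)
        importances = []
        for j in range(len(X[0])):
            drops = []
            for _ in range(n_repeats):
                column = [row[j] for row in X]
                rng.shuffle(column)
                X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
                drops.append(base - accuracy(model, X_perm, y))
            importances.append(sum(drops) / n_repeats)
        return importances

    # Toy black-box "credit model": approves if income (feature 0) exceeds 50;
    # feature 1 is irrelevant noise, so its importance should be close to zero.
    model = lambda x: x[0] > 50
    X = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(200)]
    y = [x[0] > 50 for x in X]
    print(permutation_importance(model, X, y))   # roughly [0.5, 0.0]

An explanation of this kind does not reveal the model's code, but it gives affected people and regulators a usable account of which factors drive its decisions.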
