Testing times for AI highlight commercial and ethical conflicts

Reports that safety testing of AI models has been cut back have raised concerns about ethics and governance, and prompted calls for stricter regulation in this uniquely challenging area.

Commercial considerations have increased the pressure for faster rollout of new versions of popular AI platforms such as OpenAI’s ChatGPT. To date, AI companies have made voluntary commitments to the UK and US governments to allow their models to be tested. According to reports, however, the time allowed for such testing before release has recently been cut sharply, which industry insiders believe greatly increases the risk of misuse.

AI integration exposes ethical and regulatory shortcomings

As AI systems become more deeply integrated into every aspect of society, the need for appropriate ethical standards, guidelines, and regulation becomes increasingly urgent. Without these guardrails, AI has the potential not only to cause harm but also to erode public trust in the technology itself.

With regulatory authorities struggling to keep pace with the rapidly evolving technology, ethics—rather than law—has stepped in to fill the gap, assuming the primary role in promoting social good in the deployment of AI.

An important distinction must be made between the two disciplines. The law is an external means of control or enforcement. Ethics, by contrast, is an internal means of control: moral principles that offer a framework to guide behavior where laws are absent or ambiguous, something especially valuable in a rapidly evolving field like AI.

But applying AI ethics is problematic. Even the definition of “intelligence” is debated in an AI context. Because AI ethics is a relatively new field, there is no common framework such as those that exist for other fields (for example, medical ethics). Nonetheless, several AI ethical “codes” have been developed and there have been attempts to identify commonalities among them.

Applying broader principles to AI

The classic principles of medical ethics have come to serve as principles of practical ethics in the broadest sense, and they in turn influence the evolving field of AI ethics. These principles include:

Beneficence:

This suggests that if AI is to be ethical, it should also be beneficial. If it is not beneficial, we are better off without it.

Nonmaleficence (Do No Harm):

This requires that we do not harm or cause injury to others, either through acts or omissions (lack of action). This implies a minimum level of competence.

Autonomy:

Upholds the right of individuals to self-governance.

Justice:

Emphasizes the fair treatment of individuals and groups.

Why AI requires additional principles

AI presents challenges not fully addressed by classic ethical frameworks. It is a more political field than medicine, touching on areas like labor, policing, and public policy. At the same time, AI’s unique features, such as its opacity, bring unique ethical challenges. This has led to the emergence of AI-specific principles, detailed below:

Respect for privacy

Machine learning algorithms tend to use vast quantities of data, much of it personal (data that relates to an identified or identifiable individual). Large language models (LLMs) like ChatGPT, for example, have been trained on enormous amounts of such data.
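To make this concrete, below is a minimal sketch, in Python, of how personal identifiers might be redacted from text before it enters a training set. The patterns and the redact function are hypothetical illustrations, not drawn from any framework or regulation discussed here, and the example also shows why such simple scrubbing falls short: names and other context-dependent identifiers slip through.

    import re

    # Hypothetical patterns for two common identifier types; a real PII
    # scrubber would need far broader coverage (names, addresses, IDs, ...).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with placeholder tokens."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("Contact Jane at jane.doe@example.com or +1 555 123 4567."))
    # -> Contact Jane at [EMAIL] or [PHONE].
    # Note that "Jane" survives: simple pattern matching cannot catch
    # every identifier, which is one reason privacy remains hard at scale.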

In the area of regulation, Article 22 of the EU’s General Data Protection Regulation (GDPR) grants individuals the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions significantly affect them. In the US, states such as California, Virginia, and Colorado have enacted privacy laws addressing AI’s data usage, emphasizing consumer rights and data protection.

The Council of Europe’s Framework Convention on Artificial Intelligence is an international treaty that aims to ensure AI systems respect privacy rights and data protection, while aligning AI development with human rights standards.

Transparency & explicability

AI is characterized by complexity and opacity. While access to algorithmic inputs and outputs is relatively straightforward, what goes on in between is not, which means the outcome of the AI process may not be easily explained.
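One common response to this opacity is post-hoc explanation, which probes the model purely through its inputs and outputs. The sketch below is an illustration only, assuming scikit-learn is available and using synthetic data: it trains a black-box classifier, then uses permutation importance to estimate how strongly each input feature drives the model’s predictions, without ever opening up the model’s internals.

    # Post-hoc explicability sketch: treat the trained model as a black box
    # and measure each feature's influence by shuffling it and observing
    # how much predictive accuracy degrades.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Inputs and outputs are observable; the forest's internals are not.
    # Permutation importance explains the input-output relationship instead.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance {score:.3f}")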

Seeking to address such issues, the EU Artificial Intelligence Act classifies AI systems based on risk levels, imposing transparency obligations on high-risk systems. These include disclosure of AI-generated content and an obligation to inform users when they interact with AI. In the US, the proposed Generative AI Copyright Disclosure Act would oblige AI companies to disclose any copyrighted works used in training datasets.

Accountability & responsibility

The use of, and reliance on, algorithms can create a “responsibility gap,” in which humans abdicate the traditional responsibilities of their role when delegating tasks to AI in the course of their duties.

To address this, the EU Artificial Intelligence Act requires providers of high-risk AI systems to implement risk management systems and maintain documentation to ensure accountability throughout the AI lifecycle. Meanwhile, the US National Telecommunications and Information Administration (NTIA) has called for public input on AI accountability policies, including audits and assessments, that hold AI actors accountable.
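In practice, accountability measures of this kind often start with an audit trail. The sketch below is a purely hypothetical illustration, with an assumed schema and field names that are not prescribed by the EU Act or the NTIA: it logs each automated decision, together with any human reviewer involved, so that the decision can be examined after the fact.

    import json
    from datetime import datetime, timezone
    from typing import Optional

    def log_decision(model_version: str, inputs: dict, output: str,
                     reviewer: Optional[str] = None,
                     path: str = "decisions.log") -> None:
        """Append an auditable, timestamped record of a model decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "human_reviewer": reviewer,  # None flags a fully automated decision
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: a credit decision reviewed by a named analyst
    log_decision("credit-model-v2", {"income": 42000, "score": 710},
                 "approve", reviewer="analyst_17")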

Fairness

In the context of AI, fairness issues arise not only where there is bias, but also in questions surrounding generative AI, copyright, and appropriate compensation for creators.

Aiming to uphold fairness and prevent bias, the EU Artificial Intelligence Act prohibits AI practices that result in social scoring or discriminatory outcomes. The US Copyright Office has determined that works created solely by AI, without human input, are not eligible for copyright protection, emphasizing the necessity of human authorship.

Almost certainly the most high-profile ethical dispute of the AI era is the lawsuit brought by the New York Times against OpenAI over the use of copyrighted material in AI training. As regulators scramble to keep up with technological innovation, the courts are likely to become key battlegrounds for the many contentious side effects of AI.

Conclusion

To sum up, the field of AI ethics presents unique challenges, as the rapid development of AI continues to outpace regulatory responses. Ongoing scrutiny remains essential – not only to uphold core ethical principles, but also to ensure appropriate testing and safety standards are in place before deployment. It’s unlikely we’ve heard the last of the debate over how ethical standards in AI should be set.

Intuition Know-How, a premier digital learning solution for finance professionals, has several tutorials relevant to the content of this article:

  • AI & GenAI – An Introduction
  • AI Ethics – An Introduction
  • AI Ethics – Key Principles
  • AI Ethics – Key Issues
  • AI Ethics – Data Privacy & Security
  • AI Ethics – Bias & Discrimination
  • AI Ethics – Generative AI (Coming Soon)
  • AI Ethics – Case Studies (Coming Soon)
  • AI Implementation (Coming Soon)
  • Responsible AI (Coming Soon)

Browse full tutorial offering