“Trustworthy” AI – Can regulation enhance trust in the AI that businesses and people are using?

As part of its “Coordinated Plan on Artificial Intelligence,” the EU has proposed a regulation that sets out harmonized rules on artificial intelligence. This addresses “the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.” Many other countries are also looking at introducing legal frameworks around the use of AI. Why are such regulations emerging and will they improve or hinder the uptake of AI?


The EU Plan looks to “act and align to seize opportunities of AI technologies and to facilitate the European approach to AI, that is human-centric, trustworthy, secure, sustainable and inclusive AI, in full respect of our core European values.” In doing so, it sets out four key sets of proposals:

  1. Set enabling conditions for AI development and uptake in the EU
  2. Make the EU the place where excellence thrives from the lab to the market
  3. Ensure that AI works for people and is a force for good in society
  4. Build strategic leadership in high-impact sectors

In pursuit of the third objective, the issue of trust has emerged as critical. Trust forms a central part of the plan, which sets out the overall intention, and the measures implemented to date, to develop a policy framework that ensures trust in AI systems. Such a framework must ensure both the protection of EU values and fundamental rights, such as non-discrimination, privacy, and data protection, and the sustainable and efficient use of resources.



OECD AI Principles call for human-centric approach

But while the EU is looking to promote its “vision on sustainable and trustworthy AI” and many other similar national and supranational initiatives are already under way, the extent to which government and regulation can ensure trust in AI is open to debate. The OECD AI Principles set out “a common aspiration” for their adhering countries and focus on “how governments and other actors can shape a human-centric approach to trustworthy AI.”

Following on from the Principles, the OECD Recommendation on Artificial Intelligence identifies “five complementary values-based principles for the responsible stewardship of trustworthy AI” and calls on AI actors to promote and implement them:

  1. Inclusive growth, sustainable development, and well-being guides the development and use of AI toward prosperity and beneficial outcomes for people and the planet as a priority.
  2. Human-centered values and fairness holds that AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and should include appropriate safeguards to ensure a fair and just society.
  3. Transparency and explainability promotes transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
  4. Robustness, security, and safety means that AI systems must function in a robust, secure, and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
  5. Accountability means that AI actors should be held accountable for the proper functioning of AI systems and for respecting the above principles, based on their roles, the context, and consistent with the state of the art.


Government intervention in AI a prerequisite for trust

On the face of it, the OECD principles seem admirable and desirable, and the capacity for bad actors to harness AI for mischievous or nefarious ends is gradually dawning on the public consciousness. Meanwhile, the reaction to the release of ChatGPT has demonstrated the capacity of AI to revolutionize day-to-day activities that until recently were considered beyond the capabilities of machines.

Certain governing regimes may take issue with aspects of one or more of the OECD principles, but the actual and potential power of AI is such that massive government intervention in AI development can be expected, as evidenced by the OECD’s recommendations for policymakers. Such intervention would appear to be a prerequisite for securing public trust in AI systems.

Intuition Know-How has a number of tutorials relevant to AI and associated issues:

  • Information Technology (IT) in Business
  • FinTech – An Introduction
  • AI Ethics – An Introduction
  • AI Ethics – Key Principles
  • AI Ethics – Key Issues
  • AI Ethics – Bias & Discrimination
  • AI Ethics – Data Privacy & Security