The Problem with Ethical AI

Eric Sandosham, Ph.D.
7 min read · Apr 21, 2024

More governance nonsense!

Background

On 13th March 2024, the EU passed the EU AI Act, effectively hamstringing itself further in the global AI race. The act claims to ‘unify’ the approach to the development and implementation of AI within the EU, but it takes a limiting rather than an additive approach to solutioning. It is NOT an act of innovation but one of governance. And governance simply will not propel you forward competitively.

The EU AI Act is in essence an expression of the EU’s interpretation of Ethical AI. I have never been a fan of the construct of ethical AI. I think it’s misinformed at best, and at worst a new kind of bullshit to monetise. I’m good with Responsible AI, which is really just an extension of responsible product development, the focus of which is ensuring that products perform as designed without causing undesired negative outcomes. This would cover methodologies such as red-teaming and penetration testing to stress-test the product.

And so I’m dedicating my 35th weekly article to calling out bullshit on ethical AI.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

EU AI Act

Let me first summarise the gist of the EU AI Act, and then we can get into the arguments for why it isn’t helpful and why I take issue more broadly with the construct of ethical AI.

Firstly, the EU defines AI as “… a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Based on this definition of AI, the EU AI Act can be summarised into the following key points:

  1. It’s essentially a risk classification system for AI use cases whose outcomes impact humans. These use cases are classified as prohibited, high-risk or minimal-risk (a purely illustrative sketch of this tiering follows this list).
  2. General-purpose AI models such as OpenAI’s ChatGPT and Google’s Gemini come under a separate classification that assesses the solution from a systemic-risk perspective.
  3. Prohibited AI solutions are those perceived to pose an unacceptable risk to people’s safety, security and fundamental rights. This includes AI solutions for social scoring, emotion recognition in the workplace, inference of ‘discriminatory’ information such as ethnicity, religious beliefs, sexual orientation and political persuasion, and predictive policing à la Minority Report (predicting the likelihood of an offence).
  4. High-risk AI solutions are those used in physical and health safety contexts, such as vehicular AI, medical-device AI and critical-infrastructure AI. The tier also includes AI solutions in recruitment, biometric surveillance, financial access (e.g. creditworthiness) and healthcare access (e.g. insurance worthiness).
  5. Minimal-risk AI solutions include chatbots, AI-generated video/audio/images, spam filtering, product recommendation systems, and personal and administrative task automation.
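
To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how use cases might be mapped to these categories. The tier names and the use-case mapping below are assumptions distilled from the summary above, not anything defined in the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the summary of the EU AI Act above."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal-risk"
    GENERAL_PURPOSE = "general-purpose (systemic risk assessed separately)"


# Hypothetical mapping of example use cases to tiers, based on the summary above.
USE_CASE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "workplace emotion recognition": RiskTier.PROHIBITED,
    "predictive policing": RiskTier.PROHIBITED,
    "recruitment screening": RiskTier.HIGH_RISK,
    "credit scoring": RiskTier.HIGH_RISK,
    "medical device ai": RiskTier.HIGH_RISK,
    "chatbot": RiskTier.MINIMAL_RISK,
    "spam filtering": RiskTier.MINIMAL_RISK,
    "product recommendations": RiskTier.MINIMAL_RISK,
    "large foundation model": RiskTier.GENERAL_PURPOSE,
}


def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a named use case (defaults to minimal-risk)."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL_RISK)


if __name__ == "__main__":
    for case in ("social scoring", "recruitment screening", "chatbot"):
        print(f"{case}: {classify(case).value}")
```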

Fear of AI

The EU AI Act seems to be partly motivated by some kind of fear of AI being ‘weaponised’, or perhaps even going ‘rogue’. Is the EU AI Act in fact attempting to slow things down? Is it trying to prevent job disruption and/or job displacement? Is it really about ethics, or about protectionism?

It’s important to note that AI isn’t doing anything that humans aren’t doing already. Humans already do social scoring based on observation; they already do predictive policing by inserting themselves into communities and doing undercover work; they already do all kinds of background checks and arguably intrusive profiling during recruitment interviews. All the AI solution does is make the implementation more consistent, more reliable with fewer errors, and much, much faster. It is also important to note that there are already copious regulations on vehicular safety, healthcare safety and infrastructure safety. Why the need to call out AI specifically?

AI solutions don’t make decisions; they automate and enforce them. The target outcomes and decision parameters are still entirely designed by humans. It is a completely irrational fear that AI could soon become uncontrollable and dangerous; there is absolutely no evidence to support such a claim! The road to AGI (artificial general intelligence) is still long, and even when that day comes, so what? We conflate AGI with consciousness and sentience, which have no correlation whatsoever. And even if intelligence ultimately leads to consciousness, would it resemble the kind of consciousness we (sort of) understand? Or would its consciousness be something entirely alien? Do you know how a bat experiences consciousness? (A nod to Thomas Nagel’s classic essay “What Is It Like to Be a Bat?”.)

Why do we set higher quality thresholds for AI than for humans when there is already so much evidence that AI-based computational accuracy exceeds human-based accuracy? We seem to assume that AI’s ability is inferior and must therefore be held to a higher standard. The compliance requirements of the EU AI Act smack of double standards when we don’t hold human decisioning to the same level of assurance.

Ethical AI is a Red-Herring

In AI research, development and implementation, people often use the terms ethical AI and responsible AI interchangeably. But they are not the same. My training in data analytics makes me sensitive to semantics and interpretation; it matters.

What is the difference between ethics, morality and responsibility?

  1. Ethics is the systematised, rule-based implementation of what is right vs wrong; it tends to have legal implications. Morality is a culturally and religiously informed sense of what is right or wrong, and may not be universally aligned. As an example, euthanasia is considered morally wrong by some but can be ethically right.
  2. Responsibility, on the other hand, is about accountability and obligation. For example, I can choose not to get involved and walk away from a crime in progress out of responsibility to myself and my family, yet that choice can be legislated as unethical in some societies and even deemed immoral.

It should be obvious that ethics and morality are continuously evolving social constructs. Even human rights are an evolving social construct; we might soon see the “right to digital access” enshrined. And those constructed ethics and rights need to co-evolve with technology and AI.

For a technology like AI, I would argue that making development easier and broadening adoption will naturally improve governance; market forces of demand and supply, along with existing anti-monopoly laws, already push it towards transparency. For-profit organisations are inherently bottom-line-driven, and competition will push them to increase AI’s positive rather than negative utility. The real danger perhaps lies in the agendas of non-profit and government organisations, and we would be better served directing our intellectual resources towards articulating what responsible use of AI should look like for these agencies.

Towards Responsible AI

Responsibility is all about accountability and obligation. In the case of AI, we should be defining safeguards against abuse rather than prohibiting or restricting use cases out of fear of abuse. Responsible AI is a contextualisation of responsible product development, and legal frameworks for that already exist. Responsibility entails assurance that the product works as intended and that the risk of misuse is minimised through built-in safety features. I have no issue with AI being used for recruitment so long as there are safeguards against abuse. I understand that AI will make some errors in predicting the right person for the role, but those errors will be fewer than with an entirely human process, and this is only going to be better for society. An AI solution that could not reduce these recruitment errors would not even have been brought to market, and I don’t need an extra ‘classification’ to protect me from it.

The role of AI is to discriminate. To discriminate is to make a distinction between two or more objects or groups. AI seeks to make this distinction as error-free as possible based on the information signals carried in the data attributes. The concern is erroneous discrimination (one manifestation of which is bias), i.e. a distinction that is insufficiently supported by the information signals. Testing for bias is part of product quality assurance: verifying that the product performs as intended.
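
As an illustration of bias testing as quality assurance, here is a minimal sketch, assuming a hypothetical binary recruitment classifier and a hypothetical protected group attribute, that compares false-positive and false-negative rates across groups; a large gap would flag potentially erroneous discrimination for investigation. The function name and the synthetic data are illustrative assumptions, not from any specific framework or the Act.

```python
import numpy as np


def error_rate_gap(y_true, y_pred, group):
    """Compare false-positive and false-negative rates across two groups.

    y_true, y_pred: arrays of 0/1 outcomes (e.g. hire / no-hire decisions).
    group: array of 0/1 group membership (hypothetical protected attribute).
    Returns the absolute FPR gap and FNR gap between the two groups.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        fp = np.sum((y_pred == 1) & (y_true == 0) & mask)
        fn = np.sum((y_pred == 0) & (y_true == 1) & mask)
        negatives = np.sum((y_true == 0) & mask)
        positives = np.sum((y_true == 1) & mask)
        rates[g] = (fp / max(negatives, 1), fn / max(positives, 1))
    fpr_gap = abs(rates[0][0] - rates[1][0])
    fnr_gap = abs(rates[0][1] - rates[1][1])
    return fpr_gap, fnr_gap


if __name__ == "__main__":
    # Made-up data standing in for real labels and a model's decisions.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)
    group = rng.integers(0, 2, 1000)
    y_pred = rng.integers(0, 2, 1000)
    fpr_gap, fnr_gap = error_rate_gap(y_true, y_pred, group)
    print(f"FPR gap: {fpr_gap:.3f}, FNR gap: {fnr_gap:.3f}")
    # In a real QA pipeline, gaps above a chosen tolerance would trigger investigation.
```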

Consider the hypothetical case of Shadow AI. This is already on the cards: a hyper-personalised, general-purpose AI that becomes our constant companion and assistant. In the near future, I can see a collective of shadow AIs working and collaborating on our behalf to significantly improve productivity. With population growth decelerating, the need for shadow AI is real. Now, our shadow AI will obviously need to recognise our emotions at work and at play as part of ‘responsible’ design. How will this rub up against the EU AI Act?

Conclusion

I don’t see the EU AI Act becoming a global standard as the EU hopes. I firmly believe the act will slow AI research, development and implementation in the EU, choking AI adoption in the region and hurting it in the long run. We have a collective obligation to push the envelope with AI; it’s in our nature to create. Regardless of how we might feel, our species is marked for extinction: asteroid collision, climate change, an exploding sun, and so on. As carbon-based biological creatures, we are not built for space travel and cannot escape our impending extinction.

But perhaps AI can.


Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.