
The Age of Trust

4 min read · Sep 28, 2025

Why trust matters more in the Age of AI, and how we can measure it.

Photo by Jonathan Cooper on Unsplash

Background

I was recently involved in a piece of work looking at how organisations and their leadership are, and will be, impacted by the mega-trends unfolding across the corporate world. From that research, I can safely say that we are squarely in the Age of AI. It’s a clearly divisive technology, with one camp seeing massive job displacement on the horizon while another sees a future of empowered work. While some of these anxieties are self-manifested, they speak to a more fundamental concern: with every significant technology shift, trust is threatened and needs to be renegotiated. Trust between humans and the new technology. Trust between organisations and their employees. Trust between state and citizens. Simply put, the Age of AI ushers in the Age of Trust.

My 110th article seeks to illuminate the question: how do we engender trust in the Age of AI, specifically in the way we design, use and facilitate the adoption of AI?

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

What is Trust?

Trust is one of those concepts that is both immediately intuitive and confusingly abstract. Like love. When asked to unpack what we mean by trust, we see that it can take on a range of meanings depending on the context: honesty, or the absence of self-interest, in a relationship; reliability or safety in a product; high quality in a service; accountability or responsibility in an organisation.

So what would trust mean in the context of AI? AI adoption surfaces a range of concerns, from cognitive dissonance (e.g. “It’s so human-like!”), to scepticism (e.g. “Can it really help me?”), to confusion (e.g. “How do I know if it’s doing the right thing?”), to outright fear (e.g. “It’s going to take over the world and kill us!”). How do we engender trust across this multitude of scenarios?

We do it through information signalling. Enter “trust metrics”.

Trust Metrics

Trust metrics are simply numerically encoded ways to represent trust in various contexts. We are probably familiar with the infamous Net Promoter Score (you can tell I’m not a fan!), which seeks to encode trust between a product and service provider and its customers. User ratings on Amazon and Trip Advisor operate on similar principles. Pricing can also be a trust metric; it encodes trust between a product and service provider and its potential customers. Knowing who a person is connected to (say, via a social media platform) also encodes trust in that person’s abilities and reputation. All in all, trust metrics can be built from user ratings and inputs, interaction and transaction data, and network data.
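To make the user-ratings case concrete: a naive average over-trusts items with very few reviews. A minimal sketch of one common fix, assuming a hypothetical `bayesian_trust_score` function (not from the article), shrinks each score toward a global prior:

```python
def bayesian_trust_score(ratings, prior_mean=3.0, prior_weight=10):
    """Shrink an item's average rating toward a global prior mean.

    Items with few ratings stay close to the prior; items with many
    ratings converge to their observed average. This guards against
    a single 5-star review signalling more trust than it deserves.
    """
    n = len(ratings)
    total = sum(ratings) + prior_mean * prior_weight
    return total / (n + prior_weight)

# One glowing review barely moves the score off the prior,
# while fifty consistent reviews earn a much higher score.
one_review = bayesian_trust_score([5.0])
many_reviews = bayesian_trust_score([5.0] * 50)
```

The `prior_mean` and `prior_weight` values are illustrative assumptions; in practice they would be tuned to the platform’s overall rating distribution.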

As with all metrics, trust metrics must allow us to monitor trust over time, and should ideally allow us to take appropriate corrective actions when deviations become significant.

Trust in the Age of AI

What is the nature of trust “dissolution” in the Age of AI, and what kind of trust metrics should we be considering in this domain? I’ve identified two common enough scenarios where we could begin exploring the use of trust metrics:

  1. Employees don’t trust that their organisations have their backs, and are actively working towards displacing or replacing them with AI-driven solutions. If an organisation wants to show that it will “protect” its employees in the Age of AI, what trust metrics should they create?
  2. Customers don’t trust that AI will handle their interactions with the organisation securely and accurately. What trust metrics would give them confidence?

I will confess here that I have not done any consulting work to create trust metrics in the above scenarios. However, I’m going to take a crack at it here. Hopefully, this inspires a recognition of the opportunity, and a directional approach to creating useful trust metrics in the Age of AI. So let’s start.

Scenario #1: Employee-trusting-organisation-not-to-replace-them-with-AI

In this scenario, the objective is not to reduce adoption of, or reliance on, AI, but to drive and foster human+AI co-existence within the organisation. AI would enable and empower humans in their work activities, and humans would be supported and trained. With such an objective in mind, we could consider the following trust metrics:

  • % job roles that have direct interface with AI systems (e.g. a frontline staff using AI-supported CRM).
  • % AI workflows with a human-in-the-loop.
  • % employees trained with intermediate AI fluency skills.

The higher these percentages, the stronger the “co-existence”.
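The three percentages above are straightforward to operationalise once the underlying records are flagged. A minimal sketch, assuming hypothetical record shapes (the field names `uses_ai`, `human_in_loop` and `ai_trained` are my own, not the article’s):

```python
def coexistence_metrics(roles, workflows, employees):
    """Compute the three co-existence percentages from boolean flags.

    roles:     dicts with 'uses_ai' (role has a direct AI interface)
    workflows: dicts with 'human_in_loop' (a human reviews AI output)
    employees: dicts with 'ai_trained' (intermediate AI fluency skills)
    """
    def pct(items, key):
        # Share of items where the flag is set, as a percentage.
        if not items:
            return 0.0
        return 100.0 * sum(1 for item in items if item[key]) / len(items)

    return {
        "pct_roles_with_ai_interface": pct(roles, "uses_ai"),
        "pct_workflows_human_in_loop": pct(workflows, "human_in_loop"),
        "pct_employees_ai_trained": pct(employees, "ai_trained"),
    }

metrics = coexistence_metrics(
    roles=[{"uses_ai": True}, {"uses_ai": False}],
    workflows=[{"human_in_loop": True}],
    employees=[{"ai_trained": True}, {"ai_trained": False}],
)
```

The hard part in practice is not the arithmetic but agreeing on the flag definitions, e.g. what counts as a “direct interface with AI systems”.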

Scenario #2: Customer-trusting-AI-interaction-with-organisation

In this scenario, the objective is similarly not to reduce the use of AI in customer-facing interactions, but to help customers become comfortable with AI-supported interaction systems. We could consider the following trust metrics:

  • Average resolution time of categories of problems handled by AI.
  • % interactions requiring human intervention.
  • Crowd-sourced interaction reputation score.
  • Date when the AI system was last updated.
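The first two metrics fall out of ordinary interaction logs. A minimal sketch, assuming a hypothetical log format (the fields `category`, `minutes` and `escalated` are illustrative assumptions, not the article’s):

```python
from collections import defaultdict

def interaction_trust_metrics(interactions):
    """Summarise AI-handled customer interactions.

    Each interaction is a dict with 'category' (problem type),
    'minutes' (time to resolution), and 'escalated' (True if a
    human had to intervene).

    Returns average resolution time per category, and the
    percentage of interactions requiring human intervention.
    """
    by_category = defaultdict(list)
    escalated_count = 0
    for it in interactions:
        by_category[it["category"]].append(it["minutes"])
        escalated_count += bool(it["escalated"])

    avg_resolution = {c: sum(mins) / len(mins) for c, mins in by_category.items()}
    pct_human = 100.0 * escalated_count / len(interactions) if interactions else 0.0
    return avg_resolution, pct_human
```

Publishing these numbers to customers, alongside the last-updated date of the AI system, is the information signalling the article argues for.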

Conclusion

The trust metrics suggested in the above two scenarios are probably quite rudimentary. I have every confidence that some readers will come up with more cogent ones, and that opens up the space for discussion and thought leadership. The reality is that trust in the Age of AI is rarely recognised or discussed in leadership conversations and organisation strategy planning. But I believe it will be one of the critical ingredients that ultimately shape the success of AI adoption within an organisation and within an ecosystem. As anxieties build in reaction to uncertainty, creating effective information signalling to encode trust will be an imperative that shouldn’t be ignored. Welcome to the Age of Trust.



Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
