In Tech We Trust?

Exploring the meaning and value of trust in the context of AI.

TRUST | COVID | ARTIFICIAL INTELLIGENCE | REGULATION

8/1/2020 · 7 min read

The adoption of emerging technologies is hampered by a lack of trust—not in the tech itself, but in institutions.

It has often been said that technology, particularly emerging technologies such as artificial intelligence (AI), will be inhibited by a lack of public trust. At the 2020 CogX Festival, Roger Taylor, the chair of the UK’s Centre for Data Ethics and Innovation, stated that low levels of trust are a barrier to the effective use of AI tools. Failure to address this, he said, would make it harder to operate an effective, free and open society.

AI has the potential to benefit society. A 2019 report by the European Commission’s High-Level Expert Group on AI (AI HLEG) argued that AI can aid human flourishing and help achieve the UN’s Sustainable Development Goals. The Centre for Data Innovation (2019) has stated that AI can “boost competitiveness, increase productivity, protect national security, and help solve societal challenges.” However, the AI HLEG cautions that trust is a significant challenge to the adoption of AI.

Trust is firmly on the agenda, but the question remains: trust in whom? And if there is a trust crisis, what is to be done about it?

Before answering these questions, I will explore the conditions in which people invest their trust.

What are the conditions for trust?

The Edelman Trust Barometer is an online survey that measures trust in institutions across 28 markets. In 2020, it found that: “People today grant their trust based on two distinct attributes: competence (delivering on promises) and ethical behaviour (doing the right thing and working to improve society).”

This can serve as a framework for understanding when people grant trust, at least to institutions.

An institution is trusted when:

  1. it is competent, and

  2. it behaves ethically.

With this in mind, I will now turn to my first question: trust in whom?

It is the system the public do not trust

The problem is not a lack of trust in tech companies. In fact, for the past eight years of the Edelman Trust Barometer, tech companies have come out on top as the industry most trusted “to do what is right” across 23 markets.

Furthermore, most Americans surveyed by The Verge in 2019 would trust the tech companies responsible for building and deploying AI with their data (with the notable exception of Facebook—let’s put that down to Cambridge Analytica). 69–75% of respondents trust Google, Apple, Amazon and Microsoft with their information.

If trust in tech companies is high (or higher than we might expect), we should look elsewhere for the trust deficit. And when we do, it appears to lie with governments and other institutions.

In the 2020 Edelman report, across 26 markets, the UK is second only to Russia for the lowest trust in NGOs, businesses, government and the media “to do what is right.” Furthermore, across all markets surveyed, trust is in low supply.

“This year’s Trust Barometer reveals that none of the four institutions is seen as both competent and ethical. Business ranks highest in competence, holding a massive 54-point edge over government as an institution that is good at what it does (64 percent vs. 10 percent). NGOs lead on ethical behaviour over government (a 31-point gap) and business (a 25-point gap). Government and media are perceived as both incompetent and unethical.”

Why is trust in governments so low? More specifically, why are they not trusted with regards to emerging technologies?

The public lacks optimism in government competency

The public, at least in the UK and the USA, does not view its government’s ability to understand and regulate technology favourably. One reason for this is that there are fundamental concerns about whether policymakers even understand technology. The 2020 Edelman Barometer found that 72% of UK respondents believe the government does not understand emerging technologies well enough to regulate them effectively.

Cast your mind back to 2018 when Facebook was due to get a congressional lambasting. Instead, Zuckerberg faced a barrage of confused and embarrassing questions. (My personal favourites: “Is Twitter the same as what you do?” and “How do you sustain a business model in which users don’t pay for your service?”) This was an episode on the world stage that left both parties, Facebook and the US government, worse off.

The Cambridge Analytica scandal meant that for many people, Facebook failed condition #2 of the trust framework above and as a result is now the least trusted tech company. In its response, the US government failed condition #1 by revealing its own naivety and incompetence regarding the subject matter.

If governments are not trusted to regulate technology and to create the frameworks in which that industry operates, perhaps that’s where the problem lies.

Lack of trust in the system is reflected in a lack of trust in technology

Consider the Covid-19 tracing app that was due to be launched by the UK government (under the auspices of NHSX) before being unceremoniously axed in favour of Google and Apple’s alternative. Aside from the technical challenges that beset the app’s development, there was an even bigger issue related to trust and uptake.

A team at Oxford University showed that for the app to work effectively, it would have required about 60% of the UK’s population to download and use it. However, according to a study by cybersecurity firm Anomali, the public already had doubts about the competency of the government to develop such technology.

“[N]early half (48%) of the UK public surveyed about the NHSX COVID-19 tracing app do not trust the UK government to keep their information safe from hackers.”

Perhaps even more concerning, a 2020 survey by Emmeline Taylor and others revealed that 60% of people believed their data might be used for purposes other than tracing COVID-19. In other words, the ethical behaviour of the government was in question.

We can easily imagine some of the fears that sections of the public may have about the centralised collection of their data and its subsequent sharing with other departments. For example, immigrants or refugees might fear that their location history will be accessed by the Home Office and prejudice their naturalisation or asylum case. Benefit claimants or the self-employed may fear how their data might be used against them, and whether they will be caught out, rightly or wrongly, by the Department for Work and Pensions.

Concerns about the state collecting huge swathes of personal information, particularly sensitive location history, combined with a lack of optimism in the state’s motives, create a recipe for distrust.

This shows that before the app was even developed, the government was failing on both trust conditions: UK citizens did not believe that the government would be competent or ethical in its efforts. The efficacy of such an app was always going to be questionable if such a significant proportion of people felt that its use would carry such high risks. As the Ada Lovelace Institute argued in their 2020 “Exit through the App Store?” report: “The effectiveness of a digital contact tracing app will be contingent on widespread public trust and confidence, which must translate into broad adoption of the app.”

What this example shows is that attitudes towards the state (and its motives or competency) are reflected in the adoption rate of technology. When people don’t trust the system, they won’t trust the technology either. As Emmeline Taylor et al. put it:

“When the public trusts authorities, their concerns about privacy are mitigated. They can feel reassured that new technologies, laws and powers will be used in the correct way and not be abused.”

Here we have a crisis of trust, and I will now answer my second question: what is to be done about it?

Building trust in the NHSX app

Since the UK government did not satisfy condition #1, and because time was of the essence (thus ruling out any solutions that depended upon building trust over time in its technological competency), it should have leveraged existing tech solutions from suppliers that are generally considered to be competent.

Indeed, across Europe, this is exactly what happened, as nations turned to the tech giants, who touted a decentralised, privacy-first approach to storing the app’s data. This has helped tech companies satisfy condition #2: people can see that they are going to great lengths to behave ethically and allay fears of surveillance, in stark contrast to the initial UK government plan, which favoured centralised data collection.
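
To make the contrast concrete, here is a minimal, hypothetical sketch of a decentralised exposure-notification flow, loosely inspired by the Google and Apple approach. The names and details (a single daily key, `Phone`, `broadcast_id`) are simplifications invented for illustration; the real protocol rotates keys frequently, measures Bluetooth signal strength and handles time windows. The point is simply that matching happens on the user’s device, so no central server ever learns who met whom.

```python
import os
import hashlib

def new_daily_key() -> bytes:
    """Each phone generates its own random key. It never leaves the device
    unless the owner tests positive and chooses to share it."""
    return os.urandom(16)

def broadcast_id(daily_key: bytes, interval: int) -> bytes:
    """Rotating identifier derived from the key; this is all that nearby
    phones ever hear over Bluetooth."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

class Phone:
    def __init__(self):
        self.daily_key = new_daily_key()
        self.heard = set()  # identifiers observed from nearby phones, stored locally

    def hear(self, identifier: bytes):
        self.heard.add(identifier)

    def check_exposure(self, published_keys, intervals=range(144)) -> bool:
        """Matching happens *on the device*: rebuild the identifiers that
        confirmed cases would have broadcast and compare with what we heard."""
        for key in published_keys:
            for i in intervals:
                if broadcast_id(key, i) in self.heard:
                    return True
        return False

# Usage: Alice and Bob meet; Alice later tests positive and shares her key.
alice, bob = Phone(), Phone()
bob.hear(broadcast_id(alice.daily_key, interval=42))
published = [alice.daily_key]          # the only thing the server ever holds
print(bob.check_exposure(published))   # True, computed locally on Bob's phone
```

A centralised design, by contrast, would have every phone upload its contact log to a government-run server for matching, which is precisely the kind of data collection that fed the fears described above.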

There are a series of measures the UK government could have implemented to satisfy condition #2. The Ada Lovelace Institute outlined one solution:

“Government should support and foster public trust in symptom tracking efforts by strengthening the governance landscape in which they are being deployed.”

They recommended first establishing an expert advisory group that advises on and oversees the implementation of the app (which would also have helped satisfy condition #1). And second, they proposed an independent oversight mechanism that allows for the scrutiny of policy formulation.

Furthermore, evidence suggests that the public would respond well to greater regulation and oversight. Research by the Nuffield Foundation and The University of Sheffield found that people believe “better communication and the existence of safeguards, accountability and transparency would make organisations more trustworthy.”

A combination of regulation, oversight, accountability, and safeguards would have allowed the government to satisfy condition #2.

Returning to the topic of emerging technologies in general, the idea that trust can be built through a stronger governance landscape is also shared by Ashley Casovan, the Executive Director of AI Global. At CogX 2020, she argued that to build trust in society for the use of AI tools, we need to make sure that these tools have known and accepted guardrails around them. This would provide clarity on the criteria and parameters that went into designing these systems.

It is in tech companies’ interests to foster trust in the system

While people may have few problems with the technology itself, they have concerns about wider social problems. This may well, in turn, result in lower adoption of new technologies, as the failure of the contact tracing app clearly demonstrated. So it is in the interest of those developing emerging technologies to foster trust in the wider system.

If the best way to build this trust is by strengthening the governance landscape, expanding regulation, and increasing oversight and accountability, then tech companies have a strong reason to support these measures. In fact, it is in their ultimate interest to assist in this process and ensure that regulation has teeth. By abiding by the spirit and letter of the law, and accepting penalties when they fall short, they will foster public trust in the systems on which their continued success depends.

As the case of the Covid-19 tracing app has shown, tech companies should also continue to advise governments on how to follow best practice. The tech sector and government need to hold each other to higher standards so that the public believes they are both competent and ethical. This will ultimately build trust in technology, the state, institutions and the system.