Mon | Dec 11, 2023 | 6:04 AM PST

Have you been thinking about digital trust? How do you trust an algorithm that's making thousands of decisions a second when you don't even know how it works? And how do you trust a company that silently tracks your movements every day, collects data on you, and never tells you what it does with that data? Our global digital economy is founded on trust, so we need to establish a meaningful definition of "digital trust." Let's start with a quick story.

In July of 2023, the city of Hayward, California, declared a state of emergency after a cyberattack degraded its emergency services dispatching capability. Here in the United States, we dial 9-1-1 when we need police or fire assistance, or when we have a medical emergency. In Hayward, the cyberattack made calling that number and getting help extremely difficult. The city's dispatch center had to find ways to answer the calls coming in from residents while also helping police officers, firefighters, and medical teams respond to those calls, all without any of their normal systems.

The cyberattack was the cause of this issue, of course, but the real problem at hand was that residents had no warning that their emergency services could fail this way, nor any estimate for when the systems would be fully restored. We expect that emergency services will use their technologies to reliably provide us with services, that they will protect the private information we give them, and that the operators of their systems will act responsibly; on all three counts, Hayward's residents were failed.

To get into the meat and potatoes of things, the World Economic Forum defines digital trust as the "expectation that digital technologies and services—and the organizations providing them—will protect all stakeholders' interests and uphold societal expectations and values." Additionally, digital trust involves several interconnected elements, including:

•  Security of systems and data
•  Privacy of data
•  Transparency of operations
•  Accountability when things go wrong
•  Reliability

But why is digital trust suddenly important? What are the origins of the need for a trust framework?

In the 1980s, the internet as we know it today was called ARPANET and used mostly by researchers and the military. According to Cliff Stoll, author of the book The Cuckoo's Egg, the community was small, and the level of trust was very high. There weren't enough users of ARPANET to warrant any real scrutiny of everyone's activities. System administrators didn't bother locking down their systems, because the possibility of bad actors using them didn't really cross their minds. 

The Cuckoo's Egg tells the story of how Stoll worked with law enforcement agencies across multiple jurisdictions in the United States and Europe to catch intruders who had broken into his computer network back in the 1980s. Unlike the jaded attitudes of today, Stoll was shocked that someone would enter his network without permission, and the violation of trust upset him.

Sharing rare computing resources across an entire community spread out over thousands of miles was a new and wonderful thing. Stoll put his finger on something I think is very important here: it's difficult for trust to scale. As our networks get bigger, as our digital technologies spread, and as the services we rely on are converted into digital services, trust cannot scale without a lot of additional effort to make sure that it happens. Trusting a handful of users on ARPANET is much different from trusting millions of strangers on the world wide web.

What Stoll was calling us to do is take the threats of scams, misinformation campaigns, and cybercrime seriously. Once digital trust erodes too much, how will we collaborate with each other online? Forget downloading that PDF your colleague sent you; how do you know they're really the one who sent it? And don't even think about using an online payment portal.

We are living in a digital frontier, a lawless community with little access to fully functional digital equivalents of the police, firefighters, and other protections we enjoy in the real world. With some experts predicting that cybercrime will grow into a $10 trillion industry by 2025, how long can digital trust be taken for granted before we lose faith in our digital services?

This leads us to another important question: how do we restore digital trust? The World Economic Forum will help us out again here, as they have studied digital trust quite thoroughly. According to them, digital trust is built on two main components: mechanical trust and relational trust. Mechanical trust, which I refer to as tools, comes from mechanisms that deliver predefined outputs reliably and predictably. An example would be the trust you have in a well-maintained car's brakes to stop the car, even if you are speeding down the highway. If a system is secure and performs predictably, individuals will be more willing to use it. But to be truly trustworthy, the mechanism must deliver the same results, every time.
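To make the idea of mechanical trust concrete in a digital setting, consider checksum verification, one of the oldest trust tools we have. The sketch below is a minimal illustration in Python; the file name and expected digest are hypothetical placeholders, not from any real publisher. It trusts a downloaded file only if the file hashes to exactly the value the publisher advertised: the same predefined output, every time.

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Mechanical trust in miniature: same input, same output, every time."""
    # A match means the file is byte-for-byte what the publisher promised;
    # a mismatch means we refuse to trust it. No judgment calls involved.
    return hmac.compare_digest(sha256_of(path), expected_hex)

# Hypothetical usage; the expected digest would come from the publisher:
# if not verify_download("installer.bin", "9f86d081884c7d65..."):
#     raise RuntimeError("Checksum mismatch; do not run this file.")
```

The appeal is that nothing about the mechanism depends on who runs it or when. That predictability is exactly what mechanical trust asks of a system.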

An example of how mechanical trust works, or fails to work, in a digital context is community-scale facial recognition. Facial recognition technology could be useful in many ways, but outside of unlocking your smartphone, we lack mechanical trust in it. In 2019, the City of San Francisco banned its agencies from using facial recognition technology at a community scale; in other words, deployed in a way that lets people on public streets and sidewalks be surveilled and identified simply by the unique features of their faces. There are many examples of why people don't trust facial recognition at a community scale, but I will share just one.

In 2022, a man named Randall Reed from Atlanta, Georgia, spent nearly a week in jail because he was falsely accused of stealing women's purses in a completely different state, one he had never even visited. The digital mechanism failed: the facial recognition technology used to identify him matched him to a crime he did not commit, with awful consequences for his life. With this case and many others, facial recognition technology has not demonstrated that it can deliver predefined outputs with reliability and predictability.

Relational trust, which I call rules, is equally important. If people don't believe we're all playing by the same rules, then even a mechanically trustworthy technology will lose our trust. To revisit the earlier example of the car brakes, we need shared agreements on the road to use our brakes effectively. On a public road, a red light means we all stop. If people don't agree on that, it doesn't matter whether our brakes are mechanically trustworthy; we'll press the brake pedal and still get sideswiped by someone who doesn't believe in stopping at red lights.

We need similar shared agreements about our digital technologies. It needs to be clear when they'll be used, where they'll be used, why we're using them, how they're useful, and that they won't be abused. To trust a technology like facial recognition, we need rules and agreements about how it can be used. People find facial recognition creepy because, at this point, we don't trust governments not to abuse it.

Contrary to popular belief, digital trust is not solely a technological problem, nor is it solely a personal one. We should treat establishing and maintaining digital trust the way we would manage a public health problem: study it carefully, then borrow the mechanisms we've used to mitigate public health issues and apply them to the realm of digital trust.

I consider digital trust, just like cyber risk management, to be a team sport. It requires significant effort from businesses as well as governments. Our response to the threat to digital trust needs to operate at local, national, and global scales. On top of that, the response needs to address context, like cultural differences. A use of facial recognition that is reasonable in my town in the United States may not be reasonable in the town you live in.

So what have we accomplished before, in a public health fashion, on a culture-wide basis? One example is construction site safety. In the past, safety measures on construction sites were considered an unacceptable hindrance. In the United States in the year 1900, about 300 out of every 100,000 workers were killed on the job each year; over time, that rate has decreased to just nine in 100,000. We passed the Occupational Safety and Health Act, and with the creation of OSHA in 1971 we finally had rules to put our safety tools to good use. It is no longer the fault of the individual worker for "not being careful enough" or for not knowing to bring their own safety equipment to work.

With regard to digital trust, I think we're going to see individuals and organizations demanding stronger security and privacy measures; that trend is already apparent. Trust will also have to be built in artificial intelligence and digital automation, recent technologies that we are still struggling to come to terms with. The recent strikes by Hollywood workers concerned about AI replacing them show how people can act to preserve relational and mechanical trust in a technology.

Finally, we will need more laws and regulations, such as the GDPR in the European Union and the CCPA in California, and perhaps even a national set of privacy laws for the United States and for countries where such laws don't exist yet; better cybersecurity standards; digital ethics frameworks; and a litany of privacy-enhancing technologies like data portability and consent management, as sketched below.
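To give one concrete flavor of what consent management can mean in practice, here is a minimal sketch in Python. Everything in it (the record shape, the purpose names, the user) is a hypothetical illustration, not a reference to any particular law or product; the point is simply that data processing is gated on explicit, revocable consent.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """The purposes a user has explicitly opted into (hypothetical model)."""
    user_id: str
    granted_purposes: set[str] = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.granted_purposes.add(purpose)

    def revoke(self, purpose: str) -> None:
        self.granted_purposes.discard(purpose)

def process_data(record: ConsentRecord, purpose: str) -> str:
    # Relational trust as code: no recorded consent, no processing.
    if purpose not in record.granted_purposes:
        raise PermissionError(
            f"user {record.user_id} has not consented to '{purpose}'"
        )
    return f"processing {record.user_id}'s data for {purpose}"

# Usage: consent is granted, exercised, and revoked on the user's terms.
alice = ConsentRecord(user_id="alice")
alice.grant("analytics")
print(process_data(alice, "analytics"))  # allowed
alice.revoke("analytics")                # now process_data would raise
```

Real consent-management systems add audit logs, expiration, and far more granular purposes, but the rule is the same: no recorded consent, no processing.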

That is a long list of things we need in order to preserve digital trust. We've got to work together to figure out how to solve this problem, because there are very few experts who can tell us exactly what we need to do. This is the kind of problem that requires adaptive leadership; that is, a way of bringing people together to work on the problem so that we can come up with shared solutions, rules, and tools that we can all live with. In the United States, I can tell you this will be difficult, because the private sector has a long history of distrusting government. Yet the only way I think we're going to succeed is if we work together to figure out ways to overcome that.
