Designing future organizations based on ethical foundations for AI

I recently spoke at the Tech: The New Era conference, part of London Tech Week, in a conversation with Kriti Sharma on the ethics of AI.

I had previously interviewed Kriti for the OFX/BBC Storyworks Where the world is moving podcast series I hosted, in a very interesting episode on AI ethics, so I was delighted to have the chance for another fascinating conversation with her.

My core message was that we have critical decisions to make in how we use and implement AI. We must start by thinking through the ethical issues and potential implications of AI, and from that design the future organizations that will in turn shape all of society and the role of humans in creating value.

Watch the video here, or you can read the full transcript below.

TRANSCRIPT
Jeff:
I’d like to welcome Ross Dawson, who’s coming to us all the way from Sydney, Australia, and Kriti Sharma, who’s in London. The subject of this discussion is very much going to focus on artificial intelligence, and also the ethics of AI. Ross, could I start with you, given that you’re further away than Kriti is? Goodness, how does this technology work? The reason I approached you is that you wrote a really interesting article, which you published on LinkedIn. In your view, how is AI reshaping the whole concept of work? Obviously COVID-19 has had an impact, but prior to COVID we were also looking at how AI could reshape work. What’s your view?

Ross:
Well, the key thing here is that it is in our hands. We can shape that. A lot of people are essentially saying the future of work is predestined: oh, it’s great, or we’re going to get massive technological unemployment, or it’s going to be this or that. It is in our hands, and it is up to every organization to create it. There are massive risks from AI if you approach it the wrong way. You could say, “All right, all these employees, how can we replace them with AI?” and thus create a landscape where people aren’t employed and the ones who remain are just hanging onto their old roles. But this requires a vision of what organizations can be, of what the future of work can be, where there’s this interdependence, this collaboration, this teamwork, where humans plus artificial intelligence can together create something different. That requires designing organizations very differently from how they are today.

Ross:
So, how do we get from here to there? We need to be able to re-envision this future of work and of organizations where it’s very positive. And I think we’re going to see a massive divergence, where some organizations, some leaders, will see the potential, the potential to hire more people or to enable them to do more, using AI to support their unique human capabilities. And other organizations are going to go a very inhuman, technologically focused way, where they eviscerate the talent in their organizations. They’ll be very different types of organizations, but the potential, and what we need to focus on, is how we can create that positive future of organizations and work by design.

Jeff:
But there is a tendency, as we saw during the industrial revolution with the Luddites, to say, “This mechanization, which results in massive increases in productivity, is going to put people out of work.” There is a kind of left-right divide on this. Do you think, however, that the move to AI could potentially be redeeming and improving as far as the workforce is concerned?

Ross:
Well again, that’s our choice, and that is a divergent path. I think one of the broader points coming from this is that there’s going to be extraordinary value creation through artificial intelligence, and the question is: who owns that? Is it going to be just the tech giants that have the ability to develop those capabilities and then profit from them, while human workers are marginalized? Or do we find ways to make that more positive and inclusive, ways in which it supports broader prosperity? There are two sides to this. AI, this potentially supreme technology as it were, can be used or abused, and that’s where we come back to this ethics frame. We need to be starting from our intent. That’s the starting point. What do we want to create using AI? That is what will shape what we do with it.

Jeff:
Kriti, what are your thoughts? Because we talk about AI as though it were one thing, but of course it’s not. We’ve had essentially hardwired computer algorithms for years. I think the difference is that now computers are faster and have much more power, and therefore they can perform self-learning. But it’s still essentially hardwired algorithms, isn’t it? So what do you believe is the positive aspect that Ross alluded to, coming out of this whole redefinition of work around AI?

Kriti:
Well, algorithmic development can take different forms. It could be either hardwired or pre-programmed, so to speak, or learning through different mechanisms, observing the environment and improving over time. But I think, Ross, you pointed out quite well that a lot of this information is available, but it’s also about how companies really embrace it and how organizations really understand and implement it. And I’ll give you an example.

Kriti:
We were recently working with a client who was super interested in real-time data analytics to help them understand a market in the COVID world. There are countries where online commerce was previously only 8 to 9% of their share, and now it’s 90% in the world of COVID. It was all about identifying and spotting new opportunities using real-time AI. The challenge was that they had access to all this technology, and let’s be honest here, AI technology now is very commoditized. You can get algorithms off the shelf; look at GPT-3, by OpenAI. This is becoming so much more ubiquitously available.

Kriti:
In this case, they got access to all these near real-time decisions, but the whole system did not work because of one word, and that’s Ross’s point: design. The organization was not designed to act on it in real time. The people still had to take this data, these decisions, higher up the organizational hierarchy, through multiple layers of approvals, before they could act on it. Now, this is really where the value comes in. If someone can act on it because they’ve redesigned their organization, and they’re more agile, able to adapt and react, that’s where a lot of the benefits are.

Kriti:
Now, I’d also say, when you look at the design of AI, it’s not just about the technology. The tech is going to be more or less the same, some can invest more than others, but ultimately it’s about skills, and the importance of skills cannot be overstated at this point. Many of the companies I work with often say, “Well, we’ve been pouring money into this data analytics and big data and AI world and not seeing results yet.” And then there are others who have redesigned for it.

Jeff:
Who are you referring to when you talk about skills, though? Is it the shop-floor workers, the factory workers, or the C-suite?

Kriti:
I’m talking everywhere really, Jeff. Whether it’s people making decisions in contact centers about how to support colleagues, or decisions within organizations about which markets to enter, this ranges from strategic decisions all the way through to operations, and even straight to the boardroom. There are companies that are really making more data-driven decisions around investments and other business decisions.

Kriti:
I’d like to bring up another example, something that happened to me when I was building a very well-designed, or so I thought, AI system. It was a few years ago, and we were redesigning contact centers using AI for one of the big companies I worked with at the time. We realized that of all these calls coming in from businesses to get help and support, AI could handle about 70 to 80%, because they were rather mundane activities. We thought, okay, great, let’s design a system, because the promise of AI is that it takes care of the mundane tasks and humans can focus on the more intelligent things. That’s what we did. We built the technology.

Kriti:
On day one it was 50% accurate; in two weeks’ time, I think, about 60 to 70%; and in three to four weeks it was over 80% accurate. It could handle 80% of the queries, and only the remaining 20% went to the human workers. I thought the human workers would be super happy because their workload was now reduced. Turns out they weren’t, because, as they said to me, “Well, previously only 20% of my job was difficult problems, and now 100% of my job is difficult problems, because the AI does the easy part.”
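
To make that pattern concrete, here’s a minimal sketch of this kind of confidence-based triage, with made-up names, data, and thresholds rather than anything from the actual system Kriti describes: the model answers a query only when it is confident enough, and everything else lands in the human queue.

```python
# A minimal, illustrative sketch of confidence-based triage: the model
# handles a query only when its confidence clears a threshold;
# everything else is escalated to a human agent.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # in practice, tuned on validation data

@dataclass
class Triage:
    canned_answers: dict                      # intent -> templated response
    human_queue: list = field(default_factory=list)

    def route(self, query: str, intent: str, confidence: float):
        """Answer automatically if confident, otherwise escalate."""
        if confidence >= CONFIDENCE_THRESHOLD and intent in self.canned_answers:
            return self.canned_answers[intent]  # the "mundane" ~80%
        self.human_queue.append(query)          # the difficult remainder
        return None

triage = Triage({"reset_password": "Here's how to reset your password..."})
print(triage.route("I forgot my password", "reset_password", 0.93))
print(triage.route("My invoice is wrong and nobody replies", "billing", 0.41))
print(triage.human_queue)  # only the hard cases reach the human workers
```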

Kriti:
There are all of these other challenges we need to look at. How do you design skills for this world where machines do a lot of these tasks? How do you train someone coming in, when at the early stages the algorithms already know a lot more than they do? And this is where, as Ross pointed out, the answer is entirely within our control. It’s not just a tech problem. It’s an organizational design issue, it’s a policy issue.

Jeff:
Just one final question before moving on from this topic. You’re talking about organizational change and AI, but there presumably are certain types of organizations, Ross, that are utilizing it well. I think Amazon would be a case in point, where, because it’s an eCommerce-driven company, it’s very much driven by how to cross-sell, how to recommend products, how to fulfill orders very quickly and so on, and ironically it has been employing hordes of people to actually do the fulfillment work. One question is: has Amazon done it well, in your view, in terms of embracing AI? Is that a kind of model to aspire to? And are there other types of organizations that you think are doing it well, and others that are doing it really appallingly?

Ross:
Well, one example, which I think is just a really nice, neat example, is Stitch Fix, a US apparel subscription firm. What it does is have advisors, fashion advisors I suppose, who have their clients and recommend what they should be wearing, on a subscription model, and get those orders fulfilled. AI goes throughout the organization, of course, through the logistics and the fulfillment, but also fundamentally as an advisor to those fashion advisors, saying, “Okay, well, this is the profile of your person. This is what they’ve liked before. These are the things we would suggest. This is what similar people have done.” The individual human, supported by the technology, is able to build a faster, stronger relationship with their customer, so they continue to subscribe and spend more. And so they expanded from a few hundred customer advisors to well over 3,000.
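
As a toy illustration of that “similar clients liked this” logic, assuming nothing about Stitch Fix’s real system, here is a minimal user-based collaborative filtering sketch on made-up data: score items for a client by weighting what the most similar clients have liked.

```python
# A toy collaborative-filtering sketch (not Stitch Fix's actual system):
# recommend items a client hasn't liked yet, weighted by how similar
# other clients are to them.

import numpy as np

# Rows are clients, columns are items; 1 = liked (made-up data).
likes = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def recommend(client: int, top_k: int = 2) -> np.ndarray:
    """Rank unseen items by the likes of similar clients (cosine weights)."""
    norms = np.linalg.norm(likes, axis=1)
    sims = (likes @ likes[client]) / (norms * norms[client] + 1e-9)
    sims[client] = 0.0                     # exclude the client themselves
    scores = sims @ likes                  # similar clients' likes, weighted
    scores[likes[client] == 1] = -np.inf   # skip items already liked
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))  # suggested item indices for client 0
```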

Ross:
And so this is an example where it is focused on the human at the center. All of the things that can be automated are, and the technology fundamentally assists the people. Amazon was one of the very first to implement these technologies, from its recommendation algorithms back in the nineties, and there are many facets to how it’s worked. It is continuing to automate its warehouses. There’s a lot of discussion around the appropriateness of its employment practices, but it does seem to be a massive employer that will continue to employ people as it finds new ways to complement the technology with people, because that’s the way you design a good organization.

Jeff:
Yeah, so it’s kind of ironic, isn’t it? The organizations that use these AI tools in a very effective way can actually see huge employment growth because of the competitive advantage that this brings. Let’s move on now to another topic. There’s been a lot of media coverage about machine bias, particularly in terms of facial recognition software, where the algorithms have an inherent bias, for example in identifying potential troublemakers or people who are about to engage in antisocial behavior, and may well be skewed against black and ethnic minority communities. Are there other examples you can think of where these inherent biases, which come perhaps from the machine learning itself, mean that there is again a bit of a question mark as to whether these are ethical use cases for AI? Kriti?

Kriti:
I’d start by fundamentally questioning: are these technologies ready to be released into the wide world out there? There’s a lot of cheerleading for AI technologies, and as someone who’s been working in this field since I was a teenager, it’s quite exciting, but at the same time I wonder what kind of quality controls and checks we put in place to say, “Now it’s ready to be applied.” Take the facial recognition example you mentioned, Jeff. The MIT Media Lab did a study, I think a couple of years ago, auditing some of the most commonly available facial recognition algorithms, and found that the error rate for lighter-skinned men was about 1%, while for darker-skinned women it was about 35%. In fact, the algorithms failed to recognize well-known figures such as Michelle Obama.

Kriti:
Really, it’s a question of: have we tested it well enough? Is it designed in the right way, not just for a certain group of people, but also for those who might be classified as outliers or edge cases, which happens to be half the world’s population? I do think there’s a fundamental question here, and it’s not just an ethical question; it’s also a good design practice question. When we build software, we look at coverage, bias, testing. We need to do the same for AI. It’s not immune.
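
The kind of disaggregated audit Kriti is pointing to is straightforward to express in code. This sketch, on made-up records, reports an error rate per subgroup rather than a single aggregate accuracy number, which is exactly how gaps like the 1% versus 35% figure surface.

```python
# A minimal sketch of a disaggregated evaluation: instead of one overall
# accuracy number, compute the error rate for each demographic subgroup.
# The records below are made up for illustration.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    return {group: errors[group] / totals[group] for group in totals}

records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),  # misclassification
    ("darker-skinned women", "female", "female"),
]
print(error_rates_by_group(records))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
```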

Kriti:
And then secondly, there’s the impact these technologies have on people. If you’re using facial recognition tools for surveillance, or to get access to technology, or even access to your bank account, you’re now excluding a group of people who can’t access some of these basic services. And we’re seeing algorithmic bias in applications like the criminal justice system, and that’s where things get really tricky, because the impact on people is huge. It’s impacting opportunities, for example in the CV screening cases which have been well publicized recently, or access to job adverts. We know that in the US, ads for jobs that pay more than $200,000 are shown less frequently to women than to men.

Kriti:
It’s this whole funnel that we really need to think through. It’s somewhat connected to the conversation we were just having about Amazon and its warehouse practices. You can use this technology in any way you want, but we have to make sure it’s designed for people, that we’re not disadvantaging certain groups and communities while profiting in other ways. We need to think about it at a more holistic level.

Jeff:
Yeah. Ross, any thoughts?

Ross:
Absolutely, just piggybacking off what Kriti was saying. There are so many decisions that shape our society, and these include whether we get credit, whether we’re allowed to rent somewhere, whether we’re selected for a job, and of course many, many decisions through the judicial system. These are places where the power of machine learning, using a lot of data to make decisions, will inevitably be applied, and in all of those domains it already is being applied. Now, the challenge and the problem comes firstly from the data. AI feeds on data, enormous amounts of it, particularly when you look at deep learning algorithms. It requires massive amounts of data, and sometimes it can pick up correlations that humans wouldn’t, as to what may indicate that a particular decision is appropriate.

Ross:
But one of the issues is that this data is based on our existing society. Any machine learning that’s based on data reflecting our current society will perpetuate that society. There’s no opportunity to evolve, to change, to give people the opportunity to express themselves beyond the way they’ve been positioned in society already. On top of that, there are ways in which we start to see an accentuation of what may be bias in the algorithms. I think this is very rarely conscious, but it starts to flow through in every decision, and these systems start to self-perpetuate. Because there are so many decisions that shape our society and our role in it, and because machine learning can make those decisions efficiently and effectively, we are inevitably going to see more and more machine learning. But there is implicit bias simply in using existing data from a society which is already biased, so it feeds on itself.

Ross:
Now, one important point to make is that just because machines are biased doesn’t mean humans aren’t. In fact, one of the arguments is that if you can create better machine decision-making, you can actually transcend human bias. But we don’t seem to have got to the point of being able to do that yet. This is one of the challenges. We exist in a biased society. Let’s understand that. Let’s recognize that. Now we need to design the algorithms to allow us to transcend what we have lived in our society so far and go beyond the existing bias we have.

Kriti:
I’d just add to that, Ross, that you’re absolutely right. We need to do this, and I do feel very optimistic given how much awareness of this topic has been raised in the last three to four years, compared to where we were even half a decade ago. But we are facing some interesting challenges where we do need help from policymakers, and more of a sandbox environment to be able to solve these problems and test solutions out. As somebody who works in this field day in, day out, one of the challenges we face when we’re trying to design unbiased systems is: how do you measure that a system is not biased? In order to measure that, you have to calibrate or test it against, in some cases, protected characteristics.

Kriti:
For me to know that an algorithm is not racially biased, I need some samples of data tagged with race information, for example. So you get into this interesting gray zone where, even when you’re trying to solve the problem, you have to capture data which in some cases is highly protected, characteristic-related information. One of the calls to action, one of the areas where we can get help as a community, is for regulators or governments to create more sandbox environments where you can try to solve these problems, so we can come up with more quantitative, more analytical ways to measure what’s biased. That’s a task for you.
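
One simple quantitative measure of the kind Kriti is calling for is the demographic parity gap: the difference in favorable-outcome rates between groups. The sketch below, on made-up decisions, also shows why it needs exactly the protected-attribute tags she mentions; without them, the gap cannot be computed at all.

```python
# A minimal sketch of one common fairness metric, the demographic parity
# gap: the spread in favorable-outcome rates across protected groups.
# Note that it requires decisions tagged with the protected attribute.

def demographic_parity_gap(decisions):
    """decisions: iterable of (protected_group, outcome), outcome in {0, 1}."""
    favorable, totals = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + outcome
    rates = {group: favorable[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap([
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
])
print(rates)  # group_a ~0.67, group_b ~0.33
print(gap)    # ~0.33; a system that is fair by this metric scores 0
```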

Jeff:
Do you think there’s a risk here? I think COVID-19 in particular has shown just how unaware people are about the concept of risk: the relative risk of catching this disease and of it having a really detrimental effect on their health. But more fundamentally, there’s a disconnect between people’s understanding of correlation and causality. If a machine is telling someone there is causality here, or people think that’s what the machine is telling them, and they will never really understand the nature of the algorithms, because it’s even more complex than an argument between two people with different opinions, we may get ourselves into a really awful situation where people are unfairly accused or finger-pointed, or the machine almost becomes part of the social media rabble. Is there a terrible dystopian aspect to this, Ross?

Ross:
There’s certainly the potential for it, if we think about how much we devolve to machines. Some have said, for example, that in our judicial system judges are implicitly biased, and there are plenty of studies which have shown that, for example, a judge gives more lenient sentences if their sports team won the previous day, or if it’s the accused person’s birthday, or a whole array of other factors. So we could say, “All right, let’s try to create a system where the machines are running things, guiding things, transcending the kinds of bounds we have.” And actually, I think we certainly could, and quite likely should, go to sports umpires being AI, because they can see things and make better judgment calls on whether things are fouls or not, or we’re just on the verge of them being able to do that.

Ross:
But when it starts to be things which actually impact our lives, the chances of the systems we develop getting those right are, I think, minimal. This is where the governance structures we need to build come in, and one of them is human oversight, for all systems. Yes, we may implement systems which create better outcomes, but we need structures where we have human oversight. We have design by intent, as we discussed. And to your question as to whether this could lead to a dystopian future: absolutely. That’s why these kinds of governance structures, thinking through the ethics, having the intent and designing for it, are absolutely fundamental to avoiding what could be an extraordinarily dystopian world.

Jeff:
Yeah, I think you’re absolutely right, which is a good segue into AI for Good. Kriti, you’re obviously of the view that this can ultimately be a wonderful and redeeming technology, and when we started this conversation about how organizational structures and employment might change, we saw that AI could actually be a force for good. But can you think of other examples? Why did you come up with this concept of AI for Good? And my second question is: are you seeing evidence of governments realizing that there’s an opportunity here to try to get this thing right?

Kriti:
Yeah, absolutely. For background, I started AI for Good, an organization I set up three years ago. The mission is to tackle really difficult problems, not solving them with AI alone, but with AI as part of the solution, designed together with people who are subject matter experts. One of the important projects we started was a tool that helps victims of domestic violence. One in three women around the world face violence in their own homes, from their own partners, and the system, including support networks, charities, shelters, safe houses, even legal information and emotional support, is rather limited, with lots of different funding challenges, and also just because of the way the issue occurs. We built a tool called rAInbow. It’s a project we started in South Africa, the country with the highest femicide rates anywhere in the world.

Kriti:
The tool is designed to be nonjudgmental. It’s powered by AI and was created together with subject matter experts, people who’ve been working in the field of domestic violence for decades. It’s not a savior, but rather helps people take the first step towards seeking help, or understand their issue a little more easily. It solves a very narrow, specific problem using AI. What we found is that during the COVID lockdown, the number of cases of domestic violence increased manyfold. Traditional support systems, helplines, contact lines, weren’t able to operate in the same way as they did before, and people living in these uncomfortable, potentially dangerous situations with perpetrators can’t necessarily pick up the phone and speak when they’re in the same house at the same time.

Kriti:
We found that with the technology we had built, deployed, and been working on for the last few years, we were able to scale much faster, and to give people the right help and the right support in a more agile, intuitive, faster way than would have been possible if we were purely analog or a purely phone-based helpline. It’s in moments like this that you really start to see that you can’t just switch AI on in a moment, because the technology needs to develop and harden and become robust over time. If we had just started it two months ago, we would have been in a very different position; instead, we had invested in this area and built it over the last three to four years.

Kriti:
Let’s not kid ourselves and think we can start building AI tomorrow to solve all the problems. It takes time. You make mistakes, you learn to collaborate with people, and it’s different fields coming together. There’s a lot of fear about this technology, a lot of skepticism, rightly so, and you have to bring different groups and communities together, some of whom may not have even crossed the digital transformation chasm, let alone the AI world, but we must bring them on the journey.

Jeff:
Yeah, it’s interesting. I had one of these sessions recently, and one of the panelists was from the Turing Institute. One of the things they do is run workshops with development teams within corporates and government, to brainstorm how some of these technologies can be used, and also to bring best practice from different facets of the AI community. That’s a key part of the co-creation process for AI, which goes right back to your first point about design. Ross, again a twin-pronged question for you: firstly, is AI a force for good? And secondly, what advice would you give to governments and entrepreneurs in terms of how they might embrace the technology?

Ross:
Certainly AI can be a force for good, coming back to this idea of intent. It is an incredibly powerful tool, and we can simply ask, “Well, how can we apply this incredibly powerful tool?” I think one of the most obvious answers is education. The world is changing, AI is taking jobs, amongst other things, but AI can educate us, tailor education to the individual and their learning style, and discover their potential. We can use it for healthcare. We can use it to help the disabled. It’s going to be a democratizing force. There’s this wonderful DoNotPay app, which started off helping people contest parking fines and has now broadened to provide legal advice to people who may not have the resources for it. There are all of these ways in which we can say, “Well, how can we use this tool?”, and enormous potential to do that.

Ross:
It can be an incredible force for good if you design for it. The summary, I suppose the advice, is: start with intent, and that comes from values. What values will guide your decisions? Of course, we’re all supposed to have some personal and organizational values, but now is a time when those will shape your decisions as never before, because there are so many options, so many choices. You need to be absolutely crystal clear on what your values are, and how and why you adhere to them. I also believe it’s critical to have a vision of what it is you want to create: what kind of organization, and what sort of society it will be embedded in.

Ross:
There are many processes and design issues and ways in which you can implement AI in an organization, but part of that intent, I believe, has to be making AI and humans complementary. We do not want to be slaves to what we have created, and there’s no need for that. We can create these as tools to support us, to create better organizations, better work, a better society. I think it all cascades down from there: from that intent, from that design, from understanding what it is you want to create, AI can then be a tool used well to support it.

Jeff:
Kriti?

Kriti:
I fully agree with what Ross said. I’d also say: rather than trying to solve every possible problem with AI, focus. Pick specific areas, start to show impact, and then grow into adjacencies. I often feel that the area of explainability is underrated. If you really want to drive adoption of AI and build trust with the humans who are using the technology, building in explainability is very important. By that I mean helping people understand how the machine came to the conclusion it did, and what the evidence behind it was. People are not going to take actions based on a recommendation just because a machine said so. We’re not there yet. Building explainability, defining purpose, focusing on the right problems to solve, and, most importantly, thinking about who creates the AI, who the people on your team are, how diverse they are, and what kinds of backgrounds they represent, is super important.
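
As one concrete illustration of what Kriti means by explainability, not a prescription for any particular system, here is a sketch using permutation feature importance from scikit-learn: shuffle each feature in turn and measure how much the model’s score drops, which surfaces the features the model actually leans on when making its recommendation.

```python
# A minimal explainability sketch: permutation feature importance.
# Shuffling a feature the model relies on hurts its score; shuffling an
# irrelevant one doesn't. Data here is synthetic, for illustration only.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The features whose shuffling hurts accuracy most are the ones the
# model leans on; reporting these is a first step toward answering
# "how did the machine come to the conclusion it did?"
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```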

Jeff:
Kriti Sharma, Ross Dawson, thank you very much indeed for a fascinating discussion. I’m sure elements of this will be introduced into many of the other discussions over the course of today, but for now, I really appreciate your time, and thank you very much.

Kriti:
Great, thank you guys.

Ross:
Great time.