
The mistake we keep making around Artificial Intelligence (and how to avoid it)

When Elon Musk talks, people tend to listen. There is nothing artificial about the Tesla CEO’s intelligence.

But recently, Musk came forward with a statement that contained a slight downgrade of his usually interstellar confidence.

“Excessive automation at Tesla was a mistake,” he said. “Humans are underrated.”

The context was the hole the company finds itself in after overestimating how many cars its robot-led production line could turn out.

But the broader point is absolutely crucial.
We know emerging technologies such as AI are going to be a huge part of our future on planet Earth. Most large organisations, companies and government departments are already beginning to incorporate AI into their thinking.

But could the ways they are sometimes doing it be flawed? 

Emerging technologies such as Artificial Intelligence, the Internet of Things and Virtual Reality offer so many opportunities. 

Properly designed, they can vastly improve the world we live in and give us unparalleled access to new experiences. They can help usher in fairer, better circumstances for people with physical or other limitations. They can create better-tailored products and services and deliver massive efficiency gains.

Man wearing VR goggles
Emerging technologies such as machine learning, artificial intelligence and virtual reality offer exciting new possibilities. But only if we get the design process right.

Already we have seen helpful examples of this around the world, such as Google’s flight-tracking feature that promises to predict whether your flight will be delayed.

But in a few cases, we have also seen some seriously negative impacts of applying technologies without giving proper consideration to the design process. 

There is a lesson here. Too often we see both the capabilities and the limitations of emerging technologies in purely technical terms. We assume that because emerging technologies remove human beings from processes and roles, human biases are removed as well.

A computer-driven intelligence must be objective, right? It must be fair? 

Examples of emerging technologies leading to perverse outcomes are stacking up. In one case, a predictive policing system meant to inform custodial decisions may actually exacerbate biases rather than eliminate them.

Although the system is presented as neutral, it uses a person’s postcode as one of the data points in its underlying model. A draft academic paper examining the system highlighted the potential for this single piece of information to lead the system to discriminate against the poor.
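
To see how this kind of proxy bias can arise, here is a minimal illustrative sketch in Python. The postcodes, records and risk score below are entirely hypothetical and are not drawn from the system discussed above; the point is simply that a model which never sees income can still discriminate when postcode stands in as a proxy for it.

    import statistics

    # Hypothetical historical records: (postcode, past custody decision, 1 = detained).
    # "Postcode A" is imagined as a heavily policed, lower-income area and
    # "Postcode B" as a lightly policed, higher-income one.
    historical_records = [
        ("Postcode A", 1), ("Postcode A", 1), ("Postcode A", 0), ("Postcode A", 1),
        ("Postcode B", 0), ("Postcode B", 0), ("Postcode B", 1), ("Postcode B", 0),
    ]

    def postcode_risk_scores(records):
        """Score each postcode by its average historical custody rate."""
        by_postcode = {}
        for postcode, outcome in records:
            by_postcode.setdefault(postcode, []).append(outcome)
        return {p: statistics.mean(outcomes) for p, outcomes in by_postcode.items()}

    print(postcode_risk_scores(historical_records))
    # {'Postcode A': 0.75, 'Postcode B': 0.25}

Every future defendant from Postcode A now inherits a higher “risk” score regardless of their individual circumstances: the bias baked into the historical data is reproduced, not removed, by the automated system.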

This type of flaw is not isolated. As organisations pivot towards greater use of machine learning and artificial intelligence, these cases are salutary lessons: machine learning systems still require human-centered design.

Human-centered emerging technologies

Unless we intentionally design applications in ways that are human-centered, ethical and alert to unintended outcomes, negative consequences will follow, even if no one intends them. It doesn’t have to be that way.

When ThinkPlace was asked to help design a mental health digital assistant, we took a human-centered approach that brought together the views of potential users, digital experts and government officials and included them at all stages of the design process. The outcome? An inclusive, well-resourced AI product that supports users in ways that are tailored for them. 

It makes for harder work, but the result is worth it. Some of the questions we tackled were:

  • How might we provide relevant resources whilst avoiding a clinical diagnosis based on the few search terms a user has entered?

  • How might we cater to the very real triggers that lead users to ask for assistance?

  • How might we augment the digital assistant with real-world support, so that users have access to another person who can help?

  • How might we pace users through questions before providing answers? 

  • How might the digital assistant take time to understand the complexity of the situation before providing answers? 

  • How might we provide similar functionality to remote users who may have little access to the internet or those who live in close-knit communities where privacy and confidentiality are more of a concern?

  • How might we use the digital assistant only in situations where it can add value? 

  • How do we ensure that the automated systems and services of the future do not replicate the same biases as those of the past (and present)? 

At ThinkPlace, we have a huge amount of experience and expertise at global, national and local levels, designing and applying technologies in complex systems for social impact.

We also have a clear focus on ethics. All of our projects go through an ethics screen led by our Chief Ethicist. All must show that they are contributing to a better world, in line with our adoption of the United Nations’ Sustainable Development Goals. 

As we expand our business into emerging technologies, harnessing our skills and expertise in this rapidly evolving area, we are bringing all of that experience to bear.

It is a much-needed voice. 

As designers, there is one thing we should never lose sight of. Each application of an emerging technology exists in a context, as part of a complex system, and it is used or navigated by actual people.

Although emerging technologies might replace human actors in some domains and free them up to focus on high-value problems, this does not remove the need for human-centered design.

That’s why we are so excited to be operating as designers in the field of emerging technologies. If you are running a large organisation and you aren’t planning for the rise of the robots, then you’ll want to start doing so. But if you’re planning for emerging technologies in a way that isn’t human-centered, then you could be doomed to make a big mistake.

Humans aren’t just underrated, you see. They need to be at the core of everything we design.
