ThinkPlace is working with the World Economic Forum on a roadmap for regulating Artificial Intelligence in New Zealand

Helping governments grapple with the challenge of AI

Regulating in the age of AI presents unique issues for our society and for those within government who are working to improve it.

Recent advances in AI and machine learning have shown us the scale of complexity and the sometimes-disorienting speed at which change is taking place. Well-publicised cases of unethical practices with AI and machine learning have fuelled mistrust across the global community and have the potential to create crippling uncertainty in the minds of those who might deploy AI in pursuit of greater efficiencies or social benefit.

In this context, governments around the world are beginning to realise that they have both a responsibility and a need to ensure that AI is adopted in ethical ways. But apprehending this need and acting strategically upon it are two different things.

Many government actors are grappling with the same emerging questions about how best to do this: How quickly should the uses to which AI is put, and the way in which it operates, be regulated? And how can this be done while upholding important human rights and values?

Drawing on our deep expertise with emerging technologies and digital ethics, ThinkPlace recently facilitated a conversation to explore these questions for the World Economic Forum Centre for the Fourth Industrial Revolution (C4IR) and the New Zealand Department of Internal Affairs. Leaders from across government, business, Māoridom (New Zealand's indigenous Māori community) and civil society gathered for an open dialogue about the opportunities and challenges of AI, and potential steps forward.

Kate MacDonald, NZ Government Fellow at C4IR, said the intention was to bring together stakeholders across all sectors to collaborate on designing innovative and agile frameworks for governing AI.

Activities in New Zealand will be based on creating a policy roadmap to guide decision-makers; convening a national conversation; and piloting the most promising approaches and tools - Kate MacDonald, C4IR

This is an opportunity space, not just a risk one

This was a hugely necessary and very timely conversation. For many government leaders across multiple sectors, artificial intelligence is uncharted territory with huge potential. While attention is rightly focused on making sure that applications do not create harms for citizens, whether intended or unintended, there is also enormous opportunity for AI to be a powerful force for public good at a human level.

We’re already seeing some of the benefits: efficiencies, cost savings, error reduction and future forecasting, to name a few. It’s not hard to see the potential benefits when algorithms can assess a wide range of data to determine where aid should be delivered during an unfolding crisis, or which portions of the population might be targeted for a public health campaign, a tax audit or a new form of skills training.

But it’s also not hard to see the potential ethical pitfalls if these processes are mishandled. When we empower AI to pick winners and losers, and when we use people’s private information and data to aid decision-making, we are operating at a new ethical frontier.

Done wrong, AI has the potential to create harms: privacy breaches, entrenched bias and racism, skewed power structures, and gaps in control and accountability. And because AI touches such a diverse range of sectors, the interests at stake span the full spectrum.

The scale of the potential harms can be intimidating – and there is currently no map, no template for doing this well. This event was a necessary first step towards creating that template and making sure it is co-created by all of the voices who have a stake in its application and success.

We were asking: How might we enable the best of AI in a way that protects human rights and values and meets the needs of our sectors?

Here are the top four insights that we took away from the day.

1. Whose ethics?

When talking about AI and regulation, the discussion is not simply about the tools or technologies but also about the ethical framework that serves as their scaffold. That means questioning how decisions are made, how power is distributed, and who benefits and who is excluded.

2. What will our country stand for?

Regulating AI well means striking the right balance between global scalability and local relevance. Doing this requires a deep understanding of where our nation wants to sit on that spectrum, so that we appropriately honour the diversity of voices and the rights of our citizens.

3. How might we aspire to be trustworthy, not just trusted?

Regulating AI will be meaningless if the leaders, data, frameworks and tools involved are not considered trustworthy by citizens. AI relies heavily on data inputs, and leveraging that data over the long term requires a reputation for reliability and strong ethical alignment. We’ve seen what happens when companies are trusted, but are not trustworthy, with their use of data.

4. How do we honour the cultural uniqueness of data ownership and data sovereignty?

In our conversations we explored the notion of individual, community and societal data ownership and data sovereignty, but also recognised that these concepts are rooted in culture. In Aotearoa New Zealand, honouring and upholding Te Tiriti o Waitangi (the Treaty of Waitangi) is critical to the ethical application of AI. But as part of a global community, we also need to identify the expectations of other countries and cultures, and understand how our unique approach compares with that of our overseas peers.

While these discussions and the insights they generated were specific to a New Zealand context, they have much potential to resonate more broadly.

“New Zealand was chosen as a country to explore this work because ensuring people’s rights in the digital age and keeping citizens safe in a technological world are two key government priorities,” Kate MacDonald says.

“From a WEF point of view, New Zealand is a small country, able to move nimbly and work across the system and sectors. It already has a strong collaborative culture, and is bicultural with a wide diversity within its population.”

“I was thrilled at the high energy and enthusiasm in the room, the wide ranging and respectful conversations and the real passion the people in the room had to see this succeed.”

Want to know more? Inquire about our short course in digital ethics

Author: Jim Scully