Global AI ethics framework needed to enshrine rules of a ‘good corporate citizen’
- Without some level of harmonisation, the world may be a long way off from a universally enshrined set of AI codes of conduct
- Laws are about governing humans, not machines or code, and should therefore be far more ‘divine’ and come from a single source of truth
Yet these technologies – especially AI – are not invisible. I had to be reminded of how pervasive they are in everyday functions like online maps, autocorrect and loan application approvals, and corrected that AI is not confined to the idea of some dystopian robot colony taking over the human race.
None of these cryptocurrencies will ever jiggle like coins in your pocket, nor will you hear the whirring of AI machinery churn your online payments.
At last week’s TechLaw.Fest, which explores the intersection of technology and the law, the Singapore Academy of Law’s chief executive Yeong Zee Kin told me that AI regulation was top of mind. The same is true in other parts of the world.
It followed hot on the heels of the European Union, another regulatory leader and the first to roll out laws on AI ethics, including the proper use of data, the removal of biases, and stronger prohibitions on deepfakes – manipulated digital images.
Ultimately, “these are intended to make sure that you’re a good corporate citizen, to make sure the AI you’re using behaves within the range of expected”, Yeong said.
It reminded me of what American AI ethicist Rumman Chowdhury had said about “people having to build, … to implement the technology for AI to be harmful”.
She sees the problems in the world manifesting themselves in machine-learning models and predictive modelling.
“Today, there are a lot of conversations if AI is alive, or if AI is going to make everybody lose their job. I think that’s the wrong thing to think about,” she told The New York Times in June. “The things we should be asking are, what are the things that are wrong with society today? And how might that be reflected in the AI systems that we’re building?”
It seems to me, then, that the ultimate laws are about governing humans, not the machines or the code, and should therefore be far more “divine” and come from a single source of truth.
Perhaps something like the Universal Declaration of Human Rights adopted by the United Nations.
Work is being done at international bodies, but the conundrum is that there are so many different sets of rules, and many are not law, merely guidelines. In 2021, Unesco member countries adopted a voluntary AI ethics framework, and Organisation for Economic Co-operation and Development countries adopted a set of nonbinding principles for AI.
And then there’s talk that Southeast Asian countries might be drawing up ethics guidelines as an Asean bloc, to be released in early 2024 at the fourth Asean digital ministers’ meeting chaired by Singapore.
In other words, without some level of harmonisation, we may be a long way off from a universally enshrined set of AI codes of conduct.
In the meantime, enjoy the benefits of AI but heed the caution of prominent Russian computer scientist Roman Yampolskiy: “Most people think experts know what they’re doing; we have no idea how it works, why it works, how to control it.”