Review | When AI goes wrong, who’s to blame, Singapore law professor asks; do we legally treat algorithms and machines as we once did mercenaries and miscreant animals?
- Simon Chesterman, a law professor in Singapore, asks some sobering questions about legal responsibility for the decisions of AI machines and algorithms
- Like mercenary troops, algorithms that decide your guilt or innocence, or your right to entitlements, lack moral intuition, he notes. So are we still in control?
We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by Simon Chesterman, published by Cambridge University Press
Chesterman, dean and professor of law at the National University of Singapore, brings a sober but readable approach to a subject otherwise much given to speculation and fearmongering. He enlivens his work with stories from the real world: accidents involving self-driving cars; stock market collapses caused by automated trading; biases in the opaque proprietary software used to assess the likelihood an individual will default on a loan or repeat a criminal offence.
Existing laws and regulations, whose design was predicated on the direct involvement of humans, are already struggling to cope with problems arising merely from the speed of transactions made possible by ever-faster processors and computer-to-computer communications. One example is the “flash crash” of 2010, in which US stock markets took a tumble, driven by overenthusiastic algorithmic trading systems doing thousands of deals with one another in a matter of seconds.