https://www.lawgazette.co.uk/law/ar...es-to-require-human-liability/5113150.article
Artificial intelligence systems will have to identify a legal person to be held responsible for any problems under proposals for regulating AI unveiled by the government today.
The proposed 'pro-innovation' regime will be operated by existing regulators rather than by a dedicated central body along the lines of the one being created by the EU, the government said.
The proposals were published as the Data Protection and Digital Information Bill, which sets out an independent data protection regime, is introduced to parliament. The measure will be debated after the summer recess.
It seems to me that one needs a highly specific definition of what counts as AI before enforcing such a law. Personally, I prefer a very broad definition: I would count James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same job manually.
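To make the point concrete, here is a minimal sketch (with entirely hypothetical numbers) of the feedback loop a flyball governor embodies: when the engine runs too fast, the spinning balls fly outward and close the throttle; when it runs too slow, the throttle opens. No human decides each adjustment.

```python
def simulate_governor(setpoint=100.0, gain=0.05, steps=200):
    """Proportional feedback: the throttle moves against the speed error,
    just as a flyball governor does mechanically. All constants are
    illustrative, not measurements of any real engine."""
    speed = 0.0      # engine speed, arbitrary units
    throttle = 1.0   # fully open
    for _ in range(steps):
        error = speed - setpoint  # positive when running too fast
        # close the throttle when too fast, open it when too slow
        throttle = min(1.0, max(0.0, throttle - gain * error * 0.01))
        # engine speed relaxes toward what the current throttle supports
        speed += 0.1 * (200.0 * throttle - speed)
    return speed, throttle

final_speed, final_throttle = simulate_governor()
print(round(final_speed, 1), round(final_throttle, 2))
```

The loop settles near the setpoint without any operator in it, which is exactly the behaviour that once required a worker watching a gauge. Whether that qualifies as "identifying a legal person to be held responsible" is the question the proposed regime would have to answer.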
On an abstract level: if we have a black box with which we communicate, what test could we apply to prove that what is inside the box is an AI?