FionaK
I was listening to a discussion about AI today. It was interesting.
If it were possible to build a truly intelligent and conscious machine, it would be necessary to install some kind of ethics, at least as a platform for it to develop from. Isaac Asimov considered this in his I, Robot stories, and he made do with three basic rules, his Three Laws of Robotics. But I think that presupposes the robot is not truly intelligent and conscious: if the rules are unbreakable, they are not like anything we consider to be human and/or intelligent. Or so I think.
I presume other writers have looked specifically at this problem, because intelligent machines taking over the world is a familiar enough theme. But I do not know of works which consider it seriously, nor of scientific thinking on the subject.
Where would we start?