Intelligent Systems

Ethics of AI

Three main questions:

“Ethics begins when elements of a moral system conflict.”

Fundamental ethics: moral absolutism; certain actions are simply forbidden, e.g. on religious grounds.

Pragmatic ethics: humans always have a choice; at any point in time you are free to decide how to act.

Sci-fi ethics (problems down the road)

Asimov’s laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The trolley problem is a good example of an ethical dilemma, and it extends to self-driving cars: should the car sacrifice its driver or the bystanders?

How do we treat AI? How should we?

Today’s problems

Prosecutor’s fallacy:
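The fallacy lies in confusing P(evidence | innocent) with P(innocent | evidence). A minimal sketch below shows how far apart the two can be; the 1-in-10,000 match rate, the single culprit, and the population of one million are made-up illustrative numbers, not figures from the notes.

```python
# A minimal sketch of the prosecutor's fallacy with made-up illustrative numbers:
# a forensic test that matches 1 in 10,000 innocent people, one true culprit,
# and a city of 1,000,000 people. None of these figures come from the notes.

p_match_given_innocent = 1 / 10_000   # false-positive rate of the test
population = 1_000_000
culprits = 1
innocent = population - culprits

# Expected number of innocent people who match purely by chance (~100).
innocent_matches = innocent * p_match_given_innocent
# Assume the true culprit also matches, so total matches ~ 101.
total_matches = innocent_matches + culprits

# Probability that a randomly chosen matching person is in fact innocent.
p_innocent_given_match = innocent_matches / total_matches

print(f"P(match | innocent) = {p_match_given_innocent:.4%}")   # 0.0100%
print(f"P(innocent | match) = {p_innocent_given_match:.2%}")   # ~99.01%
```

The point: a tiny P(match | innocent) does not make P(innocent | match) tiny when the pool of innocent people is large.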