Ethics of AI
three main questions:
- how do we encode ethical behavior?
- how should we behave towards AI?
- how does the existence of AI affect our daily lives?
“Ethics begins when elements of a moral system conflict.”
Fundamental ethics: moral absolutism; certain actions are forbidden regardless of context, e.g. by religious rules
Pragmatic ethics: humans always have a choice; you retain freedom of choice at any point in time
Sci-fi ethics (problems down the road)
Asimov’s laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
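The strict precedence of the laws (each law yields to the ones above it) can be read as a lexicographic filter over candidate actions. The sketch below is my own illustration of that reading, not anything from the lecture; the `Action` fields and the `choose` function are hypothetical names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    injures_human: bool   # would executing this action injure a human?
    prevents_harm: bool   # would it prevent a human from coming to harm?
    obeys_order: bool     # does it comply with a human's current order?
    preserves_self: bool  # does it keep the robot intact?

def choose(actions: list[Action]) -> Action | None:
    """Apply the three laws as filters in priority order: each law can only
    narrow the set of actions left by the laws above it, never widen it."""
    # First Law, part 1: never injure a human.
    candidates = [a for a in actions if not a.injures_human]
    # First Law, part 2: do not allow harm through inaction; if some remaining
    # action prevents harm, actions that fail to do so are ruled out.
    if any(a.prevents_harm for a in candidates):
        candidates = [a for a in candidates if a.prevents_harm]
    # Second Law: obey orders, unless that conflicts with the First Law.
    if any(a.obeys_order for a in candidates):
        candidates = [a for a in candidates if a.obeys_order]
    # Third Law: self-preservation, unless it conflicts with Laws 1 and 2.
    if any(a.preserves_self for a in candidates):
        candidates = [a for a in candidates if a.preserves_self]
    return candidates[0] if candidates else None

# Example: an order to harm someone loses to refusing, because the
# First Law filters out the harmful action before orders are considered.
options = [
    Action("obey harmful order", injures_human=True, prevents_harm=False,
           obeys_order=True, preserves_self=True),
    Action("refuse order", injures_human=False, prevents_harm=False,
           obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # -> "refuse order"
```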
The trolley problem is a good example of an ethical dilemma, and it extends naturally to self-driving cars: should the car sacrifice its passenger or the bystanders?
How do we treat AI? How should we?
Today’s problems
- Autonomous weapons: weapons that select and engage targets on their own
- what are we allowing these systems to do?
    - the Dutch government said it’s fine “if there’s a human in the wider loop”, but this is very vague: what exactly counts as the wider loop?
- Privacy
    - big companies hold vast amounts of data about people
- often, people give this data for free.
- Profiling (e.g. racial)
    - e.g. a black man was stopped while driving an expensive car because the system assumed he could only have such a car if he had stolen it.
Prosecutor’s fallacy:
- mistaking a conditional probability for its inverse: $P(\text{black} \mid \text{uses drugs}) \neq P(\text{uses drugs} \mid \text{black})$
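A quick worked inversion via Bayes’ rule shows why the two quantities differ; the numbers here are purely hypothetical, chosen only to illustrate the gap:

$$P(\text{uses drugs} \mid \text{black}) = \frac{P(\text{black} \mid \text{uses drugs}) \, P(\text{uses drugs})}{P(\text{black})}$$

With hypothetical values $P(\text{uses drugs}) = 0.05$, $P(\text{black}) = 0.15$, and $P(\text{black} \mid \text{uses drugs}) = 0.20$, this gives $P(\text{uses drugs} \mid \text{black}) = \frac{0.20 \cdot 0.05}{0.15} \approx 0.067$, nowhere near the $0.20$ that naively flipping the conditional would suggest.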