Ethical and Philosophical Questions Surrounding AI

Questions sketching out (part of) the AI Ethics landscape.

PHILOSOPHY · ARTIFICIAL INTELLIGENCE

12/1/2019 · 3 min read

Philosophy might not produce definitive answers. It can, however, ask difficult and profound questions: questions that can re-frame issues, expose assumptions, and challenge us to think deeply and critically.

Some of the following questions touch on concepts with a long tradition in philosophy, for example justice, autonomy, moral obligation, and rights. Others touch on relatively new concepts, such as privacy, personal data, robot rights, and non-human decision-making.

Decision-making
  • Should an AI system make decisions that affect human lives?

  • Are there any sorts of decisions an AI should not be allowed to make?

  • Should there always be a human in the loop?

  • What is valuable about a human in the loop?

  • Is human error more acceptable than system error?

  • Should we hold AI systems to the same standards as human decision-makers, or hold them to a higher standard?

  • Is a black-box process, i.e. one lacking transparency and explanation, acceptable?

  • Should we prefer better performance or explicability, if there is a trade-off between them?

  • What does it mean to explain a decision?

  • What makes a decision fair?

  • Who is ultimately held responsible for the decisions made by an AI?

  • Does accountability require transparency?

Personal Data
  • What kind of rights, if any, do we have over our personal data?

    • What right do we have over personal data that involves other people, e.g. transaction history?

  • How does personal data relate to personal identity?

    • Is personal data more akin to property or an extension of the self?

    • Can I own someone else’s personal data?

  • Is data a public good?

  • Can we meaningfully consent to giving away our personal data, if doing so is effectively compulsory for civic life?

  • Why is privacy valuable?

  • Should AI aim to preserve privacy?


Justice and Ethical AI
  • What is data justice?

  • Must AI systems counterbalance historical injustice or bias?

    • Should AI reflect or improve society?

  • How can AI undermine human flourishing?

    • Should it be allowed to?

  • Is diversity important in AI development?

  • Why should we demand that AI be ethical?

    • What is it about AI that demands a higher standard than other technologies?

  • Why ought AI be beneficial to humanity?

Trust and Safety
  • What is trust?

  • What makes something trustworthy?

  • What is the point of making an AI system trustworthy?

  • Should we treat trust as a means to an end, or an end in itself?

  • What makes a system or product safe?

  • How safe should an AI system be?

  • What precautions, evidence, or tests would suffice to establish its safety?

To AI or not to AI
  • When ought we use AI systems?

  • At what point does it become unethical not to use AI?

  • Who has an obligation to make use of the best available resources, including technology?

  • Should certain AI systems never exist?

  • Are some uses of facial recognition always morally unacceptable?

  • Do we risk a new era of physiognomy?

  • Are some uses of affect recognition always morally unacceptable?

  • Should some things always remain private and personal?

  • Should I be cognitively enhanced by AI?

  • Would this be fair?

Availability
  • Should AI tools and systems become widely available?

  • Should there be compulsory training or education on AI?

  • Are there risks to democratising AI?

  • Are there risks to monopolising AI?

Warfare
  • Should we ban the development and use of autonomous weapons?

  • If there is a human in the loop, is it more acceptable to use such weapons?

  • If the weapons are only semi-autonomous, is this more acceptable?

  • If civilian casualties are reduced as a result of using autonomous weapons, does this make it morally acceptable?

Workers' Rights
  • What moral rights do developers have over the subsequent use of the AI systems they helped create?

    • Given the often piecemeal nature of AI development, is this right diluted?

Robot Rights
  • What is autonomy?

  • How is a human autonomous?

  • Can an AI be autonomous?

    • Can an AI be autonomous in the same way as a human?

  • If an agent is autonomous, do we owe that agent moral obligations or rights?

  • Should we ever give an AI rights?