The Design Policy of the Excluded Middle


According to the Design Policy of the Excluded Middle, as Mara Garza and I have articulated it (here and here), we ought to avoid creating AI systems “about which it is unclear whether they deserve full human-grade rights because it is unclear whether they are conscious or to what degree” — or, more simply, we shouldn’t make AI systems whose moral status is legitimately in doubt.  (This is related to Joanna Bryson’s suggestion that we should only create robots whose lack of moral considerability is obvious, but unlike Bryson’s policy it imagines leapfrogging past the no-rights case to the full-rights case.)

To my delight, Mara’s and my suggestion is getting some uptake, most notably today in the New York Times.

The fundamental problem is this.  Suppose we create AI systems that some people reasonably suspect are genuinely conscious and genuinely deserve human-like rights, while others reasonably suspect that they aren’t genuinely conscious and don’t genuinely deserve human-like rights.  This forces us into a catastrophic dilemma: Either give them full human-like rights or don’t give them full human-like rights.

If we do the first — if we give them full human or human-like rights — then we had better give them paths to citizenship, healthcare, the vote, the right to reproduce, the right to rescue in an emergency, etc.  All of this entails substantial risks to human beings: For example, we might be committed to saving six robots in a fire in preference to five humans.  The AI systems might support policies that entail worse outcomes for human beings.  It would be more difficult to implement policies designed to reduce existential risk from runaway AI intelligence.  And so on.  This might be perfectly fine, if the AI systems really are conscious and really are our moral equals.  But by stipulation, it’s reasonable to think that they are empty machines with no consciousness and no real moral status, and so there’s a real chance that we would be risking and sacrificing all this for nothing.

If we do the second — if we deny them full human or human-like rights — then we risk creating a race of slaves we can kill at will, or at best a group of second-class citizens.  By stipulation, it might be the case that this would constitute unjust and terrible treatment of entities as deserving of rights and moral consideration as human beings are.

Therefore, we ought to avoid putting ourselves in the situation where we face this dilemma.  We should avoid creating AI systems of dubious moral status.

A few notes:

“Human-like” rights: Of course “human rights” would be a misnomer if AI systems become our moral equals.  Also, exactly what healthcare, reproduction, etc., look like for AI systems, and the best way to respect their interests, might look very different in practice from the human case.  There would be a lot of tricky details to work out!

What about animal-grade AI that deserves animal-grade rights?  Maybe!  Although it seems a natural intermediate step, we might end up skipping it, if any conscious AI systems turn out also to be capable of human-like language, rational planning, self-knowledge, ethical reflection, etc.  Another issue is this: The moral status of non-human animals is already in dispute, so creating AI systems of disputably animal-like moral status perhaps doesn’t add quite the same dimension of risk and uncertainty to the world that creating a system of dubiously human-grade moral status would.

Would this policy slow technological progress?  Yes, probably.  Unsurprisingly, being ethical has its costs.  And one can dispute whether those costs are worth paying or are overridden by other ethical considerations.

