When we ask what matters for moral responsibility, we may wonder whether the agent could have done otherwise. Daniel Dennett questions the claim that moral responsibility requires the ability to do otherwise. However, he acknowledges that we often ask "could the agent have done otherwise?" when we are considering whether that agent is morally responsible for some act.
Dennett points out that when we ask this question, we rarely mean to ask whether the agent could have acted differently under the exact same conditions as the original action. So, according to Dennett, finding out whether the agent could have done otherwise in identical conditions is neither important nor helpful for judging whether the agent is morally responsible.
If this type of information does not matter for judging whether an agent is morally responsible, then Dennett still has to show what does matter. He must account for (i) what we are really asking when we ask "could the agent have done otherwise?" and (ii) what we care about when we judge whether the agent is morally responsible.
Dennett uses examples to illustrate his view in light of these two considerations. Consider his example of a robot with artificial intelligence.
Answer the following questions in your discussion post:
1. If the robot does a "bad" act, is it reasonable, according to Dennett, to ask whether the robot could have done otherwise?
2. How is Dennett's robot example supposed to translate to the human case with regard to moral responsibility?
3. Do you think the robot case is a good analogy for the human case?