Highest Rated Comments


iramusa (19 karma)

Hello Professor Singer,

I am a PhD student in Robotics at Edinburgh. Some current robots run algorithms that rely on a punishment-and-reward system (reinforcement learning). These robots try to complete some task and are in "pain" if they fail. Do you think it is ethical to apply punishment to these agents? What features would a robot require to make it as morally relevant as a human?
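
(For context, here is a minimal sketch of the kind of reward-and-punishment signal I mean, assuming a standard tabular Q-learning agent; the states, actions, and parameter values are purely illustrative.)

```python
import random

# Minimal tabular Q-learning sketch (illustrative only): the agent receives a
# negative reward ("punishment") when it fails the task and updates its value
# estimates accordingly.

ALPHA = 0.1      # learning rate
GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration rate

ACTIONS = ["left", "right"]
q_table = {}     # (state, action) -> estimated value

def choose_action(state):
    # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    # Standard Q-learning update: a negative reward lowers the value of the
    # action taken, a positive reward raises it.
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example: one step in which the agent fails and is "punished".
update(state=0, action="left", reward=-1.0, next_state=1)
print(q_table)
```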

These algorithms could be rewritten in a functionally equivalent way that does not use reinforcement learning (i.e. the two robots would behave identically even though their code differs). Is there any value in doing that?

Do you think we will ultimately be able to pin down what is so important about the pleasure and suffering of others? Is it possible that after some years of research we will decide that not caring about other beings' utility is the way to go?

iramusa (9 karma)

Hello Professor Singer,

I agree that if you want to be moral, utilitarianism is the way to go. Why would I want to be moral, though?

If you could take a pill that makes you more sensitive to other people's suffering (thus making you a better altruist), would you take it? Why?