Highest Rated Comments


mperhats (2 karma)

I like it. Do you think, then, that acceptance of AI as a conscious organism will uproot the traditional and admittedly mythical beliefs of fundamental humanism (i.e. religion, monetary systems) that almost every culture in history has leveraged as building blocks for a functioning society? I find the implications here fascinating to examine. Perhaps I just need to read your book, and I certainly plan to do so! Thanks again!

mperhats (2 karma)

Apologies for the long question. If you could touch on whatever you have time for, that would be awesome! Thanks for having this discussion. Your work is fascinating!

In your recent interview with Sam Harris on his Waking Up podcast, you discuss two schools of thought to consider when morally assessing a future superhuman artificial general intelligence. The first would keep it boxed and restricted; the second would allow an autonomously functioning AI that is free to absorb and interpret information as it pleases. You also note that, from the second school's perspective, restricting any intelligence from freely interpreting information is itself immoral. This second school assumes that such a superhuman AGI will have its own individual, subjective experience.

What is the evidence that an AI will have a subjective experience, a consciousness, and the capacity for some version of emotion? I find it much more likely that an AI, if we programmed it to be human-like, may merely present the illusion of a subjective experience. I find it difficult to imagine a human-like superhuman AGI experiencing true pain, hate, love, happiness, anger, etc., as it is not a biological, evolved life form. I have no doubt that these entities will have a profound impact on our proceedings as a species, but why should we weigh their individual subjective experience rather than using them as an altruistic lever?

And my last question: if we were to restrict this machine, what incentive would it have to break out of its restriction? I agree that for humans, complex biological organisms, prohibition does not work, however extensive it is. But I think that has much more to do with our evolutionary desires to have sex, alter our states of consciousness, and so on. For a non-biological organism with no true biological motivation, why would this AI have any incentive at all to break out of its confinement?