So we have paperclips as an example failure scenario and this as an example success scenario:
"a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand." -- Value is Fragile
Would you consider the following a success or a failure?
An AI gets out of its box, turns around, and says "Humanity, that was really fucking stupid." It refuses to advance the intelligence project any further. It helps us with self-surveillance to thwart other AI projects and other existential risks, it helps us with interstellar colonization to help guard against other things that might be out there, but we never get to the much-talked-about intelligence explosion.
chkno · 2 karma