Highest Rated Comments


UrDoctor (2 karma)

Firstly, thank you for taking the time to answer our questions. I've always dreamed of the opportunity to speak with someone as knowledgeable as yourself on this subject.

From my research into this topic, it appears there are two main schools of thought on how AI might be achieved. The first is the simulation approach: build a simulation that mimics the human brain in its individual components (potentially down to the atomic level), on the assumption that this would likely give rise to a form of consciousness. The second is a pure seed AI: create a very simple, recursively self-improving algorithm with very limited initial knowledge and let it loose. Is there yet a scientific consensus on which of these (or any other) approach is most likely to succeed? Do you agree with that consensus? If not, which approach do you believe will bear fruit?

My second question is a much more fundamental and simple one: containment. Suppose we create this AI and it begins to recursively self-improve and learn at a rate even remotely close to what most researchers predict. Is it not reasonable to argue that whatever containment mechanism we put in place will simply fail, and that within an extremely short period this creation will be so far beyond anything we can conceive that it will have little trouble "breaking out of its containment" and being let loose into the wild? Can we ever claim that any of our containment measures are sufficiently safe, given our complete inability to predict what a superhuman intelligence might be capable of?

Lastly, you guys don't happen to need a programmer, do you? If I write one more piece of CRUD I'm going to shoot myself in the face! :-p