This is all about serious researchers and engineers working to bring science fiction to life -- and then transcend anything SF writers have imagined.

David Hanson (maker of the world's most realistic humanoid robot heads, such as Robot Einstein), Mark Tilden (inventor of BEAM robotics and the Robosapien) and I are collaborating on a new project aimed at using the OpenCog Artificial General Intelligence system to control humanoid robots. The medium-term goal of the project is to create a humanoid robot with the rough general intelligence of a three-year-old child. Our focus is embodied learning and communication. We believe we are poised to make a breakthrough in cognitive robotics and general intelligence research over the next few years. Getting to the level of a three-year-old child will mean we have solved the "common sense knowledge" problem, and are poised to move on to human-adult-level AGI and address all manner of applied problems with our technology. We have some funding from the Hong Kong government, obtained together with our collaborator Gino Yu (an AI and consciousness researcher), and are seeking additional funds via an Indiegogo campaign; see http://geni-lab.com/help ...

The Reddit guidelines say that all AMAs require proof ... my proof is that I have just tweeted the link to this AMA from my Twitter account @bengoertzel ...

Comments: 82 • Responses: 34

CanadianVelociraptor15 karma

Thanks for doing an AMA! I think AI is a fascinating field of study, and I find your work with financial analysis especially interesting. The work you mention with humanoid robots is also fascinating to me. A few questions come to mind, if you have the time:

  • How successful has AI been when it comes to financial predictions? Is this a promising niche for the technology?

  • How costly is it for someone to set up hardware for an AI system? How much can be done on a consumer computer? Is the main limitation software or computing hardware at this point?

  • Do you have any go-to resources for learning about AI? It’s such a complex topic that I wouldn't know where to begin.

  • What do you think of Isaac Asimov's "three laws" of robotics? Will it ever be necessary to implement such behavioral restrictions on an AI?

EDIT: One more question :)

  • How expensive is/was it to sign up for a cryonics provider like Alcor?

bengoertzel15 karma

There is plenty of evidence that AI can work well for financial prediction, yeah....

But my particular thoughts and discoveries about that are not for public consumption!

With some more hard work and just a little luck, within a few years the AI-based hedge fund Aidyia Limited that I've co-founded will have a huge trove of money, and then funding AGI will be something I can do personally..... But that's still early-stage; we're still testing and haven't started trading yet....

bengoertzel11 karma

If you wanna learn about AGI and have a little technical background, the links here are a place to start:

http://www.agi-society.org/resources/

heredami11 karma

Hi, thanks for doing this AMA

  • If you had unlimited funds at your disposal, what kind of strategy precisely would you use to get to the goal (AGI) as quickly as possible? How would unlimited funds affect the process? Is the talent there to take advantage of more funding? Is the infrastructure in place to support it?

  • How important is Friendly AI research? Do you think it's too early to start thinking about Friendly AI, and that the top priority at the moment is just to pursue progress by any means available?

  • Do you think a possibly unfriendly superintelligent AI that was running on an isolated computer, only able to interact with the rest of the world via a text interface, would be able to break out of its confinement by persuading YOU to release it? (I'm referring to this: http://yudkowsky.net/singularity/aibox )

bengoertzel15 karma

Regarding AI boxing --- the question is: If Eliezer Yudkowsky were locked in a box, only able to interact with the world via text interface, would I ever feel like letting him out?? ;D

bengoertzel13 karma

With a very large amount of funds (let's set aside "unlimited" since potential infinity leads to various strange things!), I would pursue the same research program as now, since it's the only one I solidly know how to make work.... But we would certainly proceed a lot faster. (And also, of course, if I had a truly massive amount of $$ I could fund other AGI projects besides my own, and we could see which ones worked best, and maybe create many kinds of AGIs...)

Yes, there is the talent and infrastructure to build an AGI much faster than is currently being done. However, people have this frustrating desire to pay rent and their electric bill and buy food and put their kids through college, etc. --- and with very rare exceptions, society has little desire to pay them to work on AGI R&D .....

So as fast as progress may seem in historical terms, when you're involved in the actual R&D you see that progress is way slower and more awkward than it could be, for purely boring practical/financial reasons....

bengoertzel14 karma

As an example, a year ago I spent a while outlining a design for making OpenCog fully exploit massively distributed computing infrastructure: http://wiki.opencog.org/w/DistributedAtomspace

How much progress has been made on implementing that? Zero. Why? It's too big for a volunteer, and I don't have access to extra $$ to pay someone to do it....
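
(Purely for flavor, here is a toy sketch of what a client for a sharded atom store might look like. This is a hypothetical illustration, NOT the actual design on that wiki page; every name in it is made up.)

    # Hypothetical illustration -- not the DistributedAtomspace design above.
    # Atoms are routed to storage nodes by hashing their handles.

    class InMemoryNode:
        """Stand-in for a remote storage node."""
        def __init__(self):
            self._atoms = {}

        def store(self, handle, atom):
            self._atoms[handle] = atom

        def fetch(self, handle):
            return self._atoms.get(handle)

    class DistributedAtomspaceClient:
        """Shards atoms across nodes by handle hash."""
        def __init__(self, nodes):
            self.nodes = nodes

        def _node_for(self, handle):
            return self.nodes[hash(handle) % len(self.nodes)]

        def put(self, handle, atom):
            self._node_for(handle).store(handle, atom)

        def get(self, handle):
            return self._node_for(handle).fetch(handle)

    cluster = DistributedAtomspaceClient([InMemoryNode() for _ in range(4)])
    cluster.put("ConceptNode:cat", {"strength": 0.9})
    print(cluster.get("ConceptNode:cat"))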

bencordoza10 karma

Have there been any attempts on your life from people that claim to be from the future?

bengoertzel9 karma

not yet, fortunately...

Buck-Nasty9 karma

Do you think Andrew Ng and others who argue that there might be a single algorithm to explain human cognition are correct?

bengoertzel15 karma

I think they are wrong, at the level where it matters....

If you look at things abstractly enough, sure, there's a single algorithm like "Search the space of all procedures and find the one that, based on your experience, has the property that its execution is most likely to achieve your goals; then execute that procedure, while also continuing the aforementioned search."

But if you look at things at the level needed for practical, scalable, real-time implementation -- THEN the one-algorithm perspective is not really the most useful one.

The brain is a huge complex mess with many different subsystems doing many different things. It does not have the elegance and simplicity of a computer science algorithm.
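
For concreteness, here is a toy Python rendering of that abstract "single algorithm" (a greedy search-and-execute loop with a little exploration; purely illustrative, and not anyone's actual AGI system):

    # Toy rendering of the abstract "single algorithm": keep searching the
    # space of procedures for the one most likely, per experience, to achieve
    # the goal; execute it; fold the outcome back into experience.
    # Illustrative only -- a practical, scalable, real-time system needs
    # vastly more than this.
    import random

    def universal_agent(procedures, evaluate, steps=100, explore=0.1):
        """procedures: candidate callables; evaluate(p): observed goal success."""
        experience = {p: [] for p in procedures}

        def expected_success(p):            # naive estimate from past outcomes
            return sum(experience[p]) / len(experience[p]) if experience[p] else 0.0

        for _ in range(steps):
            if random.random() < explore:   # keep searching procedure space...
                choice = random.choice(procedures)
            else:                           # ...while executing the current best
                choice = max(procedures, key=expected_success)
            experience[choice].append(evaluate(choice))

        return max(procedures, key=expected_success)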

bengoertzel8 karma

"Deep learning" such as Ng advocates is wonderful yet a bit deceptive

-- as a general principle of "recognizing patterns among patterns among .... among patterns among data", sure, it's universal

-- BUT, the specific deep learning architectures Ng and others advocate are pretty simplistic and specialized in what they can do using a practical amount of resources, and are probably mainly suitable for specific applications like visual & auditory perception

bengoertzel9 karma

We are using DeSTIN, a deep learning architecture similar to Ng's, within OpenCog -- but only for visual and auditory perception.

Satelllliiiiiteee3 karma

Can't multiple algorithms working together be thought of as a single algorithm? What is the difference?

bengoertzel10 karma

True, the definition of "algorithm" is a bit fuzzy...

The main point is, Ng believes a quite simple algorithm can explain everything the human mind/brain does; and I think the needed mechanisms are far more complex and various...

bengoertzel9 karma

OK folks -- I set aside one hour for this, and the hour is done. Thanks for all your questions and comments! If you'd like to accelerate the world's path to a positive Singularity dramatically, please arrange a bank transfer for 10 million dollars to the OpenCog Foundation. Short of that, we are thankful for any donations, AI programming assistance, or "merely" good vibes and positive thoughts and feelings....

Warm thoughts to all -- enjoy the rest of your day !! ...

bengoertzel8 karma

Ah ... and many thanks to the great Jason Peffley for suggesting and helping organize this AMA...

Hasta luego...

proggR8 karma

Hello Ben, thank you for doing an AMA. I apologize for the long-windedness in advance. I added the bolded text as headings just to break things up a little more easily, so if you don't have time to read through my full post I'd be happy to just get suggested readings as per the AMA Question section.

AMA Question

I'm interested more and more in AI, but most of what I know has just been cobbled together from learning I've done in other subjects (psychology, sociology, programming, data modeling, etc.), with everything but programming being just hobby learning. AI interests me because it combines a number of subjects I've been interested in for years and tries to fit them all together. I have Society of Mind by Minsky and How to Create A Mind by Kurzweil at home but haven't started either yet. Do you have any follow-up reading you would recommend for someone just starting to learn about AI that I could read once I've started/finished these books? I'm particularly interested in information/data modelling.

Feedback Request for Community AI Model

I had a number of long commutes to work when I was thinking about AI a lot, and started to think about the idea of starting not with a single AI, but with a community of AI. Perhaps this is already how things are done and is nothing novel, but like I said, I haven't done a lot of reading on AI specifically, so I'm not sure of the exact approaches being used.

My thought process is that the earliest humans could only identify incredibly simple patterns. We would have had to learn what makes a plant different from an animal, what was a predator and what was prey, etc. The complex patterns we identify now, we're only able to identify because the community has retained these patterns and passed them on to us, so we don't have to go through the trouble of re-determining them. If I were isolated at birth and presented with various objects, teaching myself with no feedback from peers what patterns can be derived from them would be a horribly arduous, if not impossible, task. By brute forcing a single complex AI, we're locking the AI in a room by itself rather than providing it access to peers and a searchable history of patterns.

This made me think about how I would model a community of AI that made sharing information, for the purpose of bettering the global knowledge, core to their existence. I've been planning a proof of concept for how I imagine this community AI model, but this AMA gives me a great chance to get feedback long before I commit any development time to it. If you see anything that wouldn't work, or that would work better in another way, or know of projects or readings that are heading in the same direction, I would love any and all feedback.

The Model

Instead of creating a single complex intelligent agent, you spawn a community of simple agents, plus a special kind of agent I'm calling the zeitgeist agent, which acts as an intercessor for certain requests (more on that in a bit).

Agents each contain their own neural networks to which data is mapped, and a reference to each piece of information is stored as metadata, to which "trust" values can be assigned reflecting how "sure" the agent is of something.

Agents contain references to other agents they have interacted with, along with metadata about each agent, including a rating for how much they trust them as a whole based on previous interactions, and how much they trust them for a specific information domain based on previous interactions. Domain trust will also slowly allow agents to become "experts" within certain domains as they become go-tos for other agents within those domains. This allows agents to learn broadly, but have proficiencies emerge as a byproduct of more attention being given to one subject over another, and this will vary from agent to agent depending on what they're exposed to and how their personal networks have evolved over time.

As an agent receives information, a number of things take place: it takes into account who gave it the information, how much it trusts that agent, how much it trusts that agent in that domain, how much trust the sending agent has placed on the information, and whether conflicting information exists within its own neural network; the receiving agent then determines whether to blindly trust the information, blindly distrust the information, or verify it with its peers.

Requests for verification are performed by finding peers who also know about this information, which is why a "language" will need to be used to allow for this interaction. I'm envisioning the language simply being a unique hash that can be translated to the received inputs that are used by the neural networks; whenever a new piece of information is received, the zeitgeist provisions a new "word" for it and updates a dictionary it maintains that is common to all agents within the community. When a word is passed between agents, if the receiving agent doesn't know the word, it requests the definition from the zeitgeist agent and then moves on to judging the information associated with the word.

When a verification request is made to peers, the same trust/distrust/verify evaluation is performed on the aggregate of responses, and if there is still doubt, but not enough doubt to dismiss the information entirely, the receiving agent can make a request to the zeitgeist. This is where I think the model gets interesting, but again it may be commonplace.

As agents age and die, rather than lose all the information they've collected, their state gets committed to the zeitgeist agent. Normal agents and the zeitgeist agent could be modelled relatively similarly, with these dead agents just acting as a different type of peer in an array. When requests are made to the zeitgeist agent, it can inspect the states of all past agents to determine if there is a trustworthy answer to return. If, after going through the trust/distrust/verify process, it's still in doubt, I'm imagining a network of these communities (because the model is meant to be distributed in nature) that can have the same request passed on to the zeitgeist agent of another community, in order to pull "knowledge" from other, perhaps more powerful, communities.

Once the agent finally has its answer about how much trust to assign to that information, if it conflicts with information received from other peers during this process, it can notify those peers that it has a different value for that information and inform them of the value, the trust it has assigned, and some way of mapping where this trust was derived from, so the agent being corrected can perform its own trust/distrust/verify process on the corrected information. This correction process is meant to make the system generally self-correcting, though bias can still present itself.

I'm picturing a cycle the agent goes through that includes phases of learning, teaching, reflecting, and procreating. Their lifespan and reproductive rates will be determined by certain values, including the amount of information they've acquired and verified, the amount of trust other agents have placed in them, and (this part I'm entirely unsure of how to implement) how much information they've determined a priori, which is to say that, through some type of self-reflection, they will identify patterns within their neural network, posit a "truth" from those patterns, and pass it into the community to be verified by other agents. There would also exist the ability to reflect on inconsistencies within their "psyche", or, put differently, to evaluate the trust values and make corrections as needed by making requests against the community to correct their data set with more up-to-date information.

Agents would require a single mate to replicate. Agent replication habits are based on status within the community (as determined by the ability to reason and the aggregate trust of the community in that agent), peer-to-peer trust, relationships (meaning the array of peers determines whom the agent can approach to replicate with), and hereditary factors that reward or punish agents who are performing above or below par. The number of offspring an agent is able to create will be determined at birth, perhaps with a degree of flexibility depending on events within its life, and would be known to the agent so it can plan to have the most optimized offspring by selecting or accepting from the best partners. There would likely also be a reward for sharing true information, to allow some branches to become pure conduits of information, moving it through the community. Because replication relies on trust and the ability to collect validated knowledge, as well as on finding the most optimal partner, lines of agents who are consistently wrong or unable to reflect and produce anything meaningful to the community will slowly die off as their pool of partners shrinks.

The patterns at first would be incredibly simple, but by sharing information between peers, as well as between extended networks of peers, they could become more and more complex over time with patterns being passed down from one generation of agent to the next via the zeitgeist agent so the entire community would be learning from itself, much like how we have developed as a species.
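
To make the core loop concrete, here is a minimal Python sketch of the trust/distrust/verify step described above (every name, number, and threshold is a made-up illustration of the idea, not an existing system):

    # Minimal sketch of the trust/distrust/verify step. All names and
    # thresholds here are hypothetical illustrations, not a working system.
    from dataclasses import dataclass, field

    TRUST, DISTRUST, VERIFY = "trust", "distrust", "verify"

    @dataclass
    class Agent:
        name: str
        beliefs: dict = field(default_factory=dict)       # word -> (value, confidence)
        peer_trust: dict = field(default_factory=dict)    # peer -> overall trust
        domain_trust: dict = field(default_factory=dict)  # (peer, domain) -> trust

        def receive(self, peer, domain, word, value, sender_confidence):
            overall = self.peer_trust.get(peer, 0.5)
            in_domain = self.domain_trust.get((peer, domain), overall)
            score = overall * in_domain * sender_confidence
            if word in self.beliefs and self.beliefs[word][0] != value:
                score *= 0.5                              # conflicting prior belief
            if score > 0.6:                               # blindly trust
                self.beliefs[word] = (value, score)
                return TRUST
            if score < 0.2:                               # blindly distrust
                return DISTRUST
            return VERIFY    # poll peers, then fall back to the zeitgeist agent

    # Example: a peer with high domain trust delivers a new "word"
    a = Agent("a", peer_trust={"b": 0.9}, domain_trust={("b", "plants"): 0.95})
    print(a.receive("b", "plants", "word:0x3f2a", "edible", 0.8))  # -> "trust"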

Thanks again

I look forward to any feedback or reading you would recommend. I'm thinking of developing a basic proof of concept, so feedback that could correct anything or help fill in some of the blanks would be a huge help (especially for the section about self-reflection and determining new truths from patterns a priori). Thanks again for doing an AMA. AI really does have world-changing possibilities, and I'm excited to see the progress that's made on it over the next few decades and longer.

bengoertzel9 karma

Regarding the social approach to AGI, it's interesting, but my intuition is that building a society of AGIs doesn't let you get away with making the individual AGIs any less sophisticated....

E.g. a society of simple neural net AGIs is going to still be simplistic and dumb...

Once you've created a really smart AGI (an OpenCog or whatever), then there may be benefit in creating a society of them and letting them learn from each other. But in that case, creating the society is going to be 1% of the work; the hard part will be engineering the AGIs in the first place....

bengoertzel3 karma

Some general reading on AGI is linked here: http://www.agi-society.org/resources/

cobaltcollapse8 karma

Say in a few years robots become so humanlike that they can somewhat integrate into society. Would you be for or against the idea of human-robot marriage?

bengoertzel11 karma

For myself ... well, I'm already happily married, and my wife seems not that open to polygamy.... So I may have to say no to the robot wife myself...

But for others, sure, why not?

Actually, marriage in its current form is an artifact of legacy human society and psychology, which will seem pretty archaic in the future to those of us who become superhuman superminds....

BUT some kind of partnership/ coupling / partial fusion between minds may still exist, in forms we can't yet understand or foresee...

oh_no_the_claw5 karma

How do AI researchers feel they can address the black box problem of human consciousness? Psychologists and neurologists have a very poor understanding of how the brain determines behavior. Some feel that the human brain is too limited to fully understand itself.

What can an AI researcher do to bridge the gap in knowledge? If the goal is to create an AI with human-like behavior how can that be a realistic goal without a requisite understanding of human behavior?

Secondly, how should AI researchers reconcile the rival schools of psychology---behaviorism vs cognitivism. They can't both be right. Do AI researchers pick a side or throw out psychology completely and start from scratch?

bengoertzel9 karma

As for cognitivism vs. behaviorism, let's remember that AGI is in the end an engineering pursuit. We're trying to build something....

It is not necessary to resolve all related philosophical and conceptual issues in order to build something workable....

Academics tend to stake out extreme positions and argue about them, as that is their culture and what they are paid to do. Building real systems, one can take the best insights from different academic subgroups and combine them pragmatically.

For instance, in practice reinforcement learning (which comes from the behaviorist tradition) works fine together with cognitive architectures (which come from the cognitivist tradition) ...
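
As a toy example of that kind of pragmatic mixing (all names hypothetical; this is not OpenCog code), here is a behaviorist-style tabular Q-learner packaged as one module that a cognitivist-style architecture could call into:

    # A behaviorist ingredient (tabular Q-learning) wrapped as one module of
    # a cognitivist-style architecture. Hypothetical illustration, not OpenCog.
    import random
    from collections import defaultdict

    class ActionSelectionModule:
        """The surrounding architecture calls choose() to act, learn() on feedback."""
        def __init__(self, actions, lr=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)                 # (state, action) -> value
            self.actions, self.lr = actions, lr
            self.gamma, self.epsilon = gamma, epsilon

        def choose(self, state):
            if random.random() < self.epsilon:          # occasional exploration
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            target = reward + self.gamma * best_next    # standard Q-learning update
            self.q[(state, action)] += self.lr * (target - self.q[(state, action)])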

bengoertzel6 karma

Let's distinguish AGI from "cognitive modeling".... In the latter one is concerned with making detailed models of human cognition, with a goal of understanding how the human mind works....

However, AGI has a different goal.... As an example, if we could create a robot that would get a degree from MIT (via physically going to all the same classes as human students), and then get a job as an MIT professor and do all the aspects of its job adequately -- I would consider this a big success of "human like" AGI, even if the internal mechanisms were not that similar to the human brain ... and even if this robot didn't display all the same cognitive and emotional peculiarities as human beings....

ItsAConspiracy5 karma

What's your take on Friendly AI? And how long do you think it'll take before we have self-improving AI?

bengoertzel10 karma

I would rather have AIs that are friendly than AIs that are nasty !!

bengoertzel11 karma

However, if you mean SIAI/MIRI's notion of "provably Friendly AI", I think that is a fanciful notion without much foundation in reality.

We don't understand the universe in which we and our AGIs are embedded that well, so even if someone miraculously came up with a proof that some AGI design would always play nicely even as it improved its own sourcecode and became superintelligent --- the premises of their proof might get invalidated by new discoveries about real life.

While those guys are struggling to grapple with the math of phantom, nearly infinitely powerful self-modifying AIs, others are going to build real thinking machines and obsolete the SIAI/MIRI daydreaming ;)

Not that there's anything wrong with daydreaming...

idiotball231 karma

So do you think it's not even worth trying to achieve a friendly AI?

bengoertzel8 karma

It's not worth spending much effort trying to make a provably Friendly AI in the SIAI/MIRI sense....

(Though even so, that may be worth more effort than lots of things society burns its $$ on, like making shinier jewelry and fancier cars, etc.)

It is certainly worth effort pursuing practical approaches to increasing the odds that our AGI systems are nice to people and other sentient beings...

Gutei4 karma

Thank you for taking the time out of your undoubtedly busy schedule to answer a few questions from us here.

  • What is/are the most influential areas of science outside of technology that help you in the designing of your artificial intelligence? I've been studying a lot of Anthropology lately within an academic summer camp for high school youths, and I'm curious as to the types of psychology/human biology (or other things) you would need to have knowledge of in order to design this.

  • Do you think that the "correct" design of a "childlike" AI could potentially teach us more about how we, as a species, work than the efforts of those actually studying our brains?

  • And what do you do when you are not working on the project that still keeps your mental attention (hobbies or research for pleasure)?

bengoertzel10 karma

I have been heavily influenced by the philosophy of mind, and also by all sorts of Western and Eastern metaphysical philosophy... and by cognitive science ... and by complex systems science / general systems theory ...

I guess that building childlike AGIs will teach us some new things about human intelligence; but in the case of a system like OpenCog that is pretty different from a brain internally, these lessons will be incomplete and need to be complemented by information from actual study of human beings...

Regarding hobbies and other interests, they are too numerous to list, but I tend to be outdoorsy and hike a lot, and enjoy composing/playing freaky music and reading literature.... And then there's the sex, drugs and rock'n'roll ... or whatever... ;p ...

thjhytjthjtrh4 karma

Did you see Dmitry Itskov's AMA? http://www.reddit.com/r/IAmA/comments/1ftun4/hello_my_name_is_dmitry_itskov_and_my_project_the/

What do you think of his work?

bengoertzel5 karma

I know Dmitry a bit F2F, though I didn't notice his AMA before...

I think it's great that, having amassed considerable wealth, he is devoting a significant fraction of it to his goal of putting his brain in a robot....

While that is not my precise personal aspiration, it's more interesting than what most Internet zillionaires choose to do with their money...

bengoertzel6 karma

I will be curious to see how Dmitry's world-view evolves over the next years/decades, as he has an interesting combination of "obsession with a certain form of mind uploading" and interest in traditional Eastern spiritual paths....

Im_Captain_Jack4 karma

Where do you think Robotics will be 50 years from now?

bengoertzel8 karma

I tend to agree with Kurzweil, Vinge etc. that there will be a Singularity type event this century...

Robotics, post this point, will probably cease to exist as a distinct area of endeavor ... and intelligence will be an aspect of the overall network of self-organizing, self-improving, self-redesigning nature/technology...

Im_Captain_Jack2 karma

Interesting. If you were to try and create a Singularity event yourself, how would you do it?

bengoertzel18 karma

I have been told that drinking a mixture of Robitussin PM, vodka and DMT allows one to create one's own private Singularity !!!

bengoertzel9 karma

But seriously ... the path to Singularity is the creation of AGI ... so I would keep doing what I'm already doing... trying to build superhuman thinking machines as best I know how...

No_Fruit_Juice3 karma

I love that you are doing this AMA!

My question to you is: What is your end goal with creating robots? I understand you want to create the smartest robot, but do you have a goal, or have something in mind that you want this robot to accomplish? (go to space, solve logic problems etc.)

Another question I had is: Do you think that robots are the next big science bubble or are they right behind nanotech?

Once again, thanks for the AMA!

bengoertzel6 karma

The end goal of my AGI work is the creation of dramatically superhuman general intelligence.... The super-mind can then figure out what comes next!

Earlier-stage than that, I would like to see AGI scientists helping us make scientific breakthroughs -- in AGI, in life extension biology, in nanotech, in energy production, etc. etc.

drhyver3 karma

Who would win a fight? Richard Feynman or Alan Turing? And why?

bengoertzel12 karma

it would be a tie, since they are both dead

iloveninjacats3 karma

Do you think one day I could have an I, Robot-style slave? Man, we live in hope.

But seriously, what do you think is potentially achievable in our lifetime?

bengoertzel4 karma

I think it would be funny if one day a robot made YOU its slave !!!

bengoertzel5 karma

But seriously -- I prefer to think of generally intelligent robots as our partners and helpers ;)

AbCynthia9562 karma

I've been chatting with human interface tools for as long as they've existed. It's disturbing to see the rise of the asshole bot. Elbot is the sole survivor of the Not An Asshole generation.

bengoertzel4 karma

That reminds me of the talking anus in William Burroughs' novel Naked Lunch 8-D

Mythrandia2 karma

I have heard of a model that describes AGI as emerging from a variety of different modules (e.g. an inference engine, a natural language processing device, a knowledge storage and retrieval system, and various other modules). How does your project stand with regard to creation of the modules necessary to the emergence of an AGI? Are some modules completed, whereas others are still far from being ready for deployment? Which modules are the most difficult to assemble? Where are you at in the stage of incrementally constructing an AGI? What are the hardest problems you're wrestling with in your current project?

bengoertzel3 karma

There are many models of AGI as involving different modules. At a very high level this is workable, but I think that the dynamics occurring inside the different modules must be synergetically coupled ... so the modules cannot really operate separately....

In our OpenCog system, we have a central dynamic knowledge store called the Atomspace, and then different "modules" feed data into the Atomspace, get data out of it, and process the data in different ways.... Most of the "modules" we need for human-level AGI exist in our system in some form, but in many cases they are way more simplistic than they need to be.... We have designs for how to make them sufficiently capable, but bringing these to life will still take a lot of work...
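
To give the flavor of that shape, here is a toy blackboard-style sketch: a shared store that separate modules read from and write to. The names are made up, and this is not the real Atomspace API.

    # Toy blackboard-style sketch: a shared store ("Atomspace") that modules
    # interact with only indirectly, through reads and writes. Made-up names;
    # not the real OpenCog API.

    class Atomspace:
        def __init__(self):
            self.atoms = {}                             # name -> truth/attention data

        def add(self, name, data):
            self.atoms[name] = data

        def query(self, predicate):
            return {k: v for k, v in self.atoms.items() if predicate(k, v)}

    class PerceptionModule:
        def __init__(self, space):
            self.space = space

        def step(self):                                 # feed data into the store
            self.space.add("percept:red-ball", {"truth": 0.9})

    class InferenceModule:
        def __init__(self, space):
            self.space = space

        def step(self):                                 # read, process, write back
            for name in self.space.query(lambda k, v: k.startswith("percept:")):
                self.space.add("inferred:" + name, {"truth": 0.8})

    space = Atomspace()
    for module in (PerceptionModule(space), InferenceModule(space)):
        module.step()
    print(space.atoms)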

hablo1 karma

Could you explain your approach to modelling the brain?

bengoertzel4 karma

I am not trying to model the human brain, but rather to build an AGI system with general intelligence roughly comparable to (and then exceeding) that of the human brain, via a human-like cognitive architecture wrapping up various clever computer science algorithms.

I have done some work modeling the human brain in the past (as part of a US government contract a couple years back), and my feeling was that our empirical knowledge of brain dynamics is still far too weak to enable us to create realistic models of the interesting aspects of what the brain does...

xsus361 karma

Do you think robots should be sentient beings or should they have some sort of "restriction" from thinking free willed thoughts? Also, did you read Robopocalypse by Daniel Wilson? If so, what did you think?

bengoertzel2 karma

Didn't read Robopocalypse ... but I think robots WILL and SHOULD be sentient beings, yeah....