KennethStanley
Highest Rated Comments
KennethStanley · 24 karma
Here's some cool stuff done with neuroevolution:
The most accurate measurement to date of the mass of the top quark was computed by a large team at the Tevatron collider using NEAT - http://www-cdf.fnal.gov/physics/preprints/cdf9235_dil_mtop_nn.pdf
A group at Georgia Tech produced some really nice controllers for bicycle stunts: http://www.cc.gatech.edu/~jtan34/project/learningBicycleStunts.html (and they claimed neuroevolution worked much better than RL for this purpose).
Matthew Hausknecht's work on Atari video games learned by HyperNEAT was pioneering: https://www.cs.utexas.edu/~mhauskn/projects/atari/movies.html DeepMind later beat most of these results, but you have to consider that Matthew was just one person doing a class project. DeepMind's original paper (http://arxiv.org/pdf/1312.5602v1.pdf) actually cites Matthew's results. I think a lot more could be accomplished here with neuroevolution given sufficient resources.
And to choose one thing from my own group, I think http://picbreeder.org/ is really astonishing. Who would have thought people could evolve such meaningful imagery in a matter of a few dozen generations? It's the exact opposite of the big computation/big data trend now in deep learning: it's tiny computation but with really surprising results. It tells us something about encoding, about objectives (or their lack thereof), and about what's possible with the right kind of evolutionary setup. In short, it's important not because its results are better or worse than something else's, but because they taught us so much.
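To give a flavor of the encoding behind Picbreeder, here is a rough sketch of the CPPN idea; the particular composition and weights below are made up for illustration (the real system evolves the network's topology and weights with NEAT rather than fixing a formula):

```python
import math

# Illustrative CPPN-style pattern generator (not Picbreeder's actual code):
# a small function composed of regular primitives, queried at every (x, y)
# coordinate to paint an image. Evolving the composition is what produces
# Picbreeder's structured imagery.
def cppn(x, y):
    # Gaussian of x gives left-right symmetry; sine of y gives repetition.
    d = math.sqrt(x * x + y * y)  # radial input encourages circular motifs
    return math.tanh(math.exp(-x * x) + math.sin(4.0 * y) - d)

width, height = 32, 32
image = [[cppn(2.0 * c / width - 1.0, 2.0 * r / height - 1.0)
          for c in range(width)] for r in range(height)]
# Thresholding the activations yields a crude black-and-white pattern.
for row in image:
    print(''.join('#' if v > 0.0 else '.' for v in row))
```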
Jean-Baptiste Mouret and Jeff Clune's recent results on evolving controllers for robots with various broken legs are also really cool and recently appeared in Nature: http://www.nature.com/nature/journal/v521/n7553/full/nature14422.html
It uses a new neuroevolution algorithm (related to novelty search) called MAP-Elites.
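For a sense of how MAP-Elites works, here is a compressed sketch of its main loop; `mutate` and `evaluate` are placeholders you would supply, not the paper's code (in the Nature work, behaviors describe how a controller uses the robot's legs, and the map is consulted when the robot is damaged):

```python
import random

# Sketch of the MAP-Elites loop. `evaluate` returns a behavior descriptor
# (which selects a grid cell) plus a fitness score; each cell keeps only
# its best-performing "elite", so the result is a whole map of diverse,
# high-quality solutions rather than a single champion.
def map_elites(seed_genomes, mutate, evaluate, iterations=10000):
    archive = {}  # discretized behavior descriptor -> (fitness, genome)

    def insert(genome):
        behavior, fitness = evaluate(genome)
        cell = tuple(round(b, 1) for b in behavior)  # one grid cell per behavior
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)  # keep the best elite per cell

    for g in seed_genomes:
        insert(g)
    for _ in range(iterations):
        parent = random.choice(list(archive.values()))[1]  # sample an elite
        insert(mutate(parent))
    return archive
```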
I also liked the CPPN-based robot morphologies evolved by Josh Auerbach and Josh Bongard at http://www.cs.uvm.edu/~jbongard/papers/2014_PLOSCompBio_Auerbach.pdf . And Nick Cheney also did great work on evolving soft robots through CPPNs: http://creativemachines.cornell.edu/soft-robots
Finally, check out all the Sodaracers evolved in a single run of the quality-diversity-generating NSLC algorithm: http://eplex.cs.ucf.edu/ecal13/demo/PCA.html
There are too many examples I'd like to list but if I keep going I won't get to all the other questions!
KennethStanley · 16 karma
Or maybe I'm such an advanced AI that I'm not even sure if I'm really me! That would be an interesting way to pass the Turing test - convincing yourself that you're human when you're not.
KennethStanley · 16 karma
Great question. There are a few ways the idea of searching without objectives (which is described in most detail in our new book) has affected me and my lab. One of them is that I am much more confident about pursuing something simply because it's interesting. In the past I would have worried more about where it leads, or whether it really leads to AI, but now I have a more solid understanding that the best ideas are often those with the most mysterious destinations. Who would have thought Picbreeder would ultimately lead to novelty search, let alone any useful algorithmic insight at all? Yet that is exactly what happened. Some people said when we were first building Picbreeder that it wasn't clear what it would yield that was useful (other than a bunch of pretty pictures). But it led to very deep insights about search in general, which then led to novelty search. So it's a good thing we built it, even though we didn't know where it might lead. It just felt at the time like watching hundreds of users traversing a large search space would be interesting. And that turned out to be correct.
Another impact of the idea is that I'm more open to diverging from the current path of the group, because when you search without an objective there really is no one ideal path. The challenge with this attitude is that it isn't necessarily shared by the whole AI or ML community, so we have to be careful how we frame and justify our pursuits to those who still live by objectives.
KennethStanley · 13 karma
That's a nice connection: it's true that dropout is a kind of diversity generator in a single network. But really it's for a different purpose - with dropout you're trying to get a diversity of representations in service of a single behavior (like a classifier of one type). In quality diversity (QD) you are aiming for a whole bunch of different behaviors. For example, you might say: return to me all the possible walking gaits for this quadruped robot. Dropout doesn't offer you that kind of diversity, but QD returns an archive of all kinds of alternatives.
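To make the contrast concrete, here is a toy illustration (the archive contents and behavior descriptors are hypothetical, not from a real experiment):

```python
import random

# Dropout perturbs one network's internal representation in service of a
# single behavior: units are randomly zeroed (with inverted scaling) so the
# network can't rely on any one feature.
def dropout(activations, p=0.5):
    return [0.0 if random.random() < p else a / (1.0 - p) for a in activations]

# A QD run instead returns an archive keyed by behavior descriptor,
# e.g. (gait frequency, duty cycle) -> controller, so you can ask for
# *all* the walking gaits it found:
qd_archive = {(0.5, 0.3): "hopping controller",
              (1.2, 0.6): "trotting controller",
              (2.0, 0.8): "sprinting controller"}
all_gaits = list(qd_archive.values())  # every alternative, not one solution
```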
KennethStanley · 28 karma
Deep learning has impacted almost anyone who works with neural networks, so certainly it impacts the thinking in neuroevolution as well. But neuroevolution is an interesting case because of its unconventional position between neural and evolutionary approaches, so it's perhaps not as clear how it should respond. I think in general thinking of neuroevolution as a direct competitor to deep learning is probably wrong. Rather, they should be complementary. After all, brains evolved, but brains also learn. We are seeing progress on both sides now, and deep learning mainly speaks to the learning part.

More fundamentally, I like to think of neuroevolution as a different playground. In neuroevolution, you get to think about things that you don't think about in deep learning, like indirect encodings (such as the CPPNs in HyperNEAT) and diversity (like in novelty search). These ideas relate to phenomena in nature like DNA or evolutionary divergence. But they also inform our thinking about neural networks in general because they open up new frontiers. For example, we are seeing now in neuroevolution a new class of algorithms called "quality diversity algorithms" (http://eplex.cs.ucf.edu/papers/pugh_gecco15.pdf) that focus on collecting a wide diversity of high-quality solutions (something very natural for evolution), more like a repertoire than a single solution. Deep learning simply does not currently offer algorithms that do that, focusing instead on single solutions.

It is interesting to consider the merger of the power of both approaches, whereby you have depth and big data in one case, but divergence and quality diversity in the other. Or architectures evolved through neuroevolution but optimized through deep learning. There are so many possible synergies here that it's too bad these communities are not historically in better contact.
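Since novelty search comes up repeatedly here, a rough sketch of its core metric may help; this is a common formulation (mean distance to the k nearest neighbors in behavior space), though details vary across implementations:

```python
import math

# Sketch of the novelty metric at the heart of novelty search. A "behavior"
# here is a plain coordinate tuple, e.g. an agent's final position in a maze.
def novelty(behavior, archive, k=15):
    if not archive:
        return float("inf")  # the first behavior is maximally novel
    dists = sorted(math.dist(behavior, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))  # sparse regions score high

# Selection rewards high novelty instead of progress toward any goal, and
# sufficiently novel behaviors are added to the archive for future comparisons.
```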
As an example of one of these synergies, our group recently published at AAAI a neuroevolution algorithm that collects novel feature detectors through novelty search, which is an alternative to conventional unsupervised feature learning approaches like autoencoders. You can see it at http://eplex.cs.ucf.edu/papers/szerlip_arxiv1406.1833v2.pdf
Added later: For completeness I wanted to add another interesting issue we see in neuroevolution that is relevant to deep learning. We've noticed that the representation of a solution is almost always better if it's discovered non-objectively. In other words, let's say you want to learn a controller for an agent, for example to get through a maze (it could be any task, including classification). If we learn the solution in neuroevolution with a conventional objective, which means the fitness function is set up to reward a higher score (the objective), then the solution tends to be a lot bigger and more complicated than if we learn it without an objective (such as through novelty search, which is not rewarded for approaching the solution). For those less familiar with neuroevolution, recall that we can evolve the size and architecture of the solution, so the network structures are generally not fixed.

It seems that simply by setting an objective and moving towards it, you are asking for a worse representation. This makes sense if you consider that moving towards an objective is actually a pretty ad hoc thing to do, in the sense that you "lock in" any slight (ad hoc) change or expansion in structure that makes even the most trivial dent in the objective function. So you are basically asking it to pile hack on top of hack on top of hack, which leads to a big messy network, even if it solves the problem. On the other hand, if you solve a problem non-objectively, then you are in effect locking in only changes that yield holistic effects on overall behavior, so you end up accumulating a kind of hierarchy of fundamental architectural effects, which comes out in the end looking very different when it hits on a functional structure. Also for those less familiar, these non-objective algorithms are surprisingly effective at solving problems, often more reliably than when you search directly for the objective.
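The two reward schemes can be made concrete with a hypothetical maze task (this sketch reuses the novelty metric from above; both functions plug into the same evolutionary loop, and only the selection pressure changes):

```python
import math

# Hypothetical maze setup contrasting the two reward schemes discussed above.
GOAL = (10.0, 10.0)

def objective_fitness(final_position):
    # Objective-driven: reward proximity to the goal, locking in any change
    # that makes even a trivial dent in the distance.
    return -math.dist(final_position, GOAL)

def non_objective_fitness(final_position, behavior_archive, k=15):
    # Novelty-driven: reward being far from previously seen end positions,
    # with no reference to the goal at all.
    if not behavior_archive:
        return float("inf")
    dists = sorted(math.dist(final_position, b) for b in behavior_archive)
    return sum(dists[:k]) / min(k, len(dists))
```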
What does it have to do with deep learning? It's just interesting to consider that deep learning is fundamentally objectively-driven. You are always trying to minimize some kind of objective function. Even in unsupervised learning (within the realm of deep learning), like an autoencoder, you are trying to minimize the reconstruction error, which is just as objective as anything else. The conclusions here are only speculative and we can't say anything definitive, but it's intriguing that in neuroevolution we know that solving something objectively has some nasty side effects on representation. Humans, on the other hand, often explore their world (say as babies or toddlers) without very much of an objective. The results from neuroevolution hint that this kind of exploratory non-objective behavior may be important for good representations to develop.
Deep learning has not yet digested this issue, or even really considered it. And we've seen recently in deep learning that there are some perhaps surprising, or at least previously unrecognized, issues with "fooling images" that can deceive seemingly very intelligent networks (see http://www.evolvingai.org/fooling). These hint that the underlying representation, while certainly an impressive solution, may not be as elegant overall as we hope. The lessons from non-objective search offer an interesting alternative window into thinking about these issues.