Highest Rated Comments
colah (35 karma)
Greg: Deep learning is tearing through “machine perception” applications -- those tasks that involve seeing, hearing, and understanding sensory input. I suspect the next big opportunities are in understanding sequences, relationships, concepts, language, and possibly even programs + algorithms themselves. As far as achieving AI goes, what we have to date might be one baby step closer, but what deep networks are able to do today is still remedial by any meaningful measure of intelligence.
colah (21 karma)
Chris: I was really really surprised by DeepDream. I think everyone was.
http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html
When I saw the “dream” images Alex made, I was shocked that they were produced by a neural network, let alone by such a simple procedure. (Naively, it seems like the procedure Alex uses should just cause the image to explode.)
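For readers curious what “such a simple procedure” means: the published description amounts to gradient ascent on the input image, maximizing the activations of a chosen layer so the network exaggerates whatever it already sees there. Below is a minimal PyTorch sketch of that idea, with torchvision’s GoogLeNet standing in for the Inception model and the layer choice arbitrary (both are assumptions, and the octave/jitter tricks from the post are omitted):

```python
# Minimal sketch of a DeepDream-style update: gradient ascent on the
# input image to amplify whatever a chosen layer already responds to.
import torch
import torchvision.models as models

model = models.googlenet(pretrained=True).eval()  # assumed stand-in for "Inception"
for p in model.parameters():
    p.requires_grad_(False)

# Capture activations at an intermediate layer (the choice is arbitrary).
activations = {}
def hook(module, inp, out):
    activations["target"] = out
model.inception4c.register_forward_hook(hook)

def dream_step(image, lr=0.05):
    image = image.clone().requires_grad_(True)
    model(image)
    # Objective: the L2 norm of the layer's activations. Ascending its
    # gradient exaggerates existing patterns rather than blowing the
    # image up, which is the counterintuitive part.
    loss = activations["target"].norm()
    loss.backward()
    with torch.no_grad():
        # Normalizing the gradient keeps the step size stable.
        image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()

image = torch.rand(1, 3, 224, 224)  # or a preprocessed photograph
for _ in range(20):
    image = dream_step(image)
```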
I was also really surprised when my own experiments with visualizing “what does a neural network think X looks like?” started producing unexpected additions to the object. Barbells have muscular arms lifting them. Balance beams have legs sprouting off the top. Lemons have knives cutting through them. In retrospect, it isn’t that surprising that the networks learned strange understandings of what these objects are, since they only have example images to learn from, but… still very surprising at the time.
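The “what does a neural network think X looks like?” images come from the same kind of gradient ascent, just with a different objective: maximize the logit of a chosen class. A minimal sketch under the same assumptions as above (torchvision GoogLeNet as a stand-in, class index illustrative, and the regularization that makes the images legible omitted):

```python
import torch
import torchvision.models as models

model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

def class_step(image, class_idx, lr=0.05):
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Ascend the gradient of one class score with respect to the pixels.
    logits[0, class_idx].backward()
    with torch.no_grad():
        image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()

image = torch.rand(1, 3, 224, 224)        # start from noise
for _ in range(200):
    image = class_step(image, class_idx=455)  # some ImageNet class; index illustrative
```

The barbell-with-arms effect falls out of this: the optimizer is free to add anything that raises the class score, and if barbells only ever appeared with arms attached in the training data, arms raise the score.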
colah (21 karma)
Greg: Their potential? About –70 mV. (Sorry, I’m an ex-neuroscientist, so I can’t resist the pun.)
But in all seriousness, the human brain is obviously an amazing learning machine, and it also happens to be a spiking neural network. That said, I used to do research in artificial spiking neural networks and didn’t have a lot of success. I think the problem is that we haven’t yet worked out the math to do ML effectively on spiking models. Ultimately, most ML today comes down to calculus and calculating derivatives... and since spiking models aren’t generally differentiable, our usual techniques don’t work. We also don’t know for sure why the human brain uses spikes. Is it for energetic reasons? For better information transmission along axons? For some deep computational reason? There are a lot of great hypotheses, but it’s an undecided question in neuroscience, and so a hard thing to build an engineered ML system around.
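A minimal sketch of the differentiability problem Greg describes: spike generation is a hard threshold, a step function whose derivative is zero almost everywhere (and undefined at the threshold), so backpropagation gets no learning signal through it. The “surrogate gradient” at the end is one workaround explored later in the spiking-network literature, not something from this answer:

```python
import torch

# A spike is typically modeled as a hard threshold on membrane potential:
# spike = 1 if v >= threshold else 0.
v = torch.tensor([-0.3, 0.1, 0.7], requires_grad=True)
spikes = (v >= 0.5).float()   # non-differentiable thresholding
# spikes.sum().backward() would fail here: the comparison detaches the
# autograd graph, so no gradient ever reaches v.

# Workaround (later literature): keep the hard threshold on the forward
# pass, but pretend it was a steep sigmoid on the backward pass.
class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of sigmoid(10 * (v - threshold)) as a smooth proxy
        # for the step function's (useless) true derivative.
        sig = torch.sigmoid(10.0 * (v - ctx.threshold))
        return grad_out * 10.0 * sig * (1 - sig), None

spikes = SurrogateSpike.apply(v, 0.5)
spikes.sum().backward()
print(v.grad)  # nonzero near the threshold, so learning can proceed
```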
colah (39 karma)
Greg: The core libraries we use at Google are written in C++ for speed, but we often use Python as a convenient configuration language for constructing and training networks. Look for more about our tools in the coming months.
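The tools themselves weren’t public at the time of this answer, so here is a hypothetical sketch of the pattern Greg describes, with all names invented: Python acts purely as a configuration language that declares the network, while a compiled C++ engine owns the actual compute.

```python
# Hypothetical illustration (names invented): Python builds a declarative
# description of the network; a C++ engine runs the training loop.
import json

graph = {
    "layers": [
        {"type": "conv",  "filters": 64, "kernel": 3},
        {"type": "relu"},
        {"type": "pool",  "kind": "max", "size": 2},
        {"type": "dense", "units": 10},
    ],
    "optimizer": {"type": "sgd", "lr": 0.01},
}
config = json.dumps(graph)

# A C++ engine (not shown) would parse this description once and run the
# training loop entirely in native code, e.g. via a thin binding:
#   import fastnet            # hypothetical C++ extension module
#   fastnet.train(config)
```

The appeal of the split is that the slow, expressive language touches only graph construction, while every inner-loop operation stays in optimized native code.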