MyNatureIsMe
Highest Rated Comments
MyNatureIsMe · 3 karma
NNs are all about finding structure in data. Categories (in category theory) are all about finding structure in mathematical concepts. I've already seen this blog post: http://colah.github.io/posts/2015-09-NN-Types-FP/ which shows some nice ideas of what might one day be possible. Are there active projects going down this path yet? That is, projects trying to formulate neural networks in category-theoretic terms? You could then possibly even do weird stuff like "2-NNs", which would be "neural networks of neural networks". I have no idea what those would look like, but I'd guess they could basically reason about neural network architectures, finding ways in which architectures are interrelated, and perhaps even solve for "optimal" NNs for a given problem, for a suitable definition of optimality.
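[To make the compositional view from the linked colah post concrete, here is a minimal sketch, not anything from the thread: layers treated as plain functions ("morphisms") and a network as their composition, so reasoning about architectures becomes reasoning about composable pieces. All names here are illustrative.]

```python
import numpy as np

def dense(w, b):
    """A layer is just a function from arrays to arrays (a 'morphism')."""
    return lambda x: np.tanh(w @ x + b)

def compose(*layers):
    """A sequential architecture is the composition of its layers."""
    def net(x):
        for layer in layers:
            x = layer(x)
        return x
    return net

rng = np.random.default_rng(0)
layer1 = dense(rng.normal(size=(4, 3)), np.zeros(4))  # R^3 -> R^4
layer2 = dense(rng.normal(size=(2, 4)), np.zeros(2))  # R^4 -> R^2
net = compose(layer1, layer2)                         # R^3 -> R^2
print(net(np.ones(3)))
```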
MyNatureIsMe · 1 karma
Perhaps there could be some notion of "background independence" in representation objects for machine learning? (i.e. some underlying structure of which a given feature vector is only one representation among infinitely many equally good ones, where ML would then work with those objects directly instead of with feature vectors.)
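[One way to read this, my gloss rather than anything stated above: a representation defined only up to a symmetry, e.g. feature vectors equivalent under orthogonal transformations, where anything computed downstream should depend only on the equivalence class. A toy sketch under that assumption:]

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)            # one concrete feature vector

# Any orthogonal Q yields an "equally good" representation Q @ x:
q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
x_alt = q @ x

# A quantity that depends only on the underlying object, not on the
# chosen representative, must be invariant under the symmetry:
print(np.linalg.norm(x), np.linalg.norm(x_alt))  # equal up to float precision
```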
MyNatureIsMe · 1 karma
Also: what about "Deep Dreaming" for videos, both for interpolating between frames and for extrapolating beyond them? (I think I have seen some really nice work on interpolating between frames, although that process was, I believe, entirely unrelated to "Deep Dreaming", since there the interpolated frames were the intended output.)
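[As a toy illustration of the interpolation idea, a sketch only: blend two frames in a learned code space rather than in pixel space, then decode the in-between codes. The encode/decode pair below is a hypothetical linear stand-in for a trained autoencoder, not any method referenced in the question.]

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 16))               # stand-in "encoder" weights
encode = lambda frame: W @ frame
decode = lambda code: np.linalg.pinv(W) @ code  # lossy toy inverse

frame_a = rng.normal(size=16)
frame_b = rng.normal(size=16)

# In-between frames: interpolate in code space, then decode.
for t in np.linspace(0.0, 1.0, 5):
    code = (1 - t) * encode(frame_a) + t * encode(frame_b)
    print(round(t, 2), decode(code)[:3])   # first few 'pixels' of the blend
```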
MyNatureIsMe · 1 karma
Do you have some ideas in mind for how to model attention in Neural Networks? And what are the greatest expected benefits from such a model?
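[For reference, one common way attention is modeled, a minimal numpy sketch rather than anything the question presumes: each value is weighted by how well its key matches a query, giving a soft, differentiable lookup.]

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(queries, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores)     # one distribution over slots per query
    return weights @ values       # soft, differentiable 'lookup'

rng = np.random.default_rng(3)
q = rng.normal(size=(1, 4))      # what the network is 'looking for'
k = rng.normal(size=(6, 4))      # descriptions of 6 memory slots
v = rng.normal(size=(6, 4))      # contents of those slots
print(attention(q, k, v))
```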
MyNatureIsMe · 5 karma
Do such statistical methods not exist in principle or have you just not found any yet?