Highest Rated Comments


Kaixhin (11 karma)

Growing computational resources are often cited as a major reason for the resurgence of neural networks. Google has created cutting-edge models in supervised learning (Going Deeper with Convolutions), unsupervised learning (Building High-level Features Using Large Scale Unsupervised Learning) and reinforcement learning (Massively Parallel Methods for Deep Reinforcement Learning) by utilising its vast resources. In what ways or areas do you think small research labs can be competitive in furthering deep learning research?

Kaixhin (4 karma)

Probability, statistics, and optimisation for starters.

Kaixhin (3 karma)

There is actually a literature on this (see the Wikipedia article on "hyperparameter optimization"). Whilst grid search is methodical, random search may be just as good (Random Search for Hyper-Parameter Optimization). In theory, Bayesian optimisation (Practical Bayesian Optimization of Machine Learning Algorithms) gives better results with fewer evaluations, but it's harder to set up.
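To make random search concrete, here's a minimal sketch in Python. `train_and_evaluate` is a hypothetical stand-in for an actual training run, and the sampling distributions are just plausible defaults, not recommendations from the papers above:

```python
import numpy as np

def train_and_evaluate(lr, dropout, hidden):
    # Hypothetical stand-in for a real training run; returns a fake
    # validation score that happens to peak near lr=1e-3, dropout=0.2.
    return -abs(np.log10(lr) + 3) - abs(dropout - 0.2)

rng = np.random.default_rng(0)
best_score, best_params = -np.inf, None

for _ in range(20):  # 20 random trials
    params = {
        "lr": 10 ** rng.uniform(-5, -1),            # log-uniform over [1e-5, 1e-1]
        "dropout": rng.uniform(0.0, 0.5),           # uniform
        "hidden": int(rng.choice([128, 256, 512])), # discrete choice
    }
    score = train_and_evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```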

But these tuning techniques have parameters of their own, and you may not be sure where to even start, so in practice it's still a pain (especially when a single trial of training a large neural network can take days).
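As an illustration of that point, here's a sketch of Bayesian optimisation using scikit-optimize's gp_minimize (one library among several; assumed installed via pip install scikit-optimize). The objective is a hypothetical stand-in, and notice the method's own choices that still have to be made: the search space bounds, the acquisition function, and the evaluation budget:

```python
import math
from skopt import gp_minimize
from skopt.space import Real

def objective(params):
    # Hypothetical stand-in for a training run; gp_minimize minimises,
    # so return a loss (fake here, with its minimum near lr=1e-3).
    lr, dropout = params
    return abs(math.log10(lr) + 3) + abs(dropout - 0.2)

space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="lr"),  # bounds: a choice
    Real(0.0, 0.5, name="dropout"),                    # bounds: a choice
]

# acq_func and n_calls are further choices the method itself needs.
result = gp_minimize(objective, space, n_calls=20, acq_func="EI", random_state=0)
print(result.x, result.fun)  # best hyperparameters found and their loss
```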

Kaixhin (3 karma)

Google did a large-scale empirical study (The Effects of Hyperparameters on SGD Training of Neural Networks), but basically there's no magic formula for choosing the learning rate alpha (yet).
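In the absence of a formula, a common heuristic (my suggestion, not from the paper above) is to sweep alpha on a log scale with short training runs and keep the value that converges best. A minimal sketch, with `training_loss` as a hypothetical stand-in for a short run:

```python
import numpy as np

def training_loss(alpha):
    # Hypothetical stand-in for a short training run; returns the final
    # loss, or NaN if training diverged (fake curve, best near 1e-4).
    return float("nan") if alpha > 1e-2 else (np.log10(alpha) + 4) ** 2

alphas = np.logspace(-6, -1, num=6)  # 1e-6, 1e-5, ..., 1e-1
losses = [training_loss(a) for a in alphas]

# Keep the alpha with the lowest loss among the runs that didn't diverge.
best_loss, best_alpha = min(
    (l, a) for l, a in zip(losses, alphas) if not np.isnan(l)
)
print("best alpha:", best_alpha)
```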