- Genome Hackers Show No One’s DNA Is Anonymous Anymore | WIRED
- Meet the Scientists Bringing Extinct Species Back From the Dead
- E.P.A. to Disband a Key Scientific Review Panel on Air Pollution – The New York Times
- Neoliberalism has conned us into fighting climate change as individuals | Martin Lukacs | Environment | The Guardian
- ExxonMobil CEO Depressed After Realizing Earth Could End Before They Finish Extracting All The Oil
- Building a more reliable infrastructure with new Stackdriver tools and partners | Google Cloud Blog
- Mouse pups born from same-sex parents: Get the facts
- AVR-IoT | Microchip Technology
- Twitter
RT @ShannonVallor: "Amazon’s system taught itself that male candidates were preferable." No. This is not what happened. Amazon taught…
- Meet TransmogrifAI, Open Source AutoML That Powers Einstein Predictions – YouTube
- [1810.01075] Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Random Matrix Theory (RMT) is applied to analyze weight matrices of Deep Neural Networks (DNNs), including both production-quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of Self-Regularization. The empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally regularized statistical models, even in the absence of exogenously specified traditional forms of explicit regularization. Building on relatively recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. These phases can be observed during the training process as well as in the final learned DNNs. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a "size scale" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. It results from correlations arising at all size scales, which arise implicitly from the training process itself. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomenon, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. This demonstrates that, all else being equal, DNN optimization with larger batch sizes leads to less well implicitly regularized models, and it provides an explanation for the generalization gap phenomenon.
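  The abstract's central diagnostic is the empirical spectral density (ESD) of each layer's weight matrix. As a minimal sketch (not the authors' code; the layer shape and NumPy setup here are arbitrary choices for illustration), the ESD can be computed from the eigenvalues of the correlation matrix X = WᵀW / N and compared against the Marchenko-Pastur bulk expected of an untrained random matrix:

  ```python
  import numpy as np

  def empirical_spectral_density(W):
      """Eigenvalues of the layer correlation matrix X = (1/N) W^T W.

      W : (N, M) weight matrix of a fully connected layer, with N >= M.
      Returns the M eigenvalues of X in descending order; their histogram
      is the empirical spectral density (ESD) the abstract refers to.
      """
      N, _ = W.shape
      X = W.T @ W / N               # M x M correlation matrix
      return np.sort(np.linalg.eigvalsh(X))[::-1]

  # Illustration on a randomly initialized layer (variance-1 Gaussian entries):
  # at initialization the ESD should stay inside the Marchenko-Pastur bulk,
  # whose upper edge is (1 + sqrt(Q))^2 for aspect ratio Q = M / N.
  # Eigenvalues far beyond that edge in a trained model are the kind of
  # heavy-tailed signature the paper associates with self-regularization.
  rng = np.random.default_rng(0)
  N, M = 1024, 512
  W = rng.standard_normal((N, M))
  eigs = empirical_spectral_density(W)
  mp_edge = (1 + np.sqrt(M / N)) ** 2
  print(f"largest eigenvalue: {eigs[0]:.2f}  (Marchenko-Pastur edge ~ {mp_edge:.2f})")
  ```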
- Twitter
RT @kaggle: With just 30 days left to compete in the Airbus Ship Detection Challenge, there’s no better time to get started! 🛳W…
- Twitter
Florida banned state workers from using the term ‘climate change’
- Florida banned state workers from using term ‘climate change’ – report | US news | The Guardian
- Twitter
RT @Interior: A rare & wonderful sight: a muskox under a rainbow at Cape Krusenstern National Monument #Alaska
Digest powered by RSS Digest