- This Is Not Propaganda by Peter Pomerantsev review – dispatches from the war on truth – The Guardian
A timely volume of analysis and memoir shows how populism is destabilising democracy and reshaping our sense of normality
- Athleisure, barre and kale: the tyranny of the ideal woman – The Guardian
- Why Trump Supporters Hate Being Called Racists – The Atlantic
- I am an uppity immigrant. Don’t expect me to be ‘grateful.’ – The Washington Post
- The extraction of microplastics from water using ferro-fluids – YouTube
- Twitter
RT @chelseyhotel: When the motel adjoins the bait shop you get
- ‘It was terrifying’: Black Chadds Ford couple left shaken by white Pa. trooper’s alleged misconduct
RT @BrentNYT: Handcuffed for sitting while black in your own driveway – in front of your own, upscale home.
via @phillyinquirer
- arXiv Vanity – Read academic papers from arXiv as web pages
- Mexican Police Kill Migrant in Front of His Daughter as He Was Trying to Cross to the U.S.
- The World According to Yaphet Kotto – InsideHook
- Zack Akil: Safer Cycling with an Edge TPU watching your back | PyData London 2019 – YouTube
- EdgeTPU with Keras
- [1907.03118v1] Fast Universal Style Transfer for Artistic and Photorealistic Rendering
Universal style transfer is an image editing task that renders an input content image using the visual style of arbitrary reference images, covering both artistic and photorealistic stylization. Given a pair of images as the source of content and the reference of style, existing solutions usually first train an auto-encoder (AE) to reconstruct the image using deep features, and then embed pre-defined style transfer modules into the AE reconstruction procedure to transfer the style of the reconstructed image by modifying the deep features. While existing methods typically need multiple rounds of time-consuming AE reconstruction for better stylization, our work intends to design novel neural network architectures on top of the AE for fast style transfer with fewer artifacts and distortions, all in one pass of end-to-end inference. To this end, we propose two network architectures, named ArtNet and PhotoNet, to improve artistic and photorealistic stylization respectively. Extensive experiments demonstrate that ArtNet generates images with fewer artifacts and distortions than the state-of-the-art artistic transfer algorithms, while PhotoNet improves photorealistic stylization by creating sharp images that faithfully preserve the rich details of the input content. Moreover, ArtNet and PhotoNet achieve a 3x to 100x speed-up over the state-of-the-art algorithms, which is a major advantage for large content images.
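For a sense of the mechanics, here is a toy PyTorch sketch of the AE-plus-transfer-module pattern the abstract describes, using AdaIN (adaptive instance normalization) as the pre-defined transfer module. This is not the paper's ArtNet or PhotoNet; the tiny encoder/decoder and the single-pass wiring are illustrative stand-ins.

```python
# Toy AE-plus-transfer-module sketch: encode content and style, swap feature
# statistics via AdaIN, decode once. Not the paper's ArtNet/PhotoNet.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Minimal encoder/decoder standing in for a pretrained VGG auto-encoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

def adain(content_feat, style_feat, eps=1e-5):
    """Match the per-channel mean/std of the content features to the style's."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

ae = TinyAE()
content = torch.rand(1, 3, 256, 256)
style = torch.rand(1, 3, 256, 256)
# Encode both images, swap in the style statistics, decode in a single pass.
stylized = ae.decoder(adain(ae.encoder(content), ae.encoder(style)))
print(stylized.shape)  # torch.Size([1, 3, 256, 256])
```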
- Fast Universal Style Transfer for Artistic and Photorealistic Rendering | DeepAI
- ResNext WSL | PyTorch
ResNext models trained with billion-scale weakly-supervised data.
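Loading one of these models is a one-liner through PyTorch Hub. A quick sketch of the documented loading pattern, with a random tensor standing in for a properly normalized ImageNet image:

```python
# Load the smallest WSL ResNeXt variant from PyTorch Hub; the larger models
# follow the same naming pattern (resnext101_32x16d_wsl, _32x32d_wsl, ...).
import torch

model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
model.eval()

# ImageNet-style inference on a dummy batch (real inputs should be normalized).
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000]) -- one logit per ImageNet class
```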
- PyTorch opens hub for hosting ML models • DEVCLASS
- [1902.01894] A Generalized Framework for Population Based Training
Population Based Training (PBT) is a recent approach that jointly optimizes neural network weights and hyperparameters, periodically copying the weights of the best performers and mutating hyperparameters during training. Previous PBT implementations have been synchronized glass-box systems. We propose a general, black-box PBT framework that distributes many asynchronous "trials" (a small number of training steps with warm-starting) across a cluster, coordinated by the PBT controller. The black-box design makes no assumptions about model architectures, loss functions, or training procedures. Our system supports dynamic hyperparameter schedules to optimize both differentiable and non-differentiable metrics. We apply our system to train a state-of-the-art WaveNet generative model for human voice synthesis. We show that our PBT system achieves better accuracy, less sensitivity, and faster convergence than existing methods, given the same computational resources.
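A single-machine sketch of the controller/trial split the abstract describes; the names and the fake scoring function are illustrative, not the paper's API:

```python
# Controller/trial sketch: trials are black-box jobs (a few training steps
# warm-started from a parent) run asynchronously; the controller only ever
# sees (hyperparams, score) pairs. All names here are illustrative.
import random
from concurrent.futures import ThreadPoolExecutor

def run_trial(hyperparams, warm_start_score):
    """Black box: the controller makes no assumptions about what happens here."""
    score = warm_start_score
    for _ in range(10):  # a small number of "training steps"
        score += random.gauss(hyperparams["lr"], 0.01)  # stand-in for training
    return hyperparams, score

population = [({"lr": random.uniform(1e-4, 1e-1)}, 0.0) for _ in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(5):  # generations of trials
        futures = [pool.submit(run_trial, hp, s) for hp, s in population]
        results = [f.result() for f in futures]
        best_hp, best_score = max(results, key=lambda r: r[1])
        # The next generation warm-starts from the best trial with mutated
        # hyperparameters (exploit + explore).
        population = [({"lr": best_hp["lr"] * random.choice([0.8, 1.2])},
                       best_score) for _ in range(4)]
print(best_hp, round(best_score, 3))
```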
- Topic: hyperparameter-tuning
- [1711.09846] Population Based Training of Neural Networks
Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present *Population Based Training (PBT)*, a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection that result in stable training and better final performance.
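The core exploit/explore step is simple enough to sketch in a few lines of Python. Everything here (the worker dicts, the fake train_step, the perturbation factors) is an illustrative toy under my own assumptions, not DeepMind's implementation:

```python
# Toy PBT loop: each worker trains for a while, copies the weights and
# hyperparameters of a better population member (exploit), then perturbs
# its hyperparameters (explore).
import copy
import random

def exploit_and_explore(worker, population, perturb=(0.8, 1.2)):
    best = max(population, key=lambda w: w["score"])
    if best["score"] > worker["score"]:
        worker["weights"] = copy.deepcopy(best["weights"])   # exploit
        worker["hyperparams"] = dict(best["hyperparams"])
    for k in worker["hyperparams"]:                          # explore
        worker["hyperparams"][k] *= random.choice(perturb)
    return worker

def train_step(worker):
    """Stand-in for real gradient steps; the learning rate drives a fake score."""
    worker["score"] += random.gauss(worker["hyperparams"]["lr"], 0.01)

population = [{"weights": {}, "score": 0.0,
               "hyperparams": {"lr": random.uniform(1e-3, 1e-1)}}
              for _ in range(8)]
for step in range(50):
    for worker in population:
        train_step(worker)
        if step % 10 == 9:  # "ready" condition: every 10 steps
            exploit_and_explore(worker, population)

print(max(population, key=lambda w: w["score"])["hyperparams"])
```

The key difference from grid or random search is visible even in this toy: the learning rate is a schedule that drifts over training rather than a single fixed value chosen up front.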
- Population based training of neural networks | DeepMind
- AutoML: Automating the design of machine learning models for autonomous driving
- Just how big a problem is voter suppression? – The Washington Post
- A clinically applicable approach to continuous prediction of future acute kidney injury | Nature
Digest powered by RSS Digest