Entries from July 2017
July 30th, 2017 — pinboard
- A brief guide to eternal youth – get a dog, avoid tax and inject teenage blood | Opinion | The Guardian
- Week 37: Experts in authoritarianism advise to keep a list of things subtly changing around you, so…
- Hackers break into voting machines in 90 minutes at competition | TheHill
- The psychology behind why couples always fight when assembling Ikea furniture — Quartz
- Alarming New Animation Shows The Months Are Indeed Getting Warmer | HuffPost
RT @tveitdal: Alarming New Animation Shows The Months Are Indeed Getting Warmer
- flypulse defibrillator drones get to the scene 4X faster than an ambulance
- AWS won’t be ceding its massive market share lead anytime soon – TechCrunch
- Memo to Tech’s Titans: Please Remember What It Was Like to Be Small
This is the kind of passion about net neutrality that we should be hearing from Jeff Bezos, Zuck, Larry and Sergey https://shift.newco.co/memo-to-techs-titans-please-remember-what-it-was-like-to-be-small-d6668a8fa630
- A Biohacker’s Plan to Upgrade Dalmatians Ends Up in the Doghouse – MIT Technology Review
- Priebus, sashay away – The Washington Post
- Twitter
RT @sarahmei: This thread lays out why the advice women get to "talk more like a man" (don’t hedge, etc.) is actually harmful to…
- Twitter
RT @jsmooth995: Thread, from one of the best people to speak on this
- Twitter
RT @sundress: Really interesting thread; I think about these things a bit also in terms of what work we see/value.
- Twitter
RT @AnneFrankCenter: Wake up and smell @POTUS’ escalating oppression.
- Twitter
RT @TopherSpiro: I can’t stop thinking about this. Here is concrete evidence that one voice can make a difference. It’s unbelievable.
- Stanford’s Robert Sapolsky Demystifies Depression, Which, Like Diabetes, Is Rooted in Biology | Open Culture
- Twitter
RT @chudgr: Sudden thought: this is actually authoritarianism, not some prologue or omen or drill. This is EXACTLY what authori…
- How loss of Arctic sea ice further fuels global warming
- Hey, Jeff Bezos, here’s how to help Seattle’s housing problem | The Seattle Times
- How to train your own Object Detector with TensorFlow’s Object Detector API
- Building a Real-Time Object Recognition App with Tensorflow and OpenCV
- The XY Problem
- The Parable of the Paperclip Maximizer – Hacker Noon
- High-Profile Russian Death In Washington Was No Accident — It Was Murder, Officials Say
- Twitter
RT @ElleOhHell: Every picture of Tucker Carlson looks like my dog’s face when I sing Happy Birthday to him
Digest powered by RSS Digest
July 27th, 2017 — pinboard
- Twitter
RT @ddiamond: If you’re just waking up, incredible story about Trump White House reportedly threatening Alaska for Murkowski vote…
- WizSec: Breaking open the MtGox case, part 1
- Twitter
RT @lizthegrey: For the record: fundraiser now stands at ~$73k of donations, $38k of employer match, totaling $184k in total raised.
- ImageNet: the data that spawned the current AI boom — Quartz
- Marina Ratner, Émigré Mathematician Who Found Midlife Acclaim, Dies at 78 – NYTimes.com
- New hands-on labs for scientific data processing on Google Cloud Platform | Google Cloud Big Data and Machine Learning Blog | Google Cloud Platform
RT @vambenepe: 7 new self-paced hands-on labs for data science using Google Cloud. With BigQuery, Dataflow, Dataproc, Datalab, etc…
- Twitter
RT @poniewozik: That time when Andy Warhol went to Trump Tower to judge the cheerleading tryouts for Trump’s New Jersey Generals
- Twitter
RT @timlampe: I think about this everyday
- 7 Ways to Write Better Opening Paragraphs for Your Blog Posts | Social Media Today
- The Senate’s Health Care Travesty – The New York Times
- Our Minds Have Been Hijacked by Our Phones. Tristan Harris Wants to Rescue Them | WIRED
- Brain’s Stem Cells Slow Ageing in Mice – Scientific American
- Twitter
RT @Moltz: WA legalized marijuana, violent crime went down.
- ‘Giving up wasn’t an option’: How one man beat the odds to graduate from college – The Washington Post
- Twitter
RT @chrislhayes: In the last three days, Republican men in the house have threatened their female senate colleagues with shooting an…
- The Senate’s ACA Repeal Bill Would Devastate Rural Communities – Center for American Progress
Digest powered by RSS Digest
July 26th, 2017 — pinboard
- Azure/aci-connector-k8s: Azure Container Instances Connector for Kubernetes
Azure Container Instances Connector for Kubernetes
- Fast and Easy Containers: Azure Container Instances | Blog | Microsoft Azure
- [1707.07328] Adversarial Examples for Evaluating Reading Comprehension Systems
Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely. (A toy sketch of this adversarial evaluation setup appears at the end of this digest.)
- AWS vs Microsoft Azure and Google Cloud, a user perspective
- Twitter
RT @rakyll: Just to expose how much of a third world country the US is, this is the bill I got because I had difficulty breathi…
- Twitter
RT @evanmcmurry: Trump: "We ended up with 51 votes, 51 to…whatever. I don’t know what it is."
- Your Brain Doesn’t Contain Memories. It Is Memories | WIRED
- A Math Genius Blooms Late and Conquers His Field | WIRED
- 111 N.F.L. Brains. All But One Had C.T.E. – The New York Times
- Dr. Shigeaki Hinohara, Longevity Expert, Dies at (or Lives to) 105 – NYTimes.com
- Tributes pour in for ‘once in a lifetime’ musician G Yunupingu
- What is Kubernetes? An introductory overview and complete Q&A
- This is not okay – The Washington Post
- Twitter
RT @matthewclifford: Truly astonishing thread
- Intelligence Agencies Say North Korean Missile Could Reach U.S. in a Year – The New York Times
- ContainerShip launches its fully managed Kubernetes service
- Twitter
RT @jkarsh: The next time someone tries to tell you Obamacare was shoved down America’s throat by Democrats, show them this.…
- Did Ron Johnson almost torpedo the motion to proceed?
- Remembering Maryam Mirzakhani | inclusion/exclusion
- Twitter
RT @nycsouthpaw: some sick puppies in ICE testing the boundaries rn
- Twitter
RT @SeanMcElwee: ICE is becoming so brutal under Trump that an agent went public with concerns to New Yorker.
- A Warrant to Search Your Vagina – NYTimes.com
- Amazing Tensorflow Github Projects – Source Dexter
- Why Women Aren’t C.E.O.s, According to Women Who Almost Were – NYTimes.com
- What Women Can Do When “Meritocracies” Push Them Out
- Margaret Bergmann Lambert, Jewish Athlete Excluded From Berlin Olympics, Dies at 103 – The New York Times
- Twitter
RT @Phil_Lewis_: Emmett Till was born on this date in 1941. He would have been 76 years old today.
- House conservatives push for a probe of Comey and Clinton campaign – The Washington Post
- Windows XP at DEFCON: Preparation – Little 418
RT @the_thagomizer: @MimmingCodes writes about setting up a laptop for DefCon and it is both educational and hysterical.
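A toy sketch of the adversarial evaluation setup described in the SQuAD adversarial-examples entry above. The qa_model.predict(context, question) helper and the hand-written distractor are hypothetical stand-ins; the paper generates distractor sentences automatically so that they confuse models without changing the correct answer for humans.

```python
# Toy adversarial evaluation for an extractive QA system (hypothetical API).

def adversarial_eval(qa_model, context, question, gold_answer, distractor):
    """Compare the model's answer before and after appending a distractor
    sentence that should not change the correct answer for a human reader."""
    clean_pred = qa_model.predict(context, question)
    adv_pred = qa_model.predict(context + " " + distractor, question)
    return {
        "clean_correct": gold_answer in clean_pred,
        "adversarial_correct": gold_answer in adv_pred,
    }

# Usage (illustrative values only):
# result = adversarial_eval(
#     qa_model=my_model,                       # any extractive QA model
#     context="The bridge opened to traffic in 1937.",
#     question="When did the bridge open to traffic?",
#     gold_answer="1937",
#     distractor="The tunnel opened to traffic in 1942.",
# )
```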
Digest powered by RSS Digest
July 24th, 2017 — pinboard
- Google’s BBR fixes TCP’s dirty little secret – Tom Limoncelli’s EverythingSysadmin Blog
- A computer was asked to predict which start-ups would be successful. The results were astonishing
- Donald Trump’s Ghostwriter Tells All | The New Yorker
- The Mystery of Ezra Cohen-Watnick – The Atlantic
- The Trump Administration Just Made it Easier for Law Enforcement to Take Your Property – Mother Jones
- The new Detroit’s fatal flaw – The Washington Post
- This Man Used His Inherited Fortune To Fund The Racist Right
- Too much surveillance makes us less free. It also makes us less safe. – The Washington Post
- A Lost Cat’s Reincarnation, in Masahisa Fukase’s “Afterword” | The New Yorker
- Everywhere You Look, We’ve Downgraded Real Problems Into Mere ‘Issues’ – The New York Times
- How to Mail Your Own Potato – YouTube
- DenseNet/models at master · liuzhuang13/DenseNet
Memory Efficient Implementation of DenseNets
The standard (original) implementation of DenseNet with recursive concatenation is very memory inefficient. This can be an obstacle when we need to train DenseNets on high resolution images (such as for object detection and localization tasks) or on devices with limited memory.
In theory, DenseNet should use memory more efficiently than other networks, because one of its key features is that it encourages feature reuse in the network. The fact that DenseNet is "memory hungry" in practice is simply an artifact of implementation. In particular, the culprit is the recursive concatenation, which re-allocates memory for all previous outputs at each layer. Consider a dense block with N layers: the first layer’s output has N copies in memory, the second layer’s output has (N-1) copies, and so on, leading to a quadratic increase (1+2+…+N) in memory consumption as the network depth grows.
Using optnet (-optMemory 1) or shareGradInput (-optMemory 2), we can significantly reduce the run-time memory footprint of the standard implementation (with recursive concatenation). However, the memory consumption is still a quadratic function of depth.
We implement a customized densely connected layer (largely motivated by the Caffe implementation of memory-efficient DenseNet by Tongcheng), which uses shared buffers to store the concatenated outputs and gradients, thus dramatically reducing the memory footprint of DenseNet during training. The mode -optMemory 3 activates shareGradInput and shared output buffers, while the mode -optMemory 4 further shares the memory used to store the output of the Batch-Normalization layer before each 1×1 convolution layer. The latter makes the memory consumption linear in network depth, but introduces a training-time overhead due to the need to re-forward these Batch-Normalization layers in the backward pass. (A back-of-the-envelope sketch of this copy-count arithmetic appears at the end of this digest.)
- DenseNet/efficient_densenet_techreport.pdf at master · liuzhuang13/DenseNet
- The Atlantic is ‘most vital when America is most fractured.’ Good thing it soars today. – The Washington Post
- Twitter
RT @djrothkopf: A world being transformed by science and a White House without a scientist in it. Death knell for US leadership.
- Technology Is Biased Too. How Do We Fix It? | FiveThirtyEight
- Michael Chabon: ‘I have a socialist approach to my regrets’ | Life and style | The Guardian
- [1412.6980] Adam: A Method for Stochastic Optimization
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms that inspired Adam are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. (A short sketch of the Adam update rule appears at the end of this digest.)
- A Gentle Guide to Using Batch Normalization in Tensorflow – Rui Shu
- batch normalization | Francis’s standard
- Installing Emacs on OS X – WikEmacs
- [1707.02968] Revisiting Unreasonable Effectiveness of Data in Deep Learning Era
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between ‘enormous data’ and deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks still increases linearly with orders of magnitude of training data size. Second, we show that representation learning (or pre-training) still holds a lot of promise. One can improve performance on any vision task by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires the vision community not to undervalue data and to develop collective efforts toward building larger datasets.
- Training an object detector using Cloud Machine Learning Engine | Google Cloud Big Data and Machine Learning Blog | Google Cloud Platform
- Capacity and Trainability in Recurrent Neural Networks
Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.
- Google Brain Team – Research at Google
- A hacker stole $31M of Ether—how it happened and what it means for Ethereum
- Dear tech dudes, stop being so dumb about women | TechCrunch
- U.N. Brought Cholera to Haiti. Now It Is Fumbling Its Effort to Atone. – NYTimes.com
"U.N. Brought Cholera to Haiti. Now It Is Fumbling Its Effort to Atone."
- Twitter
"U.N. Brought Cholera to Haiti. Now It Is Fumbling Its Effort to Atone."
Digest powered by RSS Digest
July 22nd, 2017 — pinboard
- Abuses Hide in the Silence of Nondisparagement Agreements – The New York Times
- Poland appears to be dismantling its own hard-won democracy – The Washington Post
- Taking the Data Scientist Out of Data Science
- classic slipper – mahabis // slippers reinvented
- Where Else Does the U.S. Have an Infrastructure Problem? Antarctica – The New York Times
- Trump is playing health care games with lives. Where are the grown-ups?
- [1707.04585] The Reversible Residual Network: Backpropagation Without Storing Activations
Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth. (A minimal reversibility sketch appears at the end of this digest.)
- Google now highest-spending company for federal lobbying – San Francisco Chronicle
- Vote for Your Favorite Mac Markdown Editor – TidBITS
- PSA: Update to iOS 10.3.3 to fix serious wifi vulnerability allowing attacker complete control | 9to5Mac
- Attention and Augmented Recurrent Neural Networks
- getpelican/pelican: Static site generator that supports Markdown and reST syntax. Powered by Python.
Static site generator that supports Markdown and reST syntax. Powered by Python. http://getpelican.com/
- Pelican, a simple static blog generator in python – Carnets Web
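A minimal sketch of the reversible-block idea from the RevNet abstract above, using the additive two-stream coupling y1 = x1 + F(x2), y2 = x2 + G(y1). The F and G functions here are toy stand-ins (real RevNets use convolutional residual functions); this illustrates the reconstruction trick, not the authors' code.

```python
import numpy as np

def F(x):
    # Toy residual function; in a real RevNet this is a small conv block.
    return np.tanh(x)

def G(x):
    return np.tanh(2 * x)

def rev_block_forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_block_inverse(y1, y2):
    # Reconstruct the inputs exactly from the outputs, so activations
    # do not need to be stored for backpropagation.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = rev_block_forward(x1, x2)
r1, r2 = rev_block_inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```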
Digest powered by RSS Digest