Entries from May 2017
May 31st, 2017 — pinboard
- Google now knows when its users go to the store and buy stuff – The Washington Post
- Drop your notepad, reMarkable is coming for your reams
- “Personal kanban”: a time-management system that explodes the myth of multitasking — Quartz
- Leaked regulation: Trump plans to roll back Obamacare birth control mandate – Vox
- Latent semantic analysis – Wikipedia
- data/subreddit-algebra at master · fivethirtyeight/data · GitHub
- Sources: Russians discussed ‘derogatory’ information about Trump and associates during campaign – CNNPolitics.com
- Core Kubernetes: Jazz Improv over Orchestration – Heptio
- Twitter
RT @kishau: Scathing and so beautifully written. This is just … superb. 👇🏽
- Rebecca Solnit: The Loneliness of Donald Trump | Literary Hub
A brilliant essay: The most mocked man in the world.
“Obliviousness is privilege’s form of deprivation”
http://lithub.com/rebecca-solnit-the-loneliness-of-donald-trump/ h/t @moorehn
- Twitter
RT @ticiaverveer: A beautiful Etruscan amber carving of a lion’s head.
500-480 BC #Italy (or lioness?) worn as a pendant…
- Human Resources Isn’t About Humans – Backchannel
- What’s That Bug? – Are we experts yet?
- Twitter
RT @dabeard: McCain to Australia: #Trump has ‘unsettled’ America; please stick with US & encourage us to help deal with this…
- Windows shattered at Lexington Herald-Leader; suspected bullet damage found | Lexington Herald Leader
- Trump administration plans to minimize civil rights efforts in agencies – The Washington Post
- How Trump Took Hate Groups Mainstream | Mother Jones
- How Congress dismantled federal Internet privacy rules – The Washington Post
Google funneled money to Republicans in a successful attempt to kill ISP privacy. Not a peep out of Googlers. https://www.washingtonpost.com/politics/how-congress-dismantled-federal-internet-privacy-rules/2017/05/29/7ad06e14-2f5b-11e7-8674-437ddb6e813e_story.html
- First Look at the Essential Phone, Andy Rubin’s Anti-iPhone | WIRED
- contrib.layers.avg_pool2d raises warnings when serializing metagraph · Issue #9939 · tensorflow/tensorflow
- The Bullshitter-in-Chief – Vox
Trump offers the style of authoritarian propaganda under circumstances of democratic pluralism. https://www.vox.com/policy-and-politics/2017/5/30/15631710/trump-bullshit?utm_campaign=mattyglesias&utm_content=chorus&utm_medium=social&utm_source=twitter https://twitter.com/mattyglesias/status/869538487212670976/photo/1
- [1511.07289] Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
- Twitter
RT @EdJoyce: ‘An enormous loss’ 900 miles of the #GreatBarrierReef have bleached severely since 2016. …
- Twitter
RT @ColinKahl: In the wake of Trump’s trip, our closest allies now see the United States as a threat to European peace.…
- Twitter
RT @TVMaury: When the Nordic prime ministers are openly mocking #PresidentTrump and the #Saudis ..
- Twitter
RT @EdJoyce: The great #Greenland meltdown: "Nobody expected the ice sheet to lose so much mass so quickly" @uciess…
- Twitter
RT @GarettJones: Bellman, inventor of dynamic programming, had to hide the fact he was inventing it from the Secretary of Defense:…
- Twitter
RT @chrislhayes: Former acting head of the Civil Rights Division
- White Male Terrorists Are an Issue We Should Discuss | Teen Vogue
RT @grumpygamer: "There were over 300 mass shootings in the US in 2015, and less than 1 percent of them were committed by Muslims"
Digest powered by RSS Digest
May 30th, 2017 — pinboard
Digest powered by RSS Digest
May 29th, 2017 — pinboard
- The Spoon Theory, written by Christine Miserandino – But You Dont Look Sick? (support for those with invisible illness or chronic illness)
- [1606.03498] Improved Techniques for Training GANs
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
- An introduction to Generative Adversarial Networks (with code in TensorFlow) – AYLIEN
There has been a large resurgence of interest in generative models recently (see this blog post by OpenAI for example). These are models that can learn to create data that is similar to data that we give them. The intuition behind this is that if we can get a model to write high-quality news articles for example, then it must have also learned a lot about news articles in general. Or in other words, the model should also have a good internal representation of news articles. We can then hopefully use this representation to help us with other related tasks, such as classifying news articles by topic.
Actually training models to create data like this is not easy, but in recent years a number of methods have started to work quite well. One such promising approach is using Generative Adversarial Networks (GANs). The prominent deep learning researcher and director of AI research at Facebook, Yann LeCun, recently cited GANs as being one of the most important new developments in deep learning:
“There are many interesting recent development in deep learning…The most important one, in my opinion, is adversarial training (also called GAN for Generative Adversarial Networks). This, and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” – Yann LeCun
The rest of this post will describe the GAN formulation in a bit more detail, and provide a brief example (with code in TensorFlow) of using a GAN to solve a toy problem.
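The post's own toy example uses TensorFlow; just to make the adversarial setup concrete, here is a rough PyTorch sketch of the same idea (a generator learning to match a fixed 1-D Gaussian, a discriminator learning to tell real samples from generated ones). The network sizes, learning rates, and target distribution below are arbitrary choices for illustration, not the post's.

```python
import torch
import torch.nn as nn

# Toy GAN: G maps noise to samples that should look like draws from
# N(4, 1.25); D scores samples as real (1) or generated (0).
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
batch = 64

for step in range(5000):
    # Discriminator update: real samples -> 1, generated samples -> 0.
    real = 4.0 + 1.25 * torch.randn(batch, 1)
    fake = G(torch.randn(batch, 1)).detach()
    d_loss = (loss_fn(D(real), torch.ones(batch, 1))
              + loss_fn(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make D score generated samples as real.
    fake = G(torch.randn(batch, 1))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward ~4
```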
- Exponential Linear Units (ELU) for Deep Network Learning | LinkedIn
We introduce the “exponential linear unit” (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parameterized ReLUs (PReLUs), ELUs also avoid a vanishing gradient via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero. Zero means speed up learning because they bring the gradient closer to the unit natural gradient.
- Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)
- Waves Rippled Through Greenland’s Ice. That’s Ominous | Climate Central
- Coral bleaching on Great Barrier Reef worse than expected, surveys show | Environment | The Guardian
- In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence – NYTimes.com
- Microsoft’s weapon in high-stakes cloud-computing battle with Amazon? Freebies | The Seattle Times
- Why the President Will Not Survive the Trump Russia Scandal
- Written/Unwritten | Patricia A. Matthew | University of North Carolina Press
Diversity and the Hidden Truths of Tenure
- In Memoriam: Jean E. Sammet 1928-2017 | News | Communications of the ACM
- Twitter
RT @LawrenceBlock: That’s actually the origin of the word–a spoonerism that supplanted its antecedent.
- Twitter
RT @ClimateCentral: Waves have been detected rippling through one of Greenland’s glaciers. That’s not good.
- Waves Rippled Through Greenland’s Ice. That’s Ominous | Climate Central
RT @ClimateCentral: Waves have been detected rippling through one of Greenland’s glaciers. That’s not good.
Digest powered by RSS Digest
May 28th, 2017 — pinboard
Digest powered by RSS Digest
May 27th, 2017 — pinboard
- Understanding deep learning requires re-thinking generalization | the morning paper
- Questions & Intuition for Tackling Deep Learning Problems
Never mind a neural network; can a human with no prior knowledge, educated on nothing but a diet of your training data set, solve the problem? Is your network looking at your data through the right lens? Is your network learning the quirks in your training data set, or is it learning to solve the problem at hand? Does your network have siblings that can give it a leg-up (through pre-trained weights)? Is your network incapable or just lazy? If it’s the latter, how do you force it to learn?
- the morning paper | an interesting/influential/important paper from the world of CS every weekday morning, as selected by Adrian Colyer
an interesting/influential/important paper from the world of CS every weekday morning, as selected by Adrian Colyer
- Usage patterns and the economics of the public cloud | the morning paper
- [1703.07326] One-Shot Imitation Learning
Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning.
Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks.
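As a very rough caricature of the training setup described here (condition a policy on one demonstration of a task, then regress its output onto the actions from a second demonstration of the same task), a hedged PyTorch sketch follows. The dimensions, the softmax pooling that stands in for the paper's soft attention, and all names are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Placeholder dimensions for illustration only.
STATE_DIM, ACT_DIM, DEMO_LEN, EMB = 8, 2, 20, 32

class OneShotPolicy(nn.Module):
    """Policy conditioned on a single demonstration (much simplified).

    The paper attends over demonstration timesteps with soft attention;
    a learned per-timestep score plus softmax pooling stands in for it here."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(STATE_DIM + ACT_DIM, EMB)  # embed (state, action) pairs
        self.score = nn.Linear(EMB, 1)                    # attention logits per timestep
        self.policy = nn.Sequential(
            nn.Linear(EMB + STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))

    def forward(self, demo, state):
        # demo: (B, DEMO_LEN, STATE_DIM + ACT_DIM), state: (B, STATE_DIM)
        h = torch.tanh(self.embed(demo))
        w = torch.softmax(self.score(h), dim=1)   # weights over demo timesteps
        ctx = (w * h).sum(dim=1)                  # pooled demonstration context
        return self.policy(torch.cat([ctx, state], dim=-1))

# One behaviour-cloning step on a pair of demonstrations of the same task:
# condition on demo A, predict the actions taken in demo B.
policy = OneShotPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
demo_a = torch.randn(4, DEMO_LEN, STATE_DIM + ACT_DIM)   # stand-in data
states_b = torch.randn(4, STATE_DIM)
actions_b = torch.randn(4, ACT_DIM)
loss = ((policy(demo_a, states_b) - actions_b) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```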
- Stephen Wolfram: A New Kind of Science | Online—Table of Contents
- AgileBits Blog | Introducing Travel Mode: Protect your data when crossing borders
1password
- Measures for Justice
for WA
- Measures for Justice
collects data on justice systems in several states, funded by Gates and Zuckerberg foundations. As Bach says, “Justice in America happens in 3,000 counties, each with its own justice system.”
- GitHub – oarriaga/face_classification: Real-time face detection and emotion/gender classification using fer2013/imdb datasets with a keras CNN model and openCV.
Real-time face detection and emotion/gender classification using fer2013/IMDB datasets with a keras CNN model and openCV.
IMDB gender classification test accuracy: 96%.
fer2013 emotion classification test accuracy: 66%.
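For flavor, a hedged sketch of the kind of pipeline the repo implements: OpenCV Haar-cascade face detection feeding a Keras emotion CNN trained on fer2013 (48x48 grayscale inputs). The file paths, input size, and label order below are assumptions for illustration; the repo's README has the actual entry points and trained model files.

```python
import cv2
import numpy as np
from keras.models import load_model

# Placeholder paths: substitute the repo's actual model and cascade files.
emotion_model = load_model("emotion_model.hdf5")
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]  # fer2013 classes

frame = cv2.imread("photo.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))   # fer2013-style crop
    face = face.astype("float32") / 255.0
    probs = emotion_model.predict(face.reshape(1, 48, 48, 1))[0]
    label = EMOTIONS[int(np.argmax(probs))]
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

cv2.imwrite("annotated.jpg", frame)
```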
- We’re About to Cripple the Genomic Medical Era – NewCo Shift
Trumpcare needlessly cedes US leadership in data, science, and health
- The Calculus of Service Availability – ACM Queue
As detailed in Site Reliability Engineering: How Google Runs Production Systems (hereafter referred to as the SRE book), Google products and services seek high-velocity feature development while maintaining aggressive SLOs (service-level objectives) for availability and responsiveness. An SLO says that the service should almost always be up, and the service should almost always be fast; SLOs also provide precise numbers to define what "almost always" means for a particular service. SLOs are based on the following observation:
The vast majority of software services and systems should aim for almost-perfect reliability rather than perfect reliability—that is, 99.999 or 99.99 percent rather than 100 percent—because users cannot tell the difference between a service being 100 percent available and less than "perfectly" available. There are many other systems in the path between user and service (laptop, home WiFi, ISP, the power grid, …), and those systems collectively are far less than 100 percent available. Thus, the marginal difference between 99.99 percent and 100 percent gets lost in the noise of other unavailability, and the user receives no benefit from the enormous effort required to add that last fractional percent of availability. Notable exceptions to this rule include antilock brake control systems and pacemakers!
For a detailed discussion of how SLOs relate to SLIs (service-level indicators) and SLAs (service-level agreements), see the "Service Level Objectives" chapter in the SRE book. That chapter also details how to choose metrics that are meaningful for a particular service or system, which in turn drives the choice of an appropriate SLO for that service.
This article expands upon the topic of SLOs to focus on service dependencies. Specifically, we look at how the availability of critical dependencies informs the availability of a service, and how to design in order to mitigate and minimize critical dependencies.
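The core arithmetic is worth spelling out: if a service can only respond when every one of its critical dependencies is up, its availability is bounded by the product of their availabilities (times its own). A toy calculation, with made-up numbers rather than the article's:

```python
# Upper bound on availability when all critical dependencies must be up.
service_itself = 0.9999                  # the service's own code/infra
deps = [0.9999, 0.9999, 0.9999]          # hypothetical critical dependencies

bound = service_itself
for a in deps:
    bound *= a

print(f"best-case availability: {bound:.5f}")          # ~0.99960
minutes_per_month = 30 * 24 * 60
print(f"implied downtime: ~{(1 - bound) * minutes_per_month:.0f} min/month")  # ~17
```

Which is the point of the article's emphasis on minimizing critical dependencies: each one you add eats directly into the availability you can promise.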
- GitHub – openai/baselines: OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
We’re releasing OpenAI Baselines, a set of high-quality implementations of reinforcement learning algorithms. To start, we’re making available an open source version of Deep Q-Learning and three of its variants.
These algorithms will make it easier for the research community to replicate, refine, and identify new ideas, and will create good baselines to build research on top of. Our DQN implementation and its variants are roughly on par with the scores in published papers. We expect they will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones.
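Not the Baselines code, but for orientation, a bare-bones sketch of what a DQN does: epsilon-greedy exploration, a replay buffer, and a periodically synced target network for the TD targets. It is written in PyTorch against the classic Gym reset/step API (later Gym/Gymnasium releases changed those signatures), and the environment and hyperparameters are arbitrary.

```python
import random
from collections import deque

import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v0")
obs_dim, n_actions = env.observation_space.shape[0], env.action_space.n

q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10000), 0.99, 0.1

obs = env.reset()
for step in range(20000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            action = int(q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax())
    next_obs, reward, done, _ = env.step(action)
    buffer.append((obs, action, reward, next_obs, float(done)))
    obs = env.reset() if done else next_obs

    if len(buffer) >= 1000:
        # Sample a minibatch and regress Q(s, a) onto the TD target.
        batch = random.sample(buffer, 32)
        s, a, r, s2, d = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1 - d) * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad(); loss.backward(); opt.step()

    if step % 500 == 0:
        target_net.load_state_dict(q_net.state_dict())  # periodic target sync
```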
- How Your Data is Stored, or, The Laws of the Imaginary Greeks
eventual consistency VERY LUCIDLY explained. It follows the original (entertaining) paper by Leslie Lamport but spells everything out clearly for non-computer-scientists.
- [1705.07962] pix2code: Generating Code from a Graphical User Interface Screenshot
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites and mobile applications. In this paper, we show that Deep Learning techniques can be leveraged to automatically generate code given a graphical user interface screenshot as input. Our model is able to generate code targeting three different platforms (i.e. iOS, Android and web-based technologies) from a single input image with over 77% of accuracy.
Comments: Submitted to 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
MSC classes: 68T45
ACM classes: I.2.1; I.2.10; I.2.2; I.2.6
Cite as: arXiv:1705.07962 [cs.LG] (or arXiv:1705.07962v1 [cs.LG] for this version)
- GitHub – tonybeltramelli/pix2code: pix2code: Generating Code from a Graphical User Interface Screenshot
pix2code: Generating Code from a Graphical User Interface Screenshot
- This app uses artificial intelligence to turn design mockups into source code
RT @NandoDF: This app uses artificial intelligence to turn design mockups into source code. Cool Pixels2Code
- Me and my penis: 100 men reveal all | Life and style | The Guardian
- Interview with crime fiction author Christopher Brookmyre as he publishes his 18th novel, Black Widow (From HeraldScotland)
- Scio: Moving Big Data to Google Cloud, a Spotify Story
- Google has reportedly launched a new AI-focused venture capital program | TechCrunch
- Ancestry.com takes DNA ownership rights from customers and their relatives
- Twitter
RT @teioh: God help us all
- Witnesses: Man Cut the Throats of Two MAX Passengers Who Tried to Stop Anti-Muslim Bullying of Women on Northeast Portland Train – Willamette Week
- Hillary Clinton Roasts the Trump Administration in a Remarkable Speech | The Nation
- Google’s AlphaGo Trounces Humans—But It Also Gives Them a Boost | WIRED
- Juno spacecraft reveals a more complex Jupiter | Science News
- Inside Hillary Clinton’s Surreal Post-Election Life
- Twitter
RT @surt_lab: So about that "you only get your mitochondria from your mom" thing…. The answer is more complicated
Digest powered by RSS Digest
May 26th, 2017 — pinboard
- AT&T completes its nationwide LTE-M network ahead of schedule
Meet your new wireless network for IoT http://bit.ly/2s1KWEn
#Tech #News #smartdevices https://twitter.com/RWW/status/868154288786337792/photo/1
- dmlc/xgboost: Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Flink and DataFlow
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Flink and DataFlow
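A minimal example of the core Python API (DMatrix, train, predict) on a made-up binary classification problem; the parameters are illustrative, not recommendations:

```python
import numpy as np
import xgboost as xgb

# Toy binary-classification data standing in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])

params = {
    "objective": "binary:logistic",  # predict P(y = 1)
    "max_depth": 3,
    "eta": 0.1,                      # learning rate
    "eval_metric": "logloss",
}
bst = xgb.train(params, dtrain, num_boost_round=100,
                evals=[(dvalid, "valid")], early_stopping_rounds=10)

preds = bst.predict(dvalid)          # probabilities in [0, 1]
print("validation accuracy:", ((preds > 0.5) == y[400:]).mean())
```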
- Did the Turkish President’s Security Detail Attack Protesters in Washington? What the Video Shows – The New York Times
- Calling Bullshit — Syllabus
- The GOP inherits what Trump has wrought – The Washington Post
- Mossberg: The Disappearing Computer – Recode
- Why the Atom text editor is a solid tool for writers | Opensource.com
- AlphaGo offers a sobering look into the future of man versus machine | WIRED UK
- Capturing the Earth as art | Cosmos
- Twitter
RT @JasmineLeiylani: Hold on a second. It’s 2017 & she’s only the FIRST ASIAN WOMAN ON THE FRONT PAGE COVER OF @VanityFair WTF Vanity Fa…
- How to build a static website with Jekyll | Opensource.com
- Eating Beans Could Be A Magical Solution To Climate Change
- Barack Obama on food and climate change: ‘We can still act and it won’t be too late’ | Global development | The Guardian
- At His Own Wake, Celebrating Life and the Gift of Death – NYTimes.com
- NASA Spacecraft Finds a Chaotic Dance of Storms at Jupiter’s Poles – NYTimes.com
- Twitter
RT @marlaerwin: I miss this.
- Twitter
RT @NASA: Jupiter’s poles are covered in cyclones, some as big as the Earth – That & more new results from @NASAJuno. Details…
- Calling Bullshit.
After months of work, we’re now live.
A course for these troubled times: Calling Bullshit in the Age of Big Data.
http://callingbullshit.org https://twitter.com/CT_Bergstrom/status/819430803385876480/photo/1
- Paleontologists with the UW’s Burke Museum discover major T. rex fossil | UW Today
- Twitter
RT @roebuk: Client-side security at it’s best
- Uber, Lyft returning to Austin on Monday | The Texas Tribune
- Twitter
RT @EdJoyce: 23 hours ago from @mkvackay —
Eleven Mile state park last night! @Colorado #stormhour #colorado #Astrophotography…
- AlphaSOC on Twitter: "Our lightweight utility to submit data to the DNS Analytics API and retrieve alerts is now available. Happy hunting! https://t.co/FwLf50n4cM https://t.co/t7f6EBxRyv"
- Twitter
RT @landpsychology: The cat No I haven’t seen him
- Twitter
RT @karaswisher: Silicon Valley technical women’s group cuts ties with Uber, citing "continuing allegations’
- A Silicon Valley technical women’s group has cut ties with Uber, citing ‘continuing allegations’ about the treatment of female employees – Recode
- Twitter
RT @MelissaSulewski: It reads like an Onion piece.
- Innocence Project victory: Shaurn Thomas walks free after 24 years
- Gregory Hines & Sammy Davis Jr – YouTube
Digest powered by RSS Digest
May 25th, 2017 — pinboard
Digest powered by RSS Digest
May 24th, 2017 — pinboard
- Google Cloud Platform Blog: Istio: a modern approach to developing and managing microservices
Today Google, IBM and Lyft announced the alpha release of Istio: a new open-source project that provides a uniform way to help connect, secure, manage and monitor microservices.
Istio encapsulates many of the best practices Google has been using to run massive-scale services in production for years. We’re happy to contribute this to the community as an open solution that works with Kubernetes; on-premises or in any cloud, to help solve challenges in modern application development. Istio provides developers and devops fine-grained visibility and control over traffic without requiring any changes to application code and provides CIOs and CSOs the tools needed to help enforce security and compliance requirements across the enterprise.
- google/youtube-8m: Starter code for working with the YouTube-8M dataset.
Starter code for working with the YouTube-8M dataset. https://research.google.com/youtube8m/
- West Big Data Innovation Hub Annual All Hands Meeting | Boulder, CO | Summary | powered by RegOnline
- Next Ride
denver airport transport
- The truth we rarely hear: From mirror to prism | The Psychologist
- The 26 major cities with the highest quality of life in the world – Business Insider Nordic
- Morphosis: Paul Klee, ‘Dancing Under the Empire of Fear’ (1938)
- How America’s Leading Science Fiction Authors Are Shaping Your Future | Arts & Culture | Smithsonian
(2/2) Yes, @GreatDismal called utopian/dystopian SF a “pointless dichotomy” in this (great!) piece by @Eileen_gunn: http://www.smithsonianmag.com/arts-culture/how-americas-leading-science-fiction-authors-are-shaping-your-future-180951169/
- 21st Century Yokel by Tom Cox: Unbound
- Some Thoughts About The Creation Of My New Book – Tom Cox
- Pope Francis says destroying the environment is a sin | World news | The Guardian
- Fundraiser by Jeff Lew : Erase Tacoma School Lunch Debt!
- Fundraiser by Jeff Lew : Erase Renton School Lunch Debt!
- Seattle parent pays off $21K school-lunch debt with GoFundMe campaign | The Seattle Times
- Review: In a New Book, Pope Francis Calls Mercy Essential – The New York Times
- Manchester’s heartbreak: ‘I never grasped what big pop gigs were for until I saw one through my daughter’s eyes’ | UK news | The Guardian
- Roger Moore wasn’t a good Bond, but he was my Bond – Salon.com
- The Amazing Life and Work of Maria Sibylla Merian – Atlas Obscura
- Combine K8S + Let’s Encrypt, Kubernetes Persistent Storage – Seattle Kubernetes Meetup (Seattle, WA) | Meetup
RT @kubernetesSEA: Come & learn about using Let’s Encrypt w kubernetes and where we’re at with persistent storage next week
- Twitter
Trump plans a 69 percent budget cut, large staff reductions at clean energy office
- Trump is targeting the government’s top renewable energy office for a 69 percent budget cut – The Washington Post
- Trump Budget Would Increase Homelessness and Hardship in Every State, End Federal Role in Community Development | Center on Budget and Policy Priorities
- Day One – Scytale.io – Medium
- SPIFFE / About
- How to Stay Sane if Trump is Driving You Insane: Advice From a Therapist
- Think Your Credentials Are Ignored Because You’re A Woman? It Could Be : 13.7: Cosmos And Culture : NPR
- Twitter
RT @gregsramblings: @SFist Salesforce Tower from above (from my flight this morning)
- Do Donald Trump’s Old Tweets Predict His Future? – The Atlantic
- Trump Budget Based on $2 Trillion Math Error [Updated]
Digest powered by RSS Digest
May 23rd, 2017 — pinboard
Digest powered by RSS Digest
May 22nd, 2017 — pinboard
- ML Toolkit (TensorFlow Dev Summit 2017) – YouTube
TensorFlow is an extremely powerful framework, yet has been missing packaged solutions that work out-of-the-box. In this talk, Ashish Agarwal introduces a toolkit of algorithms that takes a step in that direction.
- broadinstitute/scalable_analytics: Public collaboration of Scalable Single Cell Analytics
Public collaboration of Scalable Single Cell Analytics
- Twitter
RT @mattyglesias: This only works because of Germany’s expansive, sunny deserts and wind-swept plains; would never work in the USA.
- Campaign group to challenge UK over surrender of passwords at border control | Politics | The Guardian
RT @Pinboard: We need a travel mode for social media accounts. This man was arrested at Heathrow for not providing passwords:
- Swift is like Kotlin
- Stevey’s Blog Rants: Why Kotlin Is Better Than Whatever Dumb Language You’re Using
.@steve_yegge on Kotlin, and why Android is its killer app: http://bit.ly/2rsUOdK
- Stevey’s Blog Rants
.@steve_yegge on Kotlin, and why Android is its killer app: http://bit.ly/2rsUOdK
- Surfer John John Florence’s Very Wavy World | GQ
- Israeli Intelligence Furious Over Trump’s Loose Lips | Foreign Policy
- Twitter
"White House Moves to Block Ethics Inquiry Into Ex-Lobbyists on Payroll"
- White House Moves to Block Ethics Inquiry Into Ex-Lobbyists on Payroll – The New York Times
- Golang Iota for Enumerations – ozmox
- ‘The Internet Is Broken’: @ev Is Trying to Salvage It – NYTimes.com
- 85% of Germany’s power just came from renewable energy, setting a new record | indy100
85% of Germany’s power just came from renewable energy, setting a new record http://sc.org/2qB7fRT
- Untitled (http://www.sfgate.com/news/texas/article/Texas-Senate-approves-religious-refusal-11163562.php)
RT @jilevin: Texas Senate approves ‘religious refusal’ adoption measure
- Texas Senate approves ‘religious refusal’ adoption measure – SFGate
- Google Home: An Insight Into a 4-Year-Old’s Mind – Jono Bacon
- Pinboard on Twitter: "Remember that Lieberman (who may become head of the FBI) wrote a bill that would give the President emergency powers over the Internet"
- Wireless Time for Jewelers (1912)
- ACLU Seeks Documents on ICE’s Use of Cell Phone Trackers | American Civil Liberties Union
- Want To Hold An ICO? CoinList Makes It Easy — And Legal
- developer-roadmap/README.md at master · kamranahmedse/developer-roadmap · GitHub
- GitHub – kamranahmedse/developer-roadmap: Roadmap to becoming a web developer in 2017
- The Skywhale – Wikipedia
- Kelsey Hightower – Keynote – Pycon 2017 – YouTube
- How To Public Speaking – YouTube
- Microsoft, Google Gaining on AWS, But Not Quickly Enough — Redmond Channel Partner
- [1611.01604] Dynamic Coattention Networks For Question Answering
Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.
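A rough, hedged sketch of what the "fused co-dependent representations" step can look like: an affinity matrix between document and question encodings, softmaxed in each direction, then used to build question-aware document summaries and per-word coattention contexts. Shapes and names are placeholders, and the paper's full model adds components (a sentinel vector, a recurrent fusion layer, the dynamic pointing decoder) that are omitted here.

```python
import torch

# Simplified coattention-style fusion between a document and a question.
B, m, n, d = 2, 50, 10, 64                 # batch, doc length, question length, hidden dim
D = torch.randn(B, m, d)                   # document encoding (stand-in)
Q = torch.randn(B, n, d)                   # question encoding (stand-in)

L = torch.bmm(D, Q.transpose(1, 2))        # affinity matrix: (B, m, n)
A_q = torch.softmax(L, dim=1)              # attention over document words, per question word
A_d = torch.softmax(L, dim=2)              # attention over question words, per document word

C_q = torch.bmm(A_q.transpose(1, 2), D)                  # question-aware doc summaries: (B, n, d)
C_d = torch.bmm(A_d, torch.cat([Q, C_q], dim=2))         # coattention context per doc word: (B, m, 2d)

fused = torch.cat([D, C_d], dim=2)         # per-word features for the answer decoder: (B, m, 3d)
```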
- CS224n: Natural Language Processing with Deep Learning
CS224n: Natural Language Processing with Deep Learning
Digest powered by RSS Digest