Entries from November 2021 ↓
November 30th, 2021 — Uncategorized
Digest powered by RSS Digest
November 29th, 2021 — Uncategorized
November 28th, 2021 — Uncategorized
- Aerial – A free and open-source Mac Screen Saver
- 2021 Remaster "While My Guitar Gently Weeps" with Prince, Tom Petty, Jeff Lynne and Steve Winwood – YouTube
- 'Get Back': Meet the Beatles As They Come Together One Last Time – Rolling Stone
- ‘The Beatles: Get Back’ is a New (Happy) Ending to a Story We All Know | Observer
- The Beatles were like aliens from the future in 1969 – and they are still as radical today | Jonathan Freedland
- A Cure for Type 1 Diabetes? For One Man, It Seems to Have Worked.
A new treatment using stem cells that produce insulin has surprised experts and given them hope for the 1.5 million Americans living with the disease.
- Record, replay and measure user flows – Chrome Developers
- Getting Back to Normal Is Only Possible Until You Test Positive
- New FDA chief will face COVID woes and calls for drug-approval reform
After long delay, US President Joe Biden picks Robert Califf to once again head the US Food and Drug Administration.
- Artificial intelligence powers protein-folding predictions
- espresso Display – espresso Displays
- Shop Waterless Haircare Essentials | Everist
- Nike FlyEase Slip On Shoes. Nike.com
- Why Are Ahmaud Arbery’s Killers So Scared?
- Why We Must Monitor the Sale of Surveillance Tech – The American Prospect
- Google AI Blog: Announcing WIT: A Wikipedia-Based Image-Text Dataset
Multimodal visio-linguistic models rely on rich datasets in order to model the relationship between images and text. Traditionally, these datasets have been created by either manually captioning images, or crawling the web and extracting the alt-text as the caption. While the former approach tends to result in higher quality data, the intensive manual annotation process limits the amount of data that can be created. On the other hand, the automated extraction approach can lead to bigger datasets, but these require either heuristics and careful filtering to ensure data quality or scaling-up models to achieve strong performance. An additional shortcoming of existing datasets is the dearth of coverage in non-English languages. This naturally led us to ask: Can one overcome these limitations and create a high-quality, large-sized, multilingual dataset with a variety of content?
Today we introduce the Wikipedia-Based Image Text (WIT) Dataset, a large multimodal dataset created by extracting multiple different text selections associated with an image from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to retain only high-quality image-text sets. As detailed in “WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning”, presented at SIGIR ’21, this resulted in a curated set of 37.5 million entity-rich image-text examples with 11.5 million unique images across 108 languages. The WIT dataset is available for download and use under the Creative Commons license. We are also excited to announce that we are hosting a competition with the WIT dataset on Kaggle in collaboration with Wikimedia Research and other external collaborators.
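The announcement describes WIT as image-text pairs spanning many languages. A minimal sketch of reading such pairs from a WIT-style TSV shard (the inline sample and the exact field names are assumptions based on the announcement, not a guaranteed match for the released schema):

```python
import csv
import io

# Hypothetical sample in a WIT-style TSV layout: each row pairs an image URL
# with text drawn from the surrounding Wikipedia article.
sample_tsv = (
    "language\timage_url\tcaption_reference_description\n"
    "en\thttps://example.org/cat.jpg\tA cat sitting on a wall\n"
    "de\thttps://example.org/hund.jpg\tEin Hund im Park\n"
)

def load_pairs(tsv_text):
    """Read (language, image_url, caption) triples from a WIT-style TSV."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [
        (row["language"], row["image_url"], row["caption_reference_description"])
        for row in reader
    ]

pairs = load_pairs(sample_tsv)
print(len(pairs))   # 2
print(pairs[0][0])  # en
```

The per-row language field is what makes the multilingual filtering described above straightforward: selecting a language subset is a one-line filter over the parsed rows.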
- Spreadsheet Timeline
Instantly generate a timeline to put in your spreadsheet
- Brewing your coffee in this travel thermos is easier than using a Nespresso – Yanko Design
- Review: Peter Jackson’s The Beatles: Get Back | Pitchfork
- Triumph for Indian Astronomers! Distinct Studies Discover a ‘Hot-Jupiter’ and Rare Stars Hotter Than Our Sun | The Weather Channel
A team of Ahmedabad-based astronomers has discovered a new exoplanet, while Pune-based astronomers have identified a rare class of radio stars that are hotter than the Sun.
- Reasons to Survive November by Tony Hoagland | The Writer's Almanac with Garrison Keillor
- Cloud Sock – Brother Vellies
- House Shorts – Hunter Green Boxers With Pockets | Jambys
- Follain Refillable Everything Soap – Clean Beauty | Follain
- The Beatles’ ‘Let It Be’: Glyn Johns Remembers
- Lennon Or McCartney?
This used to be the question that was asked… “Who do you like better, Lennon or McCartney?” I think it was meant to determine how cool you were. Since Paul was considered the cu…
November 26th, 2021 — Uncategorized
November 25th, 2021 — Uncategorized
November 24th, 2021 — Uncategorized
- Tom Stoddart obituary
Photojournalist who covered the civil war in Lebanon, the siege of Sarajevo, the fall of the Berlin wall and the 2003 invasion of Iraq
https://www.theguardian.com/artanddesign/2021/nov/23/tom-stoddart-obituary
- ‘Find a part of each day to relish’: coping with cancer and Covid
This year has challenged us all. But for Sarah Hughes it’s been particularly hard. Here, she talks about living with cancer – and letting in the light in the darkest of times
- Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences | PNAS
Learning biological properties from sequence data is a logical step toward generative and predictive artificial intelligence for biology. Here, we propose scaling a deep contextual language model with unsupervised learning to sequences spanning evolutionary diversity. We find that without prior knowledge, information emerges in the learned representations on fundamental properties of proteins such as secondary structure, contacts, and biological activity. We show the learned representations are useful across benchmarks for remote homology detection, prediction of secondary structure, long-range residue–residue contacts, and mutational effect. Unsupervised representation learning enables state-of-the-art supervised prediction of mutational effect and secondary structure and improves state-of-the-art features for long-range contact prediction.
- Roche, Genentech tap Flywheel to spin up AI-based drug discovery research | FierceBiotech
Roche and its Genentech division have tapped data curation developer Flywheel to help train their machine learning models aimed at discovering potential new drugs.
Flywheel’s platform helps automate the ingestion, classification and analysis of medical images collected from radiology scans and digital pathology programs as well as data from electronic health records and diagnostic tests.
By taking the cumbersome steps out of aggregation and preprocessing, the startup’s cloud-based platform aims to deliver large batches of ready-to-use information to Roche and Genentech’s drug discovery teams working on precision medicine projects across multiple locations.
- Vincentx15/Equi-RC: Equivariant layers for RC-complement symmetry in DNA sequence data
Equivariant layers for RC-complement symmetry in DNA sequence data
This is a repository that implements the layers described in "Reverse-Complement Equivariant Networks for DNA Sequences" in Keras and PyTorch. The simplest way to use it is to include the appropriate standalone Python script in your code.
- Reverse-Complement Equivariant Networks for DNA Sequences | bioRxiv
As DNA sequencing technologies keep improving in scale and cost, there is a growing need to develop machine learning models to analyze DNA sequences, e.g., to decipher regulatory signals from DNA fragments bound by a particular protein of interest. As a double helix made of two complementary strands, a DNA fragment can be sequenced as two equivalent, so-called Reverse Complement (RC) sequences of nucleotides. To take into account this inherent symmetry of the data in machine learning models can facilitate learning. In this sense, several authors have recently proposed particular RC-equivariant convolutional neural networks (CNNs). However, it remains unknown whether other RC-equivariant architectures exist, which could potentially increase the set of basic models adapted to DNA sequences for practitioners. Here, we close this gap by characterizing the set of all linear RC-equivariant layers, and show in particular that new architectures exist beyond the ones already explored. We further discuss RC-equivariant pointwise nonlinearities adapted to different architectures, as well as RC-equivariant embeddings of k-mers as an alternative to one-hot encoding of nucleotides. We show experimentally that the new architectures can outperform existing ones.
- Leveraging Deep Learning for Multilingual Sentiment Analysis – AYLIEN News API
It is a strong indicator of today’s globalized world and rapidly growing access to Internet platforms, that we have users from over 188 countries and 500 cities globally using our Text Analysis and News APIs. Our users need to be able to understand and analyze what’s being said out there, about them, their products, services, or their competitors, regardless of the locality and the language used. Social media content on platforms like Twitter, Facebook and Instagram can provide unrivalled insights into customer opinion and experience to brands and organizations. However, as shown by the following stats, users post content in a multitude of languages on these platforms:
Only about 39% of tweets posted are in English;
Facebook recently reported that about 50% of its users speak a language other than English;
Native platforms such as Sina Weibo and WeChat, where most of the content is written in a native language, are on the rise;
70% of active Instagram users are based outside the US.
A look at online review platforms such as Yelp and TripAdvisor, as well as various news outlets and blogs, reveals similar patterns regarding the variety of language used. Therefore, no matter if you are a social media analyst, or a hotel owner trying to gauge customer satisfaction, or a hedge fund analyst trying to analyze a foreign market, you need to be able to understand textual content in a multitude of languages.
- [2005.14165v4] Language Models are Few-Shot Learners
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions – something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
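The "purely via text interaction" setup the abstract describes amounts to concatenating a few demonstrations with a new query and letting the model complete the answer. A minimal sketch of assembling such a few-shot prompt for the 3-digit-arithmetic task (the Q/A format is an assumption for illustration; model inference itself is not shown):

```python
def few_shot_prompt(demonstrations, query):
    """Concatenate (question, answer) demonstrations and a final unanswered query."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
    blocks.append(f"Q: {query}\nA:")  # the model completes this last answer
    return "\n\n".join(blocks)

demos = [
    ("What is 123 + 456?", "579"),
    ("What is 700 - 250?", "450"),
]
prompt = few_shot_prompt(demos, "What is 318 + 204?")
print(prompt)
```

No gradient updates are involved: the demonstrations condition the model only through the text of the prompt, which is what distinguishes few-shot prompting from fine-tuning.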
November 23rd, 2021 — Uncategorized
- https://twitter.com/nytimesbooks/status/1462077043110289408
RT @MameFatouNiang: When Mary Shelley wrote #Frankenstein in 1818 (aged 19), neither Jules Verne nor Wells were born yet (Allan Poe was 9).
A teenage girl wrote what is still considered today the 1st science fiction novel. This article continues the long tradition of erasing her.
- is there a cure for existential loneliness? – by Claire Stapleton – Tech Support
- Translational AI and Deep Learning in Diagnostic Pathology
There has been an exponential growth in the application of AI in health and in pathology. This is resulting in the innovation of deep learning technologies that are specifically aimed at cellular im
- 1910.04867
- Man Keeps a Rock For Years, Hoping It's Gold. It Turned Out to Be Far More Valuable
In 2015, David Hole was prospecting in Maryborough Regional Park near Melbourne, Australia.
- GoDaddy says data breach exposed over a million user accounts | TechCrunch
- Evergreen – Invest to accelerate the transition to renewable energy
Our planet needs trillions in renewable energy investment to stem the tide of climate change. Our mission is to enable anyone to invest in renewables and accelerate our transition to a low-carbon world.
- PAW Climate Tech Conference
There is a wealth of expertise, passion, and money pouring into climate tech as both startups and established industrial players seek to address one of the most important challenges facing humanity. Machine learning can be an important component in tech for addressing the climate crisis. Join PAW Climate to explore how companies apply machine learning to problems such as smart electrical grids, supply chain optimization, building energy efficiency, industrial control, precision agriculture, climate risk assessment, weather forecasting, ecosystem monitoring, and disaster response.
- Myst AI – Expert Forecasting for Energy Companies
The vast ways in which we expend energy — heating our homes, cooling our office buildings, or operating our factories — can all be measured. Myst mines untapped, curated datasets related to weather, energy markets, and human behavior to improve model performance and strengthen your business.
- Home | Open Climate Fix
Open Climate Fix is a non-profit product lab, totally focused on reducing greenhouse gas emissions as rapidly as possible. Every part of the organisation is designed to maximise climate impact, such as our open and collaborative approach, our rapid prototyping, and our attention on finding scalable & practical solutions.
By using an open-source approach, we can draw upon a much larger pool of expertise than any individual company, combining existing islands of knowledge and accelerating progress.
Our approach is to search for ML (Machine Learning) problems where, if we solve a well-defined ML task, then there is likely to be a large climate impact. Then, for each of these challenges, we will:
Collate & release data, and write software tools to make it super-easy for people to consume this data.
Run a collaborative “global research project” where everyone from 16-year-olds to PhD students to corporate research labs can help solve the ML task.
Help to put good solutions into production, once the community has developed them, so we can be reducing emissions ASAP.
- Home | Frost Methane
- https://twitter.com/Kodakforever/status/1462156194089517056/photo/1
RT @Kodakforever: 📸 Downtown Los Angeles -1949 #kodak #colorslides #kodachrome #35mm © ssilberman collection from flickr
- https://twitter.com/psychcomm/status/1462181183144484868/photo/1
RT @psychcomm:
- Work on Climate
Eugene and Cassandra
- https://twitter.com/NancyHightower/status/1462509263612530692/photo/1
RT @NancyHightower: Union Square. Reflection from a puddle. Part of my underwater city series (if you needed a little beauty today).
November 22nd, 2021 — Uncategorized
- ICU is full of the unvaccinated – my patience with them is wearing thin | Anonymous
Most of the resources we are devoting to Covid in hospital are being spent on people who have not had the jab, says an NHS consultant
- UK regulator approves ‘first of its kind’ Covid antibody treatment
Sajid Javid says green light for Ronapreve – which was used to treat Donald Trump – is ‘fantastic news’
- pointing to the right twitter API module · olivierthereaux/oldtweets@b58d409 · GitHub
- No Future: Full Throttle Death Drive and Coronacapitalism in The Netherlands – The Research Papers
- 100 Notable Books of 2021
The year’s notable fiction, poetry and nonfiction, selected by the editors of The New York Times Book Review
- Single-cell transcriptomic characterization of a gastrulating human embryo | Nature
Gastrulation is the fundamental process in all multicellular animals through which the basic body plan is first laid down1,2,3,4. It is pivotal in generating cellular diversity coordinated with spatial patterning. In humans, gastrulation occurs in the third week after fertilization. Our understanding of this process in humans is relatively limited and based primarily on historical specimens5,6,7,8, experimental models9,10,11,12 or, more recently, in vitro cultured samples13,14,15,16. Here we characterize in a spatially resolved manner the single-cell transcriptional profile of an entire gastrulating human embryo, staged to be between 16 and 19 days after fertilization. We use these data to analyse the cell types present and to make comparisons with other model systems. In addition to pluripotent epiblast, we identified primordial germ cells, red blood cells and various mesodermal and endodermal cell types. This dataset offers a unique glimpse into a central but inaccessible stage of our development. This characterization provides new context for interpreting experiments in other model systems and represents a valuable resource for guiding directed differentiation of human cells in vitro.
- Antiaging diets: Separating fact from fiction
- A Crypto True Believer Makes His Case
- The 30 Things Every Woman Should Do Before She Turns 90 (Or Even 89)
- How Will the COVID Pills Change the Pandemic? | The New Yorker
- Natural Garlic Sauce Condiment | Karam’s Garlic Sauce
- Kiwi – Dauphinette
- Give Your Loved One an Oyster I.O.U.: A Food-Themed Holiday Gift Guide | The New Yorker
- 12 inch Left or Right Hand Spoon – Allegheny Treenware, LLC
- Course | Introduction to Biology – The Secret of Life | edX
- Biochemistry Resource Box | Resource Boxes | Introduction to Biology – The Secret of Life | edX
- Important Course Information | Before You Start | Introduction to Biology – The Secret of Life | edX
- Iditarod sled dog musher Blair Braverman shares the tales from the trail : NPR
- Cell Size and Scale
- The Notorious Mrs. Mossler
A Houston socialite was accused of plotting her husband’s murder—and of having an affair with her nephew. But Candace Mossler was only getting started.
- Important Course Information | Before You Start | Introduction to Biology – The Secret of Life | edX
November 21st, 2021 — Uncategorized