Entries from November 2021

pinboard November 24, 2021

  • Tom Stoddart obituary
    Photojournalist who covered the civil war in Lebanon, the siege of Sarajevo, the fall of the Berlin Wall and the 2003 invasion of Iraq
    https://www.theguardian.com/artanddesign/2021/nov/23/tom-stoddart-obituary
  • ‘Find a part of each day to relish’: coping with cancer and Covid
    This year has challenged us all. But for Sarah Hughes it’s been particularly hard. Here, she talks about living with cancer – and letting in the light in the darkest of times
  • Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences | PNAS
    Learning biological properties from sequence data is a logical step toward generative and predictive artificial intelligence for biology. Here, we propose scaling a deep contextual language model with unsupervised learning to sequences spanning evolutionary diversity. We find that without prior knowledge, information emerges in the learned representations on fundamental properties of proteins such as secondary structure, contacts, and biological activity. We show the learned representations are useful across benchmarks for remote homology detection, prediction of secondary structure, long-range residue–residue contacts, and mutational effect. Unsupervised representation learning enables state-of-the-art supervised prediction of mutational effect and secondary structure and improves state-of-the-art features for long-range contact prediction.
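
    A minimal sketch of putting such learned representations to work, assuming the authors' open-source fair-esm package and its pretrained ESM-1b model; the example sequence is arbitrary, not from the paper:

    import torch
    import esm  # pip install fair-esm

    # Load the pretrained ESM-1b protein language model and its alphabet.
    model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    # An arbitrary example sequence.
    data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
    labels, strs, tokens = batch_converter(data)

    # Per-residue representations from the final (33rd) transformer layer.
    with torch.no_grad():
        out = model(tokens, repr_layers=[33])
    token_reprs = out["representations"][33]

    # Averaging over residues (position 0 is a BOS token) gives a fixed-size
    # embedding usable as features for downstream tasks such as secondary
    # structure or mutational-effect prediction.
    seq_repr = token_reprs[0, 1 : len(strs[0]) + 1].mean(0)
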
  • Roche, Genentech tap Flywheel to spin up AI-based drug discovery research | FierceBiotech
    Roche and its Genentech division have tapped data curation developer Flywheel to help train their machine learning models aimed at discovering potential new drugs.

    Flywheel’s platform helps automate the ingestion, classification and analysis of medical images collected from radiology scans and digital pathology programs as well as data from electronic health records and diagnostic tests.

    By automating the cumbersome steps of data aggregation and preprocessing, the startup’s cloud-based platform aims to deliver large batches of ready-to-use information to Roche and Genentech’s drug discovery teams working on precision medicine projects across multiple locations.

  • Vincentx15/Equi-RC: Equivariant layers for RC-complement symmetry in DNA sequence data
    This repository implements the layers described in "Reverse-Complement Equivariant Networks for DNA Sequences" in Keras and PyTorch. The simplest way to use it is to include the appropriate standalone Python script in your code.

  • Reverse-Complement Equivariant Networks for DNA Sequences | bioRxiv
    As DNA sequencing technologies keep improving in scale and cost, there is a growing need to develop machine learning models to analyze DNA sequences, e.g., to decipher regulatory signals from DNA fragments bound by a particular protein of interest. As a double helix made of two complementary strands, a DNA fragment can be sequenced as two equivalent, so-called Reverse Complement (RC) sequences of nucleotides. Taking this inherent symmetry of the data into account in machine learning models can facilitate learning. In this sense, several authors have recently proposed particular RC-equivariant convolutional neural networks (CNNs). However, it remains unknown whether other RC-equivariant architectures exist, which could potentially increase the set of basic models adapted to DNA sequences for practitioners. Here, we close this gap by characterizing the set of all linear RC-equivariant layers, and show in particular that new architectures exist beyond the ones already explored. We further discuss RC-equivariant pointwise nonlinearities adapted to different architectures, as well as RC-equivariant embeddings of k-mers as an alternative to one-hot encoding of nucleotides. We show experimentally that the new architectures can outperform existing ones.
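
    A minimal PyTorch sketch of the simplest such layer, the previously known filter-tying construction that the paper generalizes; it assumes one-hot input with channel order A, C, G, T, and all names are illustrative:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RCEquivariantConv1d(nn.Module):
        """RC-equivariant 1D convolution for one-hot DNA.

        Input has shape (batch, 4, length) with channels ordered A, C, G, T,
        so reverse-complementing a sequence is torch.flip(x, dims=[1, 2]).
        Half of the filters are free parameters; the other half are their
        channel- and position-flipped copies, so reverse-complementing the
        input swaps the two halves of the output and reverses its length.
        """

        def __init__(self, in_channels, out_channels, kernel_size):
            super().__init__()
            assert out_channels % 2 == 0, "need an even number of filters"
            self.weight = nn.Parameter(
                torch.randn(out_channels // 2, in_channels, kernel_size) * 0.05
            )
            self.bias = nn.Parameter(torch.zeros(out_channels // 2))

        def forward(self, x):
            # RC copy of each filter: flip input channels (A<->T, C<->G)
            # and the spatial axis, then tie the bias across the pair.
            w_rc = torch.flip(self.weight, dims=[1, 2])
            w = torch.cat([self.weight, w_rc], dim=0)
            b = torch.cat([self.bias, self.bias], dim=0)
            return F.conv1d(x, w, b)

    # Sanity check: the output for the reverse complement equals the
    # channel-half-swapped, length-reversed output for the original.
    layer = RCEquivariantConv1d(4, 8, kernel_size=5)
    x = F.one_hot(torch.randint(0, 4, (2, 100)), 4).float().transpose(1, 2)
    y, y_rc = layer(x), layer(torch.flip(x, dims=[1, 2]))
    expected = torch.flip(torch.cat([y[:, 4:], y[:, :4]], dim=1), dims=[2])
    assert torch.allclose(y_rc, expected, atol=1e-5)
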
  • Leveraging Deep Learning for Multilingual Sentiment Analysis – AYLIEN News API
    It is a strong indicator of today’s globalized world, and of rapidly growing access to Internet platforms, that we have users from over 188 countries and 500 cities globally using our Text Analysis and News APIs. Our users need to be able to understand and analyze what’s being said out there about them, their products, their services, or their competitors, regardless of locality and language. Social media content on platforms like Twitter, Facebook and Instagram can give brands and organizations unrivalled insights into customer opinion and experience. However, as the following stats show, users post content on these platforms in a multitude of languages:

    • Only about 39% of tweets posted are in English;
    • Facebook recently reported that about 50% of its users speak a language other than English;
    • Native platforms such as Sina Weibo and WeChat, where most of the content is written in a native language, are on the rise;
    • 70% of active Instagram users are based outside the US.
    A look at online review platforms such as Yelp and TripAdvisor, as well as various news outlets and blogs, reveals similar patterns in the variety of languages used. Therefore, whether you are a social media analyst, a hotel owner trying to gauge customer satisfaction, or a hedge fund analyst trying to analyze a foreign market, you need to be able to understand textual content in a multitude of languages.
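
    A minimal sketch of the task, not AYLIEN's system: an off-the-shelf multilingual sentiment model from the Hugging Face hub (nlptown/bert-base-multilingual-uncased-sentiment, which rates text from 1 to 5 stars) stands in for the deep-learning approach the post describes; the review texts are made up.

    from transformers import pipeline

    # One multilingual model covers many languages, with no
    # per-language pipeline to build or maintain.
    classifier = pipeline(
        "sentiment-analysis",
        model="nlptown/bert-base-multilingual-uncased-sentiment",
    )

    reviews = [
        "The room was spotless and the staff were wonderful.",  # English
        "La habitación estaba sucia y el servicio fue lento.",  # Spanish
        "Das Frühstück war ausgezeichnet.",                     # German
    ]
    for review, result in zip(reviews, classifier(reviews)):
        print(f"{result['label']} ({result['score']:.2f})  {review}")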

  • [2005.14165v4] Language Models are Few-Shot Learners
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions – something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
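
    A minimal sketch of the few-shot, in-context format the abstract describes: a natural-language task description plus a handful of demonstrations supplied purely as text, with no gradient updates. The word-unscrambling examples and the "=>" separator are illustrative, not taken from the paper.

    def few_shot_prompt(task_description, demonstrations, query):
        """Build a GPT-3-style few-shot prompt: instructions, a few worked
        examples, then the query, all as plain text for the model to
        complete (no fine-tuning or gradient updates)."""
        lines = [task_description, ""]
        for source, target in demonstrations:
            lines.append(f"{source} => {target}")
        lines.append(f"{query} =>")
        return "\n".join(lines)

    prompt = few_shot_prompt(
        "Unscramble the letters to form an English word.",
        [("ppael", "apple"), ("nanaab", "banana"), ("rgaep", "grape")],
        "yrrehc",
    )
    print(prompt)  # send this string to the model; it should complete "cherry"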

Digest powered by RSS Digest
