Summary: Deep learning will transform medicine, but not in the way that many advocates think. The amount of data, multiplied by the mutation frequency and divided by the biological complexity and the number of hidden variables, is small, so downloading a hundred thousand genomes and training a neural network won't cut it.
Original link: http:///why-medicine-needs-deep-learning/

2. [Blog] Read-through: Wasserstein GAN
Summary: For Wasserstein GAN, what convinced me to do a read-through was mostly compelling word of mouth.
3. [Demo & Code] Image-to-Image Demo
Summary: Recently, I made a TensorFlow port of pix2pix by Isola et al., covered in the article "Image-to-Image Translation in TensorFlow". I've taken a few pre-trained models and built an interactive web demo for trying them out (Chrome is recommended). The pix2pix model is trained on pairs of images, such as building-facade label maps paired with photos of the facades, and then attempts to generate the corresponding output image from any input image you give it. The idea comes straight from the pix2pix paper, which is a good read. A minimal sketch of this paired-image training setup appears after item 5 below.
Original link: http:///pixsrv/index.html

4. [Blog] Sorting through the tags: how does Tumblr's graph-based topic modeling work?
Summary: What sets Tumblr apart from other social media platforms is the way its users communicate with each other. Each user has a highly customizable blog where they can post and share content (articles, images, GIFs, or videos) or re-post content published by another user. Sharing and re-posting is not only key to how social connections are formed, but also to how trending and popular topics are established, since users must tag each post they publish.

5. [Blog & Code] How to implement sentiment analysis using word embeddings and convolutional neural networks in Keras
Summary: The IMDB dataset consists of 50,000 movie reviews labeled with one of two categories, negative or positive, which makes this a typical binary sequence-classification problem. The article shows how to implement a deep learning model for this sentiment-analysis task that reaches roughly 87% accuracy (the state of the art is 88.89%). A minimal Keras sketch is shown below.
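To make the paired-image idea from item 3 concrete, here is a rough sketch of a pix2pix-style training step in TensorFlow/Keras: a generator maps a label image to a photo, and a discriminator scores (input, output) pairs as real or generated. This is a simplified illustration, not the architecture from the actual port; the helper names, the toy layer stacks, the 256x256 resolution, and the L1 weight of 100 are all assumptions for the sketch.

```python
# Simplified sketch of the paired-image training idea behind pix2pix (item 3).
# Layer sizes, helper names, and hyperparameters are illustrative placeholders.
import tensorflow as tf

def build_generator():
    # Toy encoder-decoder: maps a 256x256x3 label image to a 256x256x3 photo.
    inputs = tf.keras.Input(shape=(256, 256, 3))
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(inputs, outputs)

def build_discriminator():
    # Judges an (input, target) pair: real pairs vs. (input, generated) pairs.
    inp = tf.keras.Input(shape=(256, 256, 3))
    tar = tf.keras.Input(shape=(256, 256, 3))
    x = tf.keras.layers.Concatenate()([inp, tar])
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(1, 4, padding="same")(x)  # patch of real/fake logits
    return tf.keras.Model([inp, tar], x)

generator = build_generator()
discriminator = build_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(label_img, real_photo):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_photo = generator(label_img, training=True)
        real_logits = discriminator([label_img, real_photo], training=True)
        fake_logits = discriminator([label_img, fake_photo], training=True)
        # Generator: fool the discriminator and stay close to the real photo (L1 term).
        g_loss = bce(tf.ones_like(fake_logits), fake_logits) + \
                 100.0 * tf.reduce_mean(tf.abs(real_photo - fake_photo))
        # Discriminator: real pairs -> 1, generated pairs -> 0.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```

The discriminator seeing the input image alongside the output is what makes this a paired image-to-image setup, as opposed to an unconditional GAN.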
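For item 5, the following is a minimal Keras sketch of the word-embedding-plus-CNN approach the article describes, using the IMDB dataset bundled with Keras. The vocabulary size, sequence length, filter counts, and number of epochs are assumptions for illustration rather than the article's settings, so the resulting accuracy will not necessarily match the reported ~87%.

```python
# Minimal sketch of a word-embedding + CNN sentiment classifier on the Keras
# IMDB dataset (item 5). Hyperparameters are illustrative, not the article's.
import tensorflow as tf
from tensorflow.keras import layers

max_words, max_len = 20000, 400  # assumed vocabulary size and padded review length

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=max_words)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = tf.keras.Sequential([
    layers.Embedding(max_words, 128),          # learn word embeddings from scratch
    layers.Conv1D(64, 5, activation="relu"),   # n-gram-like feature detectors
    layers.GlobalMaxPooling1D(),               # strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
```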