Bench philosophy: Deep learning implementations in biology

A Deeper Understanding of Life
by Steven D. Buckingham, Labtimes 02/2016



Deep learning algorithms, applied by computer scientists to “teach” machines, are inspired by the brain's neuronal networks. So why not use deep learning approaches to tackle biological problems?

This year, artificial intelligence hit the news once again when, for the first time, a computer programme beat the European champion in a game of Go. Why was this so important? Because Go is not like chess – the number of possible moves is far less constrained, which makes the game much more like the complex, open-ended world with which humans feel comfortable. And, until recently, a world that we felt was our own domain, off limits to computers. We silenced our fears about robots taking over by assuring ourselves that there are some tasks they could never do. But the recent capitulation of Go to a piece of software reflects something big happening in artificial intelligence that will certainly affect us all.


Deep learning may support biologists in many different fields, ranging from data mining and sequence alignments to population genetics and drug design. Photo: University of Washington

What has changed? The answer is deep learning – an artificial intelligence approach that has learned from the way our own brains are wired up. Deep learning has been causing a quiet revolution in the world of artificial intelligence and the ripples of that revolution are beginning to lap at the shores of biology.

Two ways of machine learning

So what is deep learning and why should biologists pay it any attention? There are two ways you can make a machine intelligent. The first is the hard way – you start at the very bottom and try to figure out, one by one, the rules that underlie an intelligent behaviour. For example, if you were writing a programme to translate from one language to another, you would laboriously code in all the grammatical rules, syntactic rules, idioms and vocabulary, along with all the exceptions, listed one by one. The programme will then do exactly what you tell it to do – it is just an embodiment of rules you had worked out yourself. Your thoughts encapsulated in code. But with a little imagination, you can probably guess why this approach didn't really take off. The problem, of course, is that when you do things this way, there turn out to be so many rules and exceptions that you never really get to the end of it all. And in some cases, such as computer vision, you can prove (more or less) that the problem can never be fully solved this way – there just isn't a simple mapping from one domain (the image) to the other (identifying an object).

So, as far back as the 1950s, some people attempted a slightly different approach by taking (as all great engineering inventions do) their cue from nature. Noticing that the brain appears to be built up from some very simple components (now stop laughing, you neuroscientists – it did look ‘simple’ at the time...), some computer engineers tried simply hooking up computer equivalents of neurones and tweaking the strength of the connections between them (‘weights’), until the network did what the engineers wanted. Thus, the ‘perceptron’ was born and there was a lot of excitement, because it looked as though there were many things it could do that foxed the “hard coders”.
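In code, a perceptron is almost embarrassingly small. The sketch below (plain Python with NumPy) computes a weighted sum of its inputs and fires if the sum crosses a threshold; the weights here are picked by hand purely for illustration – in a real perceptron they would be nudged step by step by a learning rule until the unit behaved as wanted.

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """A single perceptron unit: weighted sum of inputs, then a hard threshold."""
    return 1 if np.dot(weights, inputs) + bias > 0 else 0

# Hand-picked, illustrative weights that make the unit behave like a logical AND:
# it only fires when both inputs are 1.
weights = np.array([1.0, 1.0])
bias = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), weights, bias))
```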

But then it all went through a slump, when someone at the back of the room cleared their throat and asked how a perceptron could figure out the simple ‘XOR’ problem (say ‘yes’ if one, but only one, of two inputs is ‘yes’).

Similar to neural networks

But neural networks gradually made a comeback, thanks to two important developments. The first was the discovery that if you add a hidden layer in between the input and output layers, you can, in fact, approximate any function you can think of. The second development was the invention of the “back propagation” learning algorithm – a method that allows the network to figure out how to adjust the weights (the strengths of the connections between the “neurones”) in response to an error. That way, you don't have to figure out yourself what weights to give the network; you just unleash it on a training set and let it learn from its mistakes.
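To make that concrete, here is a minimal sketch – ordinary Python and NumPy, with the hidden-layer size, learning rate and number of training steps chosen arbitrarily for illustration – of a network with one hidden layer learning the very XOR problem that stumped the single-layer perceptron, adjusting its own weights by back propagation of the error.

```python
import numpy as np

# The XOR training set: answer 'yes' (1) if one, but only one, input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input  -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(20000):
    # Forward pass: each layer feeds the next
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Nudge the weights in the direction that reduces the error
    W2 -= lr * h.T @ err_out;  b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h;    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # after training, the outputs should approach [0, 1, 1, 0]
```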

But this isn't deep learning, yet. The final big step took place just over the last decade or so, when another lesson was learned from the brain. Deep learning is when, just like in the brain, you stack up a series of neural networks, with each one's output providing the input for the next layer further up the chain. The beauty is that each layer takes care of its own learning. And the inspiration from nature goes further still: many deep learning systems contain what are called ‘convolutional’ layers – they apply filters to their inputs that are not unlike the receptive fields of neurones in our visual system. Such networks are capable of learning highly abstract statistical patterns in the data.
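The stacking idea can be sketched with a toy ‘image’ and a hand-made filter (both invented purely for illustration): the first layer applies a small filter across local patches of its input, rather like a receptive field, and whatever it produces becomes the input of the next layer up the chain. In a trained network the filter values are, of course, learned rather than written in by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over the image ('valid' convolution): each output
    value looks only at a local patch, like a neurone's receptive field."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0)   # a simple nonlinearity between layers

image = np.zeros((8, 8)); image[:, 4:] = 1.0     # toy image: dark left, bright right
edge_filter = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # responds to vertical edges

layer1 = relu(conv2d(image, edge_filter))        # first layer: detects simple edges
layer2 = relu(conv2d(layer1, np.ones((2, 2))))   # next layer sees layer 1's output
print(layer2.shape)                              # (6, 6): an increasingly abstract map
```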

The results of deep learning are quite staggering. Take a look at Google's showcase of the TensorFlow deep learning kit, which they have recently released to the public as open source (http://googleresearch.blogspot.co.uk/2014/11/a-picture-is-worth-thousand-coherent.html). Take a picture of two pizzas on a stove top, present it to a suitably crafted deep learning network and it will return a caption (“two pizzas on a stove top”). What is more, these visual networks can even “dream” – in the absence of input, they can generate images internally that look very much like surreal art.

Not limited to images

So, how is this impacting biology? Remember what I said earlier: in essence, deep learning is all about finding higher-order statistical patterns. And these statistical properties can be in just about any domain. They are not limited to images: free text (including natural language), speech and other areas have been tried with notable success. And just as previously refractory problems in those areas have fallen under the deep learning onslaught, hard problems in biology are now yielding to its charms.

Take the problem of recognising sequence specificities of DNA- and RNA-binding proteins. Sure, this is regularly being done using tricks like position-weight matrices. But, as Brendan Frey and his team have pointed out in a recent publication (Nature Biotechnology 33: 831), there remain some serious challenges. First, when you try to leverage high-throughput experimental data, you face the problem that the data come in qualitatively different forms. And then there is, of course, the sheer volume of data to be processed. So Frey applied his own deep learning network (DeepBind) and found that it not only performed incredibly well, but it was also able to generalise easily over different types of data input and was robust against noisy or missing data.
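DeepBind's real architecture is, of course, more elaborate, but the core trick can be sketched in a few lines: one-hot encode the sequence, slide a motif-detecting filter along it and keep the best match (a convolution followed by max-pooling). The filter weights and the example sequences below are made up purely for illustration – in DeepBind-style networks they are learned from the experimental binding data by back propagation.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a length x 4 matrix (one column per base)."""
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        mat[i, BASES.index(base)] = 1.0
    return mat

def motif_scan(seq_matrix, motif_filter):
    """Slide a motif detector along the sequence and keep the best match."""
    width = motif_filter.shape[0]
    scores = [np.sum(seq_matrix[i:i + width] * motif_filter)
              for i in range(seq_matrix.shape[0] - width + 1)]
    return max(scores)

# A hypothetical, hand-made filter that 'likes' the motif TGCA.
motif = np.array([[0, 0, 0, 1],    # T
                  [0, 0, 1, 0],    # G
                  [0, 1, 0, 0],    # C
                  [1, 0, 0, 0]],   # A
                 dtype=float)

print(motif_scan(one_hot("AATTGCATTA"), motif))   # high score: motif present
print(motif_scan(one_hot("AAAAAAAAAA"), motif))   # low score: motif absent
```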

Deep learning start-ups

Frey knows that deep learning is not just a toy for academics – indeed, it is starting to earn hard cash as the basis of some new start-ups. Last summer, the University of Toronto launched a spin-off company called Deep Genomics (beating the inevitable rush for obvious company names) – thanks to Frey's work. Deep Genomics fed their deep learning networks with information about gene mutations and the resulting aberrations in RNA splicing and editing. The result, published last year in Science, was encouraging: the artificial brain spotted thousands of known disease-causing mutations and threw up a few new ones of its own, including 17 new autism genes (Xiong et al., Science 347(6218)).

This February, Danish biotech companies Bavarian Nordic and Evaxion Biotech formed a team with the Technical University of Denmark to make use of deep learning in an attempt to discover a vaccine against MRSA (Methicillin-resistant Staphylococcus aureus). Evaxion have experience in this area: they make their money from a deep learning network that looks at the proteome of a pathogen, then predicts a list of proteins able to elicit a “super protective” (i.e. protective across a range of strains) antibody response.

Andrew Radin and Andrew Radin met up and formed a company... wait – no – that is NOT an error – Andrew M. Radin originally encountered Andrew A. Radin over a domain-name dispute but they eventually made friends and ended up working together. The result of their collaboration is TwoXAR (www.twoxar.com), which claims that whereas you and I would take up to six years to gather evidence for a new drug candidate, their deep learning net can do it in minutes. Unconvinced? Understandable. However, some hard-nosed business people have obviously been won over – last November, the Andrews Radin managed to raise $3.4 million in seed financing for TwoXAR. (By the way, have you realised how they devised their company name?)

Cutting costs with deep learning

But TwoXAR are not alone. GE Healthcare are also convinced by deep learning. At the end of 2015, they signed a deal with newly launched Arterys to develop automated cardiac MRI assessment using just a single, ten-minute scan – something unheard of to date.

But for once, it is not only big business that stands to profit from this new technique. It has not escaped notice that a piece of software that can outperform humans can cut costs for anyone, including the less well-off. And an odd quirk of technology history provides a great ready-to-use platform for deep learning networks: deep learning has been implemented to take advantage of the GPU (Graphics Processing Unit). Why is that good? Because the mobile market is aimed at users who want a good graphics experience, so tablets and mobiles are usually pretty well endowed in the GPU department.

Start-up SocialEyes (www.socialeyesus.com) has developed an Android tablet that you can point at someone's eye, take an image of the retina and pass the image through a deep learning network that detects retinal markers for disease. This is great news for the developing world, where expensive expertise can be hard to find.

Deep learning is taking all the prizes – literally. In 2012 Merck launched their “Molecular Activity Challenge”, in which teams competed “to identify the best statistical techniques for predicting biological activities of different molecules, both on- and off-target, given numerical descriptors generated from their chemical structures”. Teams were given 15 molecular activity data sets, each for a biologically relevant target. And the winner is... yes, you guessed it – a deep learning network! One, as it happens, developed by one of the greats in the field, Geoffrey Hinton and his team.

More user-friendly utilities

Geoffrey Hinton, no less? So is deep learning only for the experts? Well, not quite. Getting hold of the software for running deep learning is easy – there are free packages, such as Theano from http://deeplearning.net/software/theano/ (a Python package that works nicely with your GPU), the C++-based Caffe (which talks nicely to Python) and, of course, Google's TensorFlow. All these sites have well-crafted tutorials, which at least give the impression that the new user will eventually find their way around this new field. But beware – training these networks takes time. And there is not much here for you if you don't know how to programme.

But even there, some slightly more user-friendly utilities are coming onto the scene. Keras (https://github.com/fchollet/keras/wiki/Keras,-now-running-on-TensorFlow) was originally designed to be – according to the GitHub site – “a model-level framework, providing a set of ‘Lego blocks’ for building Deep Learning models in a fast and straightforward way”.
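To give a flavour of those ‘Lego blocks’, here is a minimal sketch, assuming a recent Keras installation with the TensorFlow backend; the layer sizes, the 50-feature input and the random placeholder data are arbitrary choices for illustration, not a recipe for any particular biological problem.

```python
import numpy as np
from tensorflow import keras

# Stack the 'Lego blocks': an input, two hidden layers, one output.
model = keras.Sequential([
    keras.Input(shape=(50,)),                    # e.g. 50 numerical features per sample
    keras.layers.Dense(32, activation="relu"),   # hidden layer 1
    keras.layers.Dense(16, activation="relu"),   # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid")  # output: probability of a class label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on random placeholder data, just to show the workflow; in practice the
# features and labels would come from your own experiment.
X = np.random.rand(200, 50)
y = np.random.randint(0, 2, size=(200, 1))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3]))
```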

Having said that, you will probably never build and train your own deep learning network. But it is a near certainty that you will see the impact of deep learning in bioscience grow at a noticeable rate in the coming decade.








