Artificial Intelligence and Medicine

Note: Apologies for not updating in a while. I’ve been working on a science fiction novel for National Novel Writing Month. Stay tuned, I might show some of it on this blog!

Artificial intelligence, or more accurately machine learning, has exploded in popularity over the past couple of years. This is not the artificial intelligence of science fiction movies (that would be artificial general intelligence), but rather algorithms, or sets of instructions, designed to perform specific tasks. These algorithms have applications in search engines, speech recognition, and even games.

Formally, machine learning is defined as the study of algorithms that can perform tasks without being explicitly programmed, improving instead through a process of learning. The most famous examples right now would be IBM's Watson, which beat two human opponents at Jeopardy!, and DeepMind's AlphaGo, which defeated a top champion at Go.

So what does machine learning do, exactly? It is mainly used to find patterns and associations in data, much like traditional statistical methods. The goal is to find possible predictors for an outcome of interest in a dataset and use those predictors to accurately predict the outcome in a new dataset. Within health, there are many possible applications, such as diagnosing illnesses from medical images, using natural language processing to extract pertinent information from electronic medical records, identifying potential drug targets, and determining disease risk.

Machine learning can be categorized into two types: supervised learning and unsupervised learning. Supervised learning is the more familiar type, where the data are already labeled with what we want the algorithm to learn (i.e., both inputs and outputs are known). Unsupervised learning is where the algorithm analyzes unlabeled data to identify patterns and correlations. Examples of supervised learning methods include multiple linear regression, logistic regression, decision trees, support vector machines, and neural networks. Examples of unsupervised learning methods include principal component analysis, factor analysis, autoencoders, and k-means clustering. I'm not going to go into the nitty-gritty of what differentiates these methods, but each was created to handle different data structures and uses a different approach for learning and building models. (One machine learning technique that's gained prominence is deep learning, a subset of neural networks that uses many layers to analyze large datasets. Its most famous use is in Google's search engine.) All of these methods are trained on an initial dataset (or two, or three, or more), and the resulting model is then tested on another dataset to determine its accuracy.
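To make the distinction concrete, here is a minimal sketch of both approaches using Python's scikit-learn library and entirely synthetic data (the predictors and outcome below are invented for illustration, not drawn from any real medical dataset): a logistic regression is trained on labeled data and tested on a held-out set, while k-means clustering looks for structure in the same inputs without ever seeing the labels.

```python
# A minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # four hypothetical predictors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # a labeled outcome of interest

# Supervised: the algorithm sees inputs AND outputs during training,
# and the fitted model is then tested on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Unsupervised: the algorithm sees only the inputs and looks for structure,
# here by grouping the observations into two clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

The held-out test set is what lets us gauge how well the trained model generalizes beyond the data it learned from.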

Yes, like traditional biostatistical regression (among other methods), machine learning builds models to describe associations and predict outcomes. But it does so through the process of learning. What do I mean by learning? Algorithms are penalized for making errors. For example, in image analysis, if an algorithm misidentifies a cat as a dog, or mistakes a benign tumor for a malignant one, its model gains a larger error term. The algorithm then modifies itself to reduce that error term until it reaches an optimal state. As with other models, we measure performance with error metrics: percent correctly predicted, sensitivity, specificity, false positive rate, false negative rate, and others.
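As a hedged illustration of what "penalized for making errors" looks like in practice, the toy sketch below fits a one-weight classifier by repeatedly nudging the weight in the direction that shrinks its error, then scores the result with the same measures mentioned above. The data are synthetic and the model is deliberately simplistic.

```python
# A toy illustration of "learning by penalizing errors": a one-weight model
# repeatedly adjusts itself to shrink its prediction error, and the final
# predictions are scored with the usual performance measures.
# Entirely synthetic data; not any particular medical application.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (x > 0).astype(int)                  # true labels, e.g. benign = 0, malignant = 1

w = 0.0                                  # the model starts out knowing nothing
for _ in range(200):
    p = 1 / (1 + np.exp(-w * x))         # predicted probability of a positive
    grad = np.mean((p - y) * x)          # how the error changes as w changes
    w -= 0.5 * grad                      # adjust w to reduce the error term

pred = (1 / (1 + np.exp(-w * x)) > 0.5).astype(int)
tp = np.sum((pred == 1) & (y == 1))      # true positives
tn = np.sum((pred == 0) & (y == 0))      # true negatives
fp = np.sum((pred == 1) & (y == 0))      # false positives
fn = np.sum((pred == 0) & (y == 1))      # false negatives

print("sensitivity:", tp / (tp + fn))           # true positive rate
print("specificity:", tn / (tn + fp))           # true negative rate
print("false positive rate:", fp / (fp + tn))
print("false negative rate:", fn / (fn + tp))
```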

So how is it doing in medicine? The most success has been seen in radiology, that is, analyzing images to identify and classify diseases faster than, and in some instances almost as accurately as, radiologists. Personally, I haven't seen it applied much to administrative claims datasets, but there is potential there, especially with all the data that are available. Perhaps it could be used to stage fibrosis in hepatitis C or to identify patterns associated with addiction.

However, there are some serious obstacles to widespread adoption. The largest is that the algorithm itself is a black box: especially with neural networks, we don't know how the models arrive at their predictions. That is particularly dangerous in a life-and-death field such as medicine. Fortunately, tools are being built to increase transparency. Another drawback is the vast amount of data required; going back to image analysis, millions of images may be needed to train an algorithm. And finally, there's that fear lurking in the back of everyone's mind: that their jobs will be replaced by computers.
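On the transparency point, one example of the kind of tooling being developed is permutation importance: scramble one predictor at a time and see how much the model's accuracy suffers, which gives a rough picture of what a black-box model is relying on. The sketch below uses scikit-learn and synthetic data; it is one illustrative approach among many, not a specific tool the field has settled on.

```python
# A rough sketch of one transparency technique, permutation importance:
# scramble each predictor in turn and measure how much the model's
# held-out accuracy drops. Synthetic data, illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)   # only predictors 0 and 2 matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"predictor {i}: importance {importance:.3f}")
```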

In my opinion, it will be a long time before these algorithms replace the human element in medicine. They are not currently capable of critical thinking or of determining what is most clinically relevant. They do, however, show much promise for research purposes and may bypass many of the problems of statistical inference. But make no mistake, they are extremely complicated. I haven't even touched the math in this post.

Summary: Machine learning is a set of methods that perform complex tasks by learning from data. There are many possible applications in medicine, but obstacles include lack of transparency, heavy data requirements, and sheer complexity.
