A scene of two children as it might be viewed by a person with diabetic retinopathy. (National Institutes of Health)
Would you trust an algorithm to make a medical diagnosis?
In the past six months, hundreds of the most talented data scientists from all over the world entered a competition, sponsored by the California Healthcare Foundation, to build just such an algorithm. The winner, Benjamin Graham, a professor of statistics at the University of Warwick, developed a computer program that rivaled human experts in identifying signs of diabetic retinopathy from an eye scan.
Health experts say this could have major implications for the 347 million people who have diabetes worldwide. Diabetic retinopathy affects 40 to 45 percent of people with diabetes. Without treatment, it often leads to severe vision loss. Early detection can make a huge difference, and an algorithm could step in where resources are scarce.
"There has been this dream in our community to develop an algorithm that can read images of the retina," said Jorge Cuadros. He's a clinical professor in the department of optometry at the University of California, Berkeley, and the CEO of Eyepacs, a clinical application for exchanging eye-related information. Cuadros has been exploring algorithms and their utility for diabetic retinopathy for more than a decade.
According to Cuadros, connecting patients with diabetes to the right treatment has proven to be an ongoing public health challenge, particularly in rural and low-income communities. Among other things, he added, a computer algorithm could do the vital work of "triage," prioritizing the patients most at risk of the disease when there is a giant pile of eye scans to work through.
The algorithm could also help human specialists avoid making mistakes. Graham's winning algorithm had an "agreement rate" that was roughly 10 percent higher than a human-only approach. In other words, the algorithm and a human expert were more likely to agree on a diagnosis than two human experts were.
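To make the "agreement rate" idea concrete, here is an illustrative sketch: the fraction of scans on which two graders assign the same severity label. The grades below are invented for illustration; this is not the competition's actual scoring metric.

```python
def agreement_rate(grades_a, grades_b):
    """Fraction of cases where two graders assign the same label."""
    assert len(grades_a) == len(grades_b)
    matches = sum(a == b for a, b in zip(grades_a, grades_b))
    return matches / len(grades_a)

# Hypothetical severity grades for eight scans
# (0 = no retinopathy ... 4 = most severe)
expert = [0, 2, 1, 4, 0, 3, 2, 0]
model  = [0, 2, 1, 3, 0, 3, 2, 1]

print(agreement_rate(expert, model))  # 0.75 on this toy data
```

On this toy data the two graders agree on 6 of 8 scans. A higher agreement rate between algorithm and expert than between two experts is the sense in which the algorithm "rivaled" human performance.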
"At some point in the near future we'll be limited less by what technology can do and more by people's willingness to trust it," said Anthony Goldbloom, cofounder and CEO of Kaggle, a San Francisco-based web startup that hosted the competition.
Smarter than Humans
Cuadros had his doubts when he initially learned about the competition, which kicked off in February of this year.
"I wondered whether an algorithm could be as good as a human," he said.
But a new wave of tools has emerged that uses sophisticated machine-learning technology for medical diagnosis. In recent years, startups like Enlitic and CellScope have developed technology to find signs of diseases like cancer in patients' medical images.
So Cuadros, through his work at Eyepacs, agreed to provide thousands of anonymous eye scans to the data scientists. These images had already been graded by one or more human experts as sight-threatening or not.
Graham differentiated between the scans by leveraging neural networks, a branch of machine learning loosely modeled on how neurons in the brain process information. "The winner applied this technique to develop the algorithm in ways that would not have been possible five years ago," said Goldbloom.
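The basic idea behind such a network can be sketched in a few lines: stacked layers of learned linear transforms and nonlinearities that map pixel values to a probability for each severity grade. The toy example below uses random, untrained weights purely for illustration; real retinopathy models are deep convolutional networks trained on thousands of expertly labeled scans.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity: pass positive values, zero out negatives
    return np.maximum(0.0, x)

def softmax(x):
    # Convert raw scores into probabilities that sum to 1
    e = np.exp(x - x.max())
    return e / e.sum()

# A flattened 32x32 grayscale "scan" (random pixels stand in for real data)
pixels = rng.random(32 * 32)

# Two layers with randomly initialized weights; training would adjust
# these to better match expert-assigned severity grades.
w1, b1 = rng.normal(0, 0.05, (64, 1024)), np.zeros(64)
w2, b2 = rng.normal(0, 0.05, (5, 64)), np.zeros(5)

hidden = relu(w1 @ pixels + b1)
probs = softmax(w2 @ hidden + b2)  # one probability per severity grade 0-4
print(probs.round(3))
```

During training, the weights are nudged so that scans graded as sight-threatening by experts produce high probability on the matching severity class.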
According to Cuadros, human specialists get it right about 90 percent of the time. The errors are often due to tiredness or problems with the quality of the image.
"What was so amazing is that there was a very high correlation between the algorithm and the human experts," Cuadros said. "But when there was a disagreement, sometimes the algorithm got it right and the humans were wrong."
What's Next for the Algorithm?
New programs have popped up in rural parts of California in recent years to incorporate screening into primary care visits, with the eye scans interpreted remotely by ophthalmologists. One immediate use for the algorithm is to notify photographers in real time whether they need to retake an image.
But it remains to be seen whether the algorithm will make its way into clinical practice as a diagnostic tool that will replace or support human specialists. The next step is a more formal clinical trial to convince other decision-makers in the medical community. The trial will include 1,000 patients and will kick off later in the year.
Cuadros hopes the algorithm may ultimately prove useful in places where "virtually zero percent of patients with diabetes get retinal exams." In North Africa, for instance, nonprofit groups are leveraging telemedicine services so providers anywhere in the world can look at people's eye scans. But the limiting factor is that a high-quality Internet connection is required. "In the future, these clinics would only need the algorithm and an inexpensive retinal camera," he said.
Globally, the algorithm could potentially alert a human specialist to look more closely at one scan over another. It may also be useful in nudging primary care providers when a patient needs to see a retinal specialist immediately, rather than a regular eye doctor.
The regulatory climate is far trickier in the U.S., Cuadros said, as there are barriers to computers performing eye exams without a human specialist present. "Unless this policy changes," he said, "there will be fewer incentives for clinics to rely on the algorithm."