Members of the Accelerating Therapeutics for Opportunities in Medicine (ATOM) consortium at Lawrence Livermore National Laboratory. From left: Stacie Calad-Thomson, GlaxoSmithKline; Jim Brase, Lawrence Livermore National Lab; Jason Paragas; and John Baldoni, vice president of in silico discovery at GSK. (Julie Russell/LLNL)
Cancer researchers at UCSF and computer scientists at Lawrence Livermore National Laboratory are partnering with researchers from the National Cancer Institute's Frederick National Laboratory and pharmaceutical giant GlaxoSmithKline (GSK) in an attempt to do just that. Members of the consortium hope to use supercomputing power to significantly slash the time needed to screen potential cancer drugs and bring them to human trials.
"We're rolling on this, and it's really exciting," says Alan Ashworth, president of the Helen Diller Family Comprehensive Cancer Center at UCSF. The university is committing $750,000 annually plus researchers and lab space, Ashworth says.
GSK is contributing a vast store of data concerning 2 million molecules that could lead to cancer drugs. The data trove also includes 500 compounds that failed in the development process. The combination of the huge data set with information on failed drugs will be key to training the project's algorithms to understand how compounds interact with the human body.
A Long Process
Ashworth was one of the scientists who, in 1995, discovered the gene BRCA2, which increases the risk of developing inherited breast cancer. It took 10 years from the gene's discovery to develop a drug to treat cancer related to that mutation, he said, and another nine years before the drug was licensed.
The reason cancer drug development can take so long is that most testing is done with actual molecules: millions of them must be screened to find one that might become an effective drug for treating a particular type of cancer.
For each molecule, researchers create dozens of iterations in a process called "optimization," altering the molecule to find the most effective form. Chemists have to create every modification in the lab, so researchers can test each one for toxic effects and efficacy. Just one toxicology study on one molecule takes 18 months, says Michelle Arkin, an associate professor in the UCSF School of Pharmacy. If the molecule fails, you start over.
But if you could test these same molecules in a supercomputer programmed to understand the relevant biological relationships, it would significantly streamline the process — a computer can do in seconds what may take days or weeks in a lab.
The consortium, called ATOM, for Accelerating Therapeutics for Opportunities in Medicine, believes computing power has finally reached the point where it can aid in the process.
"There's been a revolution in machine learning," says Jim Brase, the deputy associate director for computation at Lawrence Livermore National Laboratory.
In the past, Brase says, when scientists wanted to analyze a large set of data, they had to tell the computer what interactions to search for in determining how a drug molecule might affect a specific protein in a particular type of tumor.
But now, Brase says, scientists believe they can teach a computer the fundamentals of biology so it can learn to identify which relationships might be effective and which could be useless or toxic.
Success will depend on scientists' ability to push computer learning to new levels.
Stacie Calad-Thomson, operations, planning and strategy director with GSK, says the consortium will take the first two years of the project to develop algorithms. After that, the goal is to identify an effective drug and take it to human trials, all within one year.
The original data donated by GSK will remain private, says Mary Anne Rhyne, the company's director of corporate communications. But any resulting drug-testing tools will be made publicly available.
Reasons for Skepticism and Hope
The goals of computer-assisted drug development and personalized medicine have been around for a long time, and many of these efforts have disappointed.
Earlier this year STAT, as well as watchdog site Health News Review, reported on the failed collaboration between the M.D. Anderson Cancer Center and IBM's Watson, which sought to make personal cancer treatment recommendations and match patients to clinical trials. Behind the largely positive news coverage, the project did not work as hoped. (Not everyone agrees it's been a bust.)
Meanwhile, the company's Watson for Drug Discovery program is hoping candidate drugs it identified as potential treatment for Parkinson's disease will prove to work. And IBM and Pfizer announced a collaboration in 2016 to use Watson to accelerate cancer drug discovery.
Dr. Steven Salzberg, director of the Center for Computational Biology at the McKusick-Nathans Institute of Genetic Medicine at Johns Hopkins University, is skeptical of such supercomputing efforts.
"I honestly don't see how throwing lots of computers at the problem will speed up preclinical cancer drug discovery," he says. "It's the classic problem where when your only tool is a hammer, everything looks like a nail.
"We all need computers for research, so that's good. But cutting drug discovery from 6 years down to 1? That sounds implausible to me."
But Aedin Culhane, a researcher in the Department of Biostatistics at the Dana-Farber Cancer Institute at Harvard, is more optimistic. Yes, the idea of computer-assisted drug discovery has been around for a long time, she says, but only now has it finally matured.
"These projects were completely and utterly just a pipe dream 10 years ago," says Culhane. "And now we have more and better data, and we're learning more day by day. Given good data, the machines can learn, and I'm very hopeful about it. Much more than I was 10 years ago."
She finds ATOM's intent to publish negative results especially interesting. "Because that's something that is missing. Academics rarely publish negative results, and pharma even more so."
Re-examining old data in the light of new information, she says, can sometimes offer novel insights. Perhaps researchers misunderstood how a failed drug was working in the body the first time around; or maybe reviewing its off-target effects in light of new discoveries related to tumors will reveal a deeper understanding of biochemistry.
John Baldoni, senior vice president for computerized drug screening at GSK, says that not only are data more abundant, but the engineering required for computer-based analysis is more sophisticated.
He also believes researchers will be able to draw on software advancements in fields like facial recognition and dimensional analysis.
"We have applications in other sectors that are analogous to the applications that we want to develop in the pharma sector," he said.
The technology is also starting to demonstrate results, according to Baldoni. He points to clinical testing of molecules discovered by computer algorithms from Nimbus Therapeutics.
"What I would say to a skeptic," he says, "is 'You might be right.' But I hope they're not. I think there's a confluence of things that gives this a better shot now than we had in the past."