Imagine it’s 1998, and you’re the doctor in charge at an emergency department. You look in on an elderly woman who has arrived from home by ambulance. She’s pale, her forehead moist, her eyes unfocused. Her pulse is fast and her blood pressure low. An X-ray shows pneumonia, which has probably led to systemic inflammation and the overwhelming, immensely complex immune response known as severe sepsis.
What do you do with this patient? You can give her antibiotics for the pneumonia. You can give her IV fluids—and maybe even mechanical ventilation or medications—to try to raise her blood pressure. Oxygen might help. Definitely a hospital admission.
You think back to a recent journal article about the search for drugs to interrupt the sepsis response (a response that often does patients more harm than the infection that sets it off). No such drug is available yet, though. In fact, you’re only too aware that not much seems to lower the 40-plus-percent mortality rate in sepsis patients. Discouraged, you order fluids and antibiotics and ask the on-call intensivist to see her.
Not long after the date of this scenario, sepsis care changed dramatically. A look at how it did so can tell us something about how biomedical research lights the way, however imperfectly, for physicians at the bedside. How do physicians know what they know—or what they think they know?
In 2001, a University of Pittsburgh–trained critical care specialist at Detroit’s Henry Ford Hospital published a landmark paper on sepsis care in The New England Journal of Medicine. Emanuel Rivers (Res ’87), an MD and MPH, and his colleagues studied 263 patients with severe sepsis and septic shock, comparing mortality in patients treated within six hours with a strict bundle of interventions called early goal-directed therapy (EGDT) to that of patients treated with a simpler group of interventions, one that left more decisions up to the clinician’s judgment.
Patients treated with early goal-directed therapy, which included intravenous fluids, medications to raise blood pressure, continuous monitoring of blood oxygen and blood pressure via internal catheters, and even blood transfusion—all aimed at specific blood pressure and oxygenation goals—did better than patients treated with the simpler interventions. Their rapid heartbeats slowed, their blood pressures rose from low levels, their blood oxygen levels improved. And they survived at higher rates, with a remarkable 16-percentage-point lower risk of dying in the hospital than the other group.
The results offered emergency and intensive-care physicians new hope. Pitt’s Donald Yealy, an MD (Res ’88, Fel ’89), professor and chair of the Department of Emergency Medicine and professor of clinical and translational science, recalls the frustration regarding sepsis care in the pre-Rivers era. “Almost all of the research up until that point didn’t show any one thing was particularly helpful,” Yealy says. “People often had the approach that, once sepsis occurred, you could do supportive care; but really, it was out of your hands. . . . It’s not that patients were ignored, but it seemed like nothing mattered all that much.”
But after Rivers, sepsis didn’t seem so hopeless after all. Pitt’s Derek Angus, an MD and MPH, Distinguished Professor, Mitchell P. Fink Professor, and chair of the Department of Critical Care Medicine, calls the Rivers paper “the shot heard ’round the world.”
That shot was no magic bullet—it showcased a precise, stepwise series of largely uncontroversial treatments, swiftly administered. And it seemed to work. As other researchers rushed to replicate the exciting results, some hospitals adopted the protocol outright. The Surviving Sepsis Campaign launched in fall 2002 and issued its first set of guidelines in 2004; these noted the success of the Rivers protocol and recommended that physicians use its goals. Rivers, as that iconic paper is known among emergency physicians, has been cited more than 3,000 times since its publication.
Still, not everyone was sold yet. Angus says he and his Pitt colleagues viewed the Rivers study with “equipoise.”
“It was a great proof-of-concept study. But it was a single-center study, and so there were important questions about whether the findings could be validated,” Angus says.
Some physicians hesitated to adopt Rivers because the protocol is no picnic. It effectively brings the intensive-care unit into the emergency department, so it requires a lot of resources. Clinicians must place a central venous line and an arterial line—as well as intubate, ventilate, sedate, and paralyze sicker patients—with all the careful monitoring those procedures require. Everything takes place along strict numerical parameters; the clinician works to optimize oxygen levels, blood pressure, and red blood cell levels to specific goals. Titrating blood-pressure support medication requires an eagle eye and a careful hand. The blood bank, too, has to be on standby.
“For a while, since [Rivers’] evidence was all that was available, I think people thought that this was the ideal or the singular best pathway,” Yealy says. “The problem is that it’s very difficult to deliver. . . . Many people, I think, considered the use of it, but found it difficult to implement in their own setting.”
Some physicians wondered, too, whether to chalk up the study’s dramatic results not so much to its protocol as to the axioms on which that protocol was built: that sepsis should be sought out, diagnosed, and treated with as much urgency as a gunshot wound. Was determining exactly how to proceed less important than simply proceeding? Emergency physicians and intensivists badly needed a study to answer that question.
They had to wait more than a decade. But in May 2014, Angus, Yealy, and numerous collaborators published a large, randomized, controlled trial that compared septic-shock patients treated with a Rivers-like protocol to patients treated with either of two other simpler approaches—one a protocol and one a “usual care” option that left decisions up to the doctor. All three groups received early diagnosis and treatment, reflecting the post-Rivers consensus that such action is key. The study, called the Protocolized Care for Early Septic Shock (or ProCESS) trial, found no significant survival difference among the groups of patients, who numbered 1,341 people at 31 hospitals. The mortality rate hovered between 21 percent (Rivers protocol) and 18.2 percent (other protocol-based therapy) at 60 days. (That’s in-hospital deaths; the p. 12 graph shows cumulative mortality at 90 days.) ProCESS lends weight to what many physicians have long thought: Once patients get appropriate early diagnosis, antibiotics, and fluids, there may be more than one right way to proceed.
“What we’ve shown is that . . . how you [treat sepsis] is much less important than the commitment to looking for it and to staying on top of it as early as possible and as aggressively as possible,” says Yealy.
R. Phillip Dellinger, an MD and critical care specialist at Cooper University Health Care in Camden, N.J., is one of the leaders of the Surviving Sepsis Campaign, which still recommends a Rivers-like protocol for septic shock, including placing a central venous line. Dellinger says protocols can be particularly effective in community hospitals and wherever a major study isn’t goading clinicians to extra vigilance; and he suspects ProCESS’s “usual care” patients probably received care similar to what a protocol would call for. Still, he calls ProCESS “a study to be applauded,” because it “really speaks to the power of early identification and early treatment of septic shock and severe sepsis.”
Yealy draws the same message from ProCESS. “Our trial does not refute Rivers. It actually clarifies it,” he says. “Now we think of sepsis like we think of trauma, like we think of stroke, and like we think of heart attack. You have to get moving; you have to do things. That was really the durable message of Rivers.”
Large trials often clarify small trials in this way and sometimes overturn them. For many medical questions, small single-hospital trials are all that clinicians have to go on. But they’re seldom the last word on a subject.
“When it’s one small initial trial, it’s very difficult to make that become a standard operating procedure or become part of a protocol,” Yealy says. “The first study sets the promise of benefit. But it’s rare for it to answer the question completely.”
Cautionary examples abound. Physicians once routinely prescribed hormone-replacement therapy for postmenopausal women, a recommendation they based on small observational studies. Because women’s lipid levels fell with hormone replacement, physicians reasoned, the therapy would help prevent heart disease. Then came the Women’s Health Initiative. More than 16,000 women randomly received either hormone replacement or placebo; the hormone-replacement groups suffered a much higher risk of stroke.
Similarly, oncologists once held out hope that beta-carotene supplements could reduce mortality in lung cancer patients; large studies disappointed them. Intensivists took notice when a single-center study of 1,500 critically ill patients seemed to show a significant benefit to tight blood-sugar control. Eight years later, though, a 42-hospital study of 6,100 patients found that tight control led to higher mortality.
In short, though even large studies can be poorly designed, it’s especially risky to base the standard of care on small or single-center early studies. Cause and effect are more easily confused, for one thing. High blood sugar, for example, may not worsen critical illness but merely signal its severity, in which case controlling it would do little for the patient. Selection bias, confounding variables, and lack of blinding or controls can skew results in small trials, too. Some simply don’t enroll enough patients for their results to be statistically compelling. And some smaller trials have compelling numbers, but because of an anomaly (like a genetic trait common to the regional population but not the population at large), they don’t hold up on a large scale.
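A quick power calculation makes the sample-size point concrete. The sketch below uses the standard normal approximation for comparing two proportions; the mortality rates and arm sizes are illustrative round numbers, not figures from any trial discussed here.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1: float, p2: float, n_per_arm: int) -> float:
    """Approximate power of a two-sided z-test (alpha = 0.05) comparing
    event rates p1 and p2 with n_per_arm patients in each arm."""
    z_alpha = 1.96  # critical value for two-sided 5% significance
    p_bar = (p1 + p2) / 2.0
    se_null = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p1 * (1.0 - p1) / n_per_arm
                       + p2 * (1.0 - p2) / n_per_arm)
    return normal_cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# 130 patients per arm will usually catch a dramatic effect
# (30% vs. 16% mortality)...
print(round(power_two_proportions(0.30, 0.16, 130), 2))  # 0.77
# ...but will usually miss a modest one (21% vs. 15%).
print(round(power_two_proportions(0.21, 0.15, 130), 2))  # 0.24
```

A trial with only 24 percent power misses a real six-point mortality difference about three times out of four, which is why modest but clinically meaningful effects demand much larger enrollments.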
Joseph P. Costantino (a DrPH) knows what is and isn’t enough to hang your stethoscope on. He is a professor of biostatistics at Pitt Public Health and director of the Statistics and Data Management Center for NRG Oncology Foundation, a major National Cancer Institute grant recipient that conducts multi-institutional clinical cancer trials.
“A lot of the information that we have in evidence-based medicine comes from prospective, nonrandomized studies by people just looking at records and assessing who got a treatment and who didn’t, and then comparing the two groups,” Costantino says.
So-called observational studies like that can certainly be useful. But because these studies don’t randomize patients, hidden factors could influence results. The randomized controlled trial is considered the gold standard in clinical research for determining cause-and-effect relationships.
“I’m a firm believer in the randomized controlled trial as the best way to seek the truth,” Costantino says.
A subtler factor can also contribute to disparities between large and small trial results, according to Edward Chu, an MD and professor of medicine and of pharmacology and chemical biology, who has spent his career conducting clinical trials of investigational cancer drugs. That factor is meticulousness. Chu says that investigators conducting early phase or other small studies may be more careful than those running larger, late-phase studies, and that care makes an important difference.
“Even though they’re all working off the same playbook in terms of eligibility criteria, exclusion criteria, I think that level of scrutiny, perhaps the attention to detail, may not be quite as great” in late-phase studies compared to smaller ones, Chu explains.
Less experienced investigators running small trials, he says, tend to follow protocols closely when enrolling patients, whereas seasoned investigators may exercise more judgment about whom to enroll.
Ironically, this slight sloppiness is more representative of how a treatment is likely to be used in the “real world,” Chu suggests, making large trials better predictors of a treatment’s efficacy.
The long delay between Rivers and Angus certainly wasn’t for lack of interest. Conceiving a large multicenter trial and seeing it through to completion is an immense task.
Angus and colleagues designed their follow-up sepsis study in 2005, shortly after wrapping up another one. They secured funding in 2006. It took 18 months to set up the study sites, as institutional review boards examined and approved the study protocol and collaborators learned how to administer it. Patient enrollment took another five-plus years. Crunching the data, by comparison, went quickly.
That schedule is, unfortunately, typical. Enrolling patients can be the rate-limiting step. With rarer diseases, like certain cancers, enrollment can drag because the right patient only comes along occasionally.
Angus says doctors often also mistakenly view clinical trials as distractions or even as being at odds with good patient care. Convincing them otherwise could greatly accelerate the pace of research.
It can be hard, too, to convince people to try new treatments that seem daring. Such reluctance slowed landmark studies comparing lumpectomy plus radiation to total mastectomy in breast-cancer patients, the first of which was launched in 1976 by the National Surgical Adjuvant Breast and Bowel Project (NSABP) under the direction of Pitt’s Bernard Fisher (MD Distinguished Service Professor of Surgery). Fisher hoped to demonstrate—and ultimately did—that the first, less invasive option was as safe and effective as the second. But few patients wanted to be the first to take that chance.
“Getting women and physicians to agree to be randomized to a study where you’re going to do a little bit of surgery compared to this radical surgery—when, for years, the belief was ‘The more surgery the better’—was very, very difficult,” Costantino notes.
Besides delaying medical progress, such hesitancy can undermine the quality of results. By the time the Rivers trial was approved, funded, and under way, new research had emerged suggesting that its blood-transfusion threshold was too strict. It’s hard for researchers to design the ideal research protocol when the standard of care evolves out from under them.
“There’s no question that these trials are incredibly labor intensive and expensive,” Angus says. “There’s a tremendous penalty that we constantly pay in terms of the delay to knowing the answer and the precision to which we know the answer, simply by having clinical trials be logistically burdensome.”
Cancer researchers, at least, are finding ways to speed things up, thanks to what we’re learning about cancer biology.
Typically, clinical researchers test new medical treatments in three phases. In phase 1, a few patients receive the new treatment and researchers test safety, dosage, and side effects throughout the course of several months to a year. Phase 2 trials focus on the treatment’s efficacy in a few dozen or several hundred patients over about two years. Phase 3 trials last much longer; they randomize hundreds or thousands of patients to receive the new treatment or one or more standard treatments.
Pitt has earned an outstanding reputation in phase 3 clinical trials for cancer. For instance, its NSABP conducted the original studies of lumpectomy for breast cancer, as well as landmark research into breast-cancer prevention and treatment with tamoxifen. (In early 2014, the NSABP merged with two other research groups, the Gynecologic Oncology Group and the Radiation Therapy Oncology Group, to form the NRG Oncology Foundation.)
And now the pace and logistics of cancer trials are changing. Tumors result from mutations that release the brakes on a cell’s growth and division. Two patients with the same cancer diagnosis may have very different mutations; and as sequencing technology improves, it’s getting easier to detect and categorize cancers by specific mutation. Many new drugs are aimed precisely at those specific mutations, and researchers expect many more to emerge, potentially transforming cancer treatment.
Studying such drugs means tracking down a group of cancer patients who share the relevant genetic anomaly. Though that sounds difficult, it also presents a golden opportunity. Those studies will require fewer patients than studies of a less-precise drug would—and the results will be more relevant.
Recognizing this, the National Cancer Institute reorganized its clinical trials structure in March 2014 to link cancer centers around the nation in a National Clinical Trials Network (NCTN). (NRG Oncology is one of five of its adult patient “network groups” in the United States and Canada.) The network is intended to speed up late-phase trials by allowing member institutions to collaborate and pool resources.
“Some of these subtypes are so small that there aren’t many patients out there, so you do need to have a large collaborative effort,” says Costantino. “If one group is doing a study, it’s open to the entire system, and the entire system is encouraged to participate.”
Members will share a data-management system and a single institutional review board, both of which are expected to shave time off trials. In April, the University of Pittsburgh became one of 30 recipients of a Network Lead Academic Participating Site grant, which is set aside specifically for the NCTN. At about $5 million, the grant will fund cancer trials under the leadership of Adam Brufsky, an MD/PhD and professor of medicine and codirector of the Comprehensive Breast Cancer Center.
NCI isn’t overlooking early phase trials, either. To coordinate phase 1 and 2 trials of investigational cancer drugs and of biomarkers that could help physicians detect patients most likely to benefit, it created the Experimental Therapeutics Clinical Trials Network, or ETCTN, in early 2013. Chu is principal investigator on a $4.25 million ETCTN grant; Pitt is one of 12 centers in the nation to receive a grant of this kind.
There’s reason to think that tests of new cancer drugs could go rapidly. Case in point: ceritinib, a drug the FDA approved to treat a subtype of non-small cell lung cancer after it performed spectacularly in a multicenter phase 1 trial that tested an unusually high number of patients. (More typically, a phase 1 cancer trial might come up with 80 to 100 patients; this one had 163.) Early phase trials, then, can be enough to demonstrate both safety and efficacy if researchers can enroll plenty of patients with the relevant mutation.
Ceritinib, Chu says, may herald a new paradigm of drug development.
“They had a genetic mutation. They have a genetic test. They have a drug that targets it. And, poof—in phase 1, [an] incredibly positive clinical benefit,” Chu says. “It’s going to be the poster child.”
Bringing together multiple centers with early phase expertise is critically important, Chu adds. The arrangement takes advantage of each center’s strengths. Pitt, for example, brings strengths in drug metabolism, clinical pharmacology, imaging, and pathology, among other areas. UPMC also has a broad patient base, which makes it easier to find the right patients for any given study.
“In the end, the whole is greater than the sum of the individual parts,” Chu says. “Then there’s real synergy.”
Sepsis researchers like Angus don’t have the oncologists’ luxury of dividing patients into genetic subsets—not just yet, anyway. They are finding other ways to push their research ahead.
Reaching across borders is one strategy. Large though the Angus study was, it enrolled only enough patients to detect a potential 6-to-7-percentage-point difference in mortality between protocols. One of those protocols might still have an edge over the others—just a few percentage points perhaps, but enough to be worth knowing. So Angus plans to pool data from the ProCESS trial with those of two other large sepsis studies. One, called ARISE, was led by Rinaldo Bellomo, an MD and another Pitt-trained intensivist (Fel ’93), who teaches at the University of Melbourne and Monash University. Bellomo et al. reported in the October 1 NEJM that 90-day mortality did not differ (18.6 vs. 18.8 percent) between the Rivers protocol and usual care. That study—with 1,600 patients mostly from Australia and New Zealand—was even larger than ProCESS. The other multicenter trial, ProMISe, takes place in the United Kingdom. With such a huge patient pool across so many centers, small but potentially lifesaving subtleties in sepsis care should be detectable.
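The payoff from pooling can be sketched with the standard sample-size formula: the smallest detectable difference shrinks with the square root of enrollment. The numbers below are illustrative round figures (two-sided 5 percent significance, 80 percent power, baseline mortality near 20 percent), not the trials' actual design parameters.

```python
import math

def min_detectable_difference(p_base: float, n_per_arm: int) -> float:
    """Smallest absolute difference in event rates a two-arm trial can
    detect with ~80% power at two-sided alpha = 0.05 (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.84  # 5% two-sided significance; 80% power
    return (z_alpha + z_beta) * math.sqrt(
        2.0 * p_base * (1.0 - p_base) / n_per_arm)

# A few hundred patients per arm resolves only fairly large effects...
print(round(min_detectable_difference(0.20, 450), 3))   # 0.075
# ...while pooling trials to ~2,000 per arm cuts that roughly in half.
print(round(min_detectable_difference(0.20, 2000), 3))  # 0.035
```

Quadrupling the patient pool halves the detectable difference, which is exactly the kind of few-percentage-point edge the pooled analysis is after.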
Keeping in touch with patients over time and re-examining samples collected during the study can bring more valuable insights. Angus will follow ProCESS patients for years to learn more about long-term sepsis survival. Other researchers across the nation, including Pitt’s Brian Suffoletto, an MD and assistant professor of emergency medicine, are examining ProCESS blood samples to investigate the role that the endothelial cells lining blood vessels play in sepsis.
And new large-scale studies continue to be born. Associate professor of critical care medicine David Huang, an MD/MPH, who trained at both Pitt and Henry Ford Hospital, is leading a new multicenter study of procalcitonin, a marker of inflammation, to see whether it can alert doctors to early-stage pneumonia. If so, that would make decisions about prescribing antibiotics easier.
Though the process of understanding sepsis has been arduous, we can take heart. The mortality rate has plummeted since 1998. That’s thanks to studies both large and small (especially the study by Rivers).
Angus hopes that physicians will become more receptive to the idea of involving their patients in research studies. The National Cancer Institute reports that just 3 percent of cancer patients are enrolled in clinical trials. Angus believes that trials would run 10 times faster if just 10 percent of eligible patients were to enroll instead.
So whether you’re a doctor or a patient, add “study enrollment” to your to-do list. Medical progress needs you.
Editor’s Note: Writer Jenny Blair, an MD, trained in emergency medicine. She says that as a resident a decade ago, she made hundreds of index cards “that served as mnemonics/reminders of this and that.” The Rivers protocol was too complex to fit on a card, so she carried a folded-up sheet of paper instead. She still has it today.