UPFRONT: What You See Is What You Get


“What we see depends mainly on what we look for.” — John Lubbock

Clinical trials are designed to reject the null hypothesis. By definition, the organizers start with a hypothesis, and then they design a trial to support it. So when the result of a trial does not match a preconceived notion, people will try to spin the result toward the answer they originally wanted. We see this all the time in retina, especially in National Eye Institute clinical trials.

Pharmaceutical industry-sponsored phase 3 registration studies have a binary goal: the new drug must be equal to or better than the current standard of care. The drug either meets that standard or it is not approved. There is no room for opinions or marketing spin, because the FDA-approved label is based completely on the clinical trial results, and a company can market only what is contained in the label.

The CATT trial (and the numerous other comparison studies performed worldwide) was designed to see if there was a difference in efficacy (but not safety) between ranibizumab and bevacizumab. We all know that the study results showed that the 2 drugs were similar in efficacy. However, your interpretation of the CATT results depends on what you were looking to get out of the study. Bevacizumab users note that the efficacy of monthly treatment with the 2 drugs was similar, which bolsters their point, yet they conveniently forget that as-needed bevacizumab was inferior. As-needed users overlook that monthly treatment won by a long shot, and that the PRN they practice is not what was done in the study. Finally, the treat-and-extend crowd uses the CATT results to show that their regimen is best — even though it was not even studied in the trial. In this issue, we look at the 7-year results of the study.

A similar phenomenon came out of the DRCR Protocol T results. Bevacizumab users note that because the number of letters gained was not so great, the study supports the practice of starting with bevacizumab and switching if things don’t look good early. On the other hand, ranibizumab users interpret the first-year results as a fluke and take the second-year results as the truth, overlooking the difference in area under the curve. And aflibercept users feel vindicated, even though the difference was small.

It is human nature to want to be right, and these varying interpretations of the same trial results merely show that retina specialists are human. But it is important to look at things from other angles. To that end, in this issue, you will find the first of our new multidisciplinary roundtables tackling interesting and often controversial topics. The goal of these roundtables is to discuss topics in retina with both retina specialists and physicians from other fields. By including views often different from our own, we hope to shed more light on these issues. We plan to expand this section and include other topics of interest in future issues. We welcome any suggestions. RP