Comparative effectiveness research doesn't translate to clinical practice

The Patient Protection and Affordable Care Act is funneling an estimated $3.5 billion into comparative effectiveness research, but little of this research translates into clinical practice. Five factors holding it back need to be addressed, according to an article published in the October issue of Health Affairs.

“Decades of experience suggest that translating evidence into changes in clinical practice is rarely rapid and that treatments with weak evidence of effectiveness are often rapidly adopted,” wrote lead author Justin W. Tambie, PhD, a policy researcher with RAND Corporation in Arlington, Va., along with his colleagues.

From study design to the adoption of new clinical practices, the life stages of comparative effectiveness research are shaped by existing practices, professional expectations, financial incentives, local market demands and regulation, according to Tambie. Keeping in mind these factors, all of which can advance or stall a transition to clinical practice, the researchers examined recently published studies on the translation of comparative effectiveness research into clinical practice and identified five reasons it often fails:

  • Misalignment of financial incentives pushes patients toward costly treatments, even when evidence suggests less expensive treatments can achieve similar results. In a fee-for-service system, providers are often reimbursed more for invasive and aggressive treatments than for more conservative and cost-effective ones. Under pressure from pharmaceutical companies, professional associations and device manufacturers, payers have difficulty modifying coverage policies.

  • Ambiguity of results makes it difficult for providers, payers and legislators to act on comparative effectiveness research. Regardless of how carefully comparative effectiveness studies are designed, they rarely produce definitive results and even more rarely lead to a broad consensus on best practices for treatment.

  • Cognitive biases in interpreting new information reinforce healthcare professionals’ preconceived ideas and leave them predisposed to reject contrary evidence. Additionally, many healthcare professionals believe that action is better than inaction and that the latest technologies provide the best care, even when inaction or more conservative forms of treatment are the best option.

  • Failure to address the needs of end users means some comparative effectiveness research is not relevant to those who might act on it. Tambie cites a study that compared the effectiveness of two antipsychotics when clinicians were more interested in the safety of each drug relative to the other.

  • Limited use of decision support could point providers to the most effective treatments, but the lack of a business case for these tools means they have not been widely adopted.

Building stakeholder consensus surrounding the design of comparative effectiveness research and its translation to clinical practice could help lower barriers to implementation, as could additional legislation to move healthcare away from fee-for-service, according to Tambie.

“American taxpayers and other stakeholders have made a substantial investment in the production of new comparative effectiveness research, in the hope that the results will better inform clinician and patient decisions and thus improve the quality and reduce the cost of care,” he concluded. “To achieve these goals, policy makers must also recognize and address the root causes that have impeded translation of evidence into practice.”
