Cancer Forums and News by PhD's

Old 10-29-2010, 09:29 PM
gdpawel gdpawel is offline
Join Date: Feb 2007
Location: Pennsylvania
Posts: 4,360
Comparative Effectiveness Research Panel

The comparative effectiveness research panel does not tell any doctor what treatment or drug they can or cannot prescribe. It looks at "comparative" effectiveness, not "cost" effectiveness. When it compares two treatments, it does not take cost into account.

The panel's only mission is to figure out which treatment is most effective for patients who meet a certain medical profile. The panel's assessment is then made public so that doctors can take it into account when making decisions. The law says that the information cannot be used as the sole consideration when deciding what to cover.

The major use of this information will be to put it into Health IT (information technology) programs, flagging doctors who are treating disease X or Y and providing the comparative effectiveness information on which patients are likely to benefit from treatment A or B. Doctors are free to use the information or ignore it. Some doctors will ignore it because they have always used treatment A and are just not ready to be convinced by new medical evidence. Other doctors will ignore it because they are treating a patient suffering from three diseases, so while treatment A might be best for most patients, it wouldn't be best in that patient's case.

In the UK, where the government does comparative effectiveness research and makes it available to all doctors, doctors comply with the recommendations about 88% of the time. The government considers this a good compliance rate because there are always situations where something is unique about the patient or the circumstances. They want doctors to use their judgment when "applying" the research.

On the other hand, in the U.S. doctors follow evidence-based recommendations (beta-blockers after heart attacks, etc.) only about 50% of the time. This is a very low rate of compliance with "best practice" guidelines.

On the Health Business Blog, David Williams asks New England Health Care Institute’s Valerie Fleishman to explain the definition of comparative effectiveness research (CER), describe the CER provisions contained in the new legislation, and discuss the challenges in disseminating new information to be used at the point of care. It’s an excellent interview that tells you everything you need to know about comparative effectiveness research and how it will be used at the point of care.


Fleishman begins by explaining that comparative effectiveness research was first established in the Recovery Act of 2009, the stimulus legislation, which included a $1.1 billion investment in comparative effectiveness research and laid the foundation for what appeared in the Affordable Care Act (ACA) in March 2010. The ACA strengthens this plank of reform by

1. Establishing a non-profit corporation called the Patient-Centered Outcomes Research Institute (PCORI). That institute is set up as a non-governmental, private entity to oversee the federally funded investment in comparative effectiveness research. The ACA says that the institute will identify priorities for comparative effectiveness research, help to carry out the research agenda, create advisory panels, establish peer review processes, and be responsible in part for releasing research findings and coordinating the research that gets done.

2. Stipulating that comparative effectiveness research must take into account different subpopulations to avoid one-size-fits-all results.

3. Setting up a trust fund so comparative effectiveness research can be funded on an ongoing basis. The $1.1 billion put in as part of the stimulus was really just for the first couple of years. The trust fund creates a pool of funding of about $600 million per year from both the Medicare trust and a fee on health plans.

4. Stipulating that none of the reports or findings can be used as mandates, guidelines or policy recommendations, and that the findings cannot be used as the sole evidence in making coverage determinations.

But physicians are free to make use of comparative effectiveness information as they see fit when prescribing treatments for patients, and Fleishman talks about the opportunities to disseminate the information. “There is an enormous opportunity, particularly when you think about the computerized physician order entry systems and others that will be part of this. Meaningful use clearly stipulates that these systems have to have some form of clinical decision support. One of the hurdles to adoption for CER is that clinicians themselves don’t have ready access to the latest information on what the evidence says about different forms of treatments or procedures. Providing the information technology infrastructure and tools to clinicians at the point of care so they can have access to the data will go a long way toward making the data accessible.”
Gregory D. Pawelski

Old 10-29-2010, 09:30 PM

It's been difficult for the Medicare program to control the substantial costs of cancer drugs. In an issue of the New England Journal of Medicine, an article by Dr. Peter Bach stated that the costs to Medicare of injectable cancer drugs given in doctors' offices increased from $3 billion in 1997 to $11 billion in 2004, an increase of 267% at a time when the costs for the entire Medicare program increased 47%.

It also described a huge reduction in Medicare expenses that occurred when the off-label use of ESAs (drugs for anemia-related issues) was found to actually harm patients. The drugs were proven to be over-used, and once that came to light their use quickly dropped, with costs to Medicare falling from over $1 billion a year to just $200 million.

In 2003, in the political payback deal of the century, Congress guaranteed premium pricing for pharmaceuticals by prohibiting Medicare from negotiating drug prices, and it provided hundreds of billions of dollars in U.S. taxpayer subsidies to pay for these premium drug costs.

Dr. Bach also outlined ways the Medicare program could control costs. One is to fund a comparative-effectiveness program to assess whether new treatments are really better than older treatments, so that decisions can be made about which cancer treatments patients can actually afford.

Comparative research has the potential to tell us which drugs and treatments are safe, and which ones work. This is not information that the private sector will generate on its own, or that the "industry" wants to share. Companies want to control the data, how it is reviewed, evaluated, and whether the public and government find out about it and use it.

Comparative-effectiveness research can help doctors and patients, through research, studies and comparisons, understand which drugs, therapies and treatments work and which don't. Doctors, along with their patients, will still have the ultimate decision.

I've heard many times, in regard to health care reform, that the government is going to ration our health care with comparative effectiveness research (CER).

Let's take this issue with regard to NCCN and ASCO guidelines and see how these fraternal organizations engage in rationing themselves.

ASCO has issued guidelines on how physicians should discuss the cost of treatment options with patients. I've never heard that ASCO has been knighted a regulatory agency. Some experts warn that its guidelines could raise costs even further, limiting cancer patients' access to treatment.

Allen Lichter of ASCO has said "Cancer sticker shock is hitting hard now, the cost of treating cancer is rising by 15% annually. Affordable treatment options are a particular issue for patients with incurable forms of cancer who are looking for both the longest possible survival and the best quality of life."

One of ASCO's suggestions is for oncologists to consider cost, essentially as another side effect, when choosing a treatment. According to MSK's Leonard Saltz and other physicians in attendance at one of NCCN's meetings, many physicians do not know the cost of certain treatments because costs are not included in treatment "standards." "If we can do it just as well less expensively, I think doctors should know that and be able to make a decision," Saltz said.

Why is our government being accused of health care rationing when others have suggested it themselves?

Why Most Published Research Findings Are False

Gregory D. Pawelski

Old 01-19-2011, 11:41 PM
Prices Soar for Cancer Drugs

These so-called "smart drugs" focus their effects on specific, identifiable processes occurring within cancer cells. The new "targeted" drugs are highly promising in that they "sometimes" provide benefit to patients who have failed traditional therapies. However, they do not work for everyone, they often have unwanted side effects, and they are all extremely expensive. Patients, physicians, insurance carriers, and the FDA are all calling for the discovery of "predictive" tests that allow for rational and cost-effective use of these drugs.

There was a predictive test (chemoresponse assay) developed that holds the key to solving some of the problems confronting this high-price healthcare system that is seeking ways to best allocate available resources while accomplishing the critical task of matching individual patients with the treatments most likely to benefit them. Not only is it an important predictive test, it is also a unique tool that can help to identify newer and better drugs, evaluate promising drug combinations, and serve as a "gold standard" correlative model with which to develop new DNA, RNA, and protein-based tests that better predict for drug activity.

One good recent example of Comparative Effectiveness Research in cancer medicine was a Duke University cost savings study on the impact of a chemoresponse assay on treatment costs for recurrent ovarian cancer. They sought to estimate mean costs of chemotherapy treatment with or without use of a chemoresponse assay.

They estimated mean costs for 3 groups: (1) assay assisted: 75 women who received oncologist's choice of chemotherapy following chemoresponse testing (65% adherence to test results), (2) assay adherent: modeled group assuming 100% adherence to assay results, and (3) empiric: modeled from market share data on most frequently utilized chemotherapy regimens. Cost estimates were based on commercial claims database reimbursements.

The most common chemotherapy regimens used were topotecan, doxorubicin, and carboplatin/paclitaxel. Mean chemotherapy costs for 6 cycles were $48,758 (empiric), $33,187 (assay assisted), and $23,986 (assay adherent). The cost savings related to the assay were associated with a shift from higher- to lower-cost chemotherapy regimens and lower use of supportive drugs such as hematopoiesis-stimulating agents.
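The per-patient savings implied by those figures are easy to check with a little arithmetic. The sketch below is a hypothetical illustration using only the mean six-cycle costs quoted above; the dictionary keys are invented labels, not terms from the study:

```python
# Mean six-cycle chemotherapy costs reported in the Duke study (US$)
costs = {
    "empiric": 48_758,         # market-share-based regimen mix
    "assay_assisted": 33_187,  # oncologist's choice after testing (65% adherence)
    "assay_adherent": 23_986,  # modeled 100% adherence to assay results
}

# Per-patient savings and percent reduction versus empiric therapy
for group in ("assay_assisted", "assay_adherent"):
    savings = costs["empiric"] - costs[group]
    pct = 100 * savings / costs["empiric"]
    print(f"{group}: ${savings:,} saved per patient ({pct:.0f}% less than empiric)")
```

By this arithmetic, assay-assisted treatment saved roughly $15,600 per patient, and full adherence to assay results roughly $24,800, about half the empiric cost.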

The conclusion of the study was that assay-assisted chemotherapy for recurrent ovarian cancer may result in reduced costs compared to empiric therapy, which is what most medical oncologists use now (PMID: 20417480).

But how does one get ASCO and others to understand this and allow its judicious use? They have single-handedly done more over the past 20 years to keep assay testing (pre-testing) technology under a bushel basket and out of the public light. It has hurt literally hundreds of thousands of patients. We'd be much further along, and the technology would have improved and become even more accurate. New treatments would have been discovered and targeted immediately to the people who could most benefit from them. This has been one great lost opportunity in clinical cancer research.
Gregory D. Pawelski

Old 01-19-2011, 11:49 PM
Impact of Health Care Reform on Cancer Patients

Cancer Journal: November/December 2010 - Volume 16 - Issue 6 - pp 593-599 doi: 10.1097/PPO.0b013e3181feee9a

Special Issue on Impact of Health Care Reform on Cancer Patients

An Opportunity for Coordinated Cancer Care: Intersection of Health Care Reform, Primary Care Providers, and Cancer Patients

Collins, Lauren G. MD; Wender, Richard MD; Altshuler, Marc MD

From the Division of Geriatric Medicine, Department of Family and Community Medicine, Jefferson Medical College/Thomas Jefferson University, Philadelphia, PA.

The US health care system has become increasingly unsustainable, threatened by poor quality and spiraling costs. Many Americans are not receiving recommended preventive care, including cancer screening tests. Passage of the Affordable Care Act in March 2010 has the potential to reverse this course by increasing access to primary care providers, extending coverage and affordability of health insurance, and instituting proven quality measures. In order for health care reform to succeed, it will require a stronger primary care workforce, a new emphasis on patient-centered care, and payment incentives that reward quality over quantity. Innovations such as patient-centered medical homes, accountable care organizations, and improved quality reporting methods are central features of a redesigned health care delivery system and will ultimately change the face of cancer care in the United States.


In regards to Cancer Screenings

Consumer Reports evaluated eleven cancer screening tests and found that most of us should avoid eight of them. The non-profit consumer organization says that most preventive cancer screenings are oversold and may confuse rather than clarify.

In a new report, the authors say that not all cancer screening tests are helpful. In fact, they added that some of them may be harmful.

Consumer Reports emphasizes that its advice regarding avoiding eight cancer screenings is directed at those who are not at high risk and do not have signs and symptoms of cancer.

The most effective tests

The following cancer screening tests, according to the authors of the new report, are the most effective and received the highest ratings:

Cervical cancer - received the highest score. Screening is recommended for females aged 21 to 65 years. Females younger than 21 should skip the Pap smear (cervical cancer screening), because this type of cancer is extremely rare in that age group and the tests are not accurate for it. In March 2012, the United States Preventive Services Task Force issued its guidelines for cervical cancer screening.

Breast cancer - received the second highest score for females aged 50 to 74 years. However, women in their forties and those 75+ should consult with a health care professional to determine whether screening is advisable, based on their risk factors.

Colon cancer - received the highest score for men and women aged from 50 to 75 years. Lower scores were attributed for patients aged 76 to 85. A low score was given for people aged 86+. The lowest score went to patients up to the age of 49. Younger patients should only consider testing for colon cancer if they are deemed to be high risk. The disease is extremely rare among those younger than 50.

In March 2012, the American College of Physicians published a new guidance statement in the Annals of Internal Medicine, concerning colorectal cancer screenings.

The majority of people are advised to avoid the following cancer screening tests:

Bladder cancer - consists of a urine test, which looks for blood or cancer cells

Lung cancer - the patient undergoes a low-dose CT scan. American Cancer Society guidelines advise doctors to only recommend lung cancer screening for older people who have smoked for many years.

Oral cancer - a health care professional, such as a dentist, carries out a routine visual exam of the mouth. The American Cancer Society recommends this be done as part of a patient's normal routine oral care.

Ovarian cancer - received the lowest rating for females of all ages because it is not very effective. Only high-risk women need to be tested. In September 2012, the US Preventive Services Task Force (USPSTF) recommended against routine ovarian cancer screening because the risks are greater than the benefits.

Prostate cancer - consists of a blood test. Levels of prostate-specific antigen (PSA) are measured. According to the American Cancer Society, patients should discuss with their doctors whether they should undergo this test. In May 2012, the US Preventive Services Task Force concluded that the PSA blood test "may benefit a small number of men but will result in harm to many others".

Pancreatic cancer - received the lowest score for men and women of all ages. Only those at high risk should consider pancreatic cancer screening, which consists of imaging tests of the abdomen or genetic tests. There is no current test able to detect the disease in its early (curable) stage.

Skin cancer - a dermatologist carries out a visual exam of the patient's skin and looks out for signs of melanoma (deadly skin cancer). According to the American Cancer Society, this should be part of a routine check-up done by doctors.

Testicular cancer - received the lowest score for men of all ages. Only men at high risk should be considered for testicular cancer screening. The majority of testicular cancers that are detected without screening are curable.

The medical team gathered and examined data from medical studies, talked to experts, and surveyed over 10,000 readers. They also talked to patients.

A more detailed analysis was done on evidence-based reviews from the US Preventive Services Task Force, an independent group supported by the Department of Health and Human Services (HHS).

John Santa, M.D., M.P.H., director of the Consumer Reports Health Ratings Center, said:

"We know from our surveys that consumers approach screenings with an 'I have nothing to lose' attitude, which couldn't be further from the truth. Unfortunately some health organizations have promulgated this belief, inflating the benefits of cancer screenings while minimizing the harm they can do.

To help clarify when most consumers should use cancer screenings and when they should skip them, we rate each screening and whether it is useful for a specific age group. We also try to identify some high risk factors that may make screening a reasonable choice."

Not even doctors can always agree on which screening should be classed as necessary, Dr. Santa explained.

The authors of the report advise patients to ask their health care professional a series of questions before agreeing to undergo any kind of cancer screening test.

Below are some possible questions:

Will a positive test result save my life?

Am I at a higher risk of developing this cancer than the rest of the population? If so, why?

How often do patients get false-positive results for this type of screening?

Are there any other tests available which are equally good?

If my results come back positive, what happens next?

Over the last ten years, the number of Americans seeking preventive cancer screening has dropped, researchers from the University of Miami Miller School of Medicine reported in Frontiers in Cancer Epidemiology in December 2012.

The authors believe that reductions in workers' insurance coverage, plus the failure of leading bodies to agree on screening guidelines have contributed to the fall.

While there has been a drop in advanced cancer diagnoses in America over the last decade, the number of cancer survivors returning to work has risen. The researchers believe that "keeping to a cancer screening schedule could be an important factor, as this helps detect secondary tumors early."

Citation: "Avoid Eight Cancer Screenings, Says Consumer Reports." Medical News Today. MediLexicon, Intl., 3 Feb. 2013
Gregory D. Pawelski

Old 04-14-2011, 10:23 PM
The Illusory Side of “Comparative Effectiveness Research”

Dr. Nortin M. Hadler, Professor of Medicine and Microbiology/Immunology at the University of North Carolina at Chapel Hill, and Dr. Robert A. McNutt, Professor of Medicine and Chief of the Section on Medical Informatics and Patient Safety at Rush University Medical Center, Chicago, argue that comparative effectiveness research (CER) needs an “anchor”: one treatment with known efficacy. In their analysis of randomized controlled trials, they highlight the crucial question: how high should we set the bar to consider the results of a trial compelling?


“Comparative effectiveness research” is now legislated as a priority for translational research. The goal is to inform decision making by assessing relative effectiveness in practice. An impressive effort has been mobilized to target efforts and establish a methodological framework. We argue that any such exercise requires a comparator with known and meaningful efficacy; there must be at least one anchoring group or subset for which a particular intervention has reproducible and meaningful benefit in randomized controlled trials. Without such, there is a likelihood that the effort will degenerate into comparative ineffectiveness research.

As charged in the American Recovery and Reinvestment Act, the Institute of Medicine defined comparative effectiveness research (CER) as “ …the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition, or to improve the delivery of care… at both the individual and population levels.”

However, you can’t compare treatments for effectiveness from observational data unless you are certain that one of the comparators is efficacious. There must be at least one group of patients for whom the treatment has unequivocal efficacy. Otherwise, CER might discern differences in relative ineffectiveness. We argue that CER cannot succeed as the primary mechanism to assure the provision of rational health care.

The difference between efficacy and effectiveness

The science of efficacy tests the hypothesis that a particular intervention works in a particular group of patients. CER asks whether an intervention works better than other interventions in practice where patients are more heterogeneous than those recruited and accepted in a trial. The gold standard of efficacy research is the randomized controlled trial (RCT). RCTs usually monitor defined, albeit sizeable populations for surrogate outcomes in order to detect a difference in the short term. Modern biostatistics has probed every nuance of the RCT paradigm. The result is a highly sophisticated understanding of its limitations. A particularly vexing limitation is that the RCT fails to test hypotheses broadly enough; that is, RCTs limit the variability of patients making it difficult to generalize the value of treatments to those not studied.

CER to the rescue?

The methodology employed for CER is not constrained by limits on patient variability as in RCTs. CER utilizes real-world data sets to deduce benefit/harm in a range of patients, including those who might reasonably be excluded from an RCT. This entails large clinical and administrative networks to provide data. Datasets must be large enough to capture the individual differences that affect estimates of benefit/harm across the gamut of insurance, age, co-morbidities, and lifestyle. This inclusivity is paramount. For example, when we buy a book at Amazon, we are given a list of “other books bought by those who bought your book”. There is a data-mining program in the background that links characteristics of the book you bought to characteristics of similar books and to the characteristics of buyers. A different list of book recommendations results based on variations in buyer characteristics, like age, gender, and purchase history. This is a perfect analogy to what CER promises.

But, there are fundamental differences between book buying and health care provisions. In book buying there is a defined/ homogenous outcome, the book; health care outcomes are not homogeneous and often subjective (life, function, jobs, fancier hospitals, etc.). It is hard to imagine the messy list from “Amazon” health we would see based on what we chose as a goal of health care; it is easy to imagine how readily perturbed the list would be by introducing nuances in outcome. One of the fundamental problems with attempts to rationalize health care is that we still don’t agree on how to measure either health or what is rational care.

Furthermore, Amazon is not the sole vendor of books. The associations at Amazon may not reflect the totality of characteristics (books and people) across all places books can be bought. Hence, any book list suggested solely by Amazon may be incomplete or flawed. For CER to be a valid “Amazon” for health care, it has to define and capture the nuances of health care outcomes and provision across all sites of care (including the home).

Clearly any inference regarding relative benefits and harms from the analysis of large datasets is suspect. Shortcomings relating to benefits, harms and provision of care are lurking. Any statistical modeling would require assumptions and compromises. Hence, the validity of interpreting observational data will depend on the degree to which diagnosis, clinical course, interventions, coincident diseases, personal characteristics or outcomes are assumed rather than quantified. No matter how compulsively this is done, CER demands judgments about the importance of each of these variables. Therefore, CER cannot be the engine of health care decision making.

As an example, total knee replacement (TKR) has at present escaped efficacy testing. How would we learn from observational research if TKR works? Some of the relevant variables to assess efficacy can be parsed from observational data such as patient demographics, type of hardware, co-morbidities, and the like. However, some variables are very difficult to parse in the best of circumstances - such as a definition of benefit; or surgical experience; or, more elusive, surgical skillfulness.

Efficacy research is the horse; CER is the cart.

There are two alternative ways forward other than the present plans for CER. First, we could design efficacy trials that are efficient in providing gold standards across a wider range of patient characteristics. We would have to expand trials to larger populations. For the sake of validity, we would have to measure only a single clinically meaningful outcome, even if that took a great deal of time. And we’d have to forswear all shortcuts that trade reliability off against efficiency (such as “tack-on” questions for post-hoc analysis).

There is a second approach that is more straightforward. We can design elegant RCTs seeking a large enough clinically meaningful outcome on highly selected patient populations. If none is detected, we can either abandon the intervention or choose another highly selected population to study. If a clinically meaningful difference is detected, the result can serve as the anchoring comparator for CER.

However, to design such a straightforward RCT, we must also deal with the philosophical challenge in the design of efficacy trials; the challenge that relates to the notion of “clinically significant.” How high should we set the bar for the absolute difference in outcome between the treated and control groups to consider the results of the trial compelling?

One way to think about this is to convert the absolute difference into a more intuitive measure, the Number Needed to Treat (NNT). If the outcome is easily measured, such as death or stroke, for example, we might find an intervention valuable if we had to treat 20 patients to spare 1. Few students of efficacy would be persuaded if we had to treat more than 50 to spare 1. Between 20 and 50 delineates debate; smaller effects are ephemeral and subject to false positive assertions. For an outcome that is more difficult to measure, such as symptoms or quality of life, we would argue for a more stringent bar. If we framed the problem of RCT design like this, we may be able to engage a national debate on just how high the bar should be set for each clinical malady.
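The conversion described above is simple arithmetic: NNT is the reciprocal of the absolute risk reduction (ARR) between control and treated groups. A minimal sketch, where the event rates are invented purely for illustration:

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number Needed to Treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute benefit; NNT is undefined")
    return 1.0 / arr

# 10% event rate in controls vs 5% with treatment: ARR = 0.05, so NNT = 20,
# right at the authors' threshold for a valuable intervention.
print(nnt(0.10, 0.05))

# 10% vs 8%: ARR = 0.02, so NNT = 50, the authors' limit of persuasiveness.
print(round(nnt(0.10, 0.08)))
```

The zone of debate the authors describe, an NNT between 20 and 50, thus corresponds to an absolute risk reduction between 5% and 2%.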

If we, then, applied this stringency to future RCT design, trials would be more efficient and reliable and would eliminate trials aimed to test equivalency. Then, armed with clinically meaningful RCT results in some subset of patients, we are in the position to turn to CER. CER will help us seek out other subsets of patients benefited at least as much and to identify subsets harmed. We feel that it would not be in the best interest of our public and personal health to prematurely seek answers in flawed datasets at the expense of forgoing best evidence in better RCT designs.

CONFLICT OF INTEREST DISCLOSURES: The authors have none to report.

Comparative Effectiveness Research in Oncology

Gary H. Lyman examines the work in progress in different methodologies and discusses both the potential and challenges of each. His clear description of this topic provides a roadmap for how CER can extend its promise of personalized medicine.


Scott Ramsey et al., in “Oncology Comparative Effectiveness Research: A Multistakeholder Perspective on Principles for Conduct and Reporting”, focus on five different CER concentrations, including the need to consider cost, separation of well-conducted from poorly conducted CER, and better reporting of results to busy practitioners.

Gregory D. Pawelski

Old 05-31-2016, 09:47 AM
Report: Federally funded institute avoids comparing drugs, other treatments

Jayne O'Donnell, USA TODAY

WASHINGTON — An institute that pays researchers to compare medical treatments has spent only half of its more than $1.4 billion in available federal money on what is called comparative effectiveness research and has largely ignored prescription drugs, despite their role in driving up health care costs, according to a study released Tuesday by a Washington-based policy group.

The research into the comparative effectiveness of treatments and pharmaceuticals is meant to determine if lower-cost options provide the same benefit as more expensive procedures and drugs.

The Patient-Centered Outcomes Research Institute (PCORI) has paid for a higher percentage of comparative effectiveness research in the last two years, from 37% before 2014 to 58% now, according to the new study by the Center for American Progress, a Democratic-leaning research group. Since PCORI's creation in the 2010 Affordable Care Act, it has now spent 51% on comparative effectiveness research, the study showed.

The institute hasn't focused on what the law intended, says study co-author Topher Spiro, CAP's vice president of health policy and a former Democratic Senate health committee aide who worked on the legislation. The study was led by Ezekiel Emanuel, an oncologist and former White House health policy adviser who also helped write the law.

Research gaps "involve high-cost treatments, such as certain drugs, medical devices, and surgical procedures," the report said. That's in part because some experts believe PCORI has been afraid to conduct research that could antagonize "powerful industries."

For example, only 4% of the research compares two or more drugs, although drugs account for 17% of overall health care spending, the report said.

PCORI executive director Joe Selby, a physician, said he disagreed with how the report defined comparative effectiveness research (CER) as strictly head-to-head comparisons, but acknowledged the institute had a "bit of a lull" in funding drug research at first. There also weren't nearly as many major drug introductions when PCORI started, and it's now supporting research that compares drugs to treat Hepatitis C, depression and multiple sclerosis, he said.

PCORI has been criticized since it first appeared in the legislation that became the ACA. Although CER had bipartisan support, the institute got caught in the politics surrounding what Republican former vice presidential candidate Sarah Palin called "death panels" that could decide who received medical treatment.

Now despite widespread concerns about rising drug costs, CER is being attacked by patient groups supported by the pharmaceutical industry that claim the research could limit access to some life-saving drugs.

Congress limited PCORI's ability to consider costs when comparing health care treatments and drugs or when prioritizing research studies. The institute also has a 21-person board appointed by the Government Accountability Office, Congress' investigative arm. Drug and medical device makers have board positions along with medical and benefits experts from academia, employers and government, including Francis Collins, director of the National Institutes of Health.

Since 2010, former Rep. Tony Coelho, D-Calif., has led the Partnership to Improve Patient Care (PIPC), which has been a leading critic of CER. Its work is also paid for by the pharmaceutical industry and other patient groups that are in turn supported financially by drug companies.

"I’m their big advocate and also a critic," Coelho said of PCORI in an interview.

Coelho, who has epilepsy, said the late Sen. Ted Kennedy, D-Mass., asked him to chair the group because of his political background, his disability and because he wouldn't let anyone "unduly influence" him in efforts to get patients' voices heard. Coelho resigned from Congress in 1989 as he faced an expected House Ethics Committee probe and a Justice Department criminal inquiry into his failure to report junk bond purchases with loans from a friend who was a savings-and-loan official.

PCORI does not want to be targeted by patient groups, said Leah Binder, the CEO of the Leapfrog Group, which conducts safety ratings of hospitals, and a former member of an institute advisory committee.

Still, Binder and others say PCORI made a large and lasting contribution to research when it required patients to be involved in the studies it supports, a practice now emulated throughout the medical research community.

If PCORI paid for more head-to-head research, it would be easy for insurers and other businesses to pay for the cost analyses, said institute board member Harlan Krumholz, a cardiologist.

"I would like to see us sprint towards doing as much comparative research as possible," says Krumholz, also a Yale University professor and health care researcher. "We don't need to look at cost; we need to look at effectiveness."

Patient groups, which are nearly always backed by pharmaceutical companies, are major players in what PCORI decides to research. That can limit the focus on finding lower-cost options, which could hurt the profitability of the companies that pay for the patient groups.

The institute tries to solicit ideas from patient groups that receive money from a "diversity of sources," said Jean Slutsky, the institute's chief engagement and dissemination officer. "That's always our concern — that any one sector would be responsible for bias," she said. "Some people are more successful than others in getting public comment in."

Comparative effectiveness research may threaten "personalized medicine" and limit access to certain higher-priced drugs, some of its critics say. PIPC's website notes that the research can help, "but it also can be misapplied in ways that unintentionally undermine patient access to care and physician-patient decision-making."

Indeed, patients and their families often view cost differently than policymakers do, says Michael Rea, a pharmacist who is CEO of Rx Savings Solutions and has two young boys with autism.

"I'd pay anything I could for something that would help my sons," says Rea, whose company represents health insurance plans. "But there's a breaking point from a societal standpoint on what is realistic."

The pharmaceutical industry, patient groups and PCORI have several connections, including:

• Coelho's group, which serves as a PCORI watchdog, counts among its members the Pharmaceutical Research and Manufacturers of America trade group and a long list of patient groups, many of which get considerable funding from the pharmaceutical industry. One member, the Alliance for Patient Access, acknowledged that most of its money comes from the drug industry but would not provide a percentage. Coelho also won't say what percentage of his group's budget comes from drug companies. "I don't answer that question," he said. "It is immaterial to me."

• National Health Council CEO Marc Boutin, who represents patients on one of PCORI's advisory committees, says "we work quite hard to coordinate" with PIPC. A 2011 article in the Journal of Public Health cited NHC for "principles that did not encourage transparency," but NHC said in response that it "requires its member companies to disclose funding from corporations and to present the information in an easily accessible manner." NHC spokeswoman Nancy Hughes said that 70% of its membership is made up of non-profits and patient groups. These "generate the majority of our dues dollars," but she did not know the percentage of funding from drugmakers. Many of its patient group members, like PIPC, depend on drug industry money.

• Eugene Washington, a physician, was the founding chairman of the PCORI board until September 2013. During part of the time, he was also a board member of medical device and drug maker Johnson & Johnson. Washington, who is now CEO of the Duke University Health System, was also dean of the University of California, Los Angeles medical school while on the J&J board. Although Washington was not named, UCLA's medical school was one of six academic medical centers with officials on J&J's board cited in a 2014 Journal of the American Medical Association article questioning the health systems' ties to drug industry boards. UCLA's medical school's corporate relationships were also the subject of a whistleblower lawsuit that led to a $10 million settlement in 2014. Duke's press office declined to comment.

Asked to single out the most significant research findings to date, Selby points to studies including one that found oral antibiotics can work as well as intravenous ones for children with serious bone infections, and another on the benefits of giving stroke victims blood thinners when they are discharged from hospitals.

During the institute's board meeting week and in interviews, there was both weariness and excitement about its future.

"Let's get this engine moving," said PCORI board member Lawrence Becker, Xerox's director of strategic partnerships and alliances. "It's been a long road the last six years."
Gregory D. Pawelski