Tuesday, March 31, 2009

Fish oil pills don't boost benefit of heart drugs

ORLANDO, Fla., March 31, 2009 – Heart attack patients who are already taking the right medicines to prevent future problems get no added benefit from taking fish oil capsules, a large study in Germany finds.

The study tested a 1-gram daily dose of a prescription version of highly purified omega-3 fatty acid — the "good fat" contained in certain oily fish that is thought to help the heart.

Researchers led by Dr. Jochen Senges of the University of Heidelberg gave fish oil or dummy capsules to more than 3,800 people who had suffered a heart attack in the previous two weeks. About 90 percent were already receiving all the medicines recommended to prevent a second attack, including aspirin, anti-clotting drugs and cholesterol-lowering drugs.

After a year, it made no difference whether these patients took fish oil or dummy capsules. In both groups, fewer than 2 percent had suffered sudden cardiac death, 4 percent had another heart attack, and fewer than 2 percent had suffered a stroke.

If recent heart attack patients are already getting good care, "there is almost nothing you can do better on top of this" to further lower risk, Senges said. He presented the results Monday at an American College of Cardiology conference.

The research doesn't mean that fish oil is of no value, and the study didn't address whether it can help prevent heart disease in the first place, doctors said.

The prescription version used in the study, sold as Omacor and Lovaza in the United States and as Zodin in Europe, is a highly purified and standardized form, different from what many consumers buy off the shelf.

Omega-3 fatty acids also are found in wild oily fish such as salmon, tuna, mackerel, sardines and herring. Scientists think they raise HDL, or good cholesterol, lower harmful fats called triglycerides and slow the growth of plaque that can clog arteries.

The American Heart Association recommends adults eat fish at least twice a week, said Alice Lichtenstein, a Tufts University nutrition professor and Heart Association spokeswoman. For people with heart disease, the association advises 1 gram of omega-3 a day.

"A modest, 3-ounce cooked salmon has a little more than a gram," she said.

Fish oil capsules are not for children or women who are pregnant or nursing, because the pills pose a bleeding risk. Taking more than 3 grams a day from supplements should only be done under a doctor's orders, the heart association warns. The capsules also should be stopped a week or so before surgery because of a risk of bleeding.

The German study shows that "we need to be a little more cautious about the prediction of individual benefit of any nutritional supplements," said Lichtenstein, who had no role in the research.

"We see this pattern — people are so willing to embrace the simple answer," as if it's possible "to crack a capsule over a hot fudge sundae" and undo the harm of harmful diets and lack of exercise, she said.

Study: Cholesterol drug lowers blood clot risk

ORLANDO, Fla., March 31, 2009 – Statin drugs, taken by millions of Americans to lower cholesterol and prevent heart disease, also can cut the risk of developing dangerous blood clots that can lodge in the legs or lungs, a major study suggests.

The results provide a new reason for many people with normal cholesterol to consider taking these medicines, sold as Crestor, Lipitor, Zocor and in generic form, doctors say.

In the study, Crestor cut nearly in half the risk of blood clots in people with low cholesterol but high scores on a test for inflammation, which plays a role in many diseases. This same big study last fall showed that Crestor dramatically lowered rates of heart attacks, death and stroke in these people, who are not usually given statins now.

"It might make some people who are on the fence decide to go on statins," although blood-clot prevention is not the drugs' main purpose, said Dr. Mark Hlatky, a Stanford University cardiologist who had no role in the study.

Results were reported Sunday at the American College of Cardiology conference and published online by the New England Journal of Medicine.

The study was led by statistician Robert Glynn and Dr. Paul Ridker of Harvard-affiliated Brigham and Women's Hospital in Boston. Ridker is a co-inventor on a patent for the test for high-sensitivity C-reactive protein, or CRP. It is a measure of inflammation, which can mean clogged arteries or less serious problems, such as an infection or injury.

It costs about $80 to have the blood test done. The government does not recommend it be given routinely, but federal officials are reconsidering that.

For the study, researchers in the U.S. and two dozen other countries randomly assigned 17,802 people with high CRP and low levels of LDL, or bad cholesterol (below 130 milligrams per deciliter), to take dummy pills or Crestor, a statin made by British-based AstraZeneca PLC.

With an average of two years of follow-up, 34 of those on Crestor and 60 of the others developed venous thromboembolism — a blood clot in the leg that can travel to the lungs. Several hundred thousand Americans develop such clots each year, leading to about 100,000 deaths.
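As a rough sanity check, the size of the reduction can be computed from the raw counts. (The even split between the two arms below is an assumption for illustration; the article states only the 17,802 total.)

```python
# Rough check of the reported clot figures.
# NOTE: the ~even split between arms is an assumption for illustration;
# the article states only the 17,802 total.
crestor_clots, placebo_clots = 34, 60
arm_size = 17802 / 2  # assumed even randomization

crestor_rate = crestor_clots / arm_size * 100   # percent per arm
placebo_rate = placebo_clots / arm_size * 100

relative_reduction = 1 - crestor_clots / placebo_clots
print(f"Crestor: {crestor_rate:.2f}%  placebo: {placebo_rate:.2f}%")
print(f"relative risk reduction: {relative_reduction:.0%}")
```

A relative reduction of about 43 percent is consistent with the article's "nearly in half" description, even though the absolute rates in both arms are well under 1 percent.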

However, this is uncommon compared to the larger number who suffer heart attacks. Many doctors have been uncomfortable with expanding statin use to people with normal cholesterol because so many would have to be treated to prevent a single additional case.

"I don't know that it changes the big picture very much" to say that a statin can prevent blood clots, Hlatky said. "Where do you draw the line? Are we giving it to 10-year-old kids that are fat?"

AstraZeneca paid for the study, and Ridker and other authors have consulted for the company and other statin makers. Many doctors believe that other statins would give similar benefits, though Crestor is the strongest such drug. It also has the highest rate of a rare but serious muscle problem, and the consumer group Public Citizen has campaigned against it, saying there are safer alternatives.

Crestor costs $3.45 a day versus less than a dollar for generic drugs. Its sales have been rising even though two statins — Zocor and Pravachol — are now available in generic form.

Researchers do not know whether the benefits seen in the study were due to reducing CRP or cholesterol, since Crestor did both. Another new analysis reported Sunday and published in the British journal the Lancet found that the patients who did the best in the study were those who saw both numbers drop.

Many doctors remain reluctant to expand CRP testing or use of statins. A survey by the New England Journal of Medicine found them evenly divided on the questions. Others questioned why so few people in the study were getting other treatments to prevent heart problems.

"If more of them were on aspirin, you would have less benefit from the statin," said Dr. Thomas Pearson of the University of Rochester School of Medicine and Dentistry.

Dr. James Stein of the University of Wisconsin-Madison said that doctors examining treatment guidelines should pay close attention to the new results.

He said the CRP test had helped him convince patients that they need to be on a statin drug.

"There are very few times you can say to a patient, 'this medicine is going to keep you alive.' We should try not to pick apart studies that save lives," Stein said.

Stroke-blocking device shows promise, doctors say

ORLANDO, Fla., March 31, 2009 – A novel device to treat a common heart problem that can lead to stroke showed promise in testing, but not without risk, new research shows.

The experimental device, called the Watchman, is the first designed to permanently address the stroke risk posed by atrial fibrillation, a heartbeat problem afflicting more than 2 million Americans. A federal Food and Drug Administration panel will consider it next month.

In the study, the Watchman was at least as good at preventing strokes as warfarin, sold as Coumadin and other brands. The drug poses hazards of its own, so doctors and their patients are eager for a better option.

But the procedure to implant the Watchman led to strokes in some patients, study results showed. Complications and side effects were twice as common with the device as with warfarin.

Despite those drawbacks, doctors who saw the results Saturday at the American College of Cardiology Conference were impressed.

"Wow. At first blush, this is very encouraging," and could help as many as two-thirds of those who have the heartbeat problem, said Dr. Richard Page, cardiology chief at the University of Washington in Seattle and an American Heart Association spokesman.

Atrial fibrillation occurs when the upper chambers of the heart quiver instead of beating properly. That lets blood pool in a pouch-like appendage. Clots can form and travel to the brain, causing a stroke.

The usual treatment is the anti-clotting drug warfarin, but getting the right dose is tricky — too little means a risk of stroke, and too much can cause fatal bleeding. The right amount varies by 10 times from one person to another, and even certain foods can throw it off. Patients must go to the doctor often for blood tests to monitor the dose.

The Watchman device is a fabric-covered metal cage that plugs the pouch. Doctors pass a hollow tube through a leg vein into the heart's right atrium, puncture the wall separating it from the left atrium, and implant the device through the tube.

Dr. David Holmes Jr. of the Mayo Clinic in Rochester, Minn., led a study of it in 707 patients in the United States and Europe.

After an average of 16 months of follow-up, there were 15 strokes and 17 deaths (from all causes) among the 463 who got the device and 11 strokes and 15 deaths among the 244 treated with warfarin, Holmes said.
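The device group had more raw events, but it was also nearly twice as large, so per-group rates tell the real story. A minimal calculation from the reported counts:

```python
# Per-group stroke and all-cause death rates from the reported counts.
device_n, warfarin_n = 463, 244

print(f"strokes - device: {15 / device_n:.1%}  warfarin: {11 / warfarin_n:.1%}")
print(f"deaths  - device: {17 / device_n:.1%}  warfarin: {15 / warfarin_n:.1%}")
```

Per capita, both strokes and deaths were less frequent with the device, which is why the balance still favored it despite the higher absolute counts.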

The balance tipped in favor of the device. Just over 3 percent of Watchman patients suffered the main problems doctors were measuring in the trial (a composite of strokes, heart-related deaths and certain blood clots) versus 5 percent of those treated with warfarin.

About 90 percent of device patients were able to go off warfarin.

However, complications were twice as common — 8 percent in the device group and 4 percent on warfarin. Five strokes were triggered by implanting the device, and about 5 percent of device patients developed serious fluid buildup around the heart. Doctors were unable to implant the Watchman in 41 people assigned to get it.

These problems declined as the study went on, Holmes said.

Any new technology has "a learning curve" that improves with experience, said Dr. Ralph Brindis, a heart specialist at the California-based Kaiser Permanente health plan and spokesman for the college of cardiology.

The device's maker, Atritech Inc. of Plymouth, Minn., paid for the study, and Mayo may receive future royalties from the device. Medicare paid $9,500 for the procedure, including $6,000 for the device itself, a company spokeswoman said. Hospitals typically charge two to three times the Medicare rate, she said.

Dr. Tristram Bahnson of Duke University said that if the device is approved, "patients and their physicians will have to decide whether assuming some increased risk up front is preferred to ongoing therapy with Coumadin, where there's a small risk of complications and the risk is cumulative."

For Kenneth Giunchedi, that was an easy choice. Giunchedi, 75, of suburban Chicago, had the device implanted last March by Dr. Bradley Knight of the University of Chicago Medical Center as part of the study. He had been on Coumadin for about two years.

Taking the drug was "a horrible experience for me," he said. "I was never easy to regulate — I was always in trouble. They were constantly adjusting the dosage and I would go in for a blood draw sometimes as often as three times a week. I would have done anything to get off of the Coumadin."

Study: Triathlons can pose deadly heart risks

ORLANDO, Fla., March 31, 2009 – Warning to weekend warriors: Swim-bike-run triathlons pose at least twice the risk of sudden death as marathons do, the first study of these competitions has found.

The risk is mostly from heart problems during the swimming part. And while that risk is low — about 15 out of a million participants — it's not inconsequential, the study's author says.

Triathlons are soaring in popularity, especially as charity fundraisers. They are drawing many people who are not used to such demanding exercise. Each year, about 1,000 of these events are held and several hundred thousand Americans try one.

"It's something someone just signs up to do," often without a medical checkup to rule out heart problems, said Dr. Kevin Harris, a cardiologist at the Minneapolis Heart Institute at Abbott Northwestern Hospital. "They might prepare for a triathlon by swimming laps in their pool. That's a lot different than swimming in a lake or a river."

He led the study and presented results Saturday at an American College of Cardiology conference in Florida. The Minneapolis institute's foundation sponsored the work and tracks athlete-related sudden deaths in a national registry.

Marathon-related deaths made headlines in November 2007 when 28-year-old Ryan Shay died while competing in New York in the men's marathon Olympic trials. Statistics show that for every million participants in these 26.2-mile running races, there will be four to eight deaths.

The rate for triathletes is far higher — 15 out of a million, the new study shows. Almost all occurred during the swim portion, usually the first event.

"Anyone that jumps into freezing cold water knows the stress on the heart," said Dr. Lori Mosca, preventive cardiology chief at New York-Presbyterian Hospital and an American Heart Association spokeswoman. She had no role in the study but has competed in more than 100 triathlons, including the granddaddy — Hawaii's Ironman competition.

Cold water constricts blood vessels, making the heart work harder and aggravating any pre-existing problems. It also can trigger an irregular heartbeat. On top of this temperature shock is the stress of competition.

"It's quite frightening — there are hundreds of people thrashing around. You have to keep going or you're going to drown," Mosca said.

Swimmers can't easily signal for help or slow down to rest as they can in the biking or running portions of a triathlon, said Harris, who also has competed in these events. Rescuers may have trouble spotting someone in danger in a crowd of competitors in the lakes, rivers and oceans where these events typically are held, he added.

For the study, researchers used records on 922,810 triathletes competing in 2,846 USA Triathlon-sanctioned events between January 2006 and September 2008.

Of the 14 deaths identified, 13 occurred during swimming; the other was a bike crash. Autopsies on six of the victims showed that four had underlying heart problems. Two others had normal-looking hearts, but they may have suffered a fatal heart rhythm problem, Harris said.
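The headline figure of roughly 15 deaths per million follows directly from these raw counts:

```python
# Triathlon death rate per million participants, from the study's counts.
deaths = 14
participants = 922_810

rate_per_million = deaths / participants * 1_000_000
print(f"{rate_per_million:.1f} deaths per million")  # about 15.2

# The article cites 4 to 8 deaths per million for marathons,
# so the triathlon rate is roughly 2 to 4 times higher.
print(f"{rate_per_million / 8:.1f}x to {rate_per_million / 4:.1f}x the marathon rate")
```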

A search of the Minneapolis registry and the Internet found four other triathlon-related deaths from 2006 through 2008 beyond those that occurred in the officially sanctioned events.

"While not a large risk, this is not an inconsequential number," Harris said.

Fundraising triathlons have enticed many runners to expand into swimming, which they may not have learned to do very efficiently, for the benefit of particular charities, Mosca said.

"They're really recruiting people to do these events," she said. "It can be a recipe for disaster."

Doctors offer these tips to anyone considering a triathlon:

_Get a checkup to make sure you don't have hidden heart problems.

_Train adequately long before the event, including open-water swims — not just in pools.

_Acclimate yourself to the water temperature shortly before a race, and wear a wetsuit if it's too cold.

_Make sure the race has medical staff and defibrillators on site.

Monday, March 30, 2009

Drug-Induced Movement Disorders: A Clinical Review

March 30, 2009 – The use of dopamine receptor-blocking drugs (DRBD) may result in a variety of acute or tardive involuntary movements. Acute-onset movement disorders arising from initiation or escalation of the dose of drugs that block dopamine receptors (primarily D2 receptors) include dystonia, parkinsonism, akathisia, and neuroleptic malignant syndrome (NMS). Late-onset (tardive) movement disorders typically manifest three months or later (this interval varies) after exposure to a DRBD, whether during stable therapy, after an increase in the dosage of the offending agent, or after discontinuation of treatment. These late-onset disorders are referred to collectively as tardive syndromes and may include classical dyskinesia (often referred to as a stereotypy), dystonia, chorea, akathisia, and others.[1] Historically, the field of psychiatry has used the terms tardive dyskinesia and extrapyramidal symptoms to describe the oro-bucco-lingual dyskinesia (OBLD) or facial dyskinesia that result from the use of typical or atypical antipsychotic drugs. For our purposes, the term tardive dyskinesia will be used. This overview provides information on drugs that have been shown to cause movement disorders.

Types of Drug-Induced Movement Disorders (DIMD)

Drug-induced dystonia, a twisting movement or abnormal posture (or a combination thereof) may manifest as acute or tardive involuntary limb movements, facial grimacing, cervical dystonia, oculogyric crisis, rhythmic tongue protrusion, jaw opening or closing, spasmodic dysphonia, and, rarely, stridor and dyspnea. The acute form typically occurs within 2 to 5 days after initiation of treatment with a DRBD.

Drug-induced parkinsonism shares the primary characteristics of idiopathic Parkinson disease: rest tremor, bradykinesia, rigidity, and postural instability. Lack of recognition is a primary impediment to treatment, as cessation of the causal agent will lead to resolution of symptoms in drug-induced parkinsonism.[2] In some patients, however, symptoms may endure for 18 months or even longer.

The stereotypies of classic tardive dyskinesia (i.e., OBLD) are characterized by well-coordinated continual movements of the mouth, tongue, jaw, and cheeks and may include lip smacking, cheek puffing, and tongue thrusting. Jaw movements may be lateral or resemble chewing motions. The tongue movements may be writhing or twisting (choreoathetoid). In addition to having OBLD, patients treated with antipsychotic drugs may also have trunk movements, typically in the form of pelvic thrusting or trunk twisting, or choreoathetotic or flicking movements of the extremities. Some patients may have a mix of movement disorders that includes OBLD, dystonia, myoclonus, akathisia, and parkinsonism.

Akathisia (literally, an inability to sit) manifests as an inner feeling of restlessness and stereotypic movements, such as marching in place and crossing and uncrossing the legs while sitting. Akathisia is the only drug-induced movement disorder that does not have an idiopathic counterpart, although it may be a manifestation of Parkinson's disease.

Neuroleptic malignant syndrome is an abrupt, life-threatening, idiosyncratic response that occurs in approximately 0.2% of patients after they receive a therapeutic dose of a DRBD. The symptoms include hyperthermia (> 38°C), mental status change, muscle rigidity and other movement disorders, and autonomic dysregulation.


Epidemiology

The true incidence and prevalence of drug-induced movement disorders is unknown and likely vastly underappreciated because of lack of recognition. Even among neurologists, drug-induced parkinsonism, for example, is often not recognized and, therefore, not appropriately treated.[2] Studies have shown that parkinsonism in patients in nursing homes is not recognized,[3] and, even among psychiatrists, tardive dyskinesia and akathisia are also frequently overlooked.[4] In addition, identifying the exact prevalence and incidence of DIMD has proven to be difficult because of factors such as fluctuation in symptoms; the use of drugs that can mask DIMD, thereby causing an underestimation of the prevalence; and a lack of validated criteria for the diagnosis of DIMD.

Factors affecting the rate of DIMD related to the use of antipsychotic drugs include age of the population, the drug being used and the dose, the definition of the movement being employed in the study, and the design of the study.[5] DIMD related to exposure to antipsychotic drugs is estimated to occur in 19% and 42% of patients receiving atypical (second-generation) and typical (first-generation) antipsychotic drugs, respectively.[6] With the increasing use of antipsychotic drugs in young children, the incidence of DIMD is growing in this population. In one study, 9% of children who had received an antipsychotic drug for six months or longer developed OBLD (as opposed to none in a control population), including children who had received only atypical antipsychotics. Of note, the results of additional studies in children have been published since this was written, including those showing both a higher and a lower incidence of EPS.[7] Because of the perceived safety of atypical antipsychotics with respect to DIMD, these drugs are more widely used off label for a variety of disorders, including as a sleep remedy, and this wider use is also leading to an increased frequency of DIMD.

One in 500 people who take metoclopramide are likely to develop extrapyramidal symptoms (EPS). The risk for the development of EPS is highest in infants, children, and adults younger than 30 years of age. The risk of developing EPS or tardive dyskinesia (TD) and the likely irreversibility of TD are related to the length of exposure and total cumulative exposure to the drug.[8]

Pathophysiology

The pathophysiology of DIMD has not been clearly elucidated; it is complex and multifactorial, likely reflecting a combination of genetic predisposition,[9] dopaminergic system hypersensitivity in the basal ganglia, decreased functional reserve, and overactivation of the cholinergic system.

Postsynaptic Dopamine Receptor Hypersensitivity Theory

The chronic blocking of postsynaptic dopamine receptors enhances excitatory glutamatergic neurotransmission. The neurotoxic stress in the striatum, caused by increased glutamate release and extracellular glutamate levels at corticostriatal terminals, ultimately destroys the output neurons, leading to dopaminergic hypersensitivity.[9] Although this has long been the leading hypothesis for the cause of DIMD, it cannot completely account for the clinical findings, primarily because it does not explain the fact that DIMD are not a universal phenomenon among people exposed to DRBDs.[10]

Neurotoxicity Theory

Because the use of DRBDs increases the turnover of neurotransmitters and because the basal ganglia are particularly vulnerable to the effects of membrane lipid peroxidation, this premise proposes that DIMD are caused by the neurotoxic effects of free radicals that are created as a byproduct of catecholamine metabolism. The support for this theory comes primarily from the treatment of oral buccal lingual dyskinesias with the use of branched-chain amino acids in studies in humans[11,12] and antioxidants in animal studies.[13]

Dopamine-GABA Hypothesis

In this theory, which does not disregard the fact that dopamine receptors become increasingly sensitive to the effects of DRBD, the interaction between dopamine and gamma-aminobutyric acid (GABA) neurons plays a greater role, likely accounting for the different, yet simultaneous, effects of the DRBD. Dopamine has both inhibitory and excitatory effects on GABA neurons, determined by the location and type of the dopamine receptors in the brain. Unfortunately, this theory has not been translated into a treatment paradigm because of the toxicity of the agents involved.[14]

Diagnosis

The differential diagnosis of a DIMD is the same as that for the primary movement disorder that is not drug induced. The diagnosis of DIMD is a two-step process: the first entails the recognition of the abnormal movement and, the second, the identification of the temporal relationship between institution of therapy with a DRBD (acute) or exposure to a DRBD within the previous three months (tardive) and the abnormal movement. DIMDs manifest identically to other movement disorders that are not caused by drugs. For example, drug-induced parkinsonism looks like Parkinson's disease and tardive craniocervical dystonia looks like idiopathic craniocervical dystonia. However, with DIMD, a few clues may assist the clinician in correctly identifying the cause of the movement. The etiology is more likely to be drug induced when the patient has:

  • Both hypokinetic and hyperkinetic movements occurring simultaneously, i.e., parkinsonism and OBLD

  • Certain patterns such as axial distribution dystonia, which is more typical of tardive dystonia than other forms

  • A DIMD occurring in conjunction with an OBLD

  • A subacute onset of parkinsonism

Obtaining a detailed history of any medication use in the previous three months and maintaining appropriate suspicion are important in delineating the cause of DIMD. Inquiring specifically about any anesthesia exposure during surgical procedures that the patient may have undergone, antinausea treatments, and the use of weight-reduction, adrenergic, or antidepressant medications may aid in identifying often-overlooked causes of DIMD.


Treatment

Prevention and Recognition

Prevention and early recognition of the symptoms comprise the best method of managing DIMD.

Discontinuation of DRBD in Patients With DIMD

As iatrogenic conditions, DIMDs are typically treated by discontinuation or reduction of the offending drug, if possible. In most circumstances, this is a straightforward process, and an alternative medication is available that does not cause a movement disorder. However, some patients, particularly psychiatric patients, are unable to stop their medication. Discontinuation or reduction does not always result in a resolution of the DIMD (particularly tardive dyskinesia), highlighting, again, the need for prevention.

When the use of antipsychotic agents results in a DIMD, a dilemma arises in which the patient's need for treatment of the essential psychiatric condition must be weighed against a tolerable level of dyskinetic movements. No pharmacologic treatment has been proven to be universally or even typically effective in this situation. Therefore, the treatment of a DIMD in this setting, as in all others, is instead aimed at preventing, recognizing, and managing the movements. Discontinuing or reducing the level of the causative agent may lead to a reduction in the movements; however, this may occur at the expense of exacerbating the underlying illness. An apparently contradictory practice holds that DRBD may be used to treat OBLD caused by antipsychotic drugs, an effect that actually occurs through suppression and not treatment of the movements; however, this method typically leads to a rebound phenomenon and exacerbation of the movements. When the use of a typical antipsychotic medication leads to TD, switching to an atypical antipsychotic medication, particularly clozapine, may prove to be the most efficacious strategy. Studies have shown that resolution of the symptoms of tardive disorders varies: 33% for OBLD, 12% for tardive dystonia, and 8% for tardive akathisia.

Medications Used in the Treatment of DIMD

Amine-depleting agents such as reserpine and tetrabenazine block the reuptake of dopamine, norepinephrine, and serotonin, thereby depleting central availability of these neurotransmitters. They are most effective in treating tardive dystonia, tardive akathisia, and tardive dyskinesia, producing improvement in roughly 60% to 90%, 75%, and 50% to 90%, respectively, of patients whose symptoms are not resolved through discontinuation of the causative agent.[1] Few controlled studies have been undertaken to evaluate the effectiveness of these drugs in the treatment of tardive syndromes, although they are in widespread use in clinical practice. Tetrabenazine, a monoamine-depleting drug acting as an inhibitor of vesicular monoamine transporter (VMAT2), has been approved by the FDA for the treatment of chorea in Huntington disease.[15-17] Amine-depleting agents seem to be most effective when not used concomitantly with neuroleptic therapy but, instead, when they are used after the neuroleptic drug is withdrawn. Significant side effects can limit the use of these drugs, including depression, orthostatic hypotension, and parkinsonism.

When treatment with amine-depleting drugs is not indicated, e.g., symptoms are mild or the patient has depression or another contraindication to the use of amine-depleting drugs, other medications may be effective but have not been well studied or have not been proven unequivocally in studies to provide benefit.[18] These medications include anticholinergic agents,[19] dopamine agonists,[20] GABAergic agents,[21] beta-adrenergic receptor-blocking agents, branched chain amino acids,[11,12] essential fatty acids,[22] and vitamins B6[23] and E.[24] In addition, chemodenervation with botulinum toxin injection therapy may be used to treat drug-induced focal dystonia.[25]

Summary

The use of atypical antipsychotic agents in populations that had never previously been exposed to these agents, i.e., children, adolescents, and the elderly, along with wider use in the adult population, combined with a lack of long-term trials that would accurately indicate the rate of DIMD in these populations, portends an increase in the number of cases of DIMD. Other issues that may increase the number of cases of DIMD include lack of understanding or complacency among physicians when prescribing DRBD. Recognition of DIMD is often lacking, even among psychiatrists and neurologists, but is essential in the treatment of these disorders.

Can Patients with Critical Aortic Stenosis Undergo Noncardiac Surgery without Intervening Aortic Valve Replacement?


M. Chadi Alraies
March 30, 2009 – Case Presentation: A 65-year-old female patient with a past medical history of hypertension, diabetes mellitus, and hyperlipidemia was seen for preoperative clearance for repair of a right femur fracture. The patient denied chest pain but admitted to progressively worsening dyspnea on exertion over the last few months. Her medications were lisinopril, metformin, and simvastatin. Vital signs on admission were stable, with a blood pressure of 136/72 mm Hg and a heart rate of 92 bpm. Labs were normal. Her exam was unremarkable except for a 3/6 harsh systolic murmur. Echocardiogram revealed critical aortic stenosis (AS) with a valve area of 0.7 cm². Cardiology recommended aortic valve replacement (AVR), but the patient refused surgery. The patient chose to undergo fracture repair surgery despite the explained risks. She was started on beta-blockers, and appropriate anesthetic precautions were undertaken. Her postoperative course was complicated by prolonged ventilator support, but she was successfully extubated after 2 days and was discharged in stable condition.

Discussion: Per the American College of Cardiology/American Heart Association guidelines, severe valvular disease is a major clinical predictor of cardiac risk, and elective noncardiac surgery (NCS) should be delayed for intervening cardiac catheterization and/or possible valve surgery. However, several reviews have suggested that patients with severe AS may undergo NCS with relative safety if appropriate perioperative care is provided and the pathophysiologic changes associated with AS are carefully managed. O'Keefe et al reported that among 48 patients with severe AS (mean valve area 0.6 cm²) who were not eligible for AVR and underwent NCS, there was only 1 cardiac event, no deaths, and a complication rate of about 2%. This compares favorably with the national 4% mortality rate for AVR reported by the Society of Thoracic Surgeons. On the other hand, a subsequent report of 19 patients with severe AS (mean valve area < 0.5 cm²) described 2 perioperative deaths. Raymer and Yang compared 55 patients with significant AS (mean valve area 0.9 cm²) with case-matched controls who had similar preoperative risk profiles other than AS and underwent similar surgeries; cardiac complication rates were not significantly different between the two groups. Thus, patients with severe AS may undergo indicated NCS provided that the presence of severe AS is recognized preoperatively and the patients receive intensive perioperative care.

Conclusion: Critical AS needs to be detected preoperatively, given its prognostic
importance. When detected, surgery may still be considered even if AVR
is not feasible, and requires a comprehensive co-management team involving
anesthesia, cardiology, surgery, and internal medicine.

Increasing Physical Activity in Middle Age Eventually Lowers Mortality Risk

30 mar 2009— Increased physical activity in middle age is eventually associated with reduced mortality risk to the same level as that in men with constantly high physical activity, according to the results of a population-based cohort study reported in the March 6 Online First issue of the BMJ.

"About half of all middle aged men in the West do not take part in regular physical activity," write Liisa Byberg, from Uppsala University in Uppsala, Sweden, and colleagues. "Whereas being physically inactive in younger years seems detrimental, we do not know whether an increase in exercise level later in life reduces mortality rates. If the impact on mortality could be compared with the effects of other changes in lifestyle habits it would be easier to communicate this potential health benefit."

The goal of this study was to assess how a change in level of physical activity after middle age affects mortality rates, and to compare this effect with that of smoking cessation. In Uppsala, Sweden, 2205 men aged 50 years were enrolled between 1970 and 1973, followed up for 35 years, and reevaluated at ages 60, 70, 77, and 82 years. The primary study endpoint was total (all-cause) mortality.

In the groups with low, medium, and high physical activity, the absolute mortality rate was 27.1, 23.6, and 18.4 per 1000 person-years, respectively, with a relative rate reduction attributable to high physical activity being 32% vs low physical activity and 22% vs medium physical activity.
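The relative rate reductions quoted above follow directly from the absolute rates; a minimal sketch of the arithmetic, using only the per-1000 person-year figures reported in the study:

```python
# Absolute mortality rates per 1000 person-years, as reported in the study.
low, medium, high = 27.1, 23.6, 18.4

def relative_reduction(reference, comparison):
    """Relative rate reduction of `comparison` versus `reference`, as a fraction."""
    return (reference - comparison) / reference

print(round(relative_reduction(low, high) * 100))     # 32 (% vs low activity)
print(round(relative_reduction(medium, high) * 100))  # 22 (% vs medium activity)
```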

During the first 5 years of follow-up, men who increased their physical activity level between the ages of 50 and 60 years continued to have higher mortality rates (adjusted hazard ratio [HR], 2.64; 95% confidence interval [CI], 1.32 - 5.27, vs unchanged high physical activity). However, after 10 years of follow-up, increased physical activity in these men was associated with decreased mortality rates to the level of men with unchanged high physical activity (HR, 1.10; 95% CI, 0.87 - 1.38).

This reduction in mortality rates associated with increased physical activity (HR vs unchanged low physical activity, 0.51; 95% CI, 0.26 - 0.97) was comparable with that of quitting smoking (HR vs continued smoking, 0.64; 95% CI, 0.53 - 0.78).

"Increased physical activity in middle age is eventually followed by a reduction in mortality to the same level as seen among men with constantly high physical activity," the study authors write. "This reduction is comparable with that associated with smoking cessation."

Limitations of this study include sample restricted to men; crude assessment of physical activity by questionnaire, with risk for misclassification possibly leading to underestimation of the results; and possible bias related to the technique of last observed value carried forward.

"Efforts for promotion of physical activity, even among middle aged and older men, are important," the study authors conclude. "Increased physical activity in middle age increases longevity after an induction period of up to 10 years of no benefit."

The Swedish Research Council supported this study. The study authors have disclosed no relevant financial relationships.

BMJ. Published online March 6, 2009.

Effect of Atorvastatin on the Pharmacokinetics and Pharmacodynamics of Prasugrel and Clopidogrel in Healthy Subjects

Study Objective.
To investigate the potential effect of atorvastatin 80 mg/day on the pharmacokinetics and pharmacodynamics of the thienopyridines prasugrel and clopidogrel.

Design. Open-label, randomized, crossover, two-arm, parallel-group study.


Setting. Single clinical research center in the United Kingdom.


Participants. Sixty-nine healthy men aged 18–60 years.


Intervention. Subjects received either a loading dose of prasugrel 60 mg followed by a maintenance dose of 10 mg/day, or a loading dose of clopidogrel 300 mg followed by 75 mg/day. Each thienopyridine was given as monotherapy for 10 days; then, after a 6-day run-in period with atorvastatin 80 mg/day, atorvastatin was continued at the same dosage together with the respective thienopyridine for 10 days. A 14-day washout period separated the treatment regimens.


Measurements and Main Results. Blood samples were collected before and at various time points after dosing on days 1 and 11 for determination of plasma concentrations of metabolites and for measurement of platelet aggregation induced by adenosine 5'-diphosphate 20 µM and vasodilator-stimulated phosphoprotein (VASP). Coadministration of atorvastatin did not alter exposure to active metabolites of prasugrel or clopidogrel after the loading dose and thus did not alter inhibition of platelet aggregation (IPA). During maintenance dosing, atorvastatin administration resulted in 17% and 28% increases in the area under the plasma concentration–time curve (AUC) values of prasugrel's and clopidogrel's active metabolites, respectively. These small changes in AUC did not result in a significant change in IPA response to prasugrel but did result in a significant increase in IPA during clopidogrel maintenance dosing at some, but not all, of the time points on day 11. Coadministration of atorvastatin with either prasugrel or clopidogrel had no effect on VASP phosphorylation relative to the thienopyridine alone after the loading dose.


Conclusion. Coadministration of atorvastatin 80 mg/day with prasugrel or clopidogrel did not negatively affect the antiplatelet response to either drug after a loading dose or during maintenance dosing. The lack of a clinically meaningful effect of high-dose atorvastatin on the pharmacodynamic response to prasugrel after the loading or maintenance dose indicates that no dosage adjustment should be necessary in patients receiving these drugs concomitantly.

More evidence that NSAIDs are harmful to heart-failure patients

30 mar 2009- Further evidence that even commonly used nonsteroidal anti-inflammatory drugs (NSAIDs) are harmful to heart-failure patients has come from a new study [1].

The study, published in the January 26, 2009 issue of the Archives of Internal Medicine, shows dose-related increases in risk of death and rehospitalization for heart failure or MI with all COX-2 inhibitors or other NSAIDs.

Lead author Dr Gunnar Gislason (Gentofte University Hospital, Hellerup, Denmark), commented to heartwire: "Although our study is observational, and you can never exclude all confounding factors, we have very consistent results estimated using two different statistical methods. And these results are similar to many other previous studies. In addition, we see a strong dose-related response. I think the data are very convincing."

And it is not just the COX-2 inhibitors that are the problem, as diclofenac showed a similar risk. "This is very disturbing, as this drug is so widely used and is available off prescription in many countries," Gislason noted.

He described the effect as "quite considerable." For example, for rofecoxib (Vioxx, Merck), the number of patients needed to treat for one year to cause one death was just nine, and the corresponding number for celecoxib (Celebrex, Pfizer) was 14 and diclofenac 11. "These numbers are very low," Gislason said, noting that for antihypertensive drugs, the number needed to treat for one year to save one life is in the range of 50 to 100. "Everyone agrees that it is worth treating hypertension. So the harmful effect of some NSAIDs is much greater than the beneficial effect of antihypertensive treatment."
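The "number needed to treat for one year to cause one death" figures behave like a number needed to harm: the reciprocal of the absolute difference in annual death rates. A minimal sketch of that arithmetic, using hypothetical rates (not figures from the study):

```python
# Number needed to harm (NNH): patient-years of treatment per one excess death.
# The example rates below are hypothetical, chosen only to illustrate the formula.

def nnh(annual_rate_treated, annual_rate_untreated):
    """Reciprocal of the absolute annual risk difference."""
    return 1 / (annual_rate_treated - annual_rate_untreated)

# An absolute excess of ~11 deaths per 100 patient-years gives an NNH of about 9.
print(round(nnh(0.21, 0.10)))  # 9
```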


Even naproxen risky at high dose

"Our results suggest that all NSAIDs have harmful effects in heart-failure patients, even naproxen at high doses. Naproxen is probably the best of the bunch, but it still increases fluid retention, which is bad news for heart-failure patients," Gislason added.

But he points out that these drugs are still being used in this population. "I don't think doctors are aware of this problem. We need to raise awareness. I think the main culprits are primary-care doctors, as these drugs are so widely prescribed in general practice," he commented. "The fact that some of these drugs are available over the counter makes the situation much worse, as anyone can buy them without advice from a doctor. All NSAIDs should be prescription-only drugs. Making them available in petrol stations and supermarkets gives the impression that they are not harmful. Many heart-disease patients will not be aware that they shouldn't take them."

In the current study, Gislason and colleagues used Danish national records of hospitalizations and pharmacy drug dispensing to identify 107 092 patients surviving their first hospitalization due to heart failure between 1995 and 2004 and their subsequent use of NSAIDs.

They found that 36 354 patients (33.9%) claimed at least one prescription of an NSAID after discharge; 60 974 patients (56.9%) died, and 8970 (8.4%) and 39 984 (37.5%) were rehospitalized with MI or heart failure.

After adjustment for age, sex, calendar year, comorbidity, medical treatment, and severity of disease, the authors found a clear dose-related increase in risk with the drugs.

Sunday, March 29, 2009

Night shift and cancer Q&A

The increased risk of breast cancer in female night shift workers is 'modest'

29 mar 2009--Denmark has begun compensating “dozens” of women who developed breast cancer after working night shifts, multiple news sources have reported. The BBC said the Danish government’s decision is based on a report from WHO’s International Agency for Research on Cancer (IARC), which concluded that working nightshifts could increase women’s risk of breast cancer.

The report from the IARC has not yet been published. A summary of the report says that most of the epidemiological studies it looked at "found a modestly increased risk of breast cancer in long-term employees compared with those who are not engaged in shift work at night".

However, the summary also said these studies have some limitations, including the possibility that factors other than shift work may have affected the results. The UK’s Health and Safety Executive (HSE) has commissioned its own report on the health impact of night-shift work, including its effects on breast cancer risk. This research is due to be published in 2011. This report will help UK policy makers to decide whether to make changes to recommended work practices.

In the interim, Cancer Research suggests that the advice for shift workers is the same as for other women: to remain breast aware, to visit their GPs if they notice anything unusual about their breasts, and to take up invitations for breast screening.

Why is the Danish government making this payout?

The Danish government’s decision to compensate the women is based on a report by the WHO’s International Agency for Research on Cancer (IARC). The report originated from a specially commissioned expert working group. The group met in October 2007, when they concluded that “shift work that involves circadian disruption [working at night time when a person would normally be sleeping] is probably carcinogenic to humans”.

What is the evidence that working at night increases the risk of breast cancer?

The conclusion of the working group was based on “limited evidence in humans” that shift-work involving night work is carcinogenic. It also took into account the “sufficient evidence” from animal experiments that exposure to light during the daily dark period (known as biological night) is carcinogenic.

The findings of the working group were summarised in the Lancet Oncology, but the full report has not yet been published. The summary reports that six out of eight cohort studies it looked at found a “modestly increased” risk of breast cancer among long-term employees who worked night shifts, compared with those who did not. These studies do have some limitations, including the possibility that factors other than shift work may have affected the results. Additionally, the studies use different definitions of shift work, and several of them focus on single professions only (mainly nurses or flight attendants).

The working group also looked at animal experiments. They describe studies in rodents, which looked at the effect of disrupting the animals’ normal light-dark cycle on tumour development. The summary reported that more than 20 studies in rodents looked at the effects of constant light, dim light at night, simulated jet lag, or “circadian timing of carcinogens”. Most studies found an increase in the number of tumours.

It also reported that a similar number of studies in rodents looked at the effect of reducing the normal nighttime production of the hormone melatonin in rodents by removing the gland that makes this hormone. Most of these studies also reported an increased number and growth of tumours.

How great is the increase in risk?

The summary of the IARC findings does not provide an overall estimate of how much women’s risk is increased or for how long a woman has to work nights before her risk is increased.

One of the cohort studies that it reviewed was carried out in over 70,000 female nurses in the US, and followed them up for 10 years. This study found that about 42 in every 1,000 nurses who worked for 30 years or more on rotating night shifts developed breast cancer compared with about 29 in every 1,000 nurses who never worked night shifts.

This represents a 36% increase in the risk of breast cancer for women who worked for 30 years or more on rotating night shifts. About 32 out of every 1,000 nurses who worked on night shifts for less than 30 years developed breast cancer, and this represented an 8% increase in risk of breast cancer compared with women who never worked night shifts.

I read that only women exposed to asbestos are at greater peril. Is this true?

Various newspapers have compared the effects of night-shift work to that of “anabolic steroids, ultraviolet radiation and diesel engine exhaust”. The Mirror reported that “women who work nights are at such serious risk of cancer that only those exposed to substances such as asbestos are in greater peril”.

These comparisons appear to be based on the grading that the IARC have given to shift work. Based on the evidence available, the IARC grades potential cancer-causing hazards and then groups them accordingly. There are five groups:

  • Group 1: the agent is carcinogenic to humans.
  • Group 2A: the agent is probably carcinogenic to humans.
  • Group 2B: the agent is possibly carcinogenic to humans.
  • Group 3: the agent’s carcinogenicity to humans is not classifiable.
  • Group 4: the agent is probably not carcinogenic to humans.

Shift work at night has been put in Group 2A – with asbestos rated as a Group 1 agent. However, it is important to note that this grading system is based on how much evidence there is to support the view that the agent in question has a cancer-causing (carcinogenic) effect.

Group 1 means that there is sufficient evidence to conclude that a factor causes cancer in humans, while Group 2A means that there is limited evidence that the factor causes cancer in humans, but sufficient evidence that it can cause cancer in experimental animals. Therefore, these groupings do not give a measure of how much a factor increases cancer risk.

How might working at night increase cancer risk?

It is not clear exactly how working at night might increase risk of cancer. There is a theory that disruption of the circadian system and the hormone melatonin are involved. Working at night is known to disrupt our circadian system, which regulates how we respond to night and day. This system affects how active we are, which hormones are produced, and which genes are switched on and off. Some of the genes affected by the circadian system can affect tumour growth, while the hormone melatonin, which is normally produced at night, affects immune system function.

What happens now?

The UK’s Health and Safety Executive (HSE) has commissioned its own report on the health impact of night-shift work, including its effects on breast cancer risk. This research is being carried out at the University of Oxford Cancer Epidemiology Unit, and the report is due to be published in 2011.

This report will help UK policy makers to decide whether to make changes to recommended work practices. In the interim, Cancer Research suggests that the advice for shift workers is the same as for other women: to remain breast aware, to visit their GPs if they notice anything unusual about their breasts, and to take up invitations for breast screening.

Obesity increases death risk

A person is considered obese if they have a BMI of 30 or greater

29 mar 2009--The Guardian reported that the largest ever investigation into how obesity affects mortality has found that obese people “die up to 10 years early”. The newspaper said that “moderate” obesity shortens lives by three years, while people who are severely obese will die 10 years earlier than they should.

This study pooled data from 57 separate studies in 894,576 people. It found that, after taking age and smoking into account, people with a ‘normal’ BMI (22.5–25kg/m²) had the lowest overall mortality. With every 5kg/m² increase in BMI above this range, the risk of death from any cause increased by about 30%.

Obesity is associated with diabetes, high blood pressure and ‘bad’ cholesterol, and it is probably a combination of these associated factors that raises the risk of death. This research is valuable in that it gives actual figures for how much obesity increases risk of death.

Where did the story come from?

The research was carried out by members of the Prospective Studies Collaboration from the Clinical Trial Service Unit and Epidemiological Studies Unit (CTSU), University of Oxford. The Clinical Trial Service Unit receives funding from the Medical Research Council, the British Heart Foundation and various pharmaceutical companies. The study was published in the peer-reviewed medical journal The Lancet.

What kind of scientific study was this?

This meta-analysis combined a large number of individual cohort studies with the aim of assessing the relationship between BMI and cause-specific mortality (death from an identified cause). This sort of study requires long-term follow-up of a large number of people. The researchers included studies that had followed people for over five years.

The researchers included 57 studies, with a total of 894,576 participants. Studies were eligible if they recorded BMI and mortality; this was the researchers’ sole inclusion criterion.

BMI was calculated as weight in kg divided by the square of height in metres. A BMI above 30kg/m² was considered obese. People with missing BMI data were excluded, as were those who were severely underweight (BMI below 15kg/m²).
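As a quick illustration of that formula (the weights and heights below are arbitrary examples, not study data):

```python
# BMI = weight (kg) / height (m) squared. Example figures are arbitrary.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))   # 22.9, inside the study's 'normal' 22.5-25 range
print(bmi(95, 1.75) >= 30)       # True: a BMI of 30 or more counts as obese
```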

In each individual study, the researchers looked for associations between BMI and other risk factors with adjustment for age. For example, they looked at whether BMI had any link with smoking status. They also looked at associations between BMI and mortality, adjusting the analyses for age, sex and smoking status. To limit the effects of any diseases on the participants’ BMI at the start of the study, the researchers excluded people from their analyses who died within the first five years of follow-up. Risk of death overall and from individual causes was calculated for different BMI categories.

What were the results of the study?

Across the 57 studies, 92% of the participants were of European origin, with the remainder from the US, Australia, Israel and Japan. The majority (85%) of the participants were recruited during the 1970s and 80s. The average age of most study members at enrolment was 46 years, and their average BMI was 24.8kg/m². BMI at enrolment was ‘positively linearly associated’ with blood pressure and non-HDL (‘bad’) cholesterol (i.e. as BMI increased, so did the other risk factor).

Of the 894,576 people who gave BMI measurements at the start of the study, 15,996 died in the first five years and were therefore excluded from the mortality analyses. During an average of eight years of further follow-up, there were 6,197 deaths from unknown causes and 66,552 deaths from known causes.

These included 30,416 deaths from vascular conditions, 2,070 deaths related to diabetes, kidney or liver disease, 22,592 cancer-related deaths, 3,770 deaths from respiratory conditions, and 7,704 from other causes. Death rates were lowest in those with BMIs between 22.5 and 25kg/m². Each 5kg/m² rise in BMI above this range was associated with a 30% increase in the overall risk of death compared with people in the normal range.
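Reading the headline figure log-linearly, so that each 5kg/m² step above the normal range multiplies the risk of death by about 1.30 (an assumption made here purely for illustration, since the paper reports only the per-5kg/m² figure), the excess risk compounds with BMI:

```python
# Hazard ratio vs the top of the normal range, assuming a constant ~30%
# increase per 5kg/m² step (an illustrative log-linear reading, not a
# calculation taken from the paper itself).

def risk_ratio(bmi, baseline=25.0, hr_per_5=1.30):
    return hr_per_5 ** ((bmi - baseline) / 5)

print(round(risk_ratio(30), 2))  # 1.3  -- one 5kg/m² step above normal
print(round(risk_ratio(35), 2))  # 1.69 -- two steps: the 30% increases compound
```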

Looking at death from different causes separately, the increase in risk of dying was greatest for deaths related to diabetes, kidney or liver disease (60-120% increased risk compared to those in the normal BMI range), followed by increased risk of vascular mortality (40% compared to those in the normal range), and respiratory-related mortality (20% increased risk). The lowest increase in risk was for cancer-related mortality (10%). For people with a BMI below 22.5kg/m², the risk of death increased as BMI was reduced, mainly due to the increase in respiratory disease and lung cancer, with associations being much stronger for smokers than for non-smokers.

The researchers used the death rates of 35 to 79-year-olds in Western Europe in the year 2000 to estimate the average reduction in lifespan. They estimated that average lifespan is reduced by up to one year for people who, by about age 60, reach a BMI of 25–27.5kg/m². Lifespan was cut by one to two years for those who reach 27.5–30kg/m², and by two to four years for those who become obese (30–35 kg/m²).

For people with a BMI above 35kg/m², they estimate an eight to 10-year reduction in lifespan, although this accuracy is limited because there is much less information for this BMI category.

What interpretations did the researchers draw from these results?

The researchers conclude that BMI is in itself a strong predictor of overall mortality, both for people under the optimum weight range (less than 22.5kg/m²) and above it (over 25kg/m²). The increase in mortality above this range is thought to be due mainly to vascular disease, which may also be driven by other closely associated risk factors, such as high blood pressure. They say that other anthropometric measures, such as waist circumference and waist-to-hip ratio, could add information beyond BMI.

What does the NHS Knowledge Service make of this study?

This large pooling of data found that overall mortality is lowest in people whose BMI lies within the normal range of 22.5–25kg/m² (after adjustment for age and smoking). Every 5kg/m² increase in BMI above this range increased the risk of death overall, and variably increased the risk of death from individual causes (as listed above). Underweight BMI below the normal range was also associated with an increased risk of death, mainly due to smoking-related lung disease.

This valuable research is useful in that it gives actual figures for how much obesity increases risk of death. There are a few points to consider:

  • In the analyses of BMI and mortality, there were some associated risk factors (cholesterol, blood pressure and diabetes) that were not adjusted for. This is because these factors (along with obesity) are collectively associated with a raised risk of cardiovascular disease. Therefore, the increased death rate cannot be attributed to obesity alone as it is likely to be caused by a combination of associated conditions, particularly the increased risk of vascular mortality with raised BMI. In addition, the effects of diet, exercise and socioeconomic status (also related to BMI and other cardiovascular risk factors) were also not taken into account, and these could have confounded the results.
  • The participants’ BMI was measured only once in adulthood. But the researchers address this and say that a single measurement is highly correlated with a person’s long-term BMI. However, it also means that no conclusions can be made about links between obesity and excess weight in childhood and increased mortality. Other measures of waist circumference and body fat distribution may also be helpful.
  • By combining the results from a variety of different studies from around the world, there may have been differences in study reliability, methods of data collection and follow-up. This could affect how accurate the estimates are.

Maggots clean ulcers quickly

The larval therapy used maggots from the green bottle fly

29 mar 2009--Maggots are in the news today. Newspapers have taken slightly different angles on a study into the use of larval therapy for leg ulcers. The Daily Telegraph reported that “maggots are as successful at treating leg ulcers as standard dressings”. The BBC was less optimistic, saying that maggots may not have the miracle healing properties that have been claimed. Meanwhile, The Times pointed out that although maggots heal leg ulcers no quicker than the normal dressings (hydrogel), the maggots cleaned the wound five times faster.

These reports are based on a randomised controlled trial that compared loose larvae, bagged larvae and hydrogel in treating leg ulcers in 267 patients in the UK. This good-quality study found that there was no difference between larval treatment and hydrogel in healing ulcers. However, the larvae were better at debriding wounds (getting rid of dead tissue).

Questions about the use of maggots to heal wounds still remain unanswered, and further study is needed. The researchers say that “future treatment decisions should be fully informed by the finding that there is no evidence of an impact on healing time”.

Where did the story come from?

Dr Jo C. Dumville and colleagues from the University of York, the University of Warwick, Micropathology Ltd in Coventry and the University of Leeds carried out this study. The research was funded by the UK National Institute for Health’s Research Health Technology Assessment Programme. It was published in the peer-reviewed British Medical Journal.

What kind of scientific study was this?

This randomised controlled trial compared the treatment of leg ulcers with larvae from the green bottle fly (maggots) to hydrogel (a standard non-adherent, gel-like dressing). Venous and arterial ulcers result from poor blood circulation. Most leg ulcers are venous ulcers caused by faulty valves in superficial and deep veins. Due to the faulty valves, blood fails to flow out of the limb properly, which results in high venous pressure, oedema (collection of fluid in tissues) and damage to skin. This leads to ulceration. Arterial leg ulcers are different in that they are the result of a reduced blood supply from the heart to the tissues.

The treatment of leg ulcers usually involves cleaning them with saline or tap water followed by the application of a dressing. For venous leg ulcers, a compression bandage is also applied to improve blood flow from the lower limbs. The wound is cleaned and the dressing is changed regularly until healing is complete. There are several different types of dressing, including hydrogel dressings. The choice of dressing depends on the type of tissue in the wound, the presence of odour or infection, and the presence and type of exudate (fluid that oozes from blood vessels due to inflammation).

The study recruited people from 22 leg ulcer clinics across the UK. The participants all had venous leg ulcers or a mix of venous and arterial leg ulcers, with at least a quarter of the ulcer covered by necrotic tissue (dead tissue, also called slough). These are the types of wounds that larval therapy is used on. The ulcers were non-healing (no change in area in the previous month) and 5cm² or less in size; in people with more than one ulcer, the largest was chosen as the reference ulcer. Pregnant or lactating women were excluded, as were people who were allergic to hydrogel, and those who had “grossly oedematous legs” or who were taking anticoagulants (which would render larval therapy unsuitable).

There were 267 eligible patients, who were randomly allocated to receive either loose larvae, bagged larvae or hydrogel. These were applied in the debridement phase of the patient’s treatment (i.e. the phase when dead tissue is removed from the ulcer). Larvae were left on the wound for three to four days. After debridement, all patients had a standard dressing without compression. In this study, the compression aspect of treatment was not compromised and nurses used this as appropriate, although it could not be used when larvae were in place.

The researchers compared the time it took for the ulcer to completely heal between the three groups, as judged by two nurses. Photographs were taken every week for the first six months, and then monthly thereafter. These were used to independently assess healing by a third party, who was unaware of the treatment allocation. The researchers also assessed other outcomes, including the length of time until debridement, bacteria in wounds, quality of life, adverse events and pain.

What were the results of the study?

There was no difference between the three groups in the time that it took for the ulcers to heal. There was no significant difference in chance of healing between hydrogel and larval therapy (loose larvae and bagged larvae combined).

Loose larvae debrided the wounds more quickly than bagged larvae. When the two larval treatments were combined, larval therapy debrided the patients' wounds about twice as fast as hydrogel (HR 2.31, 95% CI 1.65 to 3.24).

The three groups showed no significant differences in bacteria levels in the wounds or in adverse events. Patients in the larval groups reported significantly more pain than those in the hydrogel groups.

What interpretations did the researchers draw from these results?

The researchers report that there is no evidence that larval therapy using loose or bagged larvae reduces the healing time of ulcers compared with hydrogel. However, their study does suggest that larvae are better at debridement than hydrogel. Although pain was greater in the larval therapy group, this was “probably transient” and did not have an effect on the regular quality-of-life measurements.

This randomised controlled trial provides the strongest evidence to date about the effects of larval therapy on leg ulcer healing. It found that there was no difference in the healing of leg ulcers when larval therapy was used for debridement compared with using hydrogel dressings.

These results can be interpreted in different ways, as reflected in the newspaper headlines. No difference can be interpreted as ‘just as good as’ or ‘no better than’. The important points are:

  • People receiving larval therapy reported more pain than those receiving hydrogel.
  • There may be issues of acceptability with regards to larval therapy (some people might choose not to receive it).
  • Larval therapy seems to improve wound debridement, and the researchers say that “if debridement is the goal of treatment, such as before skin grafting or other surgery, then larval therapy should be considered”.
  • In spite of this recommendation, the researchers say that the role of debridement in managing leg ulcers is not clear. While it is viewed as an important part of wound healing, more research is needed to understand whether it is indeed beneficial for patients.
  • A separate analysis concludes that, based on these results, larval therapy has “similar health benefits” and “similar costs” to hydrogel treatment.
  • The findings of this study apply to people with quite serious wounds, i.e. those that hadn’t improved in the month before randomisation, and wounds that took about 240 days to completely heal.

The researchers highlight some limitations of their study, including the difficulty they had in recruiting enough people who met their criterion of “sloughy” ulcers (i.e. ones with enough dead tissue to make larval therapy an option). As such, the study is likely to be underpowered, increasing the risk both that it missed true differences between the treatment groups and that any positive findings arose by chance. The researchers also did not investigate debridement in the long term, i.e. whether wounds remained debrided. Another limitation is that they only measured total bacterial load in the wound and did not investigate particular types of bacteria (except MRSA).

There are still unanswered questions regarding larval therapy, and the researchers say that “future treatment decisions should be fully informed by the finding that there is no evidence of an impact on healing time”.

Cancer survival rates

Cancer survival has improved overall in countries across Europe

29 mar 2009--Widespread media coverage has been given to a large study on cancer survival across Europe. The EUROCARE-4 study looked at cancer cure and survival rates between 1995 and 2004. The Guardian reported that although the number of people being cured of cancer is steadily climbing across Europe, cure rates in England and Scotland trail those in many other countries. The Daily Mail reported that “cancer survival rates in Britain [are] among the worst in Europe".

This important study analysed a vast amount of data on cancer survival in Europe. Although the newspapers and the study have given possible explanations for the variations in cancer survival between countries, the study did not examine this in detail. Various factors could have been involved, including differences in cancer prevention and detection strategies, diagnostic rates, cancer stage at diagnosis, how cancers are classified, what proportion of cancers are recorded in the cancer registries, and what treatments were given.

Further study would be needed to determine the contribution of each of these factors and how to improve survival rates.

Additionally, these figures are for cancers diagnosed more than 10 years ago, and survival rates may have improved since then.

Where did the story come from?

This research was carried out by the EUROCARE-4 study group, comprised of researchers from across Europe. The study was funded by the Compagnia di San Paolo foundation in Italy. Nine articles and an editorial about the EUROCARE-4 study were published in a special edition of the peer-reviewed European Journal of Cancer. Much like the majority of the media interest, this analysis focuses on the results for five-year survival from cancer.

What kind of scientific study was this?

This registry-based cohort study, called the EUROCARE study, looked at cure and survival rates of people diagnosed with cancer in Europe. The EUROCARE study began in 1990, and papers have already been published on survival rates for people diagnosed with cancer between 1978 and 1985, 1985 and 1989, and 1990 and 1994 (EUROCARE studies one to three).

For the current study (EUROCARE-4) the researchers obtained data from 93 cancer registries in 23 countries. Thirteen of the countries, including the UK, had national cancer registries, in which all cancer cases are recorded. Coverage varied among the other countries, with between 8% and 58% of their populations covered. Germany has national cancer coverage for children, but only 1.3% of adults are covered. Overall, the data covered an average of about 151.4 million people (not just those who developed cancer) from 1995 to 1999, which is about 35% of the total population of these countries.

To participate in EUROCARE-4, the registries were required to collect a standard set of data for each cancer case. This included the person’s age, gender, date of birth, diagnosis, type and location of cancer, and other cancer characteristics. The data also included which cancer came first (if they were diagnosed with multiple primary cancers), whether the person was alive or dead, and when their survival status was last checked.

Because different cancer registries had different ways of determining the first primary tumour in people with multiple cancers, the researchers standardised this information using data from the registries and their own system. Information on the cancer stage at diagnosis was collected by some cancer registries, but not all.

The researchers looked at cancer survival rates for people diagnosed with cancer between 1995 and 1999, but also looked at survival in cases diagnosed from 1978 to 2002 using data they had collected in all the EUROCARE studies. In all, the registries contained 13,814,573 cases of cancer diagnosed between 1978 and 2002, the majority of which were malignant (92%).

Cancer site and characteristics were classified according to an internationally accepted system. Records were checked for consistency in various ways, and those that were suspected to be incorrect were returned to the registries for correction. Only a person’s first malignant cancer diagnosis was used in the analyses. Cases of cancer identified from death certificates or discovered at autopsy were excluded from the analyses.

As well as overall age-adjusted analyses, separate analyses were carried out according to year of diagnosis, registry, gender, age, and cancer site. Of the 5,753,934 malignant adult cancers used in the survival analysis, 90% had been confirmed by microscopic analysis of tumour tissue.

The paper on overall five-year survival described the results of the analysis of about three million adult cancer cases that were diagnosed between 1995 and 1999 and followed until the end of 2003. Data on some cancer sites was missing from the Danish registry, and as this could affect overall survival estimates, Denmark was excluded from these overall analyses. Survival among people with cancer, relative to the expected survival of people from the general population of the same age and gender, was calculated at one and five years after diagnosis.

This method of comparing observed survival with expected survival is often used in registry studies to compare survival across countries that have different rates of non-cancer deaths (e.g. from heart disease, etc.).

The researchers also looked at the likelihood of surviving to five years if a person survived for one year after diagnosis, and compared this with overall five-year survival.
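The two measures described above can be written out in a couple of lines. This is a minimal sketch with made-up numbers, not figures from the EUROCARE data:

```python
def relative_survival(observed, expected):
    """Observed survival in the cancer cohort divided by the survival
    expected in an age- and gender-matched general population."""
    return observed / expected

def conditional_survival(survival_to_later, survival_to_earlier):
    """Probability of reaching the later time point given survival to
    the earlier one, e.g. five-year survival among one-year survivors."""
    return survival_to_later / survival_to_earlier

# Hypothetical cohort: 45% survive five years where 90% of a matched
# general population would, and 70% of the cohort survive one year.
rel = relative_survival(0.45, 0.90)
cond = conditional_survival(0.45, 0.70)
print(round(rel, 2), round(cond, 2))  # → 0.5 0.64
```

Dividing by expected survival is what lets registries compare countries with different background death rates, and conditioning on one-year survival strips out the early deaths of people diagnosed with advanced disease.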

What were the results of the study?

Across the European countries studied, people diagnosed with any cancer between 1995 and 1999 had an age-adjusted five-year relative survival of about 50%, meaning their survival was half that expected in the general population. This was an increase on the previous study, which found a figure of 47% for cancers diagnosed between 1990 and 1994.

For individual countries, five-year relative survival was highest in Sweden (58%) and lowest in Poland (39%). In the UK and Ireland, relative survival ranged from 43% to 48%.

The researchers noted that the differences in cancer survival between countries, as identified in previous EUROCARE studies, have narrowed. They say that bladder cancer, prostate cancer, and chronic myeloid leukaemia showed the greatest difference in five-year relative survival between countries in this latest study.

In general, relative five-year survival from all cancers except the blood cancers (e.g. leukaemias) was highest in northern Europe (Finland, Sweden, Norway, and Iceland), “considerably” lower in Denmark and the UK, and lowest in eastern Europe (Slovenia, Poland, and the Czech Republic).

Within the UK, there was little variation in survival in most types of cancer across the 12 cancer registries from different regions.

Relative survival decreased with increasing age at diagnosis. The greatest difference between the youngest and oldest age groups in absolute five-year survival was about 40-50% for cancers of the cervix, ovary, brain and thyroid, Hodgkin’s disease and multiple myeloma. For cancers of the vagina and vulva, testis, bladder and kidney, as well as for non-Hodgkin’s lymphoma and chronic myeloid leukaemia, the difference was 31-39%. Women had better survival than men for most cancer sites, except for bladder and biliary tract cancer.

When the researchers looked at five-year survival for those who were still alive one year after diagnosis (conditional survival), this varied less between countries compared with overall five-year survival. This is because many people with advanced cancer die in the year after diagnosis, and those who survive past one year have similar cancer stages. The difference between conditional and overall five-year relative survival was largest for stomach cancer, kidney cancer, non-Hodgkin’s lymphoma, ovarian and colorectal cancer. These differences were largest in countries with low survival rates.

What interpretations did the researchers draw from these results?

The researchers concluded that the EUROCARE study “continues to provide important indications as to the relative efficiency of national health systems in caring for their cancer patients”. They say that their study “has highlighted marked differences in cancer survival across Europe”, but that “these survival differences have narrowed considerably since EUROCARE began, suggesting that inequalities in cancer care across Europe are also narrowing”.

This important study has analysed a vast amount of data on cancer survival across Europe, and it will be of great interest to health services and cancer researchers. There are a number of points to note:

  • Although the study and several newspapers have given possible explanations for why cancer survival varies across Europe, the study did not look into this in detail. Various factors could have been involved, including differences between countries in disease prevention and detection strategies, diagnostic rates (under- or over-diagnosis), cancer stage at diagnosis, how cancers are classified, what proportion of cancers are recorded in the cancer registries, and treatments that are given. Further in-depth analysis would be needed to untangle the effects of these contributing factors, and to determine how survival figures could be improved.
  • The accuracy of the figures depends on the accuracy and completeness of the recording in the original registries. Although the researchers took steps to ensure the quality of the data and took into account the coverage of the registries, these factors may still have had an effect.
  • The main analysis was of cancers diagnosed between 1995 and 1999. Survival rates for cancers diagnosed since 1999 may be different due to changes in how cancers are diagnosed and treated.
  • The Daily Mail reports that the proportion of UK patients surviving five years or more after diagnosis is down from previous figures of 42% in men and 53% in women to 41.4% in men and 51.4% in women. However, it is unclear exactly which of the many EUROCARE publications these earlier figures come from. Figures from one of the current EUROCARE-4 publications, looking at survival trends between 1988 and 1999, suggest that five-year survival among cancer patients in the UK (relative to the general population) increased over this period.

The EUROCARE study also offered the good news that cancer survival rates in Europe have improved, and the difference in survival between countries is reducing. The information provided from this and other studies will help to identify areas that could be further improved.

Oily fish and cancer

Omega-3 acids in fish appear to be protective against prostate cancer

29 mar 2009--“Oily fish can stop cancer,” reported the Daily Express. It said that a study has found that a three-ounce portion of oily fish just once a week could help men to survive prostate cancer. The newspaper added that prostate cancer can be reduced by almost 60% with a higher intake of omega-3, the fatty acids found in oily fish. The researchers claimed that omega-3 reverses the effects of an inherited gene that can lead to the development of an aggressive form of the disease.

The study looked at fish and fatty acids in the diets of men with and without aggressive prostate cancer. It found that healthy men had a higher intake of omega-3 fatty acids, and interpreted this to mean that omega-3 has a protective effect against cancer. It also found that men with a particular gene variation that codes for the COX-2 enzyme had increased risk of prostate cancer, and this risk fell with higher omega-3 consumption.

This research cannot prove that oily fish protects men from prostate cancer because diet was assessed when cancer was already established. However, it does further our understanding of possible interactions between dietary factors and genetics in the development of cancer.

Where did the story come from?

The risk factors for prostate cancer are not definite but are thought to include:

  • increased age
  • family history
  • weight
  • ethnicity (Afro-Caribbeans being at higher risk)
  • diet (possibly)

The research was carried out by Vincent Fradet and colleagues from the departments of Urology, Epidemiology and Biostatistics and the Institute for Human Genetics, University of California, San Francisco, and the Department of Preventive Medicine, University of Southern California. The study was funded by grants from the National Institutes of Health and a Laval University McLaughlin dean’s grant. The study was published in the peer-reviewed medical journal Clinical Cancer Research.

What kind of scientific study was this?

This case-control study investigated whether omega-3 (LC n-3) polyunsaturated fatty acids (PUFA) can reduce the risk of prostate cancer. The researchers aimed to test the theory that any potential effect of omega-3 fatty acids is modified by a genetic variation in cyclooxygenase-2 (COX-2), an enzyme that is involved in the breakdown of fatty acids, and which also has a role in inflammatory processes in the body.

The researchers recruited 466 men with aggressive prostate cancer from major hospitals in Ohio. The tumours were confirmed as aggressive through several tests, including stage, Gleason score (based on histological findings) and prostate-specific antigen (PSA) levels. All men with prostate cancer (cases) were recruited within a short time of diagnosis (on average 4.7 months). A control group was identified from men undergoing standard annual check-ups at the same hospitals. These 478 men had no cancer diagnosis and were matched to the cases in terms of age and ethnicity.

All the men were given a validated food frequency questionnaire. The researchers also examined the men’s DNA, and looked at variations in the genetic sequence coding for the COX-2 enzyme.

Analysis involved determining the link between dietary intake of fish, omega-3 and omega-6 PUFAs and aggressive forms of prostate cancer.

Types of fish included:

  • Boiled or baked dark fish, e.g. salmon, mackerel and bluefish.
  • Boiled or baked white fish, e.g. sole, halibut, snapper and cod.
  • Unfried shellfish, e.g. shrimp, lobster and oysters.
  • Tuna (canned).
  • Fried fish and shellfish.

Fish intakes were classed as ‘never’, ‘one to three times per month’, or ‘one or more times per week’. In their statistical analyses, the researchers looked at associations between the genetic codes for COX-2 and aggressive prostate cancer. They also took into account the possible confounding effects of smoking, weight, family history of prostate cancer, and prior history of PSA screening.

What were the results of the study?

The average age of both cases and controls was 65 years, and 83% were of Caucasian origin. The average PSA at the time of cancer diagnosis was 13.4 ng/mL, and the majority of cases had a Gleason score of seven or greater. The cases had a more frequent family history of prostate cancer and previous history of PSA testing compared to the controls.

The cases had a higher total calorie intake and a higher average intake of fat and of linoleic acid, a type of omega-6 fatty acid. The controls had a significantly higher average intake of dark fish, shellfish and omega-3 fatty acids.

The risk of prostate cancer with the highest quartile of omega-3 intake was significantly reduced compared to the lowest quartile of intake (odds ratio 0.37, 95% confidence interval 0.25 to 0.54). A particular sequence variation in the gene coding for COX-2 (SNP rs4648310) significantly affected the association between prostate cancer and omega-3 intake. Men who had this particular genetic sequence along with a low omega-3 intake had a 5.5 times increased risk of disease. Increased intake of omega-3 fatty acids reversed this risk in these men.
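An odds ratio of this kind, with its confidence interval, is computed from a 2x2 table of cases and controls. The sketch below uses invented counts chosen only to land near the reported figures; they are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI on the log scale.
    a, b: cases in the highest / lowest exposure quartile;
    c, d: controls in the highest / lowest exposure quartile."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se)
    high = math.exp(math.log(or_) + z * se)
    return or_, low, high

# Hypothetical counts for the highest vs lowest quartile of omega-3
# intake (illustrative only, not taken from the paper).
or_, low, high = odds_ratio_ci(60, 120, 130, 96)
print(round(or_, 2), round(low, 2), round(high, 2))  # → 0.37 0.25 0.55
```

An odds ratio below 1 with a confidence interval that excludes 1, as here, indicates a statistically significant reduction in the odds of disease in the high-intake group.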

What interpretations did the researchers draw from these results?

The researchers conclude that long-chain omega-3 polyunsaturated fatty acids in the diet appear to be protective against aggressive prostate cancer, and this effect is modified by the genetic variation COX-2 SNP rs4648310. They say that their findings support the theory that omega-3 may have an impact on prostate inflammation and cancer development through interaction with the COX-2 enzyme.

What does the NHS Knowledge Service make of this study?

This is valuable research, which furthers understanding of the possible interaction between dietary factors and the influence of genetics in the development of cancer. However, it does have some limitations. The main one is that despite the demonstrated link, it cannot prove causation because diet was assessed when cancer was already established. Diet at that time may not reflect life-long patterns, and although a validated questionnaire was used, there is always the possibility that the participants had recall bias, and gave inaccurate estimates of the frequency and quantity of the foods they ate.

Additionally, the study findings apply to a specific group: all the cases were men with aggressive prostate cancer detected through PSA screening. Attendance for screening, as the researchers say, may reflect more health-conscious behaviour that could also have an effect on other risk factors. Different findings might be found from other stages of prostate cancer and wider population groups.

Using the results of this study, no assumptions should be made about the effects of omega-3 fatty acids on other cancers, or on the prognosis or development of prostate cancer (this study did not look at cancer treatment, response or survival).

Circumcision and STIs

Condoms are the best way to prevent the spread of STIs such as HIV

20 mar 2009--US experts have argued that “circumcision should be routinely considered as a way to reduce the risk of sexually transmitted infections,” BBC News reported. It said that circumcision is already known to greatly reduce the risk of infection from HIV, and researchers have now found that it also reduces the risk of herpes by 25%, and human papillomavirus (HPV) by a third. However, the BBC says that UK experts disagree with their US counterparts, and that “pushing circumcision as a solution sent the wrong message”.

There is some evidence that circumcision reduces the risk and spread of STIs. However, this study was carried out in Uganda, and its findings are not directly comparable to the UK. The main reason for this is the large difference in rates of STIs between the two countries. Further research in countries with a more comparable rate of STIs would give a better indication. When having sex, a condom remains the best way to avoid contracting an STI.

It is also important not to conclude that the results would be the same in other subgroups, such as men who have sex with men, or men who are circumcised as newborns. It could be that the benefits of circumcision differ among different groups.

Where did the story come from?

The research was carried out by Dr Aaron A.R. Tobian and colleagues from the Johns Hopkins University in Baltimore, US and colleagues from the Institute of Public Health at Makerere University and the Rakai Health Sciences Program in Uganda. The study was supported by grants from a range of organisations, including the National Institutes of Health and the Bill and Melinda Gates Foundation. The study was published in the peer-reviewed New England Journal of Medicine.

What kind of scientific study was this?

The study investigated whether male circumcision prevents certain sexually transmitted infections (STIs) in HIV-negative adolescent boys and men. The STIs studied were herpes simplex virus type 2 (HSV-2), human papillomavirus (HPV) and syphilis.

The data for this study was obtained from two previous randomised controlled trials, known as the Rakai-1 and Rakai-2 trials, and re-analysed. The Rakai-1 and Rakai-2 trials were carried out by the same researchers, and investigated circumcision and the rate of HIV infection and other STIs. These two independent trials shared the same design and used identical methods. They ran alongside each other, with Rakai-1 running from September 2003 to September 2005, and Rakai-2 running from February 2004 to December 2006. Together, the two trials screened 6,396 males between 15 and 49 years old.

Of the 6,396 males initially screened in the Rakai-1 and Rakai-2 trials, 3,003 were excluded from the present analyses because they had tested positive, or had indeterminate results, in tests for the HSV-2 or HIV-1 viruses.

After these exclusions, 3,393 males were included in this study and randomly allocated to either immediate circumcision (1,684 males) or circumcision after a 24-month wait (once the study had finished). In the immediate circumcision group, 134 men were never circumcised, and in the waiting group 32 were circumcised elsewhere during the study.

The researchers tested the men for HSV-2 infection, HIV infection and syphilis at the start of the study and six, 12, and 24 months later. The men were also examined and interviewed at these visits. In addition, the researchers evaluated a subgroup of men for HPV infection at the beginning of the study and after 24 months.

What were the results of the study?

After 24 months, the circumcised men had a 7.8% overall chance of testing positive for the genital herpes virus, compared to a 10.3% chance in the uncircumcised group (adjusted hazard ratio 0.72, 95% confidence interval [CI] 0.56 to 0.92; P = 0.008).

In the circumcised group, the prevalence of high-risk HPV genotypes was 18% compared to 27.9% in the uncircumcised group (adjusted risk ratio 0.65, 95% CI 0.46 to 0.90; P = 0.009).

There was no significant difference between the two study groups in the proportion that developed syphilis (adjusted hazard ratio 1.10, 95% CI 0.75 to 1.65; P = 0.44).
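The crude (unadjusted) risk ratio for high-risk HPV can be sanity-checked directly from the two prevalences reported above; it comes out close to the paper's adjusted figure of 0.65:

```python
def risk_ratio(risk_exposed, risk_unexposed):
    """Ratio of the outcome prevalence in the circumcised group to
    that in the uncircumcised group."""
    return risk_exposed / risk_unexposed

# Headline prevalences of high-risk HPV genotypes in the two groups.
crude_rr = risk_ratio(0.18, 0.279)
print(round(crude_rr, 2))  # → 0.65
```

The closeness of the crude and adjusted figures suggests that the covariates adjusted for had little confounding effect on this particular comparison.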

What interpretations did the researchers draw from these results?

The researchers say that “in addition to decreasing the incidence of HIV infection, male circumcision significantly reduced the incidence of HSV-2 infection and the prevalence of HPV infection”.

They say that other related research shows that male circumcision decreases the rates of HIV, HSV-2 and HPV infections in men, and that in their female partners it reduces trichomoniasis, bacterial vaginosis and other sexually transmitted infections. The researchers conclude that their findings “underscore the potential public health benefits of the procedure”.

This study has important implications for the control of sexually transmitted infections in Africa, but researchers and commentators seem to disagree about the implications closer to home and in other population groups not tested in the study.

For example, an editorial written by doctors in the US and published in the same journal said, "These new data should prompt a major reassessment of the role of male circumcision.” They suggest that maternity health providers have a responsibility to educate mothers and fathers about the benefits of circumcision soon after birth.

However, UK commentators are sceptical. This seems to be because it is unclear how circumcision might protect against STIs. There are several theories for this:

  • Following circumcision, the skin covering the head of the penis becomes tougher and may protect against "microtears" during sex, which can provide a point of entry for germs.
  • The lining of the foreskin, removed during circumcision, may be the point at which germs enter the underlying skin cells.
  • After sex, the foreskin may prolong the amount of time that tender skin is exposed to germs.

Other points to note about this study are that:

  • After six months, reported condom use was higher in the circumcision group than in the control group (P<0.001). This difference in behaviour could itself have contributed to the lower infection rates seen with circumcision.
  • About 18% of men from both groups were lost to follow-up, died or were enrolled for an insufficient period (less than 24 months) for the analysis. This is a large proportion of those who enrolled, and it is possible that there were differences in the rates of infection between those completing the trial and those who dropped out, which could influence the overall results.
  • One of the commentators’ main concerns over this study is that it was carried out in Uganda, and the results may not be directly applicable to more developed countries. It is also important not to conclude that the results would be the same in other subgroups, such as men who have sex with men, and men who are circumcised as newborns. It could be that the benefits of circumcision differ in different groups.

The differences between the US and UK interpretations of this study may be more cultural than scientific, as circumcision has historically been much more common in the US. More research in areas with a lower prevalence of HIV will be needed in order to test the relevance of this study outside of Uganda.