Archive for January, 2013

Why We Need Labels on Factory-Farmed Food

by Ronnie Cummins

Alternet (January 16 2013)

This article was published in partnership with {2}.

A growing number of organic consumers, natural health advocates and climate hawks are taking a more comprehensive look at the fundamental causes of global warming. And it’s led them to this sobering conclusion: Our modern energy-, chemical- and GMO-intensive industrial food and farming systems are the major cause of man-made global warming.

How did they reach this conclusion? First, by taking a more inclusive look at the scientific data on greenhouse gas (GHG) emissions – not just carbon dioxide (CO2), but also methane and nitrous oxide. Next, by doing a full accounting of the fossil fuel consumption and emissions of the entire industrial food and farming cycle, including inputs, equipment, production, processing, distribution, heating, cooling and waste. And finally, by factoring in the indirect impacts of contemporary agriculture, which include deforestation and wetlands destruction.

When you add it all up, the picture is clear: Contemporary agriculture is burning up our planet. And factory farms or, in industry lingo, Confined Animal Feeding Operations (CAFOs), play a key role in this impending disaster.

The science behind global warming is complex. Without question, coal plants, tar sands and natural gas fracking have contributed heavily to greenhouse gas (GHG) pollution, the major cause of global warming. We must unite to shut down these industries. Similarly, consumer overconsumption of fossil fuels represents another big piece of the climate-crisis equation. We absolutely must rethink, retrofit and/or redesign our gas-guzzling cars and our energy-inefficient buildings, if we want to reduce fossil fuel use by ninety percent over the next few decades.

But we also must address the environmental impact of factory farming.

Today, nearly 65 billion animals worldwide, including cows, chickens and pigs, are crammed into CAFOs. These animals are literally imprisoned and tortured in unhealthy, unsanitary and unconscionably cruel conditions. Sickness is the norm for animals who are confined rather than pastured, and who eat GMO corn and soybeans, rather than grass and forage as nature intended. To prevent the inevitable spread of disease from stress, overcrowding and lack of vitamin D, animals are fed a steady diet of antibiotics. Those antibiotics pose a direct threat to the environment when they run off into our lakes, rivers, aquifers and drinking water.

CAFOs contribute directly to global warming by releasing vast amounts of greenhouse gases into the atmosphere – more than the entire global transportation industry. The air at some factory farm test sites in the US is dirtier than in America’s most polluted cities, according to the Environmental Integrity Project. According to a 2006 report by the Food and Agriculture Organization of the United Nations (FAO), animal agriculture is responsible for eighteen percent of all human-induced greenhouse gas emissions, including 37 percent of methane emissions and 65 percent of nitrous oxide emissions. The methane released by the billions of imprisoned animals on factory farms is seventy times more damaging per ton to the earth’s atmosphere than carbon dioxide.

Indirectly, factory farms contribute to climate disruption by their impact on deforestation and draining of wetlands, and because of the nitrous oxide emissions from huge amounts of pesticides used to grow the genetically engineered corn and soy fed to animals raised in CAFOs. Nitrous oxide pollution is even worse than methane – 200 times more damaging per ton than carbon dioxide. And just as animal waste leaches antibiotics and hormones into ground and water, pesticides and fertilizers also eventually find their way into our waterways, further damaging the environment.

Factory farms aren’t just a disaster for the environment. They’re also ruining our health. A growing chorus of scientists and public health advocates warn that the intensive and reckless use of antibiotics and growth hormones leads to factory-farmed food that contains antibiotic-resistant pathogens, drug residues such as hormones and growth promoters, and “bad fats”. Yet despite these health and environmental hazards, the vast majority of consumers don’t realize that nearly 95% of the meat, dairy and eggs sold in the US come from CAFOs. Nor do most people realize that CAFOs represent a corporate-controlled system characterized by large-scale, centralized, low profit-margin production, processing and distribution systems.

There’s an alternative: A socially responsible, small-scale system created by independent producers and processors focused on local and regional markets. This alternative produces high-quality food, and supports farmers who produce healthy meat, eggs and dairy products using humane methods.

And it’s far easier on the environment.

Consumers can boycott food products from factory farms and choose the more environmentally friendly alternatives. But first, we have to regain the right to know what’s in our food. And that means mandatory labeling, not only of genetically engineered foods, but of the 95 percent of non-organic, non-grass-fed meat, dairy and eggs that are produced on the hellish factory farms that today dominate US food production.

In 2013, a new alliance of organic and natural health consumers, animal welfare advocates, anti-GMO and climate-change activists will tackle the next big food labeling battle: meat, eggs and dairy products from animals raised on factory farms, or CAFOs. This campaign will start with a massive program to educate consumers about the negative impacts of factory farming on the environment, on human health and on animal welfare, and then move forward to organize and mobilize millions of consumers to demand labels on beef, pork, poultry and dairy products derived from these unhealthy and unsustainable so-called “farming” practices.

Opponents and skeptics will ask, “What about feeding the world?” Contrary to popular arguments, factory farming is not a cheap, efficient solution to world hunger. Feeding huge numbers of confined animals actually uses more food, in the form of grains that could feed humans, than it produces. For every 100 food calories of edible crops fed to livestock, we get back just thirty calories in the form of meat and dairy. That’s a seventy percent loss.
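The calorie arithmetic in that paragraph can be checked directly. A minimal sketch, using only the figures the article itself gives (100 calories of crops in, 30 calories of meat and dairy out):

```python
# Feed-to-food conversion, using the article's figures:
# 100 calories of edible crops fed to livestock yield about
# 30 calories of meat and dairy.
calories_in = 100
calories_out = 30

loss = calories_in - calories_out
loss_pct = 100 * loss / calories_in
print(f"Calories lost: {loss} of {calories_in} ({loss_pct:.0f}% loss)")
# -> Calories lost: 70 of 100 (70% loss)
```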

With the earth’s population predicted to reach nine billion by mid-century, the planet can no longer afford this reckless, unhealthy and environmentally disastrous farming system. We believe that once people know the whole truth about CAFOs they will want to make healthier, more sustainable food choices. And to do that, we’ll have to fight for the consumer’s right to know not only what is in our food, but where our food comes from.






GMO Scandal

The Long Term Effects of Genetically Modified Food on Humans

Scientific Tests Must Be Approved by Industry First

by F William Engdahl

Global Research (January 22 2013)

One of the great mysteries surrounding the spread of GMO plants around the world since the first commercial crops were released in the early 1990s in the USA and Argentina has been the absence of independent scientific studies of the possible long-term effects of a diet of GMO plants on humans, or even on rats. Now the real reason has come to light: The GMO agribusiness companies like Monsanto, BASF, Pioneer, Syngenta and others prohibit independent research.

An editorial in the August 2009 issue of the respected American scientific monthly Scientific American reveals the shocking and alarming reality behind the proliferation of GMO products throughout the planet’s food chain since 1994. There are no independent scientific studies published in any reputable scientific journal in the world, for one simple reason: It is impossible to independently verify that GMO crops such as Monsanto’s Roundup Ready soybeans or MON810 maize perform as the company claims, or that, as the company also claims, they have no harmful side effects, because the GMO companies forbid such tests!

That’s right. As a precondition of buying seeds, either to plant for crops or to use in a research study, purchasers must first sign an End User Agreement with the company. For the past decade, the period in which the greatest proliferation of GMO seeds in agriculture has taken place, Monsanto, Pioneer (DuPont) and Syngenta have required anyone buying their GMO seeds to sign an agreement that explicitly forbids use of the seeds in any independent research. Scientists are prohibited from testing a seed to explore under what conditions it flourishes or fails. They cannot compare any characteristics of the GMO seed with those of any other GMO or non-GMO seeds from another company. Most alarming, they are prohibited from examining whether the genetically modified crops lead to unintended side effects, either in the environment or in animals or humans.

The only research permitted to be published in reputable, peer-reviewed scientific journals is research that has been pre-approved by Monsanto and the other GMO industry firms.

The entire process by which GMO seeds have been approved in the United States has been riddled with special-interest corruption. It began with the proclamation by then-President George H W Bush in 1992, at Monsanto’s request, that no special government safety tests of GMO seeds would be conducted, because they were deemed by the President to be “substantially equivalent” to non-GMO seeds. To cite but one example, former Monsanto attorneys were put in charge of the rules governing GMO seeds at the EPA and FDA, and no government tests of GMO seed safety have been carried out to date. All data on GMO safety or performance are provided to the US government by the companies themselves, such as Monsanto. Little wonder that GMO sounds so positive, and that Monsanto and others can falsely claim GMO is the “solution to world hunger”.

In the United States, a group of twenty-four leading university corn insect scientists has written to the US Environmental Protection Agency (EPA) demanding that it force a change to the companies’ censorship practice. It is as if Chevrolet, Tata Motors or Fiat tried to censor comparative crash tests of their cars in Consumer Reports or a comparable consumer publication because they did not like the test results. Except that this concerns the human and animal food chain. The scientists rightly argue to the EPA that food safety and environmental protection “depend on making plant products available to regular scientific scrutiny”.

We should think twice before we eat that next box of American breakfast cereal if the corn used is GMO.


F William Engdahl is author of Seeds of Destruction: The Hidden Agenda of Genetic Manipulation (2007) and Full Spectrum Dominance: Totalitarian Democracy in the New World Order (2009). He may be contacted via his website.


Bad Pharma

Drug research riddled with half truths, omissions, lies

Industry-funded trials are too common, can’t be trusted – and bring pills to market that likely don’t work

by Ben Goldacre

Salon (January 28 2013)

Excerpted from Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (2012) by Ben Goldacre

Sponsors get the answer they want.

Before we get going, we need to establish one thing beyond any doubt: Industry-funded trials are more likely than independently funded trials to produce a positive, flattering result. This is our core premise, and one of the most well-documented phenomena in the growing field of “research about research”. It has also become much easier to study in recent years because the rules on declaring industry funding have become a little clearer.

We can begin with some recent work. In 2010, three researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – and then measured two key features: were they positive, and were they funded by industry? They found over 500 trials in total: 85 percent of the industry-funded studies were positive, but only 50 percent of the government-funded trials were. That’s a very significant difference.

In 2007, researchers looked at every published trial that set out to explore the benefit of a statin. These are cholesterol-lowering drugs which reduce your risk of having a heart attack, and they are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. Once the researchers controlled for other factors (we’ll delve into what this means later), they found that industry-funded trials were twenty times more likely to give results favoring the test drug. Again, that’s a very big difference.

We’ll do one more. In 2006, researchers looked into every trial of psychiatric drugs in four academic journals over a ten-year period, finding 542 trial outcomes in total. Industry sponsors got favorable outcomes for their own drug 78 percent of the time, while independently funded trials only gave a positive result in 48 percent of cases. If you were a competing drug put up against the sponsor’s drug in a trial, you were in for a pretty rough ride: You would only win a measly 28 percent of the time.

These are dismal, frightening results, but they come from individual studies. When there has been lots of research in a field, it’s always possible that someone – like me, for example – could cherry-pick the results and give a partial view. I could, in essence, be doing exactly what I accuse the pharmaceutical industry of doing by only telling you about the studies that support my case while hiding the rest from you.

To guard against this risk, researchers invented the systematic review. In essence a systematic review is simple: Instead of just mooching through the research literature, consciously or unconsciously picking out papers here and there that support your pre-existing beliefs, you take a scientific, systematic approach to the very process of looking for scientific evidence, ensuring that your evidence is as complete and representative as possible of all the research that has ever been done.

Systematic reviews are very, very onerous. In 2003, by coincidence, two were published, both looking specifically at the question we’re interested in. They took all the studies ever published about whether industry funding is associated with pro-industry results. Each took a slightly different approach to finding research papers, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies that had been published in the four years after these two earlier reviews: It found twenty more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.

I am setting out this evidence at length because I want to be absolutely clear that there is no doubt on the issue. Industry-sponsored trials give favorable results, and that is not just my opinion or a hunch from the occasional passing study. This is a very well-documented problem, and it has been researched extensively without anybody stepping out to take effective action, as we shall see.

There is one last study I’d like to tell you about. It turns out that this pattern of industry-funded trials being vastly more likely to give positive results persists even when you move away from published academic papers and look instead at trial reports from academic conferences, where data often appears for the first time (in fact, as we shall see, sometimes trial results only appear at an academic conference, with very little information on how the study was conducted).

Fries and Krishnan studied all the research abstracts presented at the 2001 American College of Rheumatology meetings that reported any kind of trial and acknowledged industry sponsorship in order to find out what proportion had results that favored the sponsor’s drug. There is a small punchline coming, and to understand it we need to talk a little about what an academic paper looks like. In general, the results section is extensive: The raw numbers are given for each outcome and for each possible causal factor, but not just as raw figures. The “ranges” are given, subgroups are perhaps explored, statistical tests are conducted and each detail of the result is described in table form and in shorter narrative form in the text, explaining the most important results. This lengthy process is usually spread over several pages.

In Fries and Krishnan [2004], this level of detail was unnecessary. The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence:

The results from every RCT (45 out of 45) favored the drug of the sponsor.

This extreme finding has a very interesting side effect for those interested in time-saving shortcuts. Since every industry-sponsored trial had a positive result, that’s all you’d need to know about a piece of work to predict its outcome: If it was funded by industry, you could know with absolute certainty that the trial found the drug was great.

How does this happen? How do industry-sponsored trials almost always manage to get a positive result? It is, as far as anyone can be certain, a combination of factors. Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully so that they are more likely to get better on your treatment. You can peek at the results halfway through and stop your trial early if they look good (which is – for interesting reasons we shall discuss – statistical poison). And so on.

But before we get to these fascinating methodological twists and quirks – these nudges and bumps that stop a trial from being a fair test of whether a treatment works or not – there is something very much simpler at hand.

Sometimes drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them. This is not a new problem, and it’s not limited to medicine. In fact, this issue of negative results that go missing in action cuts into almost every corner of science. It distorts findings in fields as diverse as brain imaging and economics, it makes a mockery of all our efforts to exclude bias from our studies, and despite everything that regulators, drug companies and even some academics will tell you, it is a problem that has been left unfixed for decades.

In fact, it is so deep-rooted that even if we were to fix it today – right now, for good, forever, without any flaws or loopholes in our legislation – that still wouldn’t help because we would still be practicing medicine, cheerfully making decisions about which treatment is best, on the basis of decades of medical evidence which is – as you’ve now seen – fundamentally distorted.

But there is a way ahead.

Why missing data matters

Reboxetine is a drug I myself have prescribed. Other drugs had done nothing for this particular patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and I had found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than placebo and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and we agreed it was the right treatment to try next. I signed a prescription saying I wanted my patient to have this drug.

But we had both been misled. In October 2010, a group of researchers were finally able to bring together all the trials that had ever been conducted on reboxetine. Through a long process of investigation – searching in academic journals but also arduously requesting data from the manufacturers and gathering documents from regulators – they were able to assemble all the data, both from trials that were published and from those that had never appeared in academic papers.

When all this trial data was put together it produced a shocking picture. Seven trials had been conducted comparing reboxetine against placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal for doctors and researchers to read. But six more trials were conducted in almost ten times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials were published. I had no idea they existed.

It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: Three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature. But when we saw the unpublished studies, it turned out that patients were more likely to have side effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side effects if they were taking reboxetine rather than one of its competitors.
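The scale of that distortion can be put as a simple proportion, using only the head-to-head patient counts stated above (507 patients in the published trials, 1,657 whose data went unpublished):

```python
# Reboxetine head-to-head trials, patient counts from the text.
published = 507      # patients in the three published trials
unpublished = 1657   # patients whose data never appeared in print

total = published + unpublished
share_published = published / total
print(f"Only {share_published:.0%} of {total} patients' data was published")
# -> Only 23% of 2164 patients' data was published
```

In other words, a doctor reading the literature saw less than a quarter of the evidence, and the flattering quarter at that.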

I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them and I discussed them with the patient. We made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill and, worse, it did more harm than good. As a doctor, I did something which, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.

If you find that amazing, or outrageous, your journey is just beginning. Because nobody broke any law in that situation, reboxetine is still on the market, and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us.

“Publication bias” – the process whereby negative results go unpublished – is endemic throughout the whole of medicine and academia; regulators have failed to do anything about it, despite decades of data showing the size of the problem. But before we get to that research, I need you to feel its implications, so we need to think about why missing data matters.

Evidence is the only way we can possibly know if something works – or doesn’t work – in medicine. We proceed by testing things, as cautiously as we can, in head-to-head trials and gathering together all of the evidence. This last step is crucial: If I withhold half the data from you, it’s very easy for me to convince you of something that isn’t true. If I toss a coin a hundred times, for example, but only tell you about the results when it lands heads-up, I can convince you that this is a two-headed coin. But that doesn’t mean I really do have a two-headed coin. It means I’m misleading you, and you’re a fool for letting me get away with it. This is exactly the situation we tolerate in medicine and always have. Researchers are free to do as many trials as they wish and then choose which ones to publish.
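The coin-toss analogy above is easy to simulate. A minimal sketch (the seed value is arbitrary, chosen only to make the run reproducible):

```python
import random

random.seed(0)  # arbitrary seed, for a reproducible run

# Toss a fair coin 100 times.
flips = [random.choice(["heads", "tails"]) for _ in range(100)]

# Honest report: every outcome is disclosed.
print("Honest heads rate:", flips.count("heads") / len(flips))

# Selective report: only the heads results get "published".
published = [f for f in flips if f == "heads"]
print("Published heads rate:", published.count("heads") / len(published))
# -> Published heads rate: 1.0
```

However fair the coin, the "published" record always shows 100 percent heads, which is exactly the trick selective publication plays on the medical literature.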

The repercussions of this go way beyond simply misleading doctors about the benefits and harms of interventions for patients, and way beyond trials. Medical research isn’t an abstract academic pursuit: It’s about people, so every time we fail to publish a piece of research we expose real, living people to unnecessary, avoidable suffering.


In March 2006, six volunteers arrived at a London hospital to take part in a trial. It was the first time a new drug called TGN1412 had ever been given to humans, and they were paid GBP 2,000 each. Within an hour these six men developed headaches, muscle aches and a feeling of unease. Then things got worse: high temperatures, restlessness, periods of forgetting who and where they were. Soon they were shivering, flushed, their pulses racing, their blood pressure falling. Then, a cliff: one went into respiratory failure, the oxygen levels in his blood falling rapidly as his lungs filled with fluid. Nobody knew why. Another dropped his blood pressure to just 65/40, stopped breathing properly, and was rushed to an intensive care unit, knocked out, intubated, mechanically ventilated. Within a day, all six were disastrously unwell: fluid in their lungs, struggling to breathe, their kidneys failing, their blood clotting uncontrollably throughout their bodies, and their white blood cells disappearing. Doctors threw everything they could at them: steroids, antihistamines, immune-system receptor blockers. All six were ventilated on intensive care. They stopped producing urine; they were all put on dialysis; their blood was replaced, first slowly, then rapidly; they needed plasma, red cells, platelets. The fevers continued. One developed pneumonia. And then the blood stopped getting to their peripheries. Their fingers and toes went flushed, then brown, then black, and then began to rot and die. With heroic effort, all escaped, at least, with their lives.

The Department of Health convened an Expert Scientific Group to try to understand what had happened, and from this two concerns were raised. First: Can we stop things like this from happening again? It’s plainly foolish, for example, to give a new experimental treatment to all six participants at the same time in a “first-in-man” trial if that treatment is a completely unknown quantity. New drugs should be given to participants in a staggered process, slowly, over a day. This idea received considerable attention from regulators and the media.

Less noted was a second concern: Could we have foreseen this disaster? TGN1412 is a molecule that attaches to a receptor called CD28 on the white blood cells of the immune system. It was a new and experimental treatment, and it interfered with the immune system in ways that are poorly understood and hard to model in animals (unlike, say, blood pressure), because immune systems are very variable between different species. But, as the final report found, there was experience with a similar intervention: It had simply not been published. One researcher presented the inquiry with unpublished data on a study he had conducted in a single human subject a full ten years earlier using an antibody that attached to the CD3, CD2 and CD28 receptors. The effects of this antibody had parallels with those of TGN1412, and the subject on whom it was tested had become unwell. But nobody could possibly have known that because these results were never shared with the scientific community. They sat unpublished and unknown when they could have helped save six men from a terrifying, destructive, avoidable ordeal.

That original researcher could not foresee the specific harm he contributed to, and it’s hard to blame him as an individual because he operated in an academic culture where leaving data unpublished was regarded as completely normal. The same culture exists today. The final report on TGN1412 concluded that sharing the results of all first-in-man studies was essential: They should be published, every last one, as a matter of routine. But phase one trial results weren’t published then, and they’re still not published now. In 2009, for the first time, a study was published looking specifically at how many of these first-in-man trials get published and how many remain hidden. They took all such trials approved by one ethics committee over a year. After four years, nine out of ten remained unpublished; after eight years, four out of five were still unpublished.

In medicine, as we shall see time and again, research is not abstract: It relates directly to life, death, suffering and pain. With every one of these unpublished studies, we are potentially exposed, quite unnecessarily, to another TGN1412. Even a huge international news story with horrific images of young men brandishing blackened feet and hands from hospital beds wasn’t enough to get movement because the issue of missing data is too complicated to fit in one sentence.

When we don’t share the results of basic research, such as a small first-in-man study, we expose people to unnecessary risks in the future. Was this an extreme case? Is the problem limited to early, experimental new drugs in small groups of trial participants? No.

In the 1980s, doctors began giving anti-arrhythmic drugs to all patients who’d had a heart attack. This practice made perfect sense on paper: We knew that anti-arrhythmic drugs helped prevent abnormal heart rhythms; we also knew that people who’ve had a heart attack are quite likely to have abnormal heart rhythms; we also knew that often these went unnoticed, undiagnosed and untreated. Giving anti-arrhythmic drugs to everyone who’d had a heart attack was a simple, sensible preventive measure.

Unfortunately, it turned out that we were wrong. This prescribing practice, with the best of intentions, on the best of principles, actually killed people. And because heart attacks are very common, it killed them in very large numbers: well over 100,000 people died unnecessarily before it was realized that the fine balance between benefit and risk was completely different for patients without a proven abnormal heart rhythm.

Could anyone have predicted this? Sadly, yes, they could have. A trial in 1980 tested a new anti-arrhythmic drug, lorcainide, in a small number of men who’d had a heart attack – fewer than 100 – to see if it was any use. Nine out of 48 men on lorcainide died, compared with one out of 47 on placebo. The drug was early in its development cycle, and not long after this study, it was dropped for commercial reasons. Because it wasn’t on the market, nobody even thought to publish the trial. The researchers assumed it was an idiosyncrasy of their molecule and gave it no further thought. If they had published, we would have been much more cautious about trying other anti-arrhythmic drugs on people with heart attacks, and the phenomenal death toll – over 100,000 people in their graves prematurely – might have been stopped sooner. More than a decade later, the researchers finally did publish their results, with a mea culpa, recognizing the harm they had done by not sharing earlier:

When we carried out our study in 1980, we thought that the increased death rate that occurred in the lorcainide group was an effect of chance. The development of lorcainide was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of “publication bias”. The results described here might have provided an early warning of trouble ahead.
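Written out as rates, the trial’s figures (9 of 48 deaths on lorcainide versus 1 of 47 on placebo, as stated above) show just how loud the warning was:

```python
# Lorcainide trial figures from the text: 9/48 deaths on the drug,
# 1/47 on placebo.
drug_deaths, drug_n = 9, 48
placebo_deaths, placebo_n = 1, 47

drug_rate = drug_deaths / drug_n           # 0.1875, i.e. ~18.8%
placebo_rate = placebo_deaths / placebo_n  # ~0.021, i.e. ~2.1%
risk_ratio = drug_rate / placebo_rate

print(f"Death rate on lorcainide: {drug_rate:.1%}")
print(f"Death rate on placebo:   {placebo_rate:.1%}")
print(f"Relative risk: {risk_ratio:.1f}x")
# -> Relative risk: 8.8x
```

A nearly nine-fold mortality difference, sitting in a file drawer for over a decade. (With only 95 patients the estimate is noisy, but the direction of the signal was there to see.)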

This problem of unpublished data is widespread throughout medicine – and indeed the whole of academia – even though the scale of the problem, and the harm it causes, have been documented beyond any doubt. We will see stories on basic cancer research, Tamiflu, cholesterol blockbusters, obesity drugs, antidepressants and more, with evidence that goes from the dawn of medicine to the present day, and data that is still being withheld, right now, as I write, on widely used drugs which many of you reading this book will have taken this morning. We will also see how regulators and academic bodies have repeatedly failed to address the problem.

Because researchers are free to bury any result they please, patients are exposed to harm on a staggering scale throughout the whole of medicine, from research to practice. Doctors can have no idea about the true effects of the treatments they give. Does this drug really work best, or have I simply been deprived of half the data? Nobody can tell. Is this expensive drug worth the money, or have the data simply been massaged? No one can tell. Will this drug kill patients? Is there any evidence that it’s dangerous? No one can tell.

This is a bizarre situation to arise in medicine, a discipline where everything is supposed to be based on evidence and where everyday practice is bound up in medico-legal anxiety. In one of the most regulated corners of human conduct, we’ve taken our eyes off the ball and allowed the evidence driving practice to be polluted and distorted. It seems unimaginable.


Excerpted from Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients by Ben Goldacre. Published by Faber & Faber, an affiliate of Farrar Straus & Giroux. Copyright 2012. Republished with permission of the publisher.

Copyright (c) 2012 Salon Media Group, Inc.


Categories: Uncategorized


An Answer to the 2013 Edge Question: What Should We Be Worried About?

by Xeni Jardin

Edge (January 15 2013)

Each year, literary uber-agent and big idea wrangler John Brockman of Edge poses a new question to an assortment of scientists, writers, and creative minds, and publishes a selection of the responding essays. This year’s question, which came from George Dyson, is “What SHOULD We Be Worried About?”


We worry because we are built to anticipate the future. Nothing can stop us from worrying, but science can teach us how to worry better, and when to stop worrying.

Many people more interesting than me responded – here are the 2013 contributors, and the list includes some amazing minds: Brian Eno, Daniel Dennett, Esther Dyson, George Dyson, David Gelernter, Danny Hillis, Arianna Huffington, Kevin Kelly, Tim O’Reilly, Martin Rees, Bruce Schneier, Bruce Sterling, Sherry Turkle, and Craig Venter, to name just some. And here’s an index of all the essays this year.

Following is the full text of my contribution.



Science Has Not Brought Us Closer To Understanding Cancer



We should be worried that science has not yet brought us closer to understanding cancer.

In December, 1971, President Nixon signed the National Cancer Act, launching America’s “War on Cancer”. Forty-odd years later, like the costly wars on drugs and terror, the war on cancer has not been won.

According to the National Cancer Institute, about 227,000 women were diagnosed with breast cancer in the US in 2012. And rates are rising. More women in America have died of breast cancer in the last two decades than the total number of Americans killed in World War One, World War Two, the Korean War and the Vietnam War, combined.

But military metaphors are not appropriate to describe the experience of having, treating, or trying to cure the disease. Science isn’t war. What will lead us to progress with cancer aren’t better metaphors, but better advances in science.

Why, forty years after this war was declared, has science not led us to a cure? Or to a clearer understanding of causes, prevention? Or to simply more effective and less horrific forms of treatment?

Even so, now is the best time ever to be diagnosed with cancer. Consider the progress made in breast cancer. A generation ago, women diagnosed with breast cancer would have had a prognosis that entailed a much greater likelihood of an earlier death, of more disfigurement, and a much lower quality of life during and after treatment.

Treatment-related side effects such as “chemobrain” are only just now being recognized as a scientifically valid phenomenon. A generation ago, breast cancer patients were told the cognitive impairment they experienced during and after chemotherapy was “all in their heads”, if you will.

Sure, there has been progress. But how much, really? The best that evidence-based medicine can offer for women in 2013 is still poison, cut, burn, then poison some more. A typical regimen for hormone-receptive breast cancer might be chemotherapy, mastectomy and reconstruction, radiation, at least five years of a daily anti-estrogen drug, and a few more little bonus surgeries for good measure.

There are still no guarantees in cancer treatment. The only certainties we may receive from our doctors are the kind no one wants. After hearing “we don’t really know” from surgeons and oncologists countless times as they weigh treatment options, cancer patients eventually get the point. They really don’t know.

We’re still using the same brutal chemo drugs, the same barbaric surgeries, the same radiation blasts as our mothers and grandmothers endured decades ago – with no substantially greater ability to predict who will benefit, and no cure in sight. The cancer authorities can’t even agree on screening and diagnostic recommendations: should women get annual mammograms starting at forty? Fifty? Or no mammograms at all? You’ve come a long way, baby.

Maybe to get to the bottom of our worries, we should just “follow the money”. Because the profit to be made in cancer is in producing cancer treatment drugs, machines, surgery techniques; not in finding a cure, or new ways to look at causation. There is likely no profit in figuring out the links to environmental causes; how what we eat or breathe as a child may cause our cells to mutate, how exposure to radiation or man-made chemicals may affect our risk factors.

What can make you even more cynical is looking at how much money there is to be made in poisoning us. Do the dominant corporations in fast food, chemicals, agri-business, want us to explore how their products impact cancer rates? Isn’t it cheaper for them to simply pinkwash “for the cause” every October?

And for all the nauseating pink-ribbon feel-good charity hype (an industry in and of itself!), few breast cancer charities are focused on determining causation, or funneling a substantial portion of donations to actual research and science innovation.

Genome-focused research holds great promise, but funding for this science at our government labs, NIH and NCI, is harder than ever for scientists to secure. Why hasn’t the Cancer Genome Atlas yielded more advances that can be translated now into more effective therapies?

Has the profit motive that drives our free-market society skewed our science? If we were to reboot the “War on Cancer” today, with all we now know, how and where would we begin?

The research and science that will cure cancer will not necessarily be done by big-name cancer hospitals or by big pharma. It requires a new way of thinking about illness, health, and science itself. We owe this to the millions of people who are living with cancer – or more to the point, trying very hard not to die from it. I know, I am one of them.

– Xeni Jardin, January 2013, for Edge


Boing Boing editor/partner and tech culture journalist Xeni Jardin hosts and produces Boing Boing’s in-flight TV channel on Virgin America airlines (#10 on the dial), and writes about living with breast cancer. Diagnosed in 2011. @xeni on Twitter. email:


What We Should Fear

by Gary Marcus

The New Yorker (January 15 2013)

Each December for the past fifteen years, the literary agent John Brockman has pulled out his Rolodex and asked a legion of top scientists and writers to ponder a single question: What scientific concept would improve everybody’s cognitive tool kit? (Or: What have you changed your mind about?) This year, Brockman’s panelists (myself included) agreed to take on the subject of what we should fear. There’s the fiscal cliff, the continued European economic crisis, the perpetual tensions in the Middle East. But what about the things that may happen in twenty, fifty, or a hundred years? The premise, as the science historian George Dyson put it, is that “people tend to worry too much about things that it doesn’t do any good to worry about, and not to worry enough about things we should be worrying about”. A hundred fifty contributors wrote essays for the project. The result is a recently published collection, “What SHOULD We Be Worried About?”, available without charge at John Brockman’s Edge website.

A few of the essays are too glib; it may sound comforting to say that “the only thing we need to worry about is worry itself” (as several contributors suggested), but anybody who has lived through Chernobyl or Fukushima knows otherwise. Surviving disasters requires contingency plans, and so does avoiding them in the first place. But many of the essays are insightful, and bring attention to a wide range of challenges for which society is not yet adequately prepared.


One set of essays focusses on disasters that could happen now, or in the not-too-distant future. Consider, for example, our ever-growing dependence on the Internet. As the philosopher Daniel Dennett puts it:

We really don’t have to worry much about an impoverished teenager making a nuclear weapon in his slum; it would cost millions of dollars and be hard to do inconspicuously, given the exotic materials required. But such a teenager with a laptop and an Internet connection can explore the world’s electronic weak spots for hours every day, almost undetectably at almost no cost and very slight risk of being caught and punished.

As most Internet experts realize, the Internet is pretty safe from natural disasters because of its redundant infrastructure (meaning that there are many pathways by which any given packet of data can reach its destination) but deeply vulnerable to a wide range of deliberate attacks, either by censoring governments or by rogue hackers. (Writing on the same point, George Dyson makes the excellent suggestion of calling for a kind of emergency backup Internet, “assembled from existing cell phones and laptop computers”, which would allow the transmission of text messages in the event that the Internet itself was brought down.)

We might also worry about demographic shifts. Some are manifest, like the graying of the population (mentioned in Rodney Brooks’s essay) and the decline in the global birth rate (highlighted by Matt Ridley, Laurence Smith, and Kevin Kelly). Others are less obvious. The evolutionary psychologist Robert Kurzban, for example, argues that the rising gender imbalance in China (due to the combination of early-in-pregnancy sex-determination, abortion, the one-child policy, and a preference for boys) is a growing problem that we should all be concerned about. As Kurzban puts it, by some estimates, by 2020 “there will be thirty million more men than women on the mating market in China, leaving perhaps up to fifteen percent of young men without mates”. He also notes that “cross-national research shows a consistent relationship between imbalanced sex ratios and rates of violent crime. The higher the fraction of unmarried men in a population, the greater the frequency of theft, fraud, rape, and murder.” This in turn tends to lead to a lower GDP, and, potentially, considerable social unrest that could ripple around the world. (The same of course could happen in any country in which prospective parents systematically impose a preference for boys.)


Another theme throughout the collection is what Stanford psychologist Brian Knutson called “metaworry”: the question of whether we are psychologically and politically constituted to worry about what we most need to worry about.

In my own essay, I suggested that there is good reason to think that we are not inclined that way, both because of an inherent cognitive bias that makes us focus on immediate concerns (like getting our dishwasher fixed) to the diminishment of our attention to long-term issues (like getting enough exercise to maintain our cardiovascular fitness) and because of a chronic bias toward optimism known as a “just-world fallacy” (the comforting but unrealistic idea that moral actions will invariably lead to just rewards). In a similar vein, the anthropologist Mary Catherine Bateson argues that “knowledgeable people expected an eventual collapse of the Shah’s regime in Iran, but did nothing because there was no pending date. In contrast, many prepared for Y2K because the time frame was so specific.” Furthermore, as the historian of ideas Noga Arikha puts it, “our world is geared at keeping up with a furiously paced present with no time for the complex past”, leading to a cognitive bias that she calls “presentism”.

As a result, we often move toward the future with our eyes too tightly focussed on the immediate to care much about what might happen in the coming century or two – despite potentially huge consequences for our descendants. As Knutson says, his metaworry

is that actual threats [to our species] are changing much more rapidly than they have in the ancestral past. Humans have created much of this environment with our mechanisms, computers, and algorithms that induce rapid, “disruptive”, and even global change. Both financial and environmental examples easily spring to mind …  Our worry engines [may] not retune their direction to focus on these rapidly changing threats fast enough to take preventative action.

The cosmologist Max Tegmark wondered what will happen “if computers eventually beat us at all tasks, developing superhuman intelligence?” As Tegmark notes, there is “little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations”. That so-called singularity – machines becoming smarter than people – could be, as he puts it, “the best or worst thing ever to happen to life as we know it, so if there’s even a one percent chance that there’ll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least one percent of our GDP studying the issue and deciding what to do about it”. Yet, “we largely ignore it, and are curiously complacent about life as we know it getting transformed”.

The sci-fi writer Bruce Sterling tells us not to be afraid, because

Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain”. A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.

But Sterling’s optimism has little to do with reality. One leading artificial intelligence researcher recently told me that there was roughly a trillion dollars “to be made as we move from keyword search to genuine [artificial intelligence] question answering based on the web”. Google just hired Ray Kurzweil to ramp up their investment in artificial intelligence, and although nobody has yet built a machine with the computational power of the human brain, at least three separate groups are actively trying, with many parties expecting success sometime in the next century.


Edison certainly didn’t envision electric guitars, and even after the basic structure of the Internet had been in place for decades, few people foresaw Facebook or Twitter. It would be a mistake for any of us to claim that we know exactly what a world full of robots, 3-D printers, biotech, and nanotechnology will bring. But, at the very least, we can take a long, hard look at our own cognitive limitations (in part through increased training in metacognition and rational decision-making), and significantly increase the currently modest amount of money we invest in research in how to keep our future generations safe from the risks of future technologies.


Gary Marcus, a professor at New York University and author of Guitar Zero: The Science of Becoming Musical at Any Age (2012), has written for The New Yorker about the future of employment in the robot era, the facts and fictions of neuroscience, moral machines, Noam Chomsky, and what needs to be done to clean up science.


History and Contingency

An Answer to the 2013 Edge Question: What SHOULD We Be Worried About?

by Paul Kedrosky

Edge (January 26 2013)

How many calls to a typical US fire department are actually about fires? Less than twenty percent. If fire departments aren’t getting calls about fires, what are they mostly getting calls about? They are getting calls about medical emergencies, traffic accidents, and, yes, cats in trees, but they are rarely being called about fires. They are, in other words, organizations that, despite their name, deal with everything but fires.

Why, then, are they called fire departments? Because of history. Cities used to be built out of pre-combustion materials – wood straight from the forest, for example – but they are now mostly built of post-combustion materials – steel, concrete, and other materials that have passed through flame. Fire departments were created when fighting fires was an urgent urban need, and now their name lives on, a reminder of their host cities’ combustible past.

Everywhere you look you see fire departments. Not, literally, fire departments, but organizations, technologies, institutions and countries that, like fire departments, are long beyond their “past due” date, or weirdly vestigial, and yet remain widespread and worryingly important.

One of my favorite examples comes from the siting of cities. Many US river cities are where they are because of portages, the carrying of boats and cargo around impassable rapids. This meant, many times, overnight stays, which led to hotels, entertainment, and, eventually, local industry, at first devoted to shipping, but then broader. Now, however, those portage cities are prisoners of history, sitting along rivers that no longer matter for their economy, meanwhile struggling with seasonal floods and complex geographies antithetical to development – all because a few early travelers using transportation technologies that no longer matter today had to portage around a few rapids. To put it plainly, if we rebooted right now most of these cities would be located almost anywhere else first.

This isn’t just about cities, or fire departments. This is about history, paths, luck, and “installed base” effects. Think of incandescent bulbs. Or universities (or tenure). Paper money. The post office. These are all examples of organizations or technologies that persist, largely for historical reasons, not because they remain the best solution to the problem for which they were created. They are often obstacles to much better solutions.

It is obvious that this list will get longer in the near future. Perhaps multilane freeways join the list, right behind the internal combustion engine. Or increasingly costly and dysfunctional public markets. Malls as online commerce casualties. Or even venture capitalists in the age of Angellist and Kickstarter. How about geography-based citizenship. All of these seem vaguely ossified, like they are in the way, even if most people aren’t noticing – yet.

But this is not a list-making game. This is not some Up With Technology exercise where we congratulate ourselves at how fast things are changing. This is the reverse. History increasingly traps us, creating paths – and endowments and costs, both in time and money – that must be traveled before we can change directions, however desirable those new directions might seem. History – the path by which we got here, and the endowments and effluvia it has left us – is an increasingly large weight on our progress. Our built environment is an installed base, like an ancient computer operating system that holds back progress because compatibility gives such an immense advantage.

Writer William Gibson once famously said that “The future is already here – it’s just not very evenly distributed”. I worry more that the past is here – it’s just so evenly distributed that we can’t get to the future.


Paul Kedrosky is Editor of Infectious Greed and Senior Fellow at the Kauffman Foundation.

He is an investor, speaker, writer, media guy, and entrepreneur. Kedrosky founded the first hosted blogging site, GrokSoup, worked as one of the first technology equity analysts at a major brokerage firm, and currently holds more than fifty early-stage public and private equity investments.

He has a PhD in the economics of technology, a master’s degree in finance, and an undergraduate degree in engineering. In his spare time he is a dangerous Twitterer, analyst for CNBC television, and the editor of Infectious Greed, one of the most popular financial blogs available over the Interweb.


America Has Hit “Peak Jobs”

by Jon Evans

TechCrunch (January 26 2013)

“The middle class is being hollowed out”, says James Altucher. “Economists are shifting their attention toward a […] crisis in the United States: the significant increase in income inequality”, reports the New York Times.

Think all those job losses over the last five years were just caused by the recession? No: “Most of the jobs will never return, and millions more are likely to vanish as well, say experts who study the labor market”, according to an AP report on how technology is killing middle-class jobs.

When I was growing up in Canada, I was taught that income distribution should and did look like a bell curve, with the middle class being the bulge in the middle. Oh, how naive my teachers were. This is how income distribution looks in America today:

Income distribution in America, 2011

That big bulge up above? It’s moving up and to the left. America is well on the way towards having a small, highly skilled and/or highly fortunate elite, with lucrative jobs; a vast underclass with casual, occasional, minimum-wage service work, if they’re lucky; and very little in between.

But it won’t be nineteenth century capitalism redux, there’ll be no place for neo-Marxism. That underclass won’t control the means of production. They’ll simply be irrelevant.

Why? Technology. Especially robots. The Atlantic is already wringing its hands over “The End of Labor: How to Protect Workers From the Rise of Robots”. These days robots are in factories everywhere – but soon enough they’ll be doing plenty of service jobs too. Meanwhile, software is eating white-collar jobs.

Well, at least the newly unemployed can still go flip burgers … oh, wait, robots are doing that, too. (And other machines can print the meat. No, really.) No wonder people with jobs increasingly feel they have to work harder and longer.

Of course the robot manufacturers dispute this characterization. “While automation may transform the workforce and eliminate certain jobs, it also creates new kinds of jobs that are generally better paying and that require higher-skilled workers”, says the New York Times.

That’s true, and the usual retort to this kind of Luddism. But what if, as I’ve been saying for more than a year, technology is now destroying jobs faster than it’s creating them? What if America has hit peak jobs?

Here’s your answer: that’s a good thing … in the long run. Job loss isn’t actually a problem in and of itself. Instead it’s a symptom of something much larger.

Step back a minute. Way back. What precisely is the purpose of technological innovation? Why do we want to make things faster, smarter, better, healthier, new? To get rich? Okay: to generate wealth, and ultimately, eliminate scarcity. The endgame, where we’re going as a species if we don’t screw up badly and destroy ourselves or burn out all our resources before we get there, is some kind of post-scarcity society.

Will people have jobs in a post-scarcity society? No. That’s what post-scarcity means. They’ll have things to do, authorities, responsibilities, ambitions, callings, et cetera, but not jobs as we understand them. So if the endgame is a world without jobs, how will we get there? All at once? No: by a slow and inexorable decline of the total number of jobs. Today’s America is just at the edge, the very beginning, of that decline.

Trouble is, America, more than any other nation, is built around the notion that all able-bodied adults should have jobs. That’s going to be a big problem.

Paul Kedrosky recently wrote a terrific essay about what I call cultural technical debt, that is, “organizations or technologies that persist, largely for historical reasons, not because they remain the best solution to the problem for which they were created. They are often obstacles to much better solutions.” Well, the notion that ‘jobs are how the rewards of our society are distributed, and every decent human being should have a job’ is becoming cultural technical debt.

If it’s not solved, then in the coming decades you can expect a self-perpetuating privileged elite to accrue more and more of the wealth generated by software and robots, telling themselves that they’re carrying the entire world on their backs, Ayn Rand heroes come to life, while all the lazy jobless “takers” live off the fruits of their labor. Meanwhile, as the unemployed masses grow ever more frustrated and resentful, the Occupy protests will be a mere candle flame next to the conflagrations to come. It’s hard to see how that turns into a post-scarcity society. Something big will need to change.


The original version of this article, at the URL below, contains several links to further information not included here.
