Archive for June, 2011

The Energy Landscape of 2041

by Michael Klare

TomDispatch (June 26 2011)

Let’s see: today, it’s a story about rising sea levels.  Now, close your eyes, take a few seconds, and try to imagine what word or words could possibly go with such a story.

Time’s up, and if “faster”, “far faster”, “fastest”, or “unprecedented” didn’t come to mind, then the odds are that you’re not actually living on planet Earth in the year 2011.  Yes, a new study came out in the Proceedings of the National Academy of Sciences that measures sea-level rise over the last 2,000 years and – don’t be shocked – it’s never risen faster than now.

Earlier in the week, there was that report on the state of the oceans produced by a panel of leading marine scientists.  Now, close your eyes and try again.  Really, this should be easy.  Just look at the previous paragraph and choose “unprecedented”, and this time pair it with “loss of species comparable to the great mass extinctions of prehistory”, or pick “far faster” (as in “the seas are degenerating far faster than anyone has predicted”), or for a change of pace, how about “more quickly” as in “more quickly than had been predicted” as the “world’s oceans move into ‘extinction’ phase”.

Or consider a third story: Arctic melting.  This time you’re 100% correct!  It’s “faster” again (as in “than the Intergovernmental Panel on Climate Change forecasts” of 2007).  But don’t let me bore you.  I won’t even mention the burning Southwest, or Arizona’s Wallow fire, “the largest in state history”, or Texas’s “unprecedented wildfire season” (now “getting worse”), or the residents of Minot, North Dakota, abandoning their city to “unprecedented” floods, part of a deluge in the northern US that is “unprecedented in modern times”.

It’s just superlatives and records all the way, and all thanks to those globally rising “record” temperatures and all those burning fossil fuels emitting “record” levels of greenhouse gases (“worst ever” in 2010) that so many governments, ours at the very top of the list, are basically ducking.  Now, multiply those fabulous adjectives and superlative events – whether melting, dying, rising, or burning – and you’re heading toward the world of 2041, the one that TomDispatch energy expert and author of Rising Powers, Shrinking Planet (2008) Michael Klare writes about today.  It’s a world where if we haven’t kicked our fossil-fuel habit, we won’t have superlatives strong enough to describe it. Tom

_____

The New Thirty Years’ War

Winners and Losers in the Great Global Energy Struggle to Come

by Michael T Klare

A thirty-year war for energy preeminence?  You wouldn’t wish it even on a desperate planet.  But that’s where we’re headed and there’s no turning back.

From 1618 to 1648, Europe was engulfed in a series of intensely brutal conflicts known collectively as the Thirty Years’ War. It was, in part, a struggle between an imperial system of governance and the emerging nation-state.  Indeed, many historians believe that the modern international system of nation-states was crystallized in the Treaty of Westphalia of 1648, which finally ended the fighting.

Think of us today as embarking on a new Thirty Years’ War.  It may not result in as much bloodshed as that of the 1600s, though bloodshed there will be, but it will prove no less momentous for the future of the planet.  Over the coming decades, we will be embroiled at a global level in a succeed-or-perish contest among the major forms of energy, the corporations which supply them, and the countries that run on them.  The question will be: Which will dominate the world’s energy supply in the second half of the twenty-first century?  The winners will determine how – and how badly – we live, work, and play in those not-so-distant decades, and will profit enormously as a result.  The losers will be cast aside and dismembered.

Why thirty years?  Because that’s how long it will take for experimental energy systems like hydrogen power, cellulosic ethanol, wave power, algae fuel, and advanced nuclear reactors to make it from the laboratory to full-scale industrial development.  Some of these systems (as well, undoubtedly, as others not yet on our radar screens) will survive the winnowing process.  Some will not.  And there is little way to predict how it will go at this stage in the game.  At the same time, the use of existing fuels like oil and coal, which spew carbon dioxide into the atmosphere, is likely to plummet, thanks both to diminished supplies and rising concerns over the growing dangers of carbon emissions.

This will be a war because the future profitability, or even survival, of many of the world’s most powerful and wealthy corporations will be at risk, and because every nation has a potentially life-or-death stake in the contest.  For giant oil companies like BP, Chevron, ExxonMobil, and Royal Dutch Shell, an eventual shift away from petroleum will have massive economic consequences.  They will be forced to adopt new economic models and attempt to corner new markets, based on the production of alternative energy products, or risk collapse or absorption by more powerful competitors.  In these same decades, new companies will arise, some undoubtedly coming to rival the oil giants in wealth and importance.

The fate of nations, too, will be at stake as they place their bets on competing technologies, cling to their existing energy patterns, or compete for global energy sources, markets, and reserves.  Because the acquisition of adequate supplies of energy is as basic a matter of national security as can be imagined, struggles over vital resources – oil and natural gas now, perhaps lithium or nickel (for electric-powered vehicles) in the future – will trigger armed violence.

When these three decades are over, as with the Treaty of Westphalia, the planet is likely to have in place the foundations of a new system for organizing itself – this time around energy needs.  In the meantime, the struggle for energy resources is guaranteed to grow ever more intense for a simple reason: there is no way the existing energy system can satisfy the world’s future requirements.  It must be replaced or supplemented in a major way by a renewable alternative system or, forget Westphalia, the planet will be subject to environmental disaster of a sort hard to imagine today.

The Existing Energy Lineup

To appreciate the nature of our predicament, begin with a quick look at the world’s existing energy portfolio.  According to BP, the world consumed about 12 billion tons of oil-equivalent from all sources in 2010: 33.6% from oil, 29.6% from coal, 23.8% from natural gas, 6.5% from hydroelectricity, 5.2% from nuclear energy, and a mere 1.3% from all renewable forms of energy.  Together, fossil fuels – oil, coal, and gas – supplied 10.4 billion tons, or 87% of the total.
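
As a quick cross-check of those figures, here is a minimal sketch in Python that uses only the shares quoted above to recompute the fossil-fuel share and tonnage:

```python
# Quick arithmetic check of the BP 2010 energy-mix figures quoted above.
# Shares of total primary energy consumption (percent).
shares = {
    "oil": 33.6,
    "coal": 29.6,
    "natural gas": 23.8,
    "hydroelectricity": 6.5,
    "nuclear": 5.2,
    "renewables": 1.3,
}

total_btoe = 12.0  # world consumption, billion tons of oil-equivalent (2010)

fossil_share = shares["oil"] + shares["coal"] + shares["natural gas"]
fossil_btoe = total_btoe * fossil_share / 100

print(f"fossil-fuel share: {fossil_share:.1f}%")        # ~87.0%
print(f"fossil-fuel supply: {fossil_btoe:.1f} billion tons oil-equivalent")  # ~10.4
```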

Even attempting to preserve this level of energy output in thirty years’ time, using the same proportion of fuels, would be a near-hopeless feat.  Achieving a forty percent increase in energy output, as most analysts believe will be needed to satisfy the existing requirements of older industrial powers and rising demand in China and other rapidly developing nations, is simply impossible.

Two barriers stand in the way of preserving the existing energy profile: eventual oil scarcity and global climate change.  Most energy analysts expect conventional oil output – that is, liquid oil derived from fields on land and in shallow coastal waters – to reach a production peak in the next few years and then begin an irreversible decline.  Some additional fuel will be provided in the form of “unconventional” oil – that is, liquids derived from the costly, hazardous, and ecologically unsafe extraction processes involved in producing tar sands, shale oil, and deep-offshore oil – but this will only postpone the contraction in petroleum availability, not avert it.  By 2041, oil will be far less abundant than it is today and so incapable of meeting anywhere near 33.6% of the world’s (much expanded) energy needs.

Meanwhile, the accelerating pace of climate change will produce ever more damage – intense storm activity, rising sea levels, prolonged droughts, lethal heat waves, massive forest fires, and so on – finally forcing reluctant politicians to take remedial action. This will undoubtedly include an imposition of curbs on the release via fossil fuels of carbon dioxide and other greenhouse gases, whether in the form of carbon taxes, cap-and-trade plans, emissions limits, or other restrictive systems as yet not imagined.  By 2041, these increasingly restrictive curbs will help ensure that fossil fuels will not be supplying anywhere near 87% of world energy.

The Leading Contenders

If oil and coal are destined to fall from their position as the world’s paramount source of energy, what will replace them? Here are some of the leading contenders.

Natural gas:  Many energy experts and political leaders view natural gas as a “transitional” fossil fuel because it releases less carbon dioxide and other greenhouse gases than oil and coal.  In addition, global supplies of natural gas are far greater than previously believed, thanks to new technologies – notably horizontal drilling and the controversial procedure of hydraulic fracturing (“fracking”) – that allow for the exploitation of shale gas reserves once considered inaccessible.  For example, in 2011, the US Department of Energy (DoE) predicted that, by 2035, gas would far outpace coal as a source of American energy, though oil would still outpace them both.  Some now speak of a “natural gas revolution” that will see it overtake oil as the world’s number one fuel, at least for a time.  But fracking poses a threat to the safety of drinking water and so may arouse widespread opposition, while the economics of shale gas may, in the end, prove less attractive than currently assumed.  In fact, many experts now believe that the prospects for shale gas have been oversold, and that stepped-up investment will result in ever-diminishing returns.

Nuclear power:  Prior to the March 11th earthquake/tsunami disaster and a series of core meltdowns at the Fukushima Daiichi nuclear power complex in Japan, many analysts were speaking of a nuclear “renaissance”, which would see the construction of hundreds of new nuclear reactors over the next few decades.  Although some of these plants in China and elsewhere are likely to be built, plans for others – in Italy and Switzerland, for example – already appear to have been scrapped.  Despite repeated assurances that US reactors are completely safe, evidence is regularly emerging of safety risks at many of these facilities.  Given rising public concern over the risk of catastrophic accident, it is unlikely that nuclear power will be one of the big winners in 2041.

However, nuclear enthusiasts (including President Obama) are championing the manufacture of small “modular” reactors that, according to their boosters, could be built for far less than current ones and would produce significantly lower levels of radioactive waste.  Although the technology for, and safety of, such “assembly-line” reactors has yet to be demonstrated, advocates claim that they would provide an attractive alternative to both large conventional reactors with their piles of nuclear waste and coal-fired power plants that emit so much carbon dioxide.

Wind and solar: Make no mistake, the world will rely on wind and solar power for a greater proportion of its energy thirty years from now.  According to the International Energy Agency, those energy sources will go from approximately one percent of total world energy consumption in 2008 to a projected four percent in 2035.  But given the crisis at hand and the hopes that exist for wind and solar, this would prove small potatoes indeed.  For these two alternative energy sources to claim a significantly larger share of the energy pie, as so many climate-change activists desire, real breakthroughs will be necessary, including major improvements in the design of wind turbines and solar collectors, improved energy storage (so that power collected during sunny or windy periods can be better used at night or in calm weather), and a far more efficient and expansive electrical grid (so that energy from areas favored by sun and wind can be effectively distributed elsewhere).  China, Germany, and Spain have been making the sorts of investments in wind and solar energy that might give them an advantage in the new Thirty Years’ War – but only if the technological breakthroughs actually come.

Biofuels and algae:  Many experts see a promising future for biofuels, especially as “first generation” ethanol, based largely on the fermentation of corn and sugar cane, is replaced by second- and third-generation fuels derived from plant cellulose (“cellulosic ethanol”) and bio-engineered algae.  Aside from the fact that the fermentation process requires heat (and so consumes energy even while releasing it), many policymakers object to the use of food crops to supply raw materials for a motor fuel at a time of rising food prices.  However, several promising technologies to produce ethanol by chemical means from the cellulose in non-food crops are now being tested, and one or more of these techniques may well survive the transition to full-scale commercial production.  At the same time, a number of companies, including ExxonMobil, are exploring the development of new breeds of algae that reproduce swiftly and can be converted into biofuels.  (The US Department of Defense is also investing in some of these experimental methods with an eye toward transforming the American military, a great fossil-fuel guzzler, into a far “greener” outfit.)  Again, however, it is too early to know which (if any) biofuel endeavors will pan out.

Hydrogen:  A decade ago, many experts were talking about hydrogen’s immense promise as a source of energy.  Hydrogen is abundant in many natural substances (including water and natural gas) and produces no carbon emissions when consumed.  However, it does not exist by itself in the natural world and so must be extracted from other substances – a process that requires significant amounts of energy in its own right, and so is not, as yet, particularly efficient.  Methods for transporting, storing, and consuming hydrogen on a large scale have also proved harder to develop than once imagined.  Considerable research is being devoted to each of these problems, and breakthroughs certainly could occur in the decades to come.  At present, however, it appears unlikely that hydrogen will prove a major source of energy in 2041.

X the Unknown: Many other sources of energy are being tested by scientists and engineers at universities and corporate laboratories worldwide. Some are even being evaluated on a larger scale in pilot projects of various sorts.  Among the most promising of these are geothermal energy, wave energy, and tidal energy.  Each taps into immense natural forces and so, if the necessary breakthroughs were to occur, would have the advantage of being infinitely exploitable, with little risk of producing greenhouse gases.  However, with the exception of geothermal, the necessary technologies are still at an early stage of development.  How long it may take to harvest them is anybody’s guess. Geothermal energy does show considerable promise, but has run into problems, given the need to tap it by drilling deep into the earth, in some cases triggering small earthquakes.

From time to time, I hear of even less familiar prospects for energy production that possess at least some hint of promise.  At present, none appears likely to play a significant role in 2041, but no one should underestimate humanity’s technological and innovative powers.  As with all history, surprise can play a major role in energy history, too.

Energy efficiency:  Given the lack of an obvious winner among competing transitional or alternative energy sources, one crucial approach to energy consumption in 2041 will surely be efficiency at levels unimaginable today: the ability to achieve maximum economic output for minimum energy input.  The lead players three decades from now may be the countries and corporations that have mastered the art of producing the most with the least. Innovations in transportation, building and product design, heating and cooling, and production techniques will all play a role in creating an energy-efficient world.

When the War Is Over

Thirty years from now, for better or worse, the world will be a far different place: hotter, stormier, and with less land (given the loss of shoreline and low-lying areas to rising sea levels).  Strict limitations on carbon emissions will certainly be universally enforced and the consumption of fossil fuels, except under controlled circumstances, actively discouraged.  Oil will still be available to those who can afford it, but will no longer be the world’s paramount fuel.  New powers, corporate and otherwise, in new combinations will have risen with a new energy universe.  No one can know, of course, what our version of the Treaty of Westphalia will look like or who will be the winners and losers on this planet.  That much violence and suffering will have ensued in the intervening thirty years, however, goes without question.  Nor can anyone say today which of the contending forms of energy will prove dominant in 2041 and beyond.

Were I to wager a guess, I might place my bet on energy systems that were decentralized, easy to make and install, and required relatively modest levels of up-front investment.  For an analogy, think of the laptop computer of 2011 versus the giant mainframes of the 1960s and 1970s.  The closer that an energy supplier gets to the laptop model (or so I suspect), the more success will follow.

From this perspective, giant nuclear reactors and coal-fired plants are, in the long run, less likely to thrive, except in places like China where authoritarian governments still call the shots.  Far more promising, once the necessary breakthroughs come, will be renewable sources of energy and advanced biofuels that can be produced on a smaller scale with less up-front investment, and so possibly incorporated into daily life even at a community or neighborhood level.

Whichever countries move most swiftly to embrace these or similar energy possibilities will be the likeliest to emerge in 2041 with vibrant economies – and given the state of the planet, if luck holds, just in the nick of time.

_____

Michael T Klare is a professor of peace and world security studies at Hampshire College, a TomDispatch regular, and the author, most recently, of Rising Powers, Shrinking Planet (2008). A documentary movie version of his previous book, Blood and Oil (2005), is available from the Media Education Foundation.

Copyright 2011 Michael T Klare

(c) 2011 TomDispatch. All rights reserved.

http://www.tomdispatch.com/blog/175409/

The Unraveling of Nuclear Energy

by Tony Pereira

Culture Change (June 27 2011)

About three decades ago, the Swedes considered the risks of nuclear energy, added up the costs and did the math. What they found was that the astronomical amounts the Swedish economy was paying in subsidies to produce electricity from nuclear energy far exceeded what they were getting out of it. Swedes aren’t dumb, and they voted in a national referendum to shut down and decommission all their nuclear reactors by 2010. The Swedish nuclear weapons program had already been terminated early on, when Sweden signed the nuclear non-proliferation treaty in 1968. With two units closed, one in 1999 and another in 2005, Sweden now operates three nuclear facilities, with a total of ten reactors generating about 45% of the country’s total electricity. By the narrowest of margins, only two votes, the Riksdag, currently under a conservative spell, voted in 2009 to allow the replacement of existing reactors only, without any government subsidies and with no new construction permitted. Reactor replacements will not be needed until around 2030, if ever, and the opposition parties, who represent the desires of the clear majority of the population, have already vowed to overturn this legislation.

About a decade ago, Germany arrived at identical conclusions, and the country passed landmark legislation to replace all fossil and nuclear fuels with solar, wind, geothermal and biomass renewable energy by 2030. The effort was led by Dr Hermann Scheer, an elected member of the Bundestag for 28 years and Alternative Nobel Prize laureate, whom I had the great honor of inviting to lecture at UCLA.

Germany is already producing twenty percent of its electricity from solar and wind. The same laws that were passed in Germany have already been approved by 24 member nations of the European Union, and are being considered by some forty other nations around the planet.

The US, French and Japanese nuclear programs are no different. These programs exist only at the expense of hundreds of billions of taxpayer dollars in subsidies, government loan guarantees and tax exemptions, culminating in the US Price-Anderson Act: in case of a nuclear accident, the owner-operator of the nuclear plant is liable for damages of up to about US$12 billion. Any amount above that – well, did you guess right? – becomes public liability: like it or not, we taxpayers foot the bill and pay the damages, whatever they might be. Corporations pocket the profits; the cleanup costs are socialized. What a deal. A single accident could total upwards of US$500 billion and run to US$1 trillion; no one can tell. National and international polls show the public’s opposition: people want their nuclear industries shut down. A recent landslide vote in Italy forced the Berlusconi administration to abandon plans to restart Italy’s nuclear program.

The worst has already happened, not once but at least one hundred times: that is the number of incidents reported at nuclear power plants in the USA alone between 1952 and 2000. The US federal government requires that incidents resulting in the loss of human life or causing more than US$50,000 of property damage be reported to the Nuclear Regulatory Commission (NRC). In that period, a total of US$20.5 billion in property and other damages was reported, including emergency response, environmental remediation, evacuation, lost production, fines, and court claims. At least three of these accidents involved partial core meltdowns.

The most serious of these was the Three Mile Island accident in 1979, with a price tag of US$2.4 billion in damages; the earliest was the Santa Susana partial core meltdown in Simi Valley, California, in 1959. Cleanup of that site is still ongoing, and the final price tag and the cancer effects on the local population are not known, or have simply been ignored. With little or no press coverage or debate, the Davis-Besse Nuclear Generating Station in Oak Harbor, Ohio, was the source of two of the five most dangerous nuclear incidents in the United States since 1979, one of them in 1985, according to findings released by the NRC in 2004.

Between the early nuclear reactor experiments of the 1940s and today, some sixty-odd other accidents have occurred involving nuclear weapons and military nuclear facilities and operations, including loss of life by irradiation and property damage amounting to staggering, untallied totals. No one knows the exact numbers.

In 1986, an explosion followed by fire at one of the four nuclear reactors at Chernobyl, Ukraine, released vast quantities of radioactive materials that were subsequently carried across Europe, over half of the fallout landing on Belarus. The Soviet government enlisted about 800,000 workers to perform brief menial tasks, like dropping a single bag of sand in the highly radioactive areas around the damaged reactor; in exposures lasting ninety seconds at most, a worker could receive hundreds of times the legal yearly radiation limit in one excursion. Hundreds of thousands of people from Belarus, Russia, and Ukraine were evacuated and resettled. Entire cities were abandoned. To date about US$50 billion has been paid in direct costs alone. The total cleanup costs are extremely hard to estimate; they will most likely add up to hundreds of billions – to many, the real reason for the downfall of the Soviet Union.

The IAEA, the International Atomic Energy Agency, a staunch proponent and supporter of nuclear energy, routinely downplayed the material damage and loss of life at Chernobyl. A recent peer-reviewed publication originating from Russia puts the cancer death toll between 1986 and 2004 at a whopping one million human lives. The recent claim by the IAEA that ‘no one will die in Japan’ is nothing but another criminal lie.

Enter Fukushima and 3/11. With six nuclear reactor cores packed in close vicinity to one another in a single facility, it is exceedingly clear that if even a single reactor suffers a minor accident, it becomes exponentially more complex, if not altogether impossible, to maneuver around the plant in any normal way, putting the other reactors at risk of a series of largely predictable cascading events, that is, successive meltdowns.

Three of the nuclear reactors were operating when a whopping magnitude 9.0 earthquake centered offshore hit Japan, and a series of powerful tsunami waves followed shortly thereafter. We now know that the earthquake damaged the cooling systems of the Fukushima plant, and that fuel core meltdown occurred hours before the tsunami hit Japan. Once cooling stops, temperatures rise very fast within the densely packed nuclear fuel rods, and whatever cooling water is left rapidly boils and evaporates.

Zirconium (Zr) is used as the cladding material for nuclear fuel rods because it is nearly transparent to neutrons, the particles needed to sustain nuclear fission reactions. However, Zr is highly reactive and burns violently in air at extremely high temperatures of about 2,400 degrees Fahrenheit, sufficient to melt and vaporize the packed uranium, plutonium and other deadly radioactive isotopes into the air and surrounding areas, from where it becomes impossible to ever recover them. As soon as the coolant water in the boiling water reactor (BWR) evaporates, Zr catches fire and burns intensely. It gets more complicated. At these temperatures zirconium also strips the oxygen from steam, splitting water into its elements and producing hydrogen, and the possibility of a violent explosion becomes very real. That is what happened.
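
The underlying chemistry, added here for reference, is the well-known zirconium-steam reaction, which consumes the cladding, liberates hydrogen, and releases a great deal of additional heat:

$$\mathrm{Zr} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{H_2} + \text{heat}$$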

Thirty kilometers offshore from Fukushima, current radioisotope readings show levels tens of times higher than those measured in the Baltic and Black Seas following the Chernobyl accident. TEPCO, the Tokyo Electric Power Company and Fukushima’s owner/operator, has confirmed that the core fuel rods at the Unit One reactor had melted before the arrival of the tidal wave. By damaging the cooling systems at the Fukushima plant, the earthquake that shook Japan also initiated the early core meltdown of at least one of its reactors. Once radiation begins to be released in huge amounts in and around the plant, things become extremely difficult if not entirely impossible to control, and events run their own course.

TEPCO has now confirmed that there are numerous holes in the containment covering Unit Two, and at least one at Unit One. The global nuclear industry has long argued that containments are virtually impenetrable. They are not. The domes at Fukushima are of very similar design and strength to many in the US {1}.

Virtually all of Japan’s 54 reactors sit on or near earthquake faults, and along the coast where, in addition, they are also vulnerable to tsunamis. After the 3/11 tsunami, Japan shut down 35 of its 54 reactors for safety evaluations. A 2007 earthquake forced seven reactors to shut at Kashiwazaki. Japan has ordered at least two more nuclear reactors shut at Hamaoka because of their seismic vulnerability. Numerous reactors in the United States sit on or near major earthquake faults. Two each at Diablo Canyon and San Onofre, California, are within three miles of major fault lines. So is Indian Point, less than forty miles from Manhattan, New York. Millions of people live within fifty miles of Diablo Canyon, near San Luis Obispo, California, of San Onofre, between San Diego and Los Angeles, California, and of Indian Point, just outside of New York. On January 31 1986, the Perry reactor, 35 miles east of Cleveland on Lake Erie, was damaged by an earthquake rated between 5.0 and 5.5 on the Richter scale, about 200,000 times weaker than the one that struck Fukushima, or the ones that could and eventually will hit the sites in California, New York and elsewhere around the globe.

TEPCO, Fukushima’s owner-operator, has confirmed after months of silence that at least three of the six Fukushima reactors – Units One, Two and Three – have suffered at least partial fuel melts. In at least one case, the fuel has melted through part of the inner containment system, with highly radioactive liquid materials at extremely high temperatures melting through to the reactor floor. A wide range of sources confirm that fission is still going on in at least one Fukushima core. This strongly indicates that the reactors went through complete core melts, not just partial meltdowns. The complete cleanup costs and the number of victims, as at Chernobyl, are extremely hard to estimate; they will likely escalate into the hundreds of billions and, as in Russia, could mean the downfall of Japan. Cancer victims due to the fallout of radioactive isotopes will continue to appear for hundreds, thousands, even hundreds of thousands of years. Fukushima is far from over.

It is beyond a shadow of doubt that these situations are extremely dangerous and difficult, if not impossible, to solve or deal with using the physics, engineering, materials science and chemistry we have. We do not have the technology to safely handle such high levels of concentrated radiation. Let’s repeat this. We do not have the technology to deal with such massive levels of radiation, not now, not anytime soon. The core melts require massive cooling that will send vast quantities of radioactive water into the global ocean food chain, local water tables, surrounding soils, food crops, and the global atmosphere for a long time to come, burdening present and future generations.

These are nothing short of crimes against the planet and crimes against humanity perpetrated by the dissemination of torrents of unchecked lies and falsehoods during a period of decades that began with the end of World War Two. “Electricity from nuclear power will be too cheap to meter”. So the harp played, served with refreshments. It is time to demand an answer, from all those who have been perpetrators and accomplices in these crimes, to the question “Is this the best you can do, lie?” and hold them accountable.

Notes

{1} Is Fukushima Now Ten Chernobyls into the Sea? by Harvey Wasserman (May 26 2011)
http://www.commondreams.org/view/2011/05/26-7

{2} “‘Melt-through’ at Fukushima? / Govt report to IAEA suggests situation worse than meltdown” – official Japanese Government report to The International Atomic Energy Agency: http://www.yomiuri.co.jp/dy/national/T110607005367.htm

“A ‘melt-through’ – when melted nuclear fuel leaks from the bottom of damaged reactor pressure vessels into containment vessels – is far worse than a core meltdown and is the worst possibility in a nuclear accident”.

_____

Used by permission of the author. (c) Copyright Professor Dr Tony Pereira, UCLA ME PhD – All Rights Reserved.

Tony is Founder of the Institute for Sustainable Engineering (ISE), website http://www.ise-now.com/.

http://www.culturechange.org/cms/content/view/745/1/

Modern Money Blog Number Three – Responses

by L Randall Wray

New Economic Perspectives (June 23 2011)

Thank you for comments and questions. Let me divide the responses into several different issues.

1. “Sustainability Conditions” for Government Deficits

I said:

If you want to take a guess at what our “mirror image” in the graph above will look like after economic recovery, I would guess that we will return close to our long-run average: a private sector surplus of two percent of GDP, a current account deficit of three percent of GDP and a government deficit of five percent of GDP. In our simple equation it will look like this:

Private Balance (+2) + Government Balance (-5) + Foreign Balance (+3) = 0.

And so we are back to the concept of zero!

Now I want to be clear, I said nothing to imply these particular sectoral balances would continue on through infinity to the sweet hereafter. What I gave was a contingent statement (what the balances would look like AFTER recovery and if we return to LONG-RUN AVERAGES – that is to say, average stances over the past thirty years or so, taking into account trends – essentially just eye-balling the three sectors graph provided in the blog). I have made no projection that we actually WILL recover, and it is certainly possible that even after recovery our private sector balance will remain in high surplus. Let us say it remains at six percent (which would be higher than average but consistent with an attempt to delever debt – that is, to keep consumption low in order to pay down debt). In that case, and again assuming the foreign balance remains a positive three percent (that is, our current account deficit is three percent) then the government will remain in deficit of nine percent (more or less where it is now). I will not place probabilities on these two outcomes – I think the original statement is more likely – because my main point is simply that taking balances of each of the three sectors, the overall balance must be 0.
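
To make the arithmetic of the identity explicit, here is a minimal sketch in Python, using the illustrative numbers from the scenarios above; given any two sectoral balances, the third is forced by the identity:

```python
# Sectoral balances identity: private + government + foreign = 0 (as shares of GDP).
# Given any two balances, the third follows automatically.

def implied_government_balance(private_balance, foreign_balance):
    """Return the government balance implied by the other two sectors."""
    return -(private_balance + foreign_balance)

# Scenario from the original post: private +2, foreign +3 -> government -5.
print(implied_government_balance(2.0, 3.0))   # -5.0 (a 5% of GDP deficit)

# Alternative scenario: private sector delevers and saves 6% of GDP,
# while the current account deficit stays at 3% (foreign balance +3).
print(implied_government_balance(6.0, 3.0))   # -9.0 (a 9% of GDP deficit)
```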

For those who would like to balance the government budget, the burden is on them to tell me what the implied outcome for the private and foreign sectors will be. If we are not going to return to the disastrous “Goldilocks” outcome with the private sector running deficits, then a huge adjustment will be necessary in the foreign balance in order to have a balanced government budget with a private sector surplus. Virtually none of the deficit hawks consider this. But, again, I would like to hear their explanation for how we will get the current account into surplus.

On to the sustainability conditions. It has become trendy among economist wonks to look at government budget stances to determine whether they could continue forever. Many objections could be raised to such purely mental exercises. An obvious one is that no government has ever lasted forever – and so any such exercise is a waste of time. Okay, that one is not something we want to contemplate. Economist Herb Stein quipped that unsustainable processes will not be sustained. Something will change. That gets us somewhat closer to the problem with such approaches. And finally, if we are dealing with sovereign budget deficits we must first understand WHAT is not sustainable, and what is. That requires sensible exercises. The one that the deficit hysterians propose is not sensible.

Let us first look at a somewhat simpler unsustainable process. Suppose that some guy – we’ll use the name Ramanan – decides to replicate the “Supersize me” experiment (based on the 2004 documentary by Morgan Spurlock). His caloric intake is 5000 calories a day, and he burns 2000 daily. The excess 3000 calories lets him gain roughly one pound of body weight each day. If he weighed 200 pounds on January 1, by the end of the year he weighs 565 pounds. After 100 years he’s up to 36,700 pounds – a bit on the pudgy side. But we don’t stop there. After 100,000 years he weighs some 36,500,000 pounds, and after a few million years, he’s heavy enough to affect the earth’s spin on its axis and its revolutions about the sun. But, according to our policy wonks, that still is not a long enough period – we’ve got to carry this out to infinity, at which point Ramanan is infinite in size, like the universe, and if he is growing faster than the expansion of the universe, the entire rest of the universe will eventually be infinitesimally smaller than Ramanan. I guess he’s become the black hole of the universe (but I’m no physicist or biologist). So, yes, this is unsustainable. Aren’t we all clever?

But would the process actually work that way? Of course not. First, Ramanan is not going to live an infinite number of years; second, he’s either going to blow up (literally) or go on a diet; and third, and most important, his body is going to adjust. As his body mass increases, he will burn more than 2000 calories a day – perhaps he’ll get up to a 5000 calorie a day burn-rate – and his body will use the food in a less efficient manner. So he will stop gaining weight long before he becomes the universe’s black hole. Herb Stein was right.

Our little mental exercise was fundamentally flawed. It assumed a fixed caloric input (flow) and a fixed caloric burn-rate (consumption flow) with the difference between the two accumulating as a stock (weight gain) at a fixed rate (essentially, “savings”). No adjustments to behavior or metabolism are allowed. And then the whole absurd set-up is carried to the ultimate absurdity by the use of infinite horizon projections. Anything carried to a logical absurdity is unsustainable. As you will see, this is the rigged game used by deficit warriors to “prove” the US Federal budget deficit is unsustainable.

The trick used by deficit warriors is similar but with the inputs and outputs reversed. Rather than caloric inputs, we have GDP growth as the input; rather than burning calories, we pay interest; and rather than weight gain as the output we have budget deficits accumulating to government debt outstanding. To rig the little model to ensure it is not sustainable, we have the interest rate higher than the growth rate – just as we had Ramanan’s caloric input at 5000 calories and his burn rate at only 2000 – and this will ensure that the debt ratio grows (just as we ensured that Ramanan’s waistline grew without limit). Let us see how this works.

We will start with a simple example similar to the one used in our blog and response last week. Let us have two sectors, government and private. Our government follows the Goldilocks model, spending less than its income (tax revenue); the private sector by identity runs a deficit (spends more than its income). We know this means the private sector is running up debt, held by the government as its asset (surpluses are realized in the form of private sector IOUs). The private sector must service the debt by paying interest; that of course adds to its deficit (interest is additional spending it must make out of its income). In comparison to our Supersizing Ramanan, the sustainability conditions will be determined by the interest rate paid, the growth rate of income (or GDP), and the deficit of the private sector.

Jamie Galbraith laid out the typical model used to evaluate sustainability of deficit spending as follows:

The key formula is:

Δd = –s + d * [(r – g)/(1 + g)]

Here, d is the starting ratio of debt to GDP, s is the “primary surplus” or budget surplus after deducting net interest payments (as shares of GDP), r is the real interest rate, and g is the real rate of GDP growth. (http://www.levyinstitute.org/publications/?docid=1379)

Now, this is wonky but the key idea is that (given a relation between the primary surplus and starting debt – both as ratios to GDP) so long as the interest rate (r) is above the growth rate (g) the debt ratio is going to grow. (Jamie has put these key terms in “real” – that is inflation adjusted – terms but that really does not matter; we can keep it all in nominal terms since “deflation” by the inflation rate merely reduces all terms by the inflation rate). Note that the starting debt ratio (d) as well as the primary surplus (what the private sector’s budget would be if it did not have to pay interest) also play a role. (Galbraith proves that the starting debt ratio does not matter much – just as Ramanan’s initial weight will not matter since in any case he will grow to an infinite size.) But we do not need to get too hung up in math to see that all things equal if the interest rate is above the growth rate, we get a rising debt ratio. If we carry this through eternity, that ratio gets big. Really big. Okay that sounds bad. And it is. Remember, that is a big part of the reason that the GFC hit – overindebted private sector. The GFC is the equivalent to an explosion of Ramanan that would prevent him from growing to an infinite size. (A debt diet would have been far preferable, but Greenspan and Bernanke opposed “interfering” with Wall Street lender fraud.)
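
To see how the rigging works, here is a minimal simulation sketch of Galbraith’s formula in Python; the parameter values are illustrative assumptions chosen only to contrast the r > g and r < g cases, not forecasts:

```python
# Debt-ratio dynamics from the formula: delta_d = -s + d * (r - g) / (1 + g)
# d: debt/GDP ratio, s: primary surplus (share of GDP),
# r: interest rate, g: growth rate. All values below are illustrative only.

def debt_ratio_path(d0, s, r, g, years):
    """Iterate the debt-to-GDP ratio forward for a given number of years."""
    d = d0
    path = [d]
    for _ in range(years):
        d = d + (-s + d * (r - g) / (1 + g))
        path.append(d)
    return path

# "Rigged" case: interest rate above growth rate -> the ratio grows without limit.
print(debt_ratio_path(d0=0.6, s=0.0, r=0.05, g=0.02, years=50)[-1])   # ~2.6

# Reverse case: growth rate above interest rate -> the ratio shrinks instead.
print(debt_ratio_path(d0=0.6, s=0.0, r=0.02, g=0.05, years=50)[-1])   # ~0.14
```

With the primary balance held at zero, the ratio explodes whenever r exceeds g and shrinks whenever g exceeds r, which is exactly the knife-edge the “unsustainability” exercise relies on.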

Now let us change all this around. Let us say that the government runs a continuous budget deficit while the private sector runs a surplus. We can obtain the same equation. It appears that a continuous government budget deficit implies a continuously rising debt to GDP ratio and surely that is unsustainable. (See the appendix below for more complex math.)

But wait a minute. Is such a mental exercise sensible? We already saw that our supersizing Ramanan is going to adjust: he will diet, explode, increase his metabolism, and reduce the efficiency of his absorption of calories. If he does not explode, he will reach some “equilibrium” in which his intake of calories will equal his burn-rate so that his waistline will stop growing. What about our supersizing government? Here are the possible consequences of a persistent deficit that implies rising interest payments and debt ratios:

1. Inflation: this tends to increase tax revenues so that they grow faster than government spending, thus lowering deficits. (Many, including Galbraith, would point to the tendency to generate “negative” interest rates.) In other words the growth rate will rise above the interest rate, and reverse the dynamics so that the debt ratio stops growing. (That is equivalent to an increase of Ramanan’s caloric burn rate – so he stops growing.)

2. Austerity: government can try to adjust its fiscal stance (increasing taxes and reducing spending to lower its deficit). Of course, it takes “two to tango” – raising tax rates might not change the government’s balance. It could lower growth rates, and thereby actually increase the rate of growth of the debt ratio.

3. The private sector will adjust its flows (spending and saving) in response to the government’s stance. If government continually spends more than its income, it will be adding net wealth to the private sector; and its interest payments will add to private sector income. It is not plausible to believe that as the government’s debt ratio goes toward infinity (which means that the private sector’s net wealth ratio goes to infinity) there is no induced spending in the private sector. That is usually called the “wealth effect”. In other words, government debt is private wealth and as private wealth grows without limit this will eventually cause spending to rise relative to private sector income – reducing government deficits. In addition, private sector income includes government interest payments, so rising government interest payments on its debt could induce consumption. When all is said and done, the private sector will not be happy consuming less than its income flow – given its rising wealth – and will adjust its saving behavior. If the private sector tries to reduce its surpluses, this can be done only by reducing the government sector’s deficits. It takes two to tango and the likely result is that tax revenues and consumption will rise, the government’s deficit will fall, and the private sector’s surplus will fall.

4. Government deficit spending and interest payments could increase the growth rate; it can be pushed above the interest rate. This changes the dynamics and can stop the growth of the debt ratio.

5. The interest rate is a policy variable (as will be discussed in subsequent weeks). Ignoring all the dynamics discussed in the previous points, to avoid an exploding debt ratio, all the government needs to do is to lower the interest rate below the economic growth rate. End of story, sustainability achieved.

Finally, and this is the most contentious point. Suppose none of the dynamics just discussed come into play, so the government’s debt ratio rises on trend. Will a sovereign government be forced to miss an interest payment – no matter how big that becomes? The answer is a simple “no”. It will take weeks of explication of MMT to explain why. But let us put this in the simple terms that Chairman Bernanke used to explain all the Fed spending to bail-out Wall Street: government spends using keystrokes, or, electronic entries on balance sheets. There is no technical or operational limit to its ability to do that. So long as there are keyboard keys to stroke, government can stroke them to produce interest payments credited to balance sheets.

And that finally gets us to the difference between perpetual private sector deficit spending versus perpetual government sector deficits: the first really is unsustainable while the second is not. Now, I want to be clear. We have argued that persistent government budget deficits that increase government debt ratios and thus private wealth ratios will lead to behavioral changes. They could lead to inflation. They could lead to policy changes. Hence, they are not likely to last “forever”. So when I say they are “sustainable” I merely mean in the sense that sovereign government can continue to make all payments as they come due – including interest payments – no matter how big those payments become. It might choose not to make those payments. And the mere act of making those payments will likely cause changes in growth rates, budget deficits, and the growth of debt ratios.

2. “Sustainability” of Current Account ratios

In the quote at the top of this response there was also a contingent statement about US current account deficits. To be more clear (and thus to respond to comments), the current account includes the balance of trade (and, more broadly, the balance between exports and imports) plus some other items including “factor payments” (interest and profits paid and received). For the US, we obviously run a trade deficit (and exports are less than imports), but the factor payments are in our favor (we receive more in profits and interest from abroad than we pay to foreign creditors and owners). In any event, our negative current account balance is offset by a positive capital account balance. To put it simply – there is a “flow” of dollars abroad due to the current account deficit that is matched by the “flow” of dollars back to the US due to our capital account surplus. This is often (misleadingly) presented as US “borrowing” of dollars to “pay for” our trade deficit. We could just as well put it this way: the US imports more than it exports because the rest of the world wants to accumulate savings in dollar-denominated assets. I do not want to go into that in detail since it is the subject of later blogs.

But here’s the question. Is a continuous current account deficit possible? A simple answer is yes, so long as “two want to tango”: if the rest of the world wants dollar assets and Americans want rest of world exports (imported to the US), this will continue. But, hold it, say the worriers. As the rest of the world accumulates dollar claims on the US, they also receive interest payments. That is a factor payment that increases our current account deficit. You can see the relation to the point above about government deficits and interest payments. The world will be flooded with dollars twice over: once from our excessive propensity to import and once from our interest payments on debt.

But here is the interesting point: even though the US is the “biggest debtor on earth”, those factor payments flow in our favor. We pay extremely low interest rates and profit rates to foreigners, and earn much higher interest rates and profits on our holdings of foreign investments and debt. Why is that? Because the US is the safest investment on earth. Anytime there is a financial crisis anywhere in the world, where do international investors run? To the US dollar. Ironically, that happens even when the crisis begins in the US! Why? The US has a sovereign government with a sovereign currency. Its interest rate is set by the Fed, which can always set the rate below the US growth rate (and, indeed, as Galbraith points out, the inflation-adjusted interest rate is often below the “real” growth rate). In spite of the deficit hysteria whipped up by hedge fund billionaire Pete Peterson, no investor in her right mind believes there is any default risk on US Treasury debt. So when global fears rise, investors run to the dollar. This could change, but not in your lifetime.

In short, I make no projection about continued US current account deficits but I believe they will continue far longer than anyone imagines. They are sustainable. They will be sustained until the rest of the world decides not to accumulate more dollars and Americans decide they really do not want the cheap junk and environment-destroying oil produced by the rest of the world. When that will happen, I do not know. It is nothing to lose sleep over. Yes we can calculate “sustainability conditions” but it would just be an exercise in mental masturbation. We’ve already done enough of that. I suppose it is titillating but ultimately unsatisfying.

3. Briefly there were several other points raised

There were some comments about the maldistribution of income as a contributing cause of the private sector deficit. Agreed. There were some questions about stocks and their relations to flows. Some of that is treated above but much more will come in the next series of blogs. There were points made about the need to regulate Wall Street. Yes! There were questions about QE and the difference between helicopter drops and fiscal policy. Put it this way: the Treasury SPENDS money things into existence, the Fed LENDS money things into existence. The first adds income and net wealth, the second only transforms balance sheets. This will be explained in more detail later.

Appendix: See URL for this article, below.

http://neweconomicperspectives.blogspot.com/2011/06/mmp-blog-2-responses_23.html

Santa Isn’t Bringing Gigawatts

by John Michael Greer

The Archdruid Report (June 22 2011)

Through the clouds of wishful thinking that too often make up what we are pleased to call a collective conversation on the subject of energy, a ray of common sense occasionally shines through. This week’s ray came by way of a study on the Earth’s thermodynamic balance, soon to be released in no less a scientific publication than the Proceedings of the Royal Society. The study found among other things that there’s a fairly modest upper limit to the amount of energy that wind farms can extract from the atmosphere without changing the climate {1}.

So far, at least, the peak oil blogosphere hasn’t responded to this study at all. That’s not surprising, since the idea that renewable energy resources might also be subject to environmental limits is about as welcome in most alternative circles these days as a slug in a garden salad. These days, for many people who consider themselves environmentally conscious, a vision of giant wind turbines in serried ranks as far as the eye can see fills a pivotal emotional need; it allows them to pretend, at least to themselves, that it’s possible to support today’s extravagant lifestyles on renewable energy – to have our planet, one might say, and eat it too.

In the real world, things don’t work that way, but we’ve had a long vacation from having to deal with the real world. Three hundred years of ever-increasing production of fossil fuels have misled most of the population of the industrial world into thinking that it’s natural and normal to have as much cheap energy as you want and are willing to pay for. As petroleum production wobbles along a bumpy plateau and approaches the point of irreversible decline, and other fossil fuels move implacably toward their own peaks and declines, one of the prime necessities of sanity and survival involves unlearning the mental habits of the age of abundance, and coming to terms with the fact that all human activities are subject to ecological limits.

It’s as though we’re a bunch of children with very, very short memories, who wake up one morning to find that it’s Christmas Day and there are heaps of presents around the tree. Giddy with excitement, we open one package after another, revel in our shiny new toys, then delight in the holiday atmosphere of the rest of the day. As night falls, we doze off, thinking happily about how there will be another round of presents and another big meal the next day. Then the next day comes, and it’s not Christmas any more; search as we will, the area around the tree stubbornly refuses to yield any more presents, and if we strain our memories as far as they will reach, we might just remember that the other 364 days of the year follow different rules.

Especially in America, but not only in America, a great many people are basically sitting around on the day after Christmas, waiting for Santa Claus to show up with gigawatts of bright shiny new energy in his sack. The people who insist that we can keep our current lifestyles powered with giant wind farms or solar satellites or Bussard fusion reactors or free energy devices – the latter is what they’re calling perpetual motion machines these days, at least the last time I checked – are right in there with the folks who chant “Drill, baby, drill” in the fond belief that poking a hole somewhere in a continent that’s been more thoroughly prospected for oil than any other part of the Earth will somehow oblige the planet to fill ‘er up. I have too much respect for magic to dignify this sort of logic with the label of magical thinking; an initiate whose grasp of occult philosophy was that inept would be chucked out of any self-respecting magical lodge on the spot.

The realization that has to come is the realization that most current chatter about energy is trying desperately to avoid: that Santa isn’t bringing gigawatts or, if you prefer, that no law of nature guarantees us a steady supply of enough energy to maintain the fabulously extravagant habits of the recent past. Once people begin to grasp that the only meaningful answer to the question “What energy resources will allow us to keep the electricity grid running and cars on the road?” is “There aren’t any”, it’s possible to ask a different question – “What energy resources will allow us to provide for the actual necessities and reasonable wants of human beings?” – and get a more useful answer.

That’s more or less the discussion I’ve been trying to further with the posts on energy here in recent months, in the course of surveying those ways of working with energy with which I have some personal experience – conservation first and foremost, but also homescale solar and wind power. There are also plenty of other options that I haven’t worked with personally, and they also deserve to be brought into the discussion.

“Micro-hydro” and “mini-hydro”, for example, are potentially options of great importance in the broad picture of a post-abundance energy future, but they’re not options I’ve explored personally. The “hydro” in each of these phrases, of course, is short for “hydroelectric”; micro-hydro is homescale hydroelectric power, usually produced by diverting a small amount of a stream or river on one’s property through a small turbine and using the latter to spin a generator. Back in the day there was a certain amount of work done with simple undershot waterwheels made from scrap metal, hooked up to truck alternators of the sort discussed in an earlier post on wind; I have no personal experience with how well these worked, but the concept may well be worth revisiting.

Mini-hydro is the next step up, hydroelectric power on the scale of a neighborhood or a rural town. Unlike what I suppose would have to be called mega-hydro, this doesn’t require damming up whole river basins, devastating fish runs, and the like; a small portion of a river’s flow or a small and steep stream provides the water, and the result under most circumstances is a supply of sustainably generated electricity that doesn’t suffer from the intermittency of sun and wind. Of course it depends on having the right kind of water resource close by your community, and that’s a good deal more common in some areas than others; it also requires a good deal more investment up front; but if you can get past those two obstacles, it’s hard to think of a better option.
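
For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python; the head, flow, and efficiency figures are illustrative assumptions, not measurements from any particular site:

```python
# Rough hydroelectric power estimate: P = efficiency * rho * g * Q * H
# rho: water density (kg/m^3), g: gravity (m/s^2),
# Q: flow diverted through the turbine (m^3/s), H: head, i.e. vertical drop (m).
# The flow, head, and efficiency figures below are illustrative assumptions only.

RHO = 1000.0   # kg/m^3
G = 9.81       # m/s^2

def hydro_power_watts(flow_m3_s, head_m, efficiency=0.6):
    """Approximate continuous electrical output of a small hydro setup, in watts."""
    return efficiency * RHO * G * flow_m3_s * head_m

# Homescale micro-hydro: 5 liters per second dropping 10 meters.
print(hydro_power_watts(0.005, 10.0))   # ~294 W, around the clock

# Neighborhood-scale mini-hydro: 0.5 cubic meters per second dropping 20 meters.
print(hydro_power_watts(0.5, 20.0))     # ~59 kW, around the clock
```

Unlike a solar panel or wind turbine of comparable rating, that output runs day and night, which is much of mini-hydro’s appeal.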

Small amounts of electricity can be generated in a variety of other ways. Still, one of the great lessons that has to be grasped is that the thermodynamic costs of turning some other form of energy into electricity, and then turning the electricity back into some other form of energy such as rotary motion or heat, can be ignored only if you’ve got a half billion years or so of stored sunlight to burn. There are situations where those losses are worth accepting, but not that many of them, and if you can leave the energy in its original form and not take it through the detour into electricity, you’re usually better off.
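
To make the cost of that detour concrete, it helps to multiply the conversion efficiencies together. The percentages in the sketch below are rough, commonly quoted ballpark figures and should be read as assumptions rather than measurements:

def chain(*efficiencies):
    """Multiply a chain of conversion efficiencies into one overall figure."""
    overall = 1.0
    for step in efficiencies:
        overall *= step
    return overall

direct_use = chain(0.75)                    # burn the fuel in a reasonably good stove
via_electricity = chain(0.35, 0.93, 0.95)   # thermal power plant, grid, end-use device

print("direct use of the fuel: {:.0%} of the original energy".format(direct_use))
print("detour through electricity: {:.0%} of the original energy".format(via_electricity))
# Under these assumptions the detour delivers roughly 31 percent, less than half of direct use.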

Methane is an example. Methane production from manure on a small scale is a going concern in quite a few corners of the Third World; you need more raw material than a single human family will produce to get a worthwhile amount of gas, but small farms with livestock yield enough manure to keep a small kitchen stove fueled on this very renewable form of natural gas. (The residue still makes excellent raw material for compost, since only the carbon and hydrogen are involved in methane production; the nitrogen, phosphorus, potassium, and other plant nutrients come through the process untouched.) Since cooking fuel is higher on the list of basic human necessities than most things you can do with modest amounts of electricity, this is probably the best use for the technology.
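
A very rough sizing exercise shows why a single household’s waste isn’t enough but a small barnyard is. Every figure below is an assumed ballpark value for illustration only; real biogas yields vary widely with feedstock, temperature, and digester design.

dung_per_animal_kg_day = 12.0      # assumed fresh manure collected per animal per day
biogas_per_kg_dung_m3 = 0.035      # assumed biogas yield per kilogram of fresh dung
cooking_need_m3_day = 1.5          # assumed daily biogas use for a small kitchen stove

gas_per_animal = dung_per_animal_kg_day * biogas_per_kg_dung_m3
animals_needed = cooking_need_m3_day / gas_per_animal

print("{:.2f} cubic metres of biogas per animal per day".format(gas_per_animal))
print("about {:.1f} animals' manure to cover the assumed cooking load".format(animals_needed))
# With these assumptions: roughly 0.4 cubic metres per animal, so three or four animals' worth.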

Flatulence jokes aside, I don’t have any personal experience with small-scale methane production. Wood heat, on the other hand, is a technology I’ve worked with, and it’s probably going to be a major factor in the energy mix in North America in the future. It’s a simple, robust technology that works very well on the home scale – in fact, it’s not too easy to use it on any larger scale – and many wood stoves come with what’s called a waterback, which uses heat from the stove to heat domestic hot water. (Combine solar water heaters with a cooking stove equipped with a waterback, and you’ve basically got your hot water needs covered year round.) The problem here is that wood heat is a major cause of deforestation worldwide; whether or not too much windpower can mess with the climate, as the study referenced earlier in this post suggests, it’s a hard fact that too much harvesting of wood has devastated ecosystems over much of the world and caused a range of nasty blowbacks affecting human as well as biotic communities.

There’s at least one way around that problem, though it needs to be implemented soon and on a large scale. A very old technique called coppicing allows for intensive production of firewood off a fairly small acreage. The trick to coppicing is that quite a few tree species, when cut down, produce several new shoots from the stump; these grow much more rapidly than the original tree, since they have their root system already well in place. When the shoots get to convenient firewood size, the coppicer cuts them again, and yet another set of shoots come up to repeat the process. I’ve dabbled in coppicing – the vine maple of the Pacific Northwest, which grows like a weed and produces decent firewood, made that easy enough, and other regions have their own equivalents. As other fuels run short, the owner of a few acres who uses it for coppicing and sells dry wood nicely sized for wood stoves may have a steady income, or at least a perennial source of barter, on his or her hands.

Biofuels such as ethanol and vegetable oils are another source of heat energy that will probably see a great deal of use in the future, though here again the limits on production are not always recognized. In a world with seven billion mouths to feed and an agricultural system at least as dependent on fossil fuels as any other part of industrial civilization, diverting any substantial portion of farmland from growing food to producing biofuels risks a substantial political backlash. I wonder how many of the proponents of biofuels production have thought through the consequences of a future in which the hazards of driving might just include being stopped by makeshift barricades and torn to pieces by an impoverished mob that is all too aware that every drop of ethanol or biodiesel in the tank represents food taken from the mouths of their children.

Biofuels are likely to play some role in the early stages of the end of the age of abundance, then, but thereafter, at least until the world’s human population and post-petroleum agriculture have settled down into some sort of equilibrium, it’s unlikely that this role will be very extensive. Later on, it’s anyone’s guess, and the answer will be up to the people of the twenty-fourth century and onward, not us.

Methane, wood, and sunlight, then, will probably account for the great majority of heat energy in common use in the centuries immediately ahead of us. What about mechanical energy? The breakthrough that launched the industrial revolution was the discovery that heat from burning coal could be turned into mechanical energy by way of a steam engine, and much of what sets our civilization apart from other civilizations in history is precisely the ability to put almost unimaginable amounts of mechanical energy to work. If a car with a 100-horsepower engine literally had to be pulled by a hundred horses, for example, and each of those horses required the care and feeding that horses do, the number of such cars on the roads would be a very small fraction of the present total.
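
It’s worth putting that comparison in numbers. One mechanical horsepower is 746 watts by definition, but a real horse sustains considerably less than that over a working day; the sustained figure used below is an assumption, included only to show the scale of the gap.

WATTS_PER_HP = 746                  # mechanical horsepower, by definition
engine_hp = 100
sustained_hp_per_horse = 0.6        # assumed output a working horse can keep up all day

engine_watts = engine_hp * WATTS_PER_HP
horses_needed = engine_watts / (sustained_hp_per_horse * WATTS_PER_HP)

print("{:.1f} kilowatts from the engine".format(engine_watts / 1000))
print("roughly {:.0f} horses to match it continuously, under these assumptions".format(horses_needed))
# About 74.6 kW, or on the order of 170 horses; "a hundred horses" actually understates it.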

There are good reasons, some historical and some pragmatic, to think that the major source of mechanical energy in the post-abundance future will be what it was in the pre-abundance past, that is, human and animal muscle, amplified by a variety of clever tools. If anything, some of the more ingenious inventions of the last few centuries make muscle power even more useful now, and in the centuries ahead of us, than it was before the first steam engine hissed and groaned its way into a new age of the world. The extraordinary efficiency with which a bicycle converts muscular effort into movement is a case in point. The relatively simple metallurgy and engineering needed to build a bicycle is very likely to survive into the far future, or to be reinvented after some more or less brief interval, and the sheer value of a technology that can move people and supplies a hundred miles a day on decent roads will hardly be lost on our descendants. It’s far from unlikely, for example, that wars will be won in the post-petroleum era by those nations that have the common sense to equip their infantry with bicycle transport.

More generally, the invention of really effective gears may turn out to be one of the nineteenth century’s great contributions to the future. The Roman world had some very complex machines using cogs and gears, but the designs used at that time did a poor job of transmitting power; gearing systems originally evolved in the late Middle Ages for clockwork underwent dramatic changes once steam power created the need to transfer mechanical motion as efficiently as possible from place to place and from one direction to another. Once invented, effective gears found their way back down the technological pyramid to the realm of hand tools; anyone who has ever compared beating egg whites with a spoon to doing so with a hand-cranked beater will have a very clear idea of the difference in effort that such simple mechanical devices make possible.

That difference may not seem like much in comparison to the gargantuan achievements of current fossil fuel-powered technology, or the even more grandiose fantasies served up by a good many of those who insist that the end of the age of petroleum must, by some kind of technological equivalent of manifest destiny, usher in the beginning of the age of some even more titanic energy resource. Still, if these claims amount to sitting around the chimney on December 26 waiting for Santa’s boots to appear – and I think a very good case can be made for the comparison – it’s past time to shelve the fantasies of limitless energy and the hubris that goes with them, and start paying attention to the tools, technologies, and modest but real energy sources that can actually have a positive impact on human existence in an age when only natural phenomena have gigawatts at their disposal any more.

_____

John Michael Greer is the Grand Archdruid of the Ancient Order of Druids in America {2} and the author of more than twenty books on a wide range of subjects, including The Long Descent: A User’s Guide to the End of the Industrial Age (2008), The Ecotechnic Future: Exploring a Post-Peak World (2009), and The Wealth of Nature: Economics As If Survival Mattered (2011). He lives in Cumberland, Maryland, an old red brick mill town in the north central Appalachians, with his wife Sara.

If you enjoy reading this blog, you might want to check out Star’s Reach {3}, his blog/novel of the deindustrial future. Set four centuries after the decline and fall of our civilization, it uses the tools of narrative fiction to explore the future our choices today are shaping for our descendants tomorrow.

Links:

{1} http://www.newscientist.com/article/mg21028063.300-wind-and-wave-farms-could-affect-earths-energy-balance.html?full=true&print=true

{2} http://www.aoda.org/

{3} http://starsreach.blogspot.com/

http://thearchdruidreport.blogspot.com/2011/06/santa-isnt-bringing-gigawatts.html

Categories: Uncategorized

Three strikes and you’re hot

2011/06/28

Time for Obama to say no to the fossil fuel wish list

by Bill McKibben

Le Monde diplomatique (June 08 2011)

In our globalized world, old-fashioned geography is not supposed to count for much: mountain ranges, deep-water ports, railroad grades – those seem so nineteenth century. The earth is flat, or so I remember somebody saying.

But those nostalgic for an earlier day, take heart. The Obama administration is making its biggest decisions yet on our energy future and those decisions are intimately tied to this continent’s geography. Remember those old maps from your high-school textbooks that showed each state and province’s prime economic activities? A sheaf of wheat for farm country? A little steel mill for manufacturing? These days in North America what you want to look for are the pickaxes that mean mining, and the derricks that stand for oil.

There’s a pickaxe in the Powder River Basin of Montana and Wyoming, one of the world’s richest deposits of coal. If we’re going to have any hope of slowing climate change, that coal – and so all that future carbon dioxide – needs to stay in the ground. In precisely the way we hope Brazil guards the Amazon rainforest, that massive sponge for carbon dioxide absorption, we need to stand sentinel over all that coal.

Doing so, however, would cost someone some money. At current prices the value of that coal may be in the trillions, and that kind of money creates immense pressure. Earlier this year, President Obama signed off on the project, opening a huge chunk of federal land to coal mining. It holds an estimated 750 million tons worth of burnable coal. That’s the equivalent of opening 300 new coal-fired power plants. In other words, we’re talking about staggering amounts of new carbon dioxide heading into the atmosphere to further heat the planet.
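
A back-of-envelope conversion shows how that coal tonnage turns into carbon dioxide. The emission factor below is an assumed round figure for sub-bituminous coal (roughly half carbon by mass, multiplied by the 44/12 mass ratio of CO2 to carbon); treat the whole calculation as illustrative.

coal_tons = 750_000_000
co2_per_ton_coal = 0.5 * (44 / 12)    # assumed: ~1.83 tons of CO2 per ton of coal burned

total_co2_tons = coal_tons * co2_per_ton_coal
print("about {:.1f} billion tons of CO2 if all of it is burned".format(total_co2_tons / 1e9))
# On the order of 1.4 billion tons of carbon dioxide from this lease alone.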

As Eric de Place of the Sightline Institute put it,

That’s more carbon pollution than all the energy – from planes, factories, cars, power plants, et cetera – used in an entire year by all 44 nations in Central America, South America, and the Caribbean combined.

Not what you’d expect from a president who came to office promising that his policies would cause the oceans to slow their rise.

But if Obama has admittedly opened the mine gate, it’s geography to the rescue. You still have to get that coal to market, and “market” in this case means Asia, where the demand for coal is growing fastest. The easiest and cheapest way to do that – maybe the only way at current prices – is to take it west to the Pacific where, at the moment, there’s no port capable of handling the huge increase in traffic it would represent.

And so a mighty struggle is beginning, with regional groups rising to the occasion. Climate Solutions and other environmentalists of the northwest are moving to block port-expansion plans in Longview and Bellingham, Washington, as well as in Vancouver, British Columbia. Since there are only so many possible harbors that could accommodate the giant freighters needed to move the coal, this might prove a winnable battle, though the power of money that moves the White House is now being brought to bear on county commissions and state houses. Count on this: it will be a titanic fight.

Strike two against the Obama administration was the permission it granted early in the president’s term to build a pipeline into Minnesota and Wisconsin to handle oil pouring out of the tar sands of Alberta. (It came on the heels of a Bush administration decision to permit an earlier pipeline from those tar sands deposits through North Dakota to Oklahoma). The vast region of boreal Canada where the tar sands are found is an even bigger carbon bomb than the Powder River coal. By some calculations, the tar sands contain the equivalent of about 200 parts per million carbon dioxide – or roughly half the current atmospheric concentration. Put another way, if we burn it, there’s no way we can control climate change.
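
To see what “about 200 parts per million” means in tonnage, one commonly quoted approximation is that each part per million of atmospheric CO2 corresponds to roughly 2.13 billion tonnes of carbon. Treat both that factor and the arithmetic below as illustrative assumptions, not a precise inventory of the tar sands.

GT_CARBON_PER_PPM = 2.13      # assumed: billion tonnes of carbon per ppm of atmospheric CO2
CO2_PER_CARBON = 44 / 12      # molecular weight ratio of CO2 to carbon

ppm_equivalent = 200
gt_co2 = ppm_equivalent * GT_CARBON_PER_PPM * CO2_PER_CARBON
print("roughly {:.0f} billion tonnes of CO2 locked up in the tar sands, on this estimate".format(gt_co2))
# On the order of 1,500 billion tonnes of carbon dioxide.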

Fortunately, that sludge is stuck so far in the northern wilds of Canada that getting it to a refinery is no easy task. It’s not even easy to get the equipment needed to do the mining to the extraction zone, a fact that noble activists in the northern Rockies are exploiting with a campaign to block the trucks hauling the giant gear north. (Exxon has been cutting trees along wild and scenic corridors just to widen the roads in the region, that’s how big their “megaloads” are.)

Unfortunately, the administration’s decision to permit that Minnesota pipeline has made the job of sending the tar sand sludge south considerably easier. And now the administration is getting ready to double down, with a strike three that would ensure forever Obama’s legacy as a full-on Carbon President.

The huge oil interests that control the tar sands aren’t content with a landlocked pipeline to the Midwest. They want another, dubbed Keystone XL, that stretches from Canada straight to Texas and the Gulf of Mexico. It would take the bitumen from the tar sands and pipe it across the heart of America. Imagine a video game where your goal is to do the most environmental damage possible: to the Cree and their ancestral lands in Canada, to Nebraska farmers trying to guard the Ogallala aquifer that irrigates their land, and of course to the atmosphere.

But the process is apparently politically wired, and in a beautifully bipartisan Washington way. Secretary of State Hillary Clinton must approve the plan for Keystone XL because it crosses our borders. Last year, before she’d even looked at the relevant data, she said she was “inclined” to do so. And why not? I mean, the company spearheading the Keystone project, TransCanada, has helpfully hired her former deputy national campaign director as its principal lobbyist.

Meanwhile, on the other side of the political aisle, those oil barons the Koch Brothers and that fossil fuel front group the US Chamber of Commerce are pushing for early approval. Michigan Republican Congressman Fred Upton, chair of the House Energy Committee, is already demanding that the project be fast-tracked, with a final approval decision by November, on the grounds that it would create jobs. This despite the fact that even the project’s sponsors concede it won’t reduce gas prices. In fact, as Jeremy Symons of the National Wildlife Federation pointed out in testimony to Congress last month, their own documents show that the pipeline will probably cause the price at the pump to rise across the Midwest.

When the smaller pipeline was approved in 2009, we got a taste of the arguments that the administration will use this time around, all masterpieces of legal obfuscation. “Don’t delay the pipeline over mere carbon worries” will be the essence of it.

Global warming concerns, said Deputy Secretary of State James Steinberg then, would be “best addressed in the context of the overall set of domestic policies that Canada and the United States will take to address their respective greenhouse gas emissions”. In other words, let’s confine the environmental argument over the pipeline to questions like: How much oil will leak? In the meantime, we’ll pretend to deal with climate change somewhere else.

It’s the kind of thinking that warms the hearts of establishments everywhere. Michael Levi, author of a Council on Foreign Relations study of the Canadian oil sands, told the Washington Post that, with the decision, “the Obama administration made clear that it’s not going to go about its climate policy in a crude, blunt way”. No, it’s going about it in a smooth and … oily way.

If we value the one planet we’ve got, it’s going to be up to the rest of us to be crude and blunt. And happily that planet is pitching in. The geography of this beautiful North American continent is on our side: it’s crude and blunt, full of mountains and canyons. Its weather runs to extremes. It’s no easy thing to build a pipeline across it, or to figure out how to run an endless parade of train cars to the Pacific.

Tough terrain aids the insurgent; it slows the powerful. Though we’re fighting a political campaign and not a military one, we need to take full advantage.

_____

This article was first published in TomDispatch, 2 June 2011

Bill McKibben is Schumann Distinguished Scholar at Middlebury College, founder of 350.org, and a TomDispatch regular. His most recent book, just out in paperback, is Eaarth: Making a Life on a Tough New Planet (2010).

More by Bill McKibben: http://mondediplo.com/_Bill-McKibben_

http://mondediplo.com/openpage/three-strikes-and-you-re-hot

Categories: Uncategorized

Media Blackout

Was there a Nuclear Incident at Fort Calhoun, Nebraska?

by Patrick Henningsen

21st Century Wire (June 23 2011)

Since flooding began on June 6th, there has been a disturbingly low level of media attention given to the crisis at the Fort Calhoun Nuclear Facility near Omaha, Nebraska. But available evidence strongly suggests that something very serious could have happened there.

Unfortunately for members of the public, there is no shortage of proof that serious nuclear incidents and radiation releases have happened in America, and have been covered up each and every time.

Most Americans are completely unaware that dangerous radiation has leaked from some three-quarters of all US nuclear power stations {1}, a fact that should naturally raise concerns that much of the country’s water supplies may be contaminated. For this reason, it is paramount that the media and the public demand every bit of information available on this latest event.

First accounts tell us that on June 7th, there was a fire {2} reported at Fort Calhoun.  The official story is that the fire was in an electrical switchgear room at the plant.  The facility apparently lost power to a pump that cools the spent fuel rod pool, allegedly for a duration of approximately ninety minutes.

Here are two videos regarding the extent of flooding experienced along the Missouri River in Nebraska:
http://www.youtube.com/watch?feature=player_embedded&v=eGga2sRF9qg
http://www.youtube.com/watch?feature=player_embedded&v=6aWwfiZJ10M

The following sequence of events is documented on the Omaha Public Power District’s own website {3}, which states, among other things, that there was no imminent danger with the Fort Calhoun Station spent-fuel pool, and that due to a fire in an electrical switchgear room at FCS on the morning of June 7, the plant temporarily lost power to a pump that cools the spent-fuel pool.

In addition to the flooding that has occurred on the banks of the Missouri River at Fort Calhoun, the Cooper Nuclear Facility in Brownville, Nebraska may also be threatened by the rising flood waters.

http://21stcenturywire.files.wordpress.com/2011/06/ft-calhoun-nuclear-power-plant.jpg?w=450&h=311

As at Fort Calhoun on June 7th, a “Notification of Unusual Event {4}” was declared at Cooper Nuclear Station on June 20th.  This notification was issued because the Missouri River’s water level reached an alarming 42.5 feet. Apparently, Cooper Station is advising that it is unable to discharge sludge into the Missouri River due to flooding, and has therefore “over-topped” its sludge pond.

Not surprisingly, and completely ignored by the Mainstream Media, these two nuclear power facilities in Nebraska were designated temporary restricted NO FLY ZONES by the FAA in early June.  The FAA restrictions were reportedly due to “hazards” and were “effective immediately” and “until further notice”. Yet, according to the NRC, there’s no cause for the public to panic.

A news report from local NBC 6 on the Fort Calhoun Power Plant and the large areas of farmland flooded by the Missouri River interviews a local farmer worried about the levees: “We need the Corps-Army Corps of Engineers to do more. The Corps needs to tell us what to do and where to go. This is not mother nature, this is man-made.” The nearby town of Council Bluffs has already implemented its own three-tier warning system {5} should residents have to leave the area quickly. Flooding fears would be dwarfed, however, in the event of a radiation leak at one of the region’s nuclear facilities.

To date, it is unknown to members of the public whether or not the incident at Fort Calhoun Nuclear is actually a Level Four emergency (on a US regulatory scale). A Level Four emergency would constitute an “actual or imminent substantial core damage or melting of reactor fuel with the potential for loss of containment integrity”.

If there was any core damage, there is no guarantee that officials would allow such information to be made public for fear of public panic and loss of confidence.

Serious nuclear incidents have taken place on US soil which were covered-up, in some cases for over forty years. “In 1959, a partial meltdown occurred at the Boeing-Rocketdyne {6} nuclear testing facility, about thirty miles northwest of downtown Los Angeles. The incident released the third greatest amount of radioactive iodine in nuclear history. But no one really heard about it until Boeing recently settled a class-action suit filed by local residents”, reported Living On Earth in 2006 {7}. At no point were members of the public informed about these severe radiation leaks which undoubtedly caused hundreds of cases of cancer and contributed to resident deaths. Details of this and other incidents have been kept secret for some forty or more years.

http://www.youtube.com/watch?v=eRdC5I0Yn2k&feature=player_embedded

According to the seven-level {8} International Nuclear and Radiological Event Scale, a Level Four incident requires at least one death, which has not occurred according to available reports.

According to a recent report on the People’s Voice {9} website, the Fort Calhoun plant – which stores its fuel rods at ground level, according to Tom Burnett {10} – is now partly submerged, and Missouri River levels are expected to rise further before the summer is finished. Local reports from in and around the Fort Calhoun Nuclear Plant suggest that the waters will rise at least five more feet.

Burnett states, “Fort Calhoun is the designated spent fuel storage facility for the entire state of Nebraska … and maybe for more than one state. Calhoun stores its spent fuel in ground-level pools which are underwater anyway – but they are open at the top. When the Missouri river pours in there, it’s going to make Fukushima look like an X-Ray.”

The People’s Voice’s report explains how Fort Calhoun and Fukushima share some of the very same high-risk factors:

In 2010, Nebraska stored 840 metric tons of the highly radioactive spent fuel rods, reports the Nuclear Energy Institute {11}. That’s one-tenth of what Illinois stores (8,440 MT), and less than Louisiana (1,210) and Minnesota (1,160). But it’s more than other flood-threatened states like Missouri (650) and Iowa (420).

Nuclear engineer Arnie Gundersen explains how cooling pumps must operate continuously, even years after a plant is shut down: http://www.youtube.com/watch?feature=player_embedded&v=mSvvmrB7qEg

As with the critical event at Fukushima, at Fort Calhoun circulating water is required at all times to keep the new fuel and, more importantly, the spent radioactive material cool. The Nebraska facility houses around 600,000 to 800,000 pounds of spent fuel that must be constantly cooled to prevent it from starting to boil, so the reported ninety-minute gap in service should raise alarm bells.

Conventional wisdom about what makes for a safe location for nuclear power facilities was turned on its head this year by Japan’s Fukushima disaster, after the earthquake and tsunami that ravaged the region and triggered one of the planet’s worst-ever nuclear meltdowns.

TV and radio journalist Thom Hartmann explores some of these arguments here:
http://www.youtube.com/watch?v=TXk3MP4kYpM&feature=player_embedded

In addition, there are eyewitness reports of odd military movements, including unmarked {12} vehicles and soldier movements throughout the region. Should a radiation accident occur, extreme public controls would most certainly be enacted by the military, not least because this region contains some of the country’s key environmental, transportation and military assets.

Angela Tague at Business Gather {13} reports also that the recent Midwest floods may seriously impact food and gas prices.  Lost farmland may be behind the price spike to $7.55 a bushel for corn, already twice last year’s price.  Tague notes also:

Corn is a key ingredient in ethanol gasoline, feeds America’s livestock and is found in many food products including soft drinks and cereal. Prices will undoubtedly increase steadily at the grocery store, gas pump and butcher shop throughout the summer as Midwest flooding continues along the Missouri River basin. Not only are farmers losing their homes, land and fields – ultimately their bank accounts will also suffer this season.

In Summary

The nuclear industry has a very long history of withholding information and misleading the public with regard to the hazards of its industrial activities. One of the lessons we can learn from Japan’s tragic Fukushima disaster is that the government’s choice to impose a media blackout on information around the disaster may have already cost thousands of lives. Only time will tell the scope of the disaster and how many victims it will claim.

More importantly, public officials might do well to reconsider the “safe” and “green” credentials of nuclear power – arguably one of the dirtiest industries in existence today {14}. Especially up for inspection are the credentials of forty- to fifty-year-old facilities like Fort Calhoun in the US, which are, strangely, being re-licensed for operation past 2030. Many of these older facilities contribute little on the electrical production front, and are more or less “bomb factories” that produce enriched material for nuclear weapons, and recycled nuclear waste used in deadly depleted uranium {15} munitions.

When a nuclear facility goes red, it’s game over for the population in the surrounding vicinity. Can we really afford more Fukushima-type events from a government and an industry that keeps its lips so tight?

Links:

{1} http://www.dailymail.co.uk/news/article-2006250/Dangerous-radiation-leaked-quarters-U-S-nuclear-power-plants.html#ixzz1Q6nVjXUg

{2} http://www.nrc.gov/reading-rm/doc-collections/event-status/event/2011/20110608en.html#en46932

{3} http://www.oppd.com/AboutUs/22_007105

{4} http://www.ncnewspress.com/topstories/x1774073316/Cooper-Nuclear-Station-declares-Notification-of-Unusual-Event

{5} http://www.kmtv.com/story/14901048/council-bluffs-prepares-evacuation-plan

{6} http://theintelhub.com/2011/04/15/flashback-a-nuclear-incident-%E2%80%9Cworse-than-three-mile-island%E2%80%9D-covered-up-for-forty-five-years/

{7} http://www.loe.org/shows/segments.html?programID=06-P13-00003&segmentID=1

{8} http://en.wikipedia.org/wiki/International_Nuclear_Event_Scale

{9} http://www.thepeoplesvoice.org/TPV3/Voices.php/2011/06/16/midwest-floods-both-nebraska-nuke-statio

{10} http://www.rense.com/general94/ftcal.htm

{11} http://www.nei.org/filefolder/Used_Nuclear_Fuel_Map_2010.jpg

{12} http://theintelhub.com/2011/06/16/tip-suspicious-troop-movements/

{13} http://business.gather.com/viewArticle.action?articleId=281474979453891

{14} http://21stcenturywire.com/2011/03/15/the-dirty-green-secret-shopsoiled-nuclear-goods/

{15} http://en.wikipedia.org/wiki/Depleted_uranium

_____

Related:

The Dirty Green Secret: Shopsoiled Nuclear Goods
http://21stcenturywire.com/2011/03/15/the-dirty-green-secret-shopsoiled-nuclear-goods/

 

http://21stcenturywire.com/2011/06/22/why-is-there-a-media-blackout-on-nuclear-incident-at-fort-calhoun-in-nebraska/

Categories: Uncategorized

Japan’s ‘throwaway’ nuclear workers

Reuters (June 24 2011)

by Kevin Krolicki and Chisa Fujioka

FUKUSHIMA, JAPAN (Reuters) – A decade and a half before it blew apart in a hydrogen blast that punctuated the worst nuclear accident since Chernobyl, the Number Three reactor at the Fukushima nuclear power plant was the scene of an earlier safety crisis.

Then, as now, a small army of transient workers was put to work to try to stem the damage at the oldest nuclear reactor run by Japan’s largest utility.

At the time, workers were racing to finish an unprecedented repair to address a dangerous defect: cracks in the drum-like steel assembly known as the “shroud” surrounding the radioactive core of the reactor.

But in 1997, the effort to save the 21-year-old reactor from being scrapped at a large loss to its operator, Tokyo Electric, also included a quiet effort to skirt Japan’s safety rules: foreign workers were brought in for the most dangerous jobs, a manager of the project said.

“It’s not well known, but I know what happened”, Kazunori Fujii, who managed part of the shroud replacement in 1997, told Reuters. “What we did would not have been allowed under Japanese safety standards”.

The previously undisclosed hiring of welders from the United States and Southeast Asia underscores the way Tokyo Electric, a powerful monopoly with deep political connections in Japan, outsourced its riskiest work and developed a lax safety culture in the years leading to the Fukushima disaster, experts say.

A 9.0 earthquake on March 11 triggered a fifteen-metre tsunami that smashed into the seaside Fukushima Daiichi plant and set off a series of events that caused its reactors to start melting down.

Hydrogen explosions scattered debris across the complex and sent up a plume of radioactive steam that forced the evacuation of more than 80,000 residents near the plant, about 240 kilometres (150 miles) northeast of Tokyo. Enough radioactive water to fill forty Olympic swimming pools has also been collected at the plant and threatens to leak into the groundwater.

The repeated failures that have dogged Tokyo Electric in the three months the Fukushima plant has been in crisis have undercut confidence in the response to the disaster and dismayed outside experts, given corporate Japan’s reputation for relentless organization.

Hastily hired workers were sent into the plant without radiation meters. Two splashed into radioactive water wearing street shoes because rubber boots were not available. Even now, few have been given training on radiation risks that meets international standards, according to their accounts and the evaluation of experts.

The workers who stayed on to try to stabilize the plant in the darkest hours after March 11 were lauded as the “Fukushima Fifty” for their selflessness. But behind the heroism is a legacy of Japanese nuclear workers facing hazards with little oversight, according to interviews with more than two dozen current and former nuclear workers, doctors and others.

Since the start of the nuclear boom in the 1970s, Japan’s utilities have relied on temporary workers for maintenance and plant repair jobs, the experts said. They were often paid in cash with little training and no follow-up health screening.

This practice has eroded the ability of nuclear plant operators to manage the massive risks workers now face and prompted calls for the Japanese government to take over the Fukushima clean-up effort.

Although almost 9,000 workers have been involved in work around the mangled reactors, Tokyo Electric did not have a Japan-made robot capable of monitoring radiation inside the reactors until this week. That job was left to workers, reflecting the industry’s reliance on cheap labor, critics say.

“I can only think that to the power companies, contract workers are just disposable pieces of equipment”, said Kunio Horie, who worked at nuclear plants, including Fukushima Daiichi, in the late 1970s and wrote about his experience in a book, “Nuclear Gypsy”.

Tokyo Electric said this week it cannot find 69 of the more than 3,600 workers who were brought in to Fukushima just after the disaster because their names were never recorded. Others were identified by Tepco in accident reports only by initials: “A-san” or “B-san”.

Makoto Akashi, executive director at the National Institute of Radiological Sciences near Tokyo, said he was shocked to learn Tokyo Electric had not screened some of the earliest workers for radiation inside their bodies until June while others had to share monitors to measure external radiation.

That means health risks for workers – and future costs – will be difficult to estimate.

“We have to admit that we didn’t have an adequate system for checking radiation exposure”, said Goshi Hosono, an official appointed by Prime Minister Naoto Kan to coordinate the response to the crisis.

‘Broad is the Road that Leads to Destruction’

Fujii, who devoted his career to building Japanese nuclear power plants as a manager with IHI Corporation, was troubled by what he saw at Fukushima in 1997.

Now 72, he remembers falling for “the romance of nuclear power” as a student at Tokyo’s Rikkyo University in the 1960s. “The idea that you could take a substance small enough to fit into a tea cup and produce almost infinite power seemed almost like a dream”, he said.

He had asked to oversee part of the job at Fukushima as the last big assignment of his career. He threw himself into the work, heading into the reactor for inspections. “I had a sense of mission”, he said.

As he watched a group of Americans at work in the reactor one day, Fujii jotted down a Bible verse in his diary that captured his angst: “Wide is the gate and broad is the road that leads to destruction and many enter through it”.

The basis for nuclear safety regulation is the assumption that cancers, including leukemia, can be caused years later by exposure to relatively small amounts of radiation, far below the level that would cause immediate sickness. In normal operations, international nuclear workers are limited to an average exposure of twenty millisieverts per year, about ten times natural background radiation levels.

At Fukushima in 1997, Japanese safety rules were applied in a way that set very low radiation exposure limits on a daily basis, Fujii said. That was a prudent step, safety experts say, but it severely limited what Japanese workers could do on a single shift and increased costs.

The workaround was to bring in foreign workers who would absorb a full-year’s allowable dose of radiation of between twenty millisieverts and 25 millisieverts in just a few days.

“We brought in workers from Southeast Asia and Saudi Arabia who had experience building oil tankers. They took a heavier dose of radiation than Japanese workers could have”, said Fujii, adding that American workers were also hired.

Tokyo Electric would admit five years later that it had hidden evidence of the extent of the defect in the shroud from regulators. That may have added to the pressure to finish the job quickly. When new cracks were found, they were fixed without a report to regulators, according to disclosures made in 2002.

It is not clear if the radiation doses for the foreign workers were recorded on an individual basis or if they have faced any health problems. Tepco said it had no access to the worker records kept by its subcontractors. IHI said it had no record of the hiring of the foreign workers. Toshiba, another major contractor, also said it could not confirm that foreign workers were hired.

Hosono, the government official overseeing the response to the disaster, said he was not aware of foreign workers being brought in to do repair work in the past and they would not be sent in now.

Now retired outside Tokyo, Fujii said he has come to see nuclear power as an “imperfect technology”.

“This is an unfortunate thing to say, but the nuclear industry has long relied on people at the lowest level of Japanese society”, he said.

Pay-By-The-Day

Since the late 1960s, the Kamagasaki neighborhood of Osaka has been a dumping ground for men battling drug and alcohol addiction, ex-convicts, and men looking for a construction job with few questions. It has also been a hiring spot for Japan’s nuclear industry for decades.

“Kamagasaki is a place that companies have always come for workers that they can use and then throw away”, said Hiroshi Inagaki, a labor activist.

The nearby Lawson’s store has a sign on its bathroom door warning that anyone trying to flush a used syringe down the toilet will be prosecuted. Peddlers sell scavenged trash, including used shoes and rice cookers. A pair of yakuza enforcers in black shirts and jeans walks the street to collect loans.

The center of Kamagasaki is an office that connects day laborers with the small construction firms that roll up before dawn in vans and minibuses.

Within a week after the Fukushima disaster, Tepco had engaged Japan’s biggest construction and engineering companies to run the job of trying to bring the plant under control. They in turn hired smaller firms, over 600 of them. That cascade brought the first job offers to Kamagasaki by mid-March.

One hiring notice sought a truck driver for Miyagi, one of the prefectures hit hard by the tsunami. But when an Osaka day laborer in his sixties accepted the job, he was sent instead to Fukushima, where he was put to work handling water to cool the Number Five reactor.

The man, who did not want to be identified, was paid the equivalent of about $300 a day, twice what he was first promised. But he was only issued a radiation meter on his fourth day. Inagaki said the man was seeking a financial settlement from Tokyo Electric. “We think what happened here is illegal”, he said.

Nearby, several men waiting to be hired in Kamagasaki said they had experience working at nuclear plants.

A 58-year-old former member of Japan’s Self Defense Forces from southern Japan who asked to be identified only by his nickname, Jumbo, said he had worked at Tokyo Electric’s Kashiwazaki-Kariwa power plant for a two-month job. He knows others who have gone to Fukushima from the hiring line at Kamagasaki, he said.

“We’ve always had nuclear work here, and I would go again”, he said.

The Abandoned Spa

In Iwaki, a town south of the Fukushima plant once known for a splashy Hawaiian-themed resort, the souvenir stands and coffee shops are closed or losing money. The drinking spots known as “snacks” are starting to come back as workers far from home seek the company of bar girls.

“It’s becoming like an army base”, said Shukuko Kuzumi, 63, who runs a cake shop across from the main rail station. “There are workers who come here knowing what the work is like, but I think there are many who don’t”.

Each morning, hired workers pile into buses and beat-up vans and set out from the nearly abandoned resort. More men in the standard-issue white work pajamas pour out of the shipping containers turned into temporary housing at the Hirono highway exit where residents have fled and weeds have overgrown the sidewalks.

They gather at a now abandoned soccer complex where Argentina’s soccer team trained during the 2002 World Cup to get briefed on the tasks for the shifts ahead. They then change into the gear many have come to dread: two or three pairs of gloves, full face masks, goggles and white protective suits. More than a dozen Fukushima workers have collapsed from heat stroke, and the rising heat weighs more heavily on the minds of workers than the threat of radiation.

“I don’t know how I’m going to make it if it gets much hotter than this”, a heavyset, 36-year-old Tokyo man said as he stretched out at Hirono after a day of spraying a green resin around the plant to keep radioactive dust from spreading.

The risks from the radiation hotspots at Fukushima remain considerable. A vent of steam in the Number One reactor was found earlier this month to be radioactive enough to kill anyone standing near it for more than an hour.

Tokyo Electric has been given a sanction-free reprimand for its handling of radiation exposure at Fukushima. Nine workers have exceeded the emergency exposure limit of 250 millisieverts. Another 115 have exceeded 100 millisieverts of exposure. The two workers with the highest radiation readings topped 600 millisieverts of exposure.

For context, the largest study of nuclear workers to date by the International Agency for Research on Cancer found a risk of roughly two additional fatal cancers for every 100 people exposed to 100 millisieverts of radiation.
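
Applying that coefficient to the exposure figures above gives a sense of how such numbers are used, though the average doses assumed below are invented purely for the sake of the arithmetic, and the result is an illustration of the method, not a prediction about these particular workers.

risk_per_person_per_100_mSv = 2 / 100    # the coefficient quoted above, taken at face value

groups = {
    "workers over 250 mSv (assumed average 300 mSv)": (9, 300),
    "workers over 100 mSv (assumed average 150 mSv)": (115, 150),
}

total_expected = 0.0
for label, (count, assumed_avg_mSv) in groups.items():
    extra = count * risk_per_person_per_100_mSv * (assumed_avg_mSv / 100)
    total_expected += extra
    print("{}: about {:.1f} expected additional fatal cancers".format(label, extra))

print("total under these assumptions: about {:.1f}".format(total_expected))
# Crude linear arithmetic only; individual outcomes cannot be read off a figure like this.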

But several Fukushima workers say that, in training by contractors, they have been told not to worry about health risks unless they top 100 or even approach 200 millisieverts of exposure.

Experts say that runs counter to international standards. The International Atomic Energy Agency requires that workers in a nuclear emergency give “informed consent” to the risks they face and that they understand that danger exists even at low doses.

Tokyo Electric spokesman Junichi Matsumoto said the utility could not confirm what kind of training smaller firms were providing. “The subcontractors have a responsibility as well”, he said. “I don’t know what kind of briefing they are getting”.

Kim Kearfott, a nuclear engineer and radiation health expert from the University of Michigan who toured Japan in May, said authorities needed to ensure that safety training was handled independently by outside experts.

“The potential for coercion and undue influence over a day laborer audience is high, especially when the training and consent are administered by those who control hiring and firing of workers”, she said.

Tokyo Electric has been challenged before on its training. Mitsuaki Nagao, a plumber who had worked at three plants including Fukushima, said he was never briefed on radiation dangers, and would routinely use another worker’s dosimeter to finish jobs. Some doctors worry that the same under-reporting of radiation could happen at Fukushima as well.

Nagao sued Tokyo Electric when he was diagnosed with multiple myeloma, a type of bone marrow cancer, in 2004. His lawsuit, one of two known worker cases against a Japanese utility, was rejected by a Tokyo court, which ruled no links had been proven between his radiation and his illness. He died in 2007.

Some doctors are urging Japan’s government to set up a system of health monitoring for the thousands of workers streaming through Fukushima. Some also want to see a standard of care guaranteed.

“This is also a problem of economics”, said Kristin Shrader-Frechette, a Notre Dame University professor and nuclear safety expert. “If Japan wants to know the true costs of nuclear power versus the alternatives, it needs to know what these health care costs are”.

(Editing by Bill Tarrant)

(c) Thomson Reuters 2011. All rights reserved.

http://uk.reuters.com/article/2011/06/24/japan-nuclear-re-idUKL3E7HO0FE20110624

Categories: Uncategorized