Worst Joke Ever?

US Spy Chief Gives Saudi Prince Highest Award for “Fighting Terrorism”

by Mike Whitney

CounterPunch (February 14 2017)

On Friday, the Director of the CIA, Mike Pompeo, used his first trip abroad to present Saudi Arabia’s Crown Prince Mohammed bin Nayef with the CIA’s highest award for fighting terrorism, the George Tenet Medal. Although the ceremony wasn’t covered by any of the major media, it was picked up on various blog sites, where the news was greeted with predictable howls of outrage. Not surprisingly, most Americans still see Saudi Arabia as the epicenter of global terrorism, a point which was underlined in a recent article at The Atlantic titled “Where America’s Terrorists Actually Come From”. Here’s an excerpt:

… after sifting through databases, media reports, court documents, and other sources, Alex Nowrasteh, an immigration expert at the libertarian Cato Institute, has arrived at a striking finding: Nationals of the seven countries singled out by Trump have killed zero people in terrorist attacks on US soil between 1975 and 2015.

Zero …

Nowrasteh has listed foreign-born individuals who committed or were convicted of attempting to commit a terrorist attack on US soil by their country of origin and the number of people they killed … the countries at the top of the list, including Saudi Arabia and Egypt, are not included in Trump’s ban …

The 9/11 attacks were carried out by nineteen men – from Saudi Arabia (15), the United Arab Emirates (2), Egypt (1), and Lebanon (1). The incident remains influential in how Americans think about the nature of terrorism. {1}

While it’s true that 9/11 has shaped the way that Americans think about terrorism, it’s also true that most people are unaware of the deeper operational relationship between the CIA and the Saudis that dates back to the funding of the Mujahidin in Afghanistan in the 1970s. This is where bin Laden and al-Qaeda first burst onto the scene, which is to say that the sketchy CIA-Saudi connection created the seedbed for the War on Terror. Unfortunately, even now – more than fifteen years after the attacks of 9/11 – the relationship between the notorious intel agency and its Middle East allies remains as foggy as ever. As a result, the Saudis are typically fingered as the main source of the problem while the CIA’s role is conveniently swept under the rug. For example, take a look at this clip from an article in The Independent:

Saudi Arabia is the single biggest contributor to the funding of Islamic extremism and is unwilling to cut off the money supply, according to a leaked note from Hillary Clinton.

The US Secretary of State says in a secret memorandum that donors in the kingdom still “constitute the most significant source of funding to Sunni terrorist groups worldwide” and that “it has been an ongoing challenge to persuade Saudi officials to treat terrorist financing emanating from Saudi Arabia as a strategic priority” …

Saudi Arabia is accused, along with Qatar, Kuwait and the United Arab Emirates (“UAE”), of failing to prevent some of its richest citizens financing the insurgency against Nato troops in Afghanistan. Fund-raisers from the Taliban regularly travel to UAE to take advantage of its weak borders and financial regulation to launder money.

However, it is Saudi Arabia that receives the harshest assessment. The country from which Osama bin Laden and most of the 9/11 terrorists originated remains, according to Mrs Clinton, “a critical financial support base for al-Qa’ida, the Taliban, Lashkar-e-Toiba and other terrorist groups, including Hamas, which probably raise millions of dollars annually from Saudi sources, often during the Haj and Ramadan”. {2}

Then there’s this gem from ex-Vice President Joe Biden:

Biden said that “our biggest problem is our allies” who are engaged in a proxy Sunni-Shiite war against Syrian President Bashar Assad. He specifically named Turkey, Saudi Arabia and the UAE.

“What did they do? They poured hundreds of millions of dollars and thousands of tons of weapons into anyone who would fight against Assad – except that the people who were being supplied were (Jabhat) Al-Nusra and al-Qaeda and the extremist elements of jihadis coming from other parts of the world”, Mr Biden said. {3}

The evidence against Saudi Arabia is overwhelming and damning, and that’s what makes Pompeo’s performance in Riyadh so confusing. Why is the head of the CIA bestowing an award on a man who could undoubtedly identify some of the world’s biggest terrorist donors, unless, of course, the CIA derives some benefit from the arrangement?

Is that it? Is there a quid pro quo between Washington and the Saudis that no one knows about but from which Washington reaps tangible geopolitical benefits?

It’s certainly within the realm of possibility.

Is it too far-fetched to think that the Saudis are actually a franchise that acts as Langley’s primary subcontractor, carrying out operations deemed too sensitive for its own agents while obscuring the Company’s role behind a cloak of plausible deniability? Isn’t that what Friday’s freakishly Orwellian awards ceremony really suggests – that the skullduggery is much darker, deeper and more complicated than anyone would care to imagine?

Washington’s support for the Mujahidin helped to push the Soviets out of Afghanistan, which is why the Brzezinski crowd thought it was a success story. If that’s the case, then isn’t it logical to assume that subsequent administrations might have used the same model elsewhere – in Kosovo, Somalia, Sudan, Libya, Iraq, Syria, and Afghanistan?

Isn’t it at least worth investigating?

And another thing: Is it possible to uncover the root of terror by capturing and interrogating individual terrorists to find out what they know?

No, it’s not possible, because the individual cogs have never revealed the source of the funding streams that originate from within the deep state. Every effort has been made to distance the authors from their illicit handiwork, to remove the tracks and erase the fingerprints. Once again, it’s all about plausible deniability and preventing the public from identifying the real perpetrators. Which means the only way to end this madness is by shedding light on the shadowy goings-on between the intel agencies and their Middle East proxies. There’s no other way.

One thing is certain: you’re not going to win the war on terror by handing out medals to the prime suspects.

Links:

{1} https://www.theatlantic.com/international/archive/2017/01/trump-immigration-ban-terrorism/514361/

{2} http://www.independent.co.uk/news/world/middle-east/saudi-arabia-is-biggest-funder-of-terrorists-2152327.html

{3} http://www.telegraph.co.uk/news/worldnews/middleeast/unitedarabemirates/11142683/Joe-Biden-forced-to-apologise-to-UAE-and-Turkey-over-Syria-remarks.html

http://www.counterpunch.org/2017/02/14/worst-joke-ever-u-s-spy-chief-gives-saudi-prince-highest-award-for-fighting-terrorism/

The Anti-Empire Report #148

by William Blum

https://williamblum.org (February 04 2017)

Why, sometimes I’ve believed as many as six impossible things before breakfast.

– the White Queen, in Lewis Carroll’s Through the Looking-Glass

Since Yalta, we have a long list of times we’ve tried to engage positively with Russia. We have a relatively short list of successes in that regard.

– General James Mattis, the new Secretary of Defense {1}

If anyone knows where to find this long list, please send me a copy.

This delusion is repeated periodically by American military officials. A year ago, following the release of Russia’s new national security document, naming as threats both the United States and the expansion of the Nato alliance, a Pentagon spokesman declared: “They have no reason to consider us a threat. We are not looking for conflict with Russia.” {2}

Meanwhile, in early January, the United States embarked upon its biggest military buildup in Europe since the end of the Cold War – 3,500 American soldiers landed, unloading three shiploads carrying some 2,500 tanks, trucks, and other combat vehicles. The troops were to be deployed in Poland, Romania, Bulgaria, Germany, Hungary and across the Baltics. Lieutenant General Frederick Hodges, commander of US forces in Europe, said, “Three years after the last American tanks left the continent, we need to get them back”.

The measures, General Hodges declared, were a “response to Russia’s invasion of Ukraine and the illegal annexation of Crimea. This does not mean that there necessarily has to be a war, none of this is inevitable, but Moscow is preparing for the possibility.” (See previous paragraph.)

This January 2017 buildup, we are told, is in response to a Russian action in Crimea of early 2014. The alert reader will have noticed that critics of Russia in recent years, virtually without exception, condemn Moscow’s Crimean action and typically nothing else. Could that be because they have nothing else to condemn about Russia’s foreign policy? At the same time they invariably fail to point out what preceded the Russian action – the overthrow, with Washington’s indispensable help, of the democratically-elected, Moscow-friendly Ukrainian government, replacing it with an anti-Russian, neo-fascist (literally) regime, complete with Nazi salutes and swastika-like symbols.

Ukraine and Georgia, both of which border Russia, are all that’s left to complete the US/Nato encirclement. And when the US overthrew the government of Ukraine, why shouldn’t Russia have been alarmed as the circle was about to close yet tighter? Even so, the Russian military appeared in Ukraine only in Crimea, where the Russians already had a military base with the approval of the Ukrainian government. No one could have blocked Moscow from taking over all of Ukraine had it wanted to.

Yet, the United States is right. Russia is a threat. A threat to American world dominance. And Americans can’t shake their upbringing. Here’s veteran National Public Radio newscaster Cokie Roberts bemoaning Trump’s stated desire to develop friendly relations with Russia: “This country has had a consistent policy for seventy years towards the Soviet Union and Russia, and Trump is trying to undo that” {3}. Heavens! Nuclear war would be better than that!

Fake News, Fake Issue

The entire emphasis has been on whether a particular news item is factually correct or incorrect. However, that is not the main problem with mainstream media. A news item can be factually correct and still be very biased and misleading because of what’s been left out, such as the relevant information about the Russian “invasion” of Crimea mentioned above. But when it comes to real fake news it’s difficult to top the CIA’s record in Latin America as revealed by Philip Agee, the leading whistleblower of all time.

Agee spent twelve years (1957-1969) as a CIA case officer, most of it in Latin America. His first book, Inside the Company: CIA Diary (1974), revealed how it was a common Agency tactic to write editorials and phony news stories to be knowingly published by Latin American media with no indication of the CIA authorship or CIA payment to the particular media. The propaganda value of such a “news” item might be multiplied by being picked up by other CIA stations in Latin America, which would disseminate it through a CIA-owned news agency or a CIA-owned radio station. Some of these stories made their way back to the United States to be read or heard by unknowing North Americans.

The Great Wall of Mr T

So much cheaper. So much easier. So much more humane. So much more popular … Just stop overthrowing or destabilizing governments south of the border.

And the United States certainly has a moral obligation to do this. So many of the immigrants are escaping a situation in their homeland made hopeless by American intervention and policy. The particularly severe increase in Honduran migration to the US in recent years is a direct result of the June 28 2009 military coup that overthrew the democratically-elected president, Manuel Zelaya, after he did things like raising the minimum wage, giving subsidies to small farmers, and instituting free education. The coup – like so many others in Latin America – was led by a graduate of Washington’s infamous School of the Americas.

As per the standard Western Hemisphere script, the Honduran coup was followed by the abusive policies of the new regime, loyally supported by the United States. The State Department was virtually alone in the Western Hemisphere in not unequivocally condemning the Honduran coup. Indeed, the Obama administration refused to even call it a coup, which, under American law, would tie Washington’s hands as to the amount of support it could give the coup government. This denial of reality persisted even though a US embassy cable released by Wikileaks in 2010 declared: “There is no doubt that the military, Supreme Court and National Congress conspired on June 28 [2009] in what constituted an illegal and unconstitutional coup against the Executive Branch”. Washington’s support of the far-right Honduran government has continued ever since.

In addition to Honduras, Washington overthrew progressive governments which were sincerely committed to fighting poverty in Guatemala and Nicaragua; while in El Salvador the US played a major role in suppressing a movement striving to install such a government. And in Mexico, over the years the US has been providing training, arms, and surveillance technology to Mexico’s police and armed forces to better their ability to suppress their own people’s aspirations, as in Chiapas in 1994, and this has added to the influx of the oppressed to the United States, irony notwithstanding.

Moreover, Washington’s North American Free Trade Agreement (“Nafta”) has brought a flood of cheap, subsidized US agricultural products into Mexico, ravaging campesino communities and driving many Mexican farmers off the land when they couldn’t compete with the giant from the north. The subsequent Central American Free Trade Agreement (“Cafta”) brought the same joys to the people of that area.

These “free trade” agreements – as they do all over the world – also resulted in government enterprises being privatized, the regulation of corporations being reduced, and cuts to the social budget. Add to this the displacement of communities by foreign mining projects and the drastic US-led militarization of the War on Drugs with accompanying violence and you have the perfect storm of suffering followed by the attempt to escape from suffering.

It’s not that all these people prefer to live in the United States. They’d much rather remain with their families and friends, be able to speak their native language at all times, and avoid the hardships imposed on them by American police and other right-wingers.

Mr T, if one can read him correctly – not always an easy task – insists that he’s opposed to the hallmark of American foreign policy: regime change. If he would keep his Yankee hands off political and social change in Mexico and Central America and donate as compensation a good part of the billions to be spent on his Great Wall to those societies, there could be a remarkable reduction in the never-ending line of desperate people clawing their way northward.

Murders: Putin and the Clintons

Amongst the many repeated denunciations of Russian president Vladimir Putin is that he can’t be trusted because he spent many years in the Soviet secret intelligence service, the KGB.

Well, consider that before he became the US president, George H W Bush was the head of the CIA.

Putin, we are also told, has his enemies murdered.

But consider the case of Seth Rich, the 27-year-old Democratic National Committee staffer who was shot dead on a Washington, DC street last July.

On August 9, in an interview on the Dutch television program Nieuwsuur, Julian Assange seemed to suggest rather clearly that Seth Rich was the source for the Wikileaks-exposed DNC emails and was murdered for it.

Julian Assange: “Our whistleblowers go to significant efforts to get us material and often face very significant risks. A 27-year-old that works for the DNC, was shot in the back, murdered just a few weeks ago for unknown reasons, as he was walking down the street in Washington, DC.”

Reporter: “That was just a robbery, I believe. Wasn’t it?”

Julian Assange: “No. There’s no finding. So … I’m suggesting that our sources take risks”. (See also Washington Post, January 19 2017)

But … but … that was Russian hacking, wasn’t it? Not a leak, right?

If you’ve been paying attention over the years, you know that many other murders have been attributed to the Clintons, beginning in Arkansas. But Bill and Hillary I’m sure are not guilty of all of them. (Google “murders connected clintons”.)

America’s Frightening Shortage of Weapons

President Trump signed an executive order Friday to launch what he called “a ‘great rebuilding of the Armed Forces’ that is expected to include new ships, planes, weapons and the modernization of the US nuclear arsenal”. {4}

This is something regularly advocated by American military and civilian leaders.

I ask them all the same question: Can you name a foreign war that the United States has ever lost due to an insufficient number of ships, planes, tanks, bombs, guns, or ammunition – or an inadequate nuclear arsenal? Or because what they had was outdated, against an enemy with more modern weapons?

That Tired Old Subject

Senator Jeff Sessions, Donald Trump’s pick for Attorney General, declared two years ago: “Ultimately, freedom of speech is about ascertaining the truth. And if you don’t believe there’s a truth, you don’t believe in truth, if you’re an utter secularist, then how do we operate this government? How can we form a democracy of the kind I think you and I believe in … I do believe that we are a nation that, without God, there is no truth, and it’s all about power, ideology, advancement, agenda, not doing the public service.” {5}

So … if one is an atheist or agnostic one is not inclined toward public service. This of course is easily disproved by all the atheists and agnostics who work for different levels of government and numerous non-profit organizations involved in all manner of social, poverty, peace and environmental projects.

Who is the more virtuous – the believer who goes to church and does good deeds because he hopes to be rewarded by God or at least not be punished by God, or the non-believer who lives a very moral life because it disturbs him to act cruelly and it is in keeping with the kind of world he wants to help create and live in? Remember, the God-awful (no pun intended) war in Iraq was started by a man who goes through all the motions of a very religious person.

Christopher Hitchens, in 2007, in response to conservative columnist Michael Gerson’s article, “What Atheists Can’t Answer”, wrote: “How insulting is the latent suggestion of his position: the appalling insinuation that I would not know right from wrong if I was not supernaturally guided by a celestial dictatorship … simply assumes, whether or not religion is metaphysically ‘true’, that at least it stands for morality … Here is my challenge. Let Gerson name one ethical statement made or one ethical action performed by a believer that could not have been uttered or done by a nonbeliever.”

Gerson, it should be noted, was the chief speechwriter for the aforementioned very religious person, George W Bush, for five years, including when Bush invaded Iraq.

Phil Ochs

I was turning the pages of The Washington Post’s Sunday (January 29) feature section, Outlook, not finding much of particular interest, when to my great surprise I was suddenly hit with a long story about Phil Ochs. Who’s Phil Ochs? many of you may ask, for the folksinger died in 1976 at the age of 35.

The Post’s motivation in devoting so much space to a symbol of the American anti-war left appears to be one more example of the paper’s serious displeasure with Donald Trump. The article is entitled “Phil Ochs is the obscure ’60s folk singer we need today”.

My favorite song of his, among many others, is “I ain’t marching anymore”:

Oh I marched to the battle of New Orleans
At the end of the early British war
The young land started growing
The young blood started flowing
But I ain’t marchin’ anymore

For I’ve killed my share of Indians
In a thousand different fights
I was there at the Little Big Horn
I heard many men lying, I saw many more dying
But I ain’t marchin’ anymore

(chorus)

It’s always the old to lead us to the war
It’s always the young to fall
Now look at all we’ve won with the saber and the gun
Tell me is it worth it all?

For I stole California from the Mexican land
Fought in the bloody Civil War
Yes I even killed my brothers
And so many others
But I ain’t marchin’ anymore

For I marched to the battles of the German trench
In a war that was bound to end all wars
Oh I must have killed a million men
And now they want me back again
But I ain’t marchin’ anymore

(chorus)

For I flew the final mission in the Japanese sky
Set off the mighty mushroom roar
When I saw the cities burning
I knew that I was learning
That I ain’t marchin’ anymore

Now the labor leader’s screamin’
when they close the missile plants,
United Fruit screams at the Cuban shore,
Call it “Peace” or call it “Treason”,
Call it “Love” or call it “Reason”,
But I ain’t marchin’ any more,
No, I ain’t marchin’ any more

Ironically, very ironically, Donald Trump may well be less of a war monger than Barack Obama or Hillary Clinton.

Notes:

{1} Washington Post (January 13 2017)

{2} Agence France-Presse (January 04 2016)

{3} NPR (January 09 2017)

{4} Washington Post (January 28 2017)

{5} The Daily Beast (January 12 2017), reporting on a remark made November 14 2014

Any part of this report may be disseminated without permission, provided attribution to William Blum as author and a link to williamblum.org is provided.

_____

William Blum is an author, historian, and renowned critic of US foreign policy. He is the author of Killing Hope: US Military and CIA Interventions Since World War Two (2014) and Rogue State: A Guide to the World’s Only Superpower (2005), among others. Read more at http://williamblum.org/about/.

Books: https://williamblum.org/books

The Anti-Empire Report: https://williamblum.org/aer

Essays and Speeches: https://williamblum.org/essays

https://williamblum.org/aer/read/148

The Prison Labor Complex

by Joshua B Freeman & Steve Fraser

TomDispatch.com (September 11 2011)

CounterPunch (April 19 2012)

Sweatshop labor is back with a vengeance. It can be found across broad stretches of the American economy and around the world. Penitentiaries have become a niche market for such work. The privatization of prisons in recent years has meant the creation of a small army of workers too coerced and right-less to complain.

Prisoners, whose ranks increasingly consist of those for whom the legitimate economy has found no use, now make up a virtual brigade within the reserve army of the unemployed whose ranks have ballooned along with the US incarceration rate. The Corrections Corporation of America and G4S (formerly Wackenhut), two prison privatizers, sell inmate labor at subminimum wages to Fortune 500 corporations like Chevron, Bank of America, AT&T, and IBM.

These companies can, in most states, lease factories in prisons or prisoners to work on the outside. All told, nearly a million prisoners are now making office furniture, working in call centers, fabricating body armor, taking hotel reservations, working in slaughterhouses, or manufacturing textiles, shoes, and clothing, while getting paid somewhere between 93 cents and $4.73 per day.

Rarely can you find workers so pliable, easy to control, stripped of political rights, and subject to martial discipline at the first sign of recalcitrance – unless, that is, you traveled back to the nineteenth century when convict labor was commonplace nationwide. Indeed, a sentence of “confinement at hard labor” was then the essence of the American penal system. More than that, it was one vital way the United States became a modern industrial capitalist economy – at a moment, eerily like our own, when the mechanisms of capital accumulation were in crisis.

A Yankee Invention

What some historians call “the long Depression” of the nineteenth century, which lasted from the mid-1870s through the mid-1890s, was marked by frequent panics and slumps, mass bankruptcies, deflation, and self-destructive competition among businesses designed to depress costs, especially labor costs. So, too, we are living through a twenty-first century age of panics and austerity with similar pressures to shrink the social wage.

Convict labor has been and once again is an appealing way for business to address these dilemmas. Penal servitude now strikes us as a barbaric throwback to some long-lost moment that preceded the industrial revolution, but in that we’re wrong. From its first appearance in this country, it has been associated with modern capitalist industry and large-scale agriculture.

And that is only the first of many misconceptions about this peculiar institution. Infamous for the brutality with which prison laborers were once treated, indelibly linked in popular memory (and popular culture) with images of the black chain gang in the American South, it is usually assumed to be a Southern invention. So apparently atavistic, it seems to fit naturally with the retrograde nature of Southern life and labor, its economic and cultural underdevelopment, its racial caste system, and its desperate attachment to the “lost cause.”

As it happens, penal servitude – the leasing out of prisoners to private enterprise, either within prison walls or in outside workshops, factories, and fields – was originally known as a “Yankee invention”.

First used at Auburn prison in New York State in the 1820s, the system spread widely and quickly throughout the North, the Midwest, and later the West. It developed alongside state-run prison workshops that produced goods for the public sector and sometimes the open market.

A few Southern states also used it. Prisoners there, as elsewhere, however, were mainly white men, since slave masters, with a free hand to deal with the “infractions” of their chattel, had little need for prison. The Thirteenth Amendment abolishing slavery would, in fact, make an exception for penal servitude precisely because it had become the dominant form of punishment throughout the free states.

Nor were those sentenced to “confinement at hard labor” restricted to digging ditches or other unskilled work; nor were they only men. Prisoners were employed at an enormous range of tasks from rope- and wagon-making to carpet, hat, and clothing manufacturing (where women prisoners were sometimes put to work), as well as coal mining, carpentry, barrel-making, shoe production, house-building, and even the manufacture of rifles. The range of petty and larger workshops into which the felons were integrated made up the heart of the new American economy.

Observing a free-labor textile mill and a convict-labor one on a visit to the United States, novelist Charles Dickens couldn’t tell the difference. State governments used the rental revenue garnered from their prisoners to meet budget needs, while entrepreneurs made outsized profits either by working the prisoners themselves or subleasing them to other businessmen.

Convict Labor in the “New South”

After the Civil War, the convict-lease system metamorphosed. In the South, it became ubiquitous, one of several grim methods – including the black codes, debt peonage, the crop-lien system, lifetime labor contracts, and vigilante terror – used to control and fix in place the newly emancipated slave. Those “freedmen” were eager to pursue their new liberty either by setting up as small farmers or by exercising the right to move out of the region at will or from job to job as “free wage labor” was supposed to be able to do.

If you assumed, however, that the convict-lease system was solely the brainchild of the apartheid all-white “Redeemer” governments that overthrew the Radical Republican regimes (which first ran the defeated Confederacy during Reconstruction) and used their power to introduce Jim Crow to Dixie, you would be wrong again. In Georgia, for instance, the Radical Republican state government took the initiative soon after the war ended. And this was because the convict-lease system was tied to the modernizing sectors of the post-war economy, no matter where in Dixie it was introduced or by whom.

So convicts were leased to coal-mining, iron-forging, steel-making, and railroad companies, including Tennessee Coal and Iron (“TC&I”), a major producer across the South, especially in the booming region around Birmingham, Alabama. More than a quarter of the coal coming out of Birmingham’s pits was then mined by prisoners. By the turn of the century, TC&I had been folded into J P Morgan’s United States Steel complex, which also relied heavily on prison laborers.

All the main extractive industries of the South were, in fact, wedded to the system. Turpentine and lumber camps deep in the fetid swamps and forest vastnesses of Georgia, Florida, and Louisiana commonly worked their convicts until they dropped dead from overwork or disease. The region’s plantation monocultures in cotton and sugar made regular use of imprisoned former slaves, including women. Among the leading families of Atlanta, Birmingham, and other “New South” metropolises were businessmen whose fortunes originated in the dank coal pits, malarial marshes, isolated forests, and squalid barracks in which their unfree peons worked, lived, and died.

Because it tended to grant absolute authority to private commercial interests and because its racial make-up in the post-slavery era was overwhelmingly African-American, the South’s convict-lease system was distinctive. Its caste nature is not only impossible to forget, but should remind us of the unbalanced racial profile of America’s bloated prison population today.

Moreover, this totalitarian-style control invited appalling brutalities in response to any sign of resistance: whippings, water torture, isolation in “dark cells”, dehydration, starvation, ice-baths, shackling with metal spurs riveted to the feet, and “tricing” (an excruciatingly painful process in which recalcitrant prisoners were strung up by the thumbs with fishing line attached to overhead pulleys). Even women in a hosiery mill in Tennessee were flogged, hung by the wrists, and placed in solitary confinement.

Living quarters for prisoner-workers were usually rat-infested and disease-ridden. Work lasted at least from sunup to sundown and well past the point of exhaustion. Death came often enough and bodies were cast off in unmarked graves by the side of the road or by incineration in coke ovens. Injury rates averaged one per worker per month, including respiratory failure, burnings, disfigurement, and the loss of limbs. Prison mines were called “nurseries of death”. Among Southern convict laborers, the mortality rate (not even including high levels of suicides) was eight times that among similar workers in the North – and it was extraordinarily high there.

The Southern system also stood out for the intimate collusion among industrial, commercial, and agricultural enterprises and every level of Southern law enforcement as well as the judicial system. Sheriffs, local justices of the peace, state police, judges, and state governments conspired to keep the convict-lease business humming. Indeed, local law officers depended on the leasing system for a substantial part of their income. (They pocketed the fines and fees associated with the “convictions”, a repayable sum that would be added on to the amount of time at “hard labor” demanded of the prisoner.)

The arrest cycle was synchronized with the business cycle, timed to the rise and fall of the demand for fresh labor. County and state treasuries similarly counted on such revenues, since the post-war South was so capital-starved that only renting out convicts assured that prisons could be built and maintained.

There was, then, every incentive to concoct charges or send people to jail for the most trivial offenses: vagrancy, gambling, drinking, partying, hopping a freight car, tarrying too long in town. A “pig law” in Mississippi assured you of five years as a prison laborer if you stole a farm animal worth more than $10. Theft of a fence rail could result in the same.

Penal Servitude in the Gilded Age North

All of this was only different in degree from prevailing practices everywhere else: the sale of prison labor power to private interests, corporal punishment, and the absence of all rights including civil liberties, the vote, and the right to protest or organize against terrible conditions.

In the North, where eighty percent of all US prison labor was employed after the Civil War and which accounted for over $35 billion in output (in current dollars), the system was reconfigured to meet the needs of modern industry and the pressures of “the long Depression”. Convict labor was increasingly leased out only to a handful of major manufacturers in each state. These textile mills, oven makers, mining operations, hat and shoe factories – one in Wisconsin leased that state’s entire population of convicted felons – were then installing the kind of mass production methods becoming standard in much of American industry. As organized markets for prison labor grew increasingly oligopolistic (like the rest of the economy), the Depression of 1873 and subsequent depressions in the following decades wiped out many smaller businesses that had once gone trawling for convicts.

Today, we talk about a newly “flexible economy”, often a euphemism for the geometric growth of a precariously positioned, insecure workforce. The convict labor system of the nineteenth century offered an original specimen of perfect flexibility.

Companies leasing convicts enjoyed authority to dispose of their rented labor power as they saw fit. Workers were compelled to labor in total silence. Even hand gestures and eye contact were prohibited for the purpose of creating “silent and insulated working machines”.

Supervision of prison labor was ostensibly shared by employers and the prison authorities. In fact, many businesses did continue to conduct their operations within prison walls, where they supplied the materials, power, and machinery, while the state provided guards, workshops, food, clothing, and what passed for medical care. As a matter of practice, though, the foremen of the businesses called the shots. And there were certain states, including Nebraska, Washington, and New Mexico, that, like their Southern counterparts, ceded complete control to the lessee. As one observer put it, “Felons are mere machines held to labor by the dark cell and the scourge”.

Free market industrial capitalism, then and now, invariably draws on the aid of the state. In that system’s formative phases, the state has regularly used its coercive powers of taxation, expropriation, and in this case incarceration to free up natural and human resources lying outside the orbit of capitalism proper.

In both the North and the South, the contracting out of convict labor was one way in which that state-assisted mechanism of capital accumulation arose. Contracts with the government assured employers that their labor force would be replenished anytime a worker got sick, was disabled, died, or simply became too worn out to continue.

The Kansas Wagon Company, for example, signed a five-year contract in 1877 that prevented the state from raising the rental price of labor or renting to other employers. The company also got an option to renew the lease for ten more years, while the government was obliged to pay for new machinery, larger workshops, a power supply, and even the building of a switching track that connected to the trunk line of the Pacific Railway and so ensured that the product could be moved effectively to market.

Penal institutions all over the country became auxiliary arms of capitalist industry and commerce. Two-thirds of all prisoners worked for private enterprise.

Today, strikingly enough, government is again providing subsidies and tax incentives as well as facilities, utilities, and free space for corporations making use of this same category of abjectly dependent labor.

The New Abolitionism

Dependency and flexibility naturally assumed no resistance, but there was plenty of that all through the nineteenth century from workers, farmers, and even prisoners. Indeed, a principal objective in using prison labor was to undermine efforts to unionize, but from the standpoint of mobilized working people far more was at stake.

Opposition to convict labor arose from workingmen’s associations, labor-oriented political parties, journeymen unions, and other groups which considered the system an insult to the moral codes of egalitarian republicanism nurtured by the American Revolution. The specter of proletarian dependency haunted the lives of the country’s self-reliant handicraftsmen who watched apprehensively as shops employing wage labor began popping up across the country. Much of the earliest of this agitation was aimed at the use of prisoners to replace skilled workers (while unskilled prison labor was initially largely ignored).

It was bad enough for craftsmen to see their own livelihoods and standards of living put in jeopardy by “free” wage labor. Worse still was to watch unfree labor do the same thing. At the time, employers were turning to that captive prison population to combat attempts by aggrieved workers to organize and defend themselves. On the eve of the Civil War, for example, an iron-molding contractor in Spuyten Duyvil, north of Manhattan in the Bronx, locked out his unionized workers and then moved his operation to Sing Sing penitentiary, where a laborer cost forty cents, $2.60 less than the going day rate. It worked, and Local 11 of the Union of Iron Workers quickly died away.

Worst of all was to imagine this debased form of work as a model for the proletarian future to come. The workingman’s movement of the Jacksonian era was deeply alarmed by the prospect of “wage slavery”, a condition inimical to their sense of themselves as citizens of a republic of independent producers. Prison labor was a sub-species of that dreaded “slavery”, a caricature of it perhaps, and intolerable to a movement often as much about emancipation as unionization.

All the way through the Gilded Age of the 1890s, convict labor continued to serve as a magnet for emancipatory desires. In addition, prisoners’ rebellions became ever more common – in the North particularly, where many prisoners turned out to be Civil War veterans and dispossessed working people who already knew something about fighting for freedom and fighting back. Major penitentiaries like Sing Sing became sites of repeated strikes and riots; a strike in 1877 even took on the transplanted Spuyten Duyvil iron-molding company.

Above and below the Mason Dixon line, political platforms, protest rallies, petition campaigns, legislative investigations, union strikes, and boycotts by farm organizations like the Farmers Alliance and Grange cried out for the abolition of the convict-lease system, or at least for its rigorous regulation. Over the century’s last two decades, more than twenty coal-mine strikes broke out because of the use of convict miners.

The Knights of Labor, that era’s most audacious labor movement, was particularly exercised. During the Coal Creek Wars in eastern Tennessee in the early 1890s, for instance, TC&I tried to use prisoners to break a miners’ strike. The company’s vice president noted that it was “an effective club to hold over the heads of free laborers”.

Strikers and their allies affiliated with the Knights, the United Mine Workers, and the Farmers Alliance launched guerilla attacks on the prisoner stockade, sending the convicts they freed to Knoxville. When the governor insisted on shipping them back, the workers released them into the surrounding hills and countryside. Gun battles followed.

The Death of Convict Leasing

In the North, the prison abolition movement went viral, embracing not only workers’ organizations, sympathetic rural insurgents, and prisoners, but also widening circles of middle-class reformers. The newly created American Federation of Labor denounced the system as “contract slavery”. It also demanded the banning of any imports from abroad made with convict labor and the exclusion from the open market of goods produced domestically by prisoners, whether in state-run or private workshops. In Chicago, the construction unions refused to work with materials made by prisoners.

By the latter part of the century, in state after state penal servitude was on its way to extinction. New York, where the “industry” was born and was largest, killed it by the late 1880s. The tariff of 1890 prohibited the sale of convict-made wares from abroad. Private leasing continued in the North, but under increasingly restrictive conditions, including Federal legislation passed during the New Deal. By World War Two, it was virtually extinct (although government-run prison workshops continued as they always had).

At least officially, even in the South it was at an end by the turn of the century in Tennessee, Louisiana, Georgia, and Mississippi. Higher political calculations were at work in these states. Established elites were eager to break the inter-racial alliances that had formed over abolishing convict leasing by abolishing the hated system itself. Often enough, however, it ended in name only.

What replaced it was the state-run chain gang (although some Southern states like Alabama and Florida continued private leasing well into the 1920s). Inmates were set to work building roads and other infrastructure projects vital to the flourishing of a mature market economy and so to the continuing process of capital accumulation. In the North, the system of “hard labor” was replaced by a system of “hard time”, that numbing, brutalizing idleness where masses of people extruded from the mainstream economy are pooled into mass penal colonies. The historic link between labor, punishment, and economic development was severed, and remained so … until now.

Convict Leasing Rises Again

“Now” means our second Gilded Age and its aftermath. In these years, the system of leasing out convicts to private enterprise was reborn. This was a perverse triumph for the law of supply and demand in an era infatuated with the charms of the free market. On the supply side, the US holds captive 25 percent of all the prisoners on the planet: 2.3 million people. It has the highest incarceration rate in the world as well, a figure that began skyrocketing in 1980 as Ronald Reagan became president. As for the demand for labor, since the 1970s American industrial corporations have found it increasingly unprofitable to invest in domestic production. Instead, they have sought out the hundreds of millions of people abroad who are willing to, or can be pressed into, working for far less than American workers.

As a consequence, those back home – disproportionately African-American workers – who found themselves living in economic exile, scrabbling to get by, began showing up in similarly disproportionate numbers in the country’s rapidly expanding prison archipelago. It didn’t take long for corporate America to come to view this as another potential foreign country, full of cheap and subservient labor – and better yet, close by.

What began in the 1970s as an end run around the laws prohibiting convict leasing by private interests has now become an industrial sector in its own right, employing more people than any Fortune 500 corporation and operating in 37 states. And here’s the ultimate irony: our ancestors found convict labor obnoxious in part because it seemed to prefigure a new and more universal form of enslavement. Could its rebirth foreshadow a future ever more unnervingly like those past nightmares?

Today, we are being reassured by the president, the mainstream media, and economic experts that the Great Recession is over, that we are in “recovery” even though most of the recovering patients haven’t actually noticed significant improvement in their condition. For those announcing its arrival, “recovery” means that the mega-banks are no longer on the brink of bankruptcy, the stock market has made up lost ground, corporate profits are improving, and notoriously unreliable employment numbers have improved by several tenths of a percent.

What accounts for that peculiarly narrow view of recovery, however, is that the general costs of doing business are falling off a cliff as the economy eats itself alive. The recovery being celebrated owes thanks to local, state, and Federal austerity budgets, the starving of the social welfare system and public services, rampant anti-union campaigns in the public and private sector, the spread of sweatshop labor, the coercion of desperate unemployed or underemployed workers to accept lower wages, part-time work, and temporary work, as well as the relinquishing of healthcare benefits and a financially secure retirement – in short, to surrender the hope that is supposed to come with the American franchise.

Such a recovery, resting on the stripping away of the hard won material and cultural achievements of the past century, suggests a new world in which the prison-labor archipelago could indeed become a vast gulag of the downwardly mobile.

_____

Steve Fraser is Editor-at-Large of New Labor Forum, co-founder of the American Empire Project (Metropolitan Books). He is author of Wall Street: America’s Dream Palace (2009). He teaches history at Columbia University.

Joshua B Freeman teaches history at Queens College and at the Graduate Center of the City University of New York, is affiliated with its Joseph S Murphy Labor Institute, and is author of American Empire (2013).

http://www.tomdispatch.com/blog/175439/tomgram%3A_fraser_and_freeman,_taps_for_the_unemployed/

http://www.counterpunch.org/2012/04/19/the-prison-labor-complex/

Most Government Workers …

… Could Be Replaced by Robots, New Study Finds

by Emily Zanotti

HeatSt.com (February 08 2017)

Zero Hedge (February 08 2017)

A study by a British think tank, Reform, says that ninety percent of British civil service workers have jobs so pointless that they could easily be replaced by robots, saving the government around $8 billion per year.

The study, published this week, says that robots are “more efficient” at collecting data {1}, processing paperwork, and doing the routine tasks that now fall to low-level government employees. Even nurses and doctors, who are government employees in the UK, could be relieved of some duties by mechanical assistants.

There are “few complex roles” in the civil service, it seems, that a human being is required to handle.

“Twenty percent of public-sector workers hold strategic, ‘cognitive’ roles”, Reform’s press release on the study says. “They will use data analytics to identify patterns – improving decision-making and allocating workers most efficiently.

“The NHS, for example, can focus on the highest risk patients, reducing unnecessary hospital admissions. UK police and other emergency services are already using data to predict areas of greatest risk from burglary and fire.”

The problem, Reform says, is that public sector employee unions have bloated the civil service ranks, forcing government agencies to keep on older employees, and mandating hiring quotas for new ones. The organizational chart looks like a circuit board – and there’s no incentive to streamline anything.

Unfortunately for civil service workers, the study is just the latest in a series of research reports that won’t save their jobs. Oxford University and the financial services provider Deloitte, both of which commissioned their own studies, concur with Reform’s conclusions. The Oxford University study said that more than 850,000 public sector jobs {2} could fall to robots over the course of the next decade.

Reform suggests that government employees should probably look into opportunities presented by the “sharing economy”, like driving for Uber – at least until robots replace those, too.

Links:

{1} https://www.theguardian.com/technology/2017/feb/06/robots-could-replace-250000-uk-public-sector-workers

{2} https://www.theguardian.com/society/2016/oct/25/850000-public-sector-jobs-automated-2030-oxford-university-deloitte-study

http://heatst.com/tech/british-study-finds-most-government-workers-could-be-replaced-by-robots/

http://www.zerohedge.com/news/2017-02-08/most-government-workers-could-be-replaced-robots-new-study-finds

The World as Representation

by John Michael Greer

The Archdruid Report (February 08 2017)

Druid perspectives on nature, culture, and the future of industrial society

It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène Humain (1955) – the English translation, predictably, was titled The Phenomenon of Man (1959) – got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.

Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction. At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city – or any college town with pretensions to intellectual culture, for that matter – you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.

We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in The New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as a feigned interest in the subject was a half century in the past.

Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century. In an era when every state teacher’s college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.

Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral – it’s possible to practice science from just about any philosophical standpoint you care to name – but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.

There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways – one that the philosophical traditions of other cultures reached long before it, with similar consequences – and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.

To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world – and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.

That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.

That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.

The same refocusing began in the modern world with René Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles – John Locke, George Berkeley, and David Hume – rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of what we think of as “out there”, including such apparently hard realities as space and time, are actually artifacts of the processes by which we perceive things.

Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.

What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation”; when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing – literally re-presenting – an image of the cup in your mind.

There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.

What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there”. You can prove this by means of science: the cup “out there”, according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection – that’s how Indian philosophers got there in the age of the Upanishads – and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.

The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled the most relentless of his analyses Prolegomena to Any Future Metaphysics (1783); what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first. When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do – a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing. Where that leads – well, we’ll get to that later on.

For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom (1996). This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there – the “thing in itself” (Ding an sich), as Kant called it – and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.

The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one:

 

 

What? And others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be – the work of our organs!

 

 

That is to say, the body is also a representation – or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.

Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life – each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects.

By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters”. Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations – thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, et cetera, corresponding to drinking the coffee.

That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror: it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations. The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations – and it’s possible to do perfectly good science on that basis, by the way.

It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today – topics – was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists”. They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.

Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus – usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless – that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced – well, when it hasn’t been hopelessly corrupted for economic or political gain – but that’s not the social role that science has come to fill in modern industrial society.

I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist – as they very often do – that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.

Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a balancing voice, and help see to it that a less doctrinaire understanding of science gets a hearing and a public presence.

Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and the unqualified absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.

Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.
_____

 

John Michael Greer is Past Grand Archdruid of the Ancient Order of Druids in America {1}, current head of the Druidical Order of the Golden Dawn {2}, and the author of more than thirty books on a wide range of subjects, including peak oil and the future of industrial society. He lives in Cumberland, Maryland, an old red brick mill town in the north central Appalachians, with his wife Sara.

If you enjoy this blog and can handle discussions of Druidry, magic, and occult philosophy, you might like my other blog, Well of Galabes {3}.

Links:

{1} http://www.aoda.org/

{2} http://www.druidical-gd.org/

{3} http://galabes.blogspot.com/

https://thearchdruidreport.blogspot.jp/2017/02/the-world-as-representation.html


“The End of Employees”

by Yves Smith

Naked Capitalism (February 03 2017)

The Wall Street Journal has an important new story, The End of Employees {1}, on how big companies’ love of outsourcing means that traditional employment has declined and is expected to fall further.

Some key sections of the article:

 

 

Never before have American companies tried so hard to employ so few people. The outsourcing wave that moved apparel-making jobs to China and call-center operations to India is now just as likely to happen inside companies across the US and in almost every industry.

The men and women who unload shipping containers at Wal-Mart Stores Inc. warehouses are provided by trucking company Schneider National Inc.’s logistics operation, which in turn subcontracts with temporary-staffing agencies. Pfizer Inc. used contractors to perform the majority of its clinical drug trials last year …

The shift is radically altering what it means to be a company and a worker. More flexibility for companies to shrink the size of their employee base, pay and benefits means less job security for workers. Rising from the mailroom to a corner office is harder now that outsourced jobs are no longer part of the workforce from which star performers are promoted …

For workers, the changes often lead to lower pay and make it surprisingly hard to answer the simple question “Where do you work?” Some economists say the parallel workforce created by the rise of contracting is helping to fuel income inequality between people who do the same jobs.

No one knows how many Americans work as contractors, because they don’t fit neatly into the job categories tracked by government agencies. Rough estimates by economists range from 3% to 14% of the nation’s workforce, or as many as 20 million people.

 

 

As you can see, the story projects this as an unstoppable trend. The article is mainly full of success stories, which naturally is what companies would want to talk about. The alleged benefits are twofold: that specialist contractors can do a better job of managing non-core activities because they are specialists and have higher skills, and that using outside help keeps companies lean and allows them to be more “agile”.

The idea that companies that use contractors are more flexible is largely a myth. The difficulty of entering into outsourcing relationships gives you an idea of how complex they are. While some services, like cleaning, are likely to be fairly simple to hand off, the larger ones are not. For instance, for IT outsourcing, a major corporation will need to hire a specialist consultant to help define the requirements for the request for proposal and write the document that will be the basis for bidding and negotiation. That takes about six months. The process of getting initial responses, vetting the possible providers in depth, getting to a short list of two or three finalists, negotiating finer points with them to see who has the best all-in offer, and then negotiating the final agreement typically takes a year. Oh, and the lawyers often fight with the consultant as to what counts in the deal.

The old saw that “a contract is only as good as the person who signed it” still holds true. But if a vendor doesn’t perform up to the standards required, or the company’s requirements change in some way not contemplated in the agreement, the problem is vastly more difficult to address than if you were handling the work internally. And given how complicated contracting is, it’s not as if you can simply fire them.

So as we’ve stressed again and again, these arrangements increase risks and rigidity. And companies can mis-identify what is core, or fail to recognize key lower-level skills. For instance, Pratt & Whitney decided to contract out coordination of deliveries to UPS. Here is the critical part:

 

 

For years, suppliers delivered parts directly to Pratt’s two factories, where materials handlers unpacked the parts and distributed them to production teams. Earl Exum, vice president of global materials and logistics, says Pratt had “a couple hundred” logistics specialists. Some handlers were 20- or 30-year veterans who could “look at a part and know exactly what it is”, he adds …

Most of the UPS employees had no experience in the field, and assembly kits arrived at factories with damaged or missing parts. Pratt and UPS bosses struggled to get the companies’ computers in sync, including warehouse-management software outsourced by UPS to another firm, according to Pratt.

 

 

The result was $500 million in lost sales in a quarter. Pratt & Whitney tried putting a positive spin on the tale, claiming that all the bugs were worked out by the next quarter. But how long will it take Pratt & Whitney to recover all the deal costs plus the lost profits?

There’s even more risk when the company using a contractor doesn’t have much leverage over it. As Wall Street Journal reader Scott Riney said in comments:

 

 

Well managed companies make decisions based on sound data and analysis. Badly managed companies follow the trends because they’re the trends. A caveat regarding outsourcing is that, as always, you get what you pay for. Also, the vendor relationship needs to be competently managed. There was the time a certain, now bankrupt technology company outsourced production of PBX components to a manufacturer who produced components with duplicate MAC addresses. The contract manufacturer’s expertise obviously didn’t extend to knowing jack about hardware addressing, and the management of the vendor relationship was incompetent. And what do you do, in a situation like that, if your firm isn’t big enough that your phone calls get the vendor’s undivided attention? Or if you’re on different continents, and nothing can get done quickly?

 

 

We’ve discussed other outsourcing bombs in past posts, such as when British Airways lost “tens of millions of dollars” when its contractor, Gate Gourmet, fired employees. Baggage handlers and ground crew struck in sympathy, shutting down Heathrow for 24 hours. Like many outsourced operations, Gate Gourmet had once been part of British Airways. And passengers blamed the airline, not the workers {2}.

Now admittedly, there are low-risk, low-complexity activities that are being outsourced more, such as medical transcription, where 25% of all medical transcriptionists now work for agencies, up by a third since 2009. The article attributes the change to more hospitals and large practices sending the work outside. But even at its 2009 level, the use of agencies was well established. And it is the sort of service that smaller doctors’ offices would already have been hiring on a temp basis, whether through an agency or not, because they would not have enough activity to support a full-time employee. The story also describes how SAP has all its receptionists as contractors, apparently because someone looked at receptionist pay and concluded some managers were paying too much. So low-level clerical jobs are more and more subject to this fad. But managing your own receptionists is hardly going to make a company less flexible.

Contracting, like other gig economy arrangements, increases insecurity and lowers growth. I hate to belabor the obvious, but people who don’t have a steady paycheck are less likely to make major financial commitments, like getting married and setting up a new household, having kids, or even buying consumer durables. However, one industry likely makes out handsomely: Big Pharma, which no doubt winds up selling more brain-chemistry-altering products for the resulting situationally-induced anxiety and/or depression. The short-sightedness of this development on a societal level is breathtaking, yet pundits overwhelmingly celebrate it and political leaders stay mum.

With this sort of rot in our collective foundation, the rise of Trump and other “populist” candidates should not come as a surprise.

Links:

{1} https://www.wsj.com/articles/the-end-of-employees-1486050443

{2} http://www.aviationpros.com/news/10432989/all-striking-british-airways-staff-return-after-24-hour-walkout

http://www.nakedcapitalism.com/2017/02/the-end-of-employees.html


Castigating Trump for Truth-Telling

President Trump says much that is untrue, but he draws some of Official Washington’s greatest opprobrium when he speaks the truth, such as noting that senior US officials have done a lot of killing.

by Robert Parry

Consortium News (February 07 2017)

Gaining acceptance in Official Washington is a lot like getting admittance into a secret society’s inner sanctum by uttering some nonsensical password. In Washington, to show you belong, you must express views that are patently untrue or blatantly hypocritical.

For instance, you might be called upon to say that “Iran is the principal source of terrorism” when that title clearly belongs to Saudi Arabia and other Gulf state allies that have funded Al Qaeda, the Taliban and the Islamic State. But truth has no particular value in Official Washington; adherence to “group think” is what’s important.

Similarly, you might have to deny any “moral equivalence” between killings attributed to Russian President Vladimir Putin and killings authorized by US presidents. In this context, the fact that the urbane Barack Obama scheduled time one day a week to check off people for targeted assassinations isn’t relevant. Nor is the reality that Donald Trump has joined this elite club of official killers by approving a botched and bloody raid in Yemen that slaughtered a number of women and children (and left one US soldier dead, too).

You have to understand that “our killings” are always good or at least justifiable (innocent mistakes do happen from time to time), but Russian killings are always bad. Indeed, Official Washington has so demonized Putin that any untoward death in Russia can be blamed on him whether there is any evidence or not. To suggest that evidence is needed shows that you must be a “Moscow stooge”.

To violate these inviolable norms of Official Washington, in which participants must intuitively grasp the value of such “group think” and the truism of “American exceptionalism”, marks you as a dangerous outsider who must be marginalized or broken.

Currently, President Trump is experiencing this official opprobrium as he is widely denounced by Republicans, Democrats and “news” people because he didn’t react properly to a question from Fox News‘ Bill O’Reilly terming Putin “a killer”.

“There are a lot of killers”, Trump responded.

 

 

We’ve got a lot of killers. What do you think – our country’s so innocent? You think our country’s so innocent?

 

 

Aghast at Trump’s heresy, O’Reilly sputtered, “I don’t know of any government leaders that are killers”.

Trump: “Well – take a look at what we’ve done too. We made a lot of mistakes. I’ve been against the war in Iraq from the beginning.”

O’Reilly: “But mistakes are different than – ”

Trump: “A lot of mistakes, but a lot of people were killed. A lot of killers around, believe me.”

‘Moral Equivalence’

Though Trump is justly criticized for often making claims that aren’t true, here he was saying something that clearly was true. But it has drawn fierce condemnation from across Official Washington, not only from Democrats but from Trump’s fellow Republicans, too. Neoconservative Washington Post opinion writer Charles Krauthammer objected fiercely to Trump’s “moral equivalence”, and CNN’s Anderson Cooper chimed in, lamenting Trump’s deviation into “equivalence”, that is, holding the US government to the same ethical standards as the Russian government.

This “moral equivalence” argument has been with us at least since the Reagan administration when human rights groups objected to President Reagan’s support for right-wing governments in Central America that engaged in “death squad” tactics against political dissidents, including the murders of priests and nuns and genocide against disaffected Indian tribes. To suggest that Reagan and his friends should be subjected to the same standards that he applied to left-wing authoritarian governments earned you the accusation of “moral equivalence”.

Declassified documents from Reagan’s White House show that this public relations strategy was refined at National Security Council meetings led by US intelligence propaganda experts. Now the “moral equivalence” theme is being revived to discredit a new Republican president who dares challenge this particular Official Washington “group think”.

Lots of Killing

The unpleasant truth is that all leaders of major countries and many leaders of smaller countries are “killers”. President Obama admitted that he had ordered military strikes in seven different countries to kill people. His Secretary of State Hillary Clinton rejoiced over the grisly murder of Libyan leader Muammar Gaddafi with a clever twist on a famous Julius Caesar boast of conquest: “We came, we saw, he died”, Clinton chirped.

President George W Bush launched an illegal war against Iraq based on false pretenses, causing the deaths of hundreds of thousands of Iraqis, many of them children and other civilians.

President Bill Clinton ordered a vicious bombing campaign against the Serbian capital of Belgrade, which included intentionally targeting the Serb TV building and killing sixteen civilian employees because Clinton considered the station’s news reports to be “propaganda”, that is, not in line with US propaganda.

President George H W Bush slaughtered scores of Panamanians who happened to live near the headquarters of the Panamanian Defense Forces and he killed tens of thousands of Iraqis, including incinerating a civilian bomb shelter in Baghdad, after he brushed aside proposals for resolving Iraq’s invasion of Kuwait peacefully. (Bush wanted a successful war as a way to rally the American people behind future foreign military operations, so, in his words, the country could kick “the Vietnam Syndrome once and for all”.)

Other US presidents have had more or less blood on their hands than these recent chief executives, but it is hard to identify any modern US president who has not been a “killer” in some form, inflicting death upon innocents whether as part of some “justifiable” mission or not.

But the mainstream US press corps routinely adopts double standards when assessing acts by a US president and those of an “enemy”. When the US kills people, the mainstream media bends over backwards to rationalize the violence, but does the opposite if the killing is authorized by some demonized foreign leader.

That is now the case with Putin. Any accusation against Putin – no matter how lacking in evidence – is treated as credible and any evidence of Putin’s innocence is ridiculed or suppressed.

That was the case with a documentary that debunked claims that hedge fund accountant Sergei Magnitsky was murdered in a Russian prison because he was a whistleblower; the film showed instead that he was a suspect in a massive money-laundering scheme and died of natural causes. Although produced by a documentarian who started out planning to do a sympathetic portrayal of Magnitsky, the facts led in a different direction, and the documentary was shunned by the European Union and given minimal distribution in the United States.

By contrast, the ease with which Putin is called a murderer – based on “mysterious deaths” inside Russia – is reminiscent of how American right-wing groups suggested that Bill and Hillary Clinton were murderers by distributing a long list of “mysterious deaths” somehow related to the Clinton “scandals” from their Arkansas days. While there was no specific evidence connecting the Clintons to any of these deaths, the sheer number created suspicions that were hard to knock down without making you a “Clinton apologist”. Similarly, a demand for actual evidence proving Putin’s guilt in a specific case makes you a “Putin apologist”.

However, as a leader of a powerful nation facing threats from terrorism and other national security dangers, Putin is surely a “killer”, much as US presidents are killers. That appears to have been President Trump’s point, that the United States doesn’t have clean hands when it comes to shedding innocent blood.

But telling such an unpleasant albeit obvious truth is not the way to gain entrance into the inner sanctum of Official Washington’s Deep State. The passwords for admission require you to say a lot of things that are patently false. Any inconvenient truth-telling earns you the bum’s rush out into the alley, even if you’re President of the United States.

_____

Investigative reporter Robert Parry broke many of the Iran-Contra stories for the Associated Press and Newsweek in the 1980s. You can buy his latest book, America’s Stolen Narrative (2012), in print or as an e-book from Amazon and barnesandnoble.com.

https://consortiumnews.com/2017/02/07/castigating-trump-for-truth-telling/
