It Should Never Be Done Again: Hiroshima, 70 Years Later

Hiroshima after the bomb

W.J. Astore

August 6, 1945.  Hiroshima.  A Japanese city roughly the size of Houston.  Incinerated by the first atomic bomb.  Three days later, Nagasaki.  Japanese surrender followed.  It seemed the bombs had been worth it, saving countless American (and Japanese) lives, since a major invasion of the Japanese home islands was no longer needed.  But was the A-bomb truly decisive in convincing the Japanese to surrender?

President Truman’s decision to use atomic bombs against Japan is perhaps the most analyzed, and, in the United States, most controversial decision made during World War II.  The controversy usually creates more heat than light, with hardliners arrayed on opposing sides.  The traditional interpretation is that Truman used the A-bombs to convince a recalcitrant Japanese Emperor that the war was truly lost.  A quick Japanese surrender appeared to justify Truman’s choice.  It also saved tens of thousands of Allied lives in the Pacific (while killing approximately 250,000 Japanese).  This thesis is best summed up in Paul Fussell’s famous essay, “Thank God for the Atomic Bomb.”

Even before Hiroshima, however, a small number of scientists argued that the A-bomb should not be used against Japan without a prior demonstration in a remote and uninhabited location.  Later, as the horrible nature of radiation casualties became clearer to the American people, and as the Soviet Union developed its own arsenal of atomic weapons, threatening the United States with nuclear Armageddon, Americans began to reexamine Truman’s decision in the context of the Cold War and the nuclear arms race.  Gar Alperovitz’s revisionist view that Truman was practicing “atomic diplomacy” won its share of advocates in the 1960s. (Alperovitz expanded upon this thesis in the 1990s.)  Other historians suggested that racism and motives of revenge played a significant role in shaping the U.S. decision.  This debate reached its boiling point in the early 1990s, as the Smithsonian’s attempt to create a “revisionist” display to mark the bomb’s 50th anniversary became a lightning rod in the “culture wars” between a Democratic administration and a resurgent Republican Congress.

Were the atomic bombs necessary to get the Japanese to surrender?  Would other, more humane, options have worked, such as a demonstration to the Japanese of the bomb’s power?  We’ll never know with certainty the answer to such questions.  Perhaps if the U.S. had been more explicit in its negotiations with Japan that “unconditional surrender” did not mean the end of Japan’s Emperor, the Japanese might have surrendered earlier, before the A-bomb was fully ready.  Then again, U.S. flexibility could have been interpreted by Japanese hardliners as a sign of American weakness or war fatigue.

Unwilling to risk appearing weak or weary, U.S. leaders dropped the A-bombs to shock the Japanese into surrendering.  Those shocks, together with Stalin’s entry into the war against Japan, were sufficient to convince the Japanese emperor “to bear the unbearable,” in this case total capitulation, a national disgrace.

A longer war in the Pacific — if only a matter of weeks — would indeed have meant higher casualties among the Allies, since the Japanese were prepared to mount large-scale Kamikaze attacks.  Certainly, the Allies were unwilling to risk losing men when they had a bomb available that promised results.  The mentality seems to have been: We developed it.  We have it.  Let’s use it.  Anything to get this war over with as quickly as possible.

That mentality was not humane, but it was human.  Truman had a weapon that promised decisiveness, so he used it.  The attack on Hiroshima was basically business as usual, especially when you consider the earlier firebombing raids led by General Curtis LeMay.  Indeed, such “conventional” firebombing raids continued after Hiroshima and Nagasaki until the Japanese finally sent a clear signal of surrender.

Of course, an event as momentous, as horrific, as Hiroshima took on extra meaning after the war, given the nuclear arms race, the Cold War and a climate represented by the telling acronym of MAD (mutually assured destruction).  U.S. decision makers like Truman were portrayed as callous, as racist, as war criminals.  Yet in the context of 1945, it’s difficult to see any other U.S. president making a different decision, especially given Japan’s apparent reluctance to surrender and its proven fanaticism at Iwo Jima, Okinawa and elsewhere.

As Andrew Rotter notes in Hiroshima: The World’s Bomb (2008), World War II witnessed the weakening, if not erasure, of distinctions between combatants and non-combatants, notably during LeMay’s firebombing of Tokyo in March 1945 but in many other raids as well (Rotterdam and Coventry and Hamburg and Dresden, among so many others).  In his book, Rotter supports the American belief that the Japanese would have fought even more fanatically for their home islands than they did at Iwo Jima and Okinawa, two horrendous battles in 1945 that preceded the bomb.  But he argues that Truman and Secretary of War Henry Stimson engaged in “self-deception” when they envisioned that the effects of the atomic bomb could be limited to “a purely military” target.

A quarter of a million Japanese died at Hiroshima and Nagasaki and in the years and decades following.  They died horrible deaths.  And their deaths serve as a warning to us all of the awful nature of war and the terrible destructiveness of nuclear weapons.

Hans Bethe worked on the bomb during the Manhattan Project.  A decent, humane, and thoughtful man, he nevertheless worked hard to create a weapon of mass destruction. His words of reflection have always stayed with me.  They come in Jon Else’s powerful documentary, “The Day After Trinity: J. Robert Oppenheimer and the Atomic Bomb.”

Here is what Bethe said (edited slightly):

The first reaction we [scientists] had [after Hiroshima] was one of fulfillment.  Now it has been done.  The second reaction was one of shock and awe: What have we done?  What have we done?  The third reaction was it should never be done again.

It should never be done again: Just typing those words here from memory sends chills up my spine.

Let us hope it is never done again.  Let us hope a nuclear weapon is never used again.  For that way madness lies.

What Is Terrorism?

My copy.  Not the sexiest cover, but a good primer nonetheless

When I entered the Air Force in 1985, I grabbed a pamphlet by Brian M. Jenkins of Rand.  The title caught my eye: International Terrorism: The Other World War.  Back then, the country was focused on the Cold War against the Evil Empire of the Soviet Union.  Jenkins suggested there was another war we should be focusing on.

In his pamphlet, he provided a “working definition” of terrorism:

“Terrorism is the use of criminal violence to force a government to change its course of action.”

And: “Terrorism is a political crime.  It is always a crime…”

But Jenkins also knew that terrorism, as a word and concept, was contentious and politicized.  As he explained:

“Some governments are prone to label as terrorism all violent acts committed by their political opponents, while antigovernment extremists frequently claim to be the victims of governmental terror.  Use of the term thus implies a moral judgment.  If one group can successfully attach the label terrorist to its opponent, then it has indirectly persuaded others to adopt its moral and political point of view, or at least to reject the terrorists’ view.  Terrorism is what the bad guys do.  This drawing of boundaries between what is legitimate and what is illegitimate, between the right way to fight and the wrong way to fight, brings high political stakes to the task of definition.”

Jenkins correctly notes that the word “terrorism” implies both a political and moral (and legal) judgment.  By his working definition, to be a terrorist is to be a criminal.

Can nation-states be terrorists?  Interestingly, no.  Not if you accept the definitional imperative common to international relations.  Nation-states draw their identity (and authority) in part from their ability to monopolize the means of violence.  Because a state monopolizes or “controls” violence in a legally sanctioned international system, it cannot commit a criminal act of terror, however terrorizing its acts might be.  (By this definition, dropping atomic bombs on Hiroshima and Nagasaki and killing 200,000 people were not terrorist acts, even though the intent was to terrorize the Japanese into surrendering.)  Put differently, a state can sponsor terrorism, but it cannot commit it.

It’s an unsatisfying definition to many.  As Glenn Greenwald, constitutional lawyer and journalist for the Guardian, has noted many times, terrorism as a concept is now so highly politicized, so narrowly defined and closely tied to evil acts committed by Muslim extremists, that the word itself has become polluted.  It’s more weapon than word, with an emotional impact that hits with the explosive power of a Hellfire missile.

Terrorism, in short, has become something of an Alice in Wonderland word.  As Humpty Dumpty put it, “When I use a word, it means just what I choose it to mean — neither more nor less.”  Such is the case with “terrorist” and “terrorism”: they’re often just epithets, ones we reserve for people and acts we find heinous.

Terrorism exists, of course.  But so too does politically motivated manipulation of the English language, as George Orwell famously warned.  If terrorist = criminal = always them but never us (because we’re a nation, and a good-hearted one at that), we absolve ourselves of blame even as we shout, like the Queen of Hearts in Alice, “Off with their heads!” at the “terrorists.”

That shout may be satisfying, but it may also be all too easy — and all too biased.

W.J. Astore