Is the American male dead? I’ve seen enough articles and books alleging a “war” on men and boys, amounting to a concerted attack on masculinity, to suggest that males are, if not dead, very much in decline in America, threatened by a “feminized” society that devalues manly virtues.
An article at the National Review, “Understanding the Inescapable Reality of Masculinity,” suggests that men as men have an “essential nature,” one that is “physical, aggressive, violent,” but that these traits are under attack as wider American society works to deny men their “inherent masculinity.” The article further argues there aren’t enough male role models in the lives of young boys – especially fathers and father-figures. This is a well-worn argument on the vital importance of the nuclear family with a man like Ward Cleaver in charge of it. There’s nothing wrong with that, except not all fathers are patient, kind, and intelligent mentors like Ward on “Leave it to Beaver.” Sadly, more than a few drive young boys to be aggressive and violent in selfish and dangerous ways.
Leaving that aside, it seems odd that this narrative of the decline of masculinity persists so strongly in Trump’s America. Now there’s a man! He’s physical, aggressive, unafraid to boast of pussy-grabbing or the size of his penis. He’s urged his followers at rallies to get physical with protesters. He supports torture and even hints at shooting immigrants as a rational “get tough” policy. Posing like Winston Churchill, he scowls and frowns in a simulacrum of manly determination. If the president is America’s chief role model, Trump’s doing his best to project masculinity as he understands it.
Indeed, you might argue Trump won the presidency in part because of his unapologetic “masculine” posing. Contrast this to Hillary Clinton, often portrayed as a “ball-buster,” an emasculating female. (I even had a Hillary nutcracker, a novelty gift from a friend.) In 2016, male voters (joined by a majority of White women), perhaps looking for a “real” man to vote for and turned off by an alleged nut-cracking harridan, broke for Trump.
Trump’s win—and continued tolerance of his bullying, boastful, and bellicose manner—give the lie to the decline of masculinity narrative in America. Why does it persist, then? Because it’s yet another way to divide us. Consider similar narratives of an alleged war on Christianity, or that higher education is driven by hegemonic liberal/leftist agendas. In fact, Christianity is more powerful than ever in America—just look at Mike Pence and the influence of evangelicals in the U.S. government—and higher education is increasingly about serving the needs of business, industry, and the military-industrial complex.
But truth is unimportant when the object is stirring up divisiveness. Tell American men they’re threatened: that radical feminists, effete city dwellers, Ivy League elites, and other disreputable elements are out to get them. Then urge “threatened” males to vote for retrograde (fake) tough guys like Trump. It may not be the most subtle tactic, but it works.
In this narrative, masculinity is defined in “can-do,” action-oriented ways. Man as Alpha male, as doer, as fighter, whether in a bad way (as a killer) or in a good way (as a protector). It’s warrior- and empire-friendly. And indeed U.S. foreign policy today is distinctly masculine, with loads of emphasis on domination, on bossing other peoples around, simply because we’re bigger and badder than them.
What’s truly worrisome is not false narratives about masculinity’s decline but the way masculinity is defined so narrowly, in violent and aggressive terms. We forget that macho posturing by America’s “leaders” has created enormous problems. Just think of George W. Bush and all his macho strutting before and during the Iraq War.
America needs fewer calls about putting on “big boy” pants and more emphasis on engaging in negotiation and diplomacy, along with action to end America’s chaotic and unwinnable wars. America is already carrying a big stick. It can afford to speak softly instead of shouting.
Perhaps the profession that requires job security more than any other is teaching, especially college teaching. Tenure traditionally meant that a teacher/professor could be terminated only for moral turpitude (e.g. sexual abuse of students), blatant racism, unfair or unjust grading, gross incompetence, failure to obey basic institutional rules such as not showing up to class on time, or not teaching the subject matter he or she was hired to teach. Nowadays, however, “Just Cause” is often grounds for termination of a tenured faculty member. But “Just Cause” in any work contract is far too flexible an instrument for employers and far too vague for employees who rightly worry about job security.
Job insecurity prior to acquiring tenure and tenure granted with a “Just Cause” basis for termination of employment work to stifle academicians’ free expression of creative ideas, theories, and perspectives in and outside of the classroom. Any psychiatric or psychological clinician knows, or should know, that the threat of losing one’s livelihood produces stress and anxiety. Going to work each day, knowing your job is “contingent,” can become a dreaded and stressful experience.
Not only does academic tenure reduce or eliminate anxiety and stress: It ensures the free expression in the classroom of controversial and unorthodox ideas and pedagogical methods. Colleges, all schools for that matter, should remain faithful to the ultimate purpose of education, to lead students out of darkness—e-ducere in Latin. It therefore should be difficult to dismiss a teacher/professor once that person has acquired tenure.
Alas, much has changed in the groves of academe. “Make America Great Again” has come to mean—long before Trump—make life easier for administrators of educational institutions, especially those who primarily view education as preparation for the world of work. Colleges and universities are top-heavy with administrators. In fact, it’s easier to find employment as an administrator than it is as a full-time faculty member.
Colleges are also becoming increasingly technocratic in their organizational structure. Form is becoming more important than content. The typical teacher/professor is expected to be virtually robotic in his/her performance. (God help a member of a college faculty nowadays who does not know the finer points of PowerPoint or refuses to use technology at all in the classroom.) Scores on multiple-choice faculty evaluations are more valued than what students are learning. The goal (often unstated) of pedagogy is to prepare students for becoming employees who will fit neatly and quietly into niches in the business and corporate world. Professors are subtly urged, sometimes threatened, to become unindicted co-conspirators in what appears to be the ultimate purpose of education in contemporary American society: to produce graduates who will unreflectively accept the status quo.
Today’s system of compromised tenure limits the ability of teachers/professors to encourage students to question and challenge the status quo. At its best, traditional tenure promoted an atmosphere in the classroom where teachers felt free to discuss contemporary political, social, and science/technology issues. Job security encouraged teachers to provide the cognitive tools for what Neil Postman called “crap detecting” (critical thinking) in “Teaching as a Subversive Activity,” the book he co-authored with Charles Weingartner. Education for Postman included the ability to distinguish reality from propaganda—and it often worked. For example, college-educated students were more likely to resist the draft, protest the Vietnam War, and oppose Richard Nixon’s invasion of Cambodia. In short, they questioned authority because they had the tools, mindset, and commitment to do so.
In his 1923 book, “The Goose-Step: A Study of American Education,” Upton Sinclair had this to say regarding colleges and universities: “Suppose I was to tell you that this education machine has been stolen? That a bandit crew have got hold of it and have set it to work, not for your benefit, nor for the benefit of your sons and daughters, but for an end very far from these? That our six hundred thousand young people (supposedly in higher education) are being taught, deliberately and of set purpose, not wisdom but folly, not justice but greed, not freedom but slavery, not love but hate.” Worshiping or conforming to a socio-economic system based on the values and goals of capitalism is the leading obstacle to an education that promotes democratic and humanitarian values, according to Sinclair.
Sinclair further argued that college professors should not “merely have job security” but also should have “collective control of that job.” He insisted that the faculty “must take from the trustees, and from the man they hired, the president, the greater part of their present functions.” Sinclair’s message is telling: It’s undesirable for democracy for administrators to treat professors as employees who are readily dismissible.
“Readily dismissible” is an apt description of adjunct/contingent faculty today. Adjuncts teaching college courses now outnumber full-time tenured faculty. On the adjunct level there is no job security from semester to semester. The academic goosestep is always outside the door.
Teachers on all levels of formal education have vital roles to play in getting all of us to question authority. How can they do that, however, when their jobs can be eliminated by administrators whose first loyalty is often to an establishment that sustains that authority? To challenge hegemonic social systems and structures, teachers and professors need job security. They need tenure. Is that why they’re not getting it?
Richard Sahn, a retired professor of sociology, taught at the collegiate level for four decades.
I know: who cares about the education of our kids as the redacted Mueller Report dominates the airwaves on CNN, MSNBC, and similar cable “news” networks?
I care. I spent fifteen years as a history professor, teaching mostly undergraduates at technically-oriented colleges (the Air Force Academy; the Pennsylvania College of Technology). What I experienced was the slow death of education in America. The decline of the ideal of fostering creative and critical thinking; the abandonment of the notion of developing and challenging young people to participate intelligently and passionately in the American democratic experiment. Instead, education is often a form of social control, or merely a means to an end, purely instrumental rather than inspirational. Zombie education.
Nowadays, education in America is about training for a vocation, at least for some. It’s about learning for the sake of earning, i.e. developing so-called marketable skills that end (one hopes) in a respectable paycheck. At Penn College, I was encouraged to meet my students “at their point of need.” I was told they were my “customers” and I was their “provider.” Education, in sum, was transactional rather than transformational. Keep students in class (and paying tuition) and pray you can inspire them to see that the humanities are something more than “filler” to their schedules — and their lives.
As a college professor, I was lucky. I taught five classes a semester (a typical teaching load at community colleges), often in two or three subjects. Class sizes averaged 25-30 students, so I got to know some of my students; I had the equivalent of tenure, with good pay and decent benefits, unlike the adjunct professors of today who suffer from low pay and few if any benefits. I liked my students and tried to challenge and inspire them to the best of my ability.
All this is a preface to Belle Chesler’s stunning article at TomDispatch.com, “Making American Schools Less Great Again: A Lesson in Educational Nihilism on a Grand Scale.” A high school visual arts teacher, Chesler writes from the heart about the chronic underfunding of education and how it is constricting democracy in America. Here she talks about the frustrations of classes that are simply too big to teach:
[Class sizes grew so large] I couldn’t remember my students’ names, was unable to keep up with the usual grading and assessments we’re supposed to do, and was overwhelmed by stress and anxiety. Worst of all, I was unable to provide the emotional support I normally try to give my students. I couldn’t listen because there wasn’t time.
On the drive to work, I was paralyzed by dread; on the drive home, cowed by feelings of failure. The experience of that year was demoralizing and humiliating. My love for my students, my passion for the subjects I teach, and ultimately my professional identity were all stripped from me. And what was lost for the students? Quality instruction and adult mentorship, as well as access to vital resources — not to mention a loss of faith in one of America’s supposedly bedrock institutions, the public school…
The truth of the matter is that a society that refuses to adequately invest in the education of its children is refusing to invest in the future. Think of it as nihilism on a grand scale.
Nihilism, indeed. Why believe in anything? Talk about zombie education!
What America is witnessing, she writes, is nothing short of a national tragedy:
Public schools represent one of the bedrock institutions of American democracy. Yet as a society we’ve stood aside as the very institutions that actually made America great were gutted and undermined by short-term thinking, corporate greed, and unconscionable disrespect for our collective future.
The truth is that there is money for education, for schools, for teachers, and for students. We just don’t choose to prioritize education spending and so send a loud-and-clear message to students that education doesn’t truly matter. And when you essentially defund education for more than 40 years, you leave kids with ever less faith in American institutions, which is a genuine tragedy.
Please read all of her article here at TomDispatch.com. And ask yourself, Why are we shortchanging our children’s future? Why are we graduating gormless zombies rather than mindful citizens?
Perhaps Trump does have some relevance to this article after all: “I love the poorly educated,” sayeth Trump. Who says Trump always lies?
About fifteen years ago, I wrote a short history of World War II for an encyclopedia on military history. I was supposed to be paid for it, but apparently the money ran out, though my article and the encyclopedia did appear in 2006. Having not been paid, I still own the rights to my article, so I’m posting it today, hoping it may serve as a brief introduction for a wider audience to a very complex subject. A short bibliography is included at the end.
Dr. William J. Astore
World War II (1939-1945): Calamitous global war that resulted in the death of sixty million people. The war’s onset and course cannot be understood without reference to World War I. While combat in the European theater of operations (ETO) lasted six years, in Asia and the Pacific combat lasted fourteen years, starting with the Japanese invasion of Manchuria in 1931. Unprecedented in scale, World War II witnessed deliberate and systematic killing of innocents. Especially horrific was Germany’s genocidal Endlösung (Final Solution), during which the Nazis attempted to murder all Jewish, Sinti, and Roma peoples, in what later became known as the Holocaust.
Rapid campaigns, such as Germany’s stunning seven-week Blitzkrieg (lightning war) against France, characterized the war’s early years. Ultimately, quick victories gave way to lengthy and punishing campaigns from mid-1942 to 1945. Early and rapid German and Japanese advances proved reversible, although at tremendous cost, as the Soviet Union and the United States geared their economies fully for war. The chief Axis powers (Germany, Japan, and Italy) were ultimately defeated as much by their own strategic blunders and poorly coordinated efforts as by the weight of men and matériel fielded by the “Big Three” Allies (Soviet Union, United States, and Great Britain).
Militant fascist regimes in Italy and Germany and an expansionist military regime in Japan exploited inherent flaws in the Versailles settlement, together with economic and social turmoil made worse by the Great Depression. In Germany, Adolf Hitler dedicated himself to reversing what he termed the Diktat of Versailles through rearmament, remilitarization of the Rhineland, and territorial expansion ostensibly justified by the principle of national self-determination.
Concealing his megalomaniac intent within a cloak of reasoned rhetoric, Hitler persuaded Britain’s Neville Chamberlain and France’s Édouard Daladier that his territorial demands could be appeased. But there was no appeasing Hitler, who sought to subjugate Europe from the Atlantic to the Urals, re-establish an African empire, and ultimately settle accounts with the United States. For Hitler, only a ruthless rooting out of a worldwide “Jewish-Bolshevik conspiracy” would gain the Lebensraum (living space) a supposedly superior Aryan race needed to survive and thrive.
Less ambitious, if equally vainglorious, was Italy’s Benito Mussolini. Italian limitations forced Il Duce to follow Germany. Disparities in readiness made the “Pact of Steel,” forged by these countries in 1939, fundamentally flawed. The Wehrmacht marched to war in 1939, four years before the Italian military was ready (it was still recovering from fighting in Ethiopia and Spain). Yet Mussolini persevered with schemes to dominate the Mediterranean.
Japan considered its war plans to be defensive and preemptive, although in their scope they nearly equaled Hitler’s expansionist ambitions. The Japanese perceived the alignment of the ABCD powers (America, Britain, China, and Dutch East Indies) as targeted directly against them. The ABCD powers, in contrast, saw themselves as deterring an increasingly bellicose and aggressive Japan. As the ABCD powers tightened the economic noose to compel Japan to withdraw from China, Japan concluded it had one of two alternatives: humiliating capitulation or honorable war. Each side saw itself as resisting the unreasonable demands of the other; neither side proved willing to compromise.
Nevertheless, Japan looked for more than a restoration of the status quo. Cloaked in the rhetoric of liberating Asia from Western imperialism, Japanese plans envisioned a “Greater East Asian Co-Prosperity Sphere,” in which Japan would obtain autarky and Chinese, Koreans, and Filipinos would be colonial subjects of the Japanese master race. In their racial component and genocidal logic, made manifest in the Rape of Nanking (1937), Japanese war plans resembled their Nazi equivalents.
European Theater of Operations (ETO), 1939-1941
The years 1939-1941 witnessed astonishing successes by the Wehrmacht. With its eastern border secured by the Molotov-Ribbentrop Non-Aggression Pact, Germany invaded Poland on 1 September 1939. Two days later, Britain and France declared war on Germany. As French forces demonstrated feebly along Germany’s western border, Panzer spearheads supported by Luftwaffe dive-bombers sliced through Poland. Attacked from the east by Soviet forces on 17 September, the Poles had no choice but to surrender.
Turning west, Hitler then attacked and subdued Denmark and Norway in April 1940. By gaining Norway, Germany safeguarded its supply of iron ore from neutral Sweden and acquired ports for the Kriegsmarine and bases for the Luftwaffe to interdict shipping in the North Sea, Arctic, and North Atlantic. Throughout this period, Germany and France engaged in Sitzkrieg or Phony War.
Phony War gave way on 10 May 1940 to a massive German invasion of the Low Countries and France. A feint on the extreme right by Germany’s Army Group B in Belgium drew French and British forces forward, while the main German thrust cut through the hilly and forested Ardennes region between Dinant and Sedan. The German plan worked to perfection since the French strategy was to engage German forces as far as possible from France’s border. The Wehrmacht’s crossing of the Meuse River outpaced France’s ability to react. With their best divisions outflanked, the Franco-British forces retreated to Dunkirk, where the Allies evacuated 335,000 men in Operation Dynamo. The fall of Paris fatally sapped France’s will to resist. The eighty-four-year-old Marshal Philippe Pétain oversaw France’s ignominious surrender, although the French preserved nominal control over their colonies and the rump state of Vichy.
Surprise, a flexible command structure that encouraged boldness and initiative, high morale and strong ideological commitment based on a shared racial and national identity (Volksgemeinschaft), and speed were key ingredients to the Wehrmacht’s success. Intoxicated by victory, the Wehrmacht’s rank-and-file looked on the Führer as the reincarnation of Friedrich Barbarossa. Higher-ranking officers who disagreed were bribed or otherwise silenced.
Hitler next turned to Britain, which under Winston Churchill refused to surrender. During the Battle of Britain the Luftwaffe sought air superiority to facilitate a cross-channel invasion (Operation Sea Lion). This goal was beyond the Luftwaffe’s means, however, especially after Hitler redirected the bombing from airfields to London. By October the Luftwaffe had lost 1887 aircraft and 2662 pilots as opposed to RAF losses of 1023 aircraft and 537 pilots. Temporarily stymied, Hitler ordered plans drawn up for the invasion of the Soviet Union. Stalin’s defeat, Hitler hoped, would compel Churchill to sue for peace.
Hitler’s victories stimulated Japan to conclude, on 27 September 1940, the Tripartite Pact with Germany and Italy. Japan also expanded its war against China while looking avariciously towards U.S., British, Dutch, and French possessions in Southeast Asia and the Pacific. Meanwhile, Mussolini, envious of Hitler’s run of victories, invaded Greece in October. The resulting Italo-Greek conflict ran until April 1941 and exposed the Italian military’s lack of preparedness, unreliable equipment, and incompetent leadership. Italian blunders in North Africa also led to Britain’s first victory on land, in Libya. The arrival of German reinforcements under General Erwin Rommel reversed the tide, however. Rommel’s Afrika Korps drove British and Dominion forces eastwards to Egypt even faster than the latter had driven Italian forces westwards. Yet Rommel lacked sufficient forces to press his advantage. Meanwhile, German paratroopers assaulted Crete in May 1941, incurring heavy losses before taking the island. Events in the Mediterranean and North Africa soon took a backseat to the titanic struggle brewing between Hitler and Stalin.
The Eastern Front, 1941
After rescuing the Italians in Greece and seizing the Balkans to secure his southern flank, Hitler turned to Operation Barbarossa and the invasion of the Soviet Union. Deluded by his previous victories and a racial ideology that viewed Slavs as Untermenschen (sub-humans), Hitler predicted a Soviet collapse within three months. Previous Soviet incompetence in the Russo-Finnish War (1939-40) seemed to support this prediction. The monumental struggle began when Germany and its allies, including Hungary, Slovakia, Rumania, Bulgaria, Italy, and Finland, together with volunteer units from all over Europe, invaded the USSR along a 1300-mile front on 22 June 1941. The resulting death struggle pitted fascist and anti-Bolshevik Europe against Stalin’s Red Army. For Hitler the crusade against Bolshevism was a Vernichtungskrieg (war of annihilation). Under the notorious Commissar Order, the Wehrmacht shot Red Army commissars (political officers) outright. Mobile killing units (Einsatzgruppen) rampaged behind the lines, murdering Jews and other racial and ethnic undesirables.
The first weeks of combat brought elation for the Germans. Nearly 170 Soviet divisions ceased to exist as the Germans encircled vast Soviet armies. Leningrad was surrounded and endured a 900-day siege. But by diverting forces towards the vast breadbasket of the Ukraine and the heavy manufacturing and coal of the Donets Basin, Hitler delayed the march on Moscow for 78 days. By December, sub-zero temperatures, snow, and fresh Soviet divisions halted exhausted German soldiers on the outskirts of Moscow, blunting the final drive on the capital (Operation Typhoon). A Soviet counteroffensive then threw Hitler’s legions back 200 miles, leading him to relieve two field marshals and 35 corps and division commanders. Hitler also dismissed the commander-in-chief of the army, Walter von Brauchitsch, and assumed command himself. His subsequent “stand fast” order saved the Wehrmacht the fate of Napoleon’s army of 1812, but this temporary respite came at the price of half a million casualties from sickness and frostbite.
A crucial Soviet accomplishment was the wholesale evacuation of its military-industrial complex. By November the Soviets disassembled 1500 industrial plants and 1300 military enterprises and shipped them east, along with ten million workers, to prepared sites along the Volga, in the Urals, and in western Siberia. Out of the range of the Luftwaffe, Soviet factories churned out an arsenal of increasingly effective weapons, including 50,000 T-34s, arguably the best tank of the war. Hitler now faced a two-front war of exhaustion, the same strategic dilemma that in World War I had led to the Second Reich’s demise.
Hitler arguably lost the war in December 1941, especially after declaring war on 11 December on the United States, which soon became the “arsenal of democracy” whose Lend-Lease policy shored up a reeling Red Army. Operation Barbarossa, moreover, highlighted a failure of intelligence of colossal proportions as the Wehrmacht fatally underestimated the reserves Stalin could call on. As Franz Halder, chief of the army general staff, noted in his diary, “We reckoned with 200 [Soviet] divisions, but now [in August 1941] we have already identified 360.” As German forces plunged deeper into Soviet territory, they had to defend a wider frontage. A front of 1300 miles nearly doubled to 2500 miles. The vastness, harshness, and primitiveness of Mother Russia attenuated the force of the Panzer spearheads, giving Soviet forces space and time to recover from the initial blows of the German juggernaut. When the Red Army refused to die, Germany was at a loss as to what to do next. Well might the Wehrmacht have heeded the words of the famed military strategist Antoine Jomini: “Russia is a country which it is easy to get into, but very difficult to get out of.”
The Eastern Front, 1942-1945
Soviet strategy was to draw Germany into vast, equipment-draining confrontations. Germany, meanwhile, launched another Blitzkrieg, hoping to precipitate a Soviet collapse. Due to the previous year’s losses, the Wehrmacht in 1942 could attack along only a portion of the front. Hitler chose the southern half, seeking to secure the Volga River and oil fields in the Caucasus. Initial success soon became calamity when Hitler diverted forces to take Stalingrad.
The battle of Stalingrad lasted from August 1942 to February 1943 as the city’s blasted terrain negated German advantages in speed and operational art. As more German units were fed into the grinding street fighting, the Soviets prepared a counteroffensive (Operation Uranus) that targeted the weaker Hungarian, Italian, and Rumanian armies guarding the German flanks. Launched on 19 November, Uranus took the Germans completely by surprise. Encircled by 60 Red Army divisions, the 20 divisions of Germany’s Sixth Army lacked adequate strength to break out. The failure of Erich von Manstein’s relief force to reach Sixth Army condemned it to death. Although Hitler forbade it, the remnants of Sixth Army capitulated on 2 February 1943.
Stalingrad was a monumental moral victory for the Soviets and the first major land defeat for the Wehrmacht. After losing the equivalent of six months’ production at Stalingrad, Hitler belatedly placed the German economy on a wartime footing, but by then it was too late to close an ever-widening production gap. The Wehrmacht bounced back at Kharkov in March 1943, but it was to be its last significant victory. In July Hitler launched Operation Citadel at Kursk, which resulted in a colossal battle involving 1.5 million soldiers and thousands of tanks. Remaining on the defensive, the Red Army allowed the Wehrmacht to expend its offensive power in costly attacks. After fighting the Wehrmacht to a standstill, the Red Army drove it back to the Dnieper.
The dénouement was devastating for Germany. Preceded by a skilful deception campaign, Operation Bagration in Byelorussia in June 1944 led to the collapse of Germany’s Army Group Center. When Hitler ordered German forces to stand fast, 28 German divisions ceased to exist. By 1945, the Wehrmacht could only sacrifice itself in futile attempts to slow the Soviet steamroller. Soviet second-line forces used terror, rape, and wanton pillaging and destruction to avenge Nazi atrocities. Soviet forces had prevailed in the “Great Patriotic War” but at the staggering price of ten million soldiers killed, another 18 million wounded. Soviet civilian deaths exceeded 17 million. The Germans and their allies lost six million killed and another six million wounded. Hitler’s overweening ambition and fatal underestimation of Soviet resources and will led directly to Germany’s destruction.
The Anglo-American Alliance and the ETO, 1942-1945
In 1942 two-thirds of Americans wanted to defeat Japan first, but Franklin Delano Roosevelt and Churchill agreed instead on a “Germany first” policy. Their decision reflected concerns that Germany might defeat the Soviet Union in 1942. That year U.S. Army Chief of Staff George C. Marshall argued for a cross-channel assault, but the British preferred to bomb Germany, invade North Africa, and advance through Italy and the Balkans. This indirect approach reflected British memories of the Western Front in World War I and a desire to secure lines of communication in the Mediterranean to the Suez Canal and ultimately to India. British ideas prevailed because of superior staff preparation and the reality that the Allies had to win the Battle of the Atlantic before assaulting Germany’s Atlantic Wall in France.
Operation Torch in November 1942 saw Anglo-American landings in North Africa, in part to assure Stalin that the United States and Britain remained committed to a second front. Superior numbers were telling as Allied forces drove their Axis counterparts towards Tunisia, although the U.S. setback at Kasserine Pass in February 1943 reflected the learning curve for mass citizen armies. Fortunately for the Allies, Hitler sent additional German units in a foolhardy attempt to hold the remaining territory. With the fall of Tunisia in May 1943 the Axis lost 250,000 troops.
The Allies next invaded Sicily in July but failed to prevent the Wehrmacht’s withdrawal across the Straits of Messina. Nevertheless, the Sicilian Campaign precipitated Mussolini’s fall from power and Italy’s unconditional surrender on 8 September. Forced to occupy Italy, Hitler also rushed 17 divisions to the Balkans and Greece to replace Italian occupation forces. Churchillian rhetoric of a “soft underbelly” in the Italian peninsula soon proved misleading. The Allied advance became a slogging match in terrain that favored German defenders. At Salerno in September, Allied amphibious landings were nearly thrown back into the sea. At Anzio in January 1944, an overly cautious advance forfeited surprise and allowed German forces time to recover. Allied forces finally entered Rome on 4 June 1944 but failed to reach the Po River valley in northern Italy until April 1945.
The Italian campaign became a sideshow as the Allies gathered forces for a concerted cross-channel thrust (Operation Overlord) in 1944. It came in a five-division assault on 6 June at Normandy. Despite heavy casualties at Omaha Beach, the Allies gained a strong foothold in France. Success was due to brilliant Allied deception (Operation Fortitude) in which the Allies convinced Hitler that the main attack was still to come at Pas de Calais and that they had 79 divisions in Britain (they had 52). Germany’s best chance was to drive the Allies into the sea on the first day, but Hitler refused to release reserves. Once ashore in force, and with artificial harbors (Mulberries), Allied numbers and air supremacy took hold. In 80 days the Allies moved two million men, half a million vehicles, and three million tons of equipment and supplies to France. Once the Allies broke out into open country, there was little to slow them except their own shortages of fuel and supplies. After destroying Germany’s Army Group B at Falaise, the Allies liberated Paris on 25 August. Field-Marshal Bernard Montgomery’s attempt in September at vertical envelopment (Operation Market Garden) failed miserably, however, as paratroopers dropped into the midst of Panzer divisions. High hopes that the war might be over in 1944 faded as German resistance stiffened and Allied momentum weakened.
Hitler chose December 1944 to commit his strategic reserve in a high-stakes offensive near the Ardennes. In what became known as the Battle of the Bulge, initial Allied disorder and panic gave way to determined defense at St. Vith and Bastogne. Once the weather cleared, Allied airpower and armor administered the coup de grâce. The following year the Allies pursued a broad-front offensive against Germany proper, with the U.S. First Army seizing a bridge over the Rhine at Remagen in March, followed soon after by George S. Patton’s Third Army crossing the river further south. Anglo-American forces met the Red Army on the Elbe River in April, with Soviet forces being awarded the honor of taking Berlin.
The second front in France was vital to Germany’s defeat. Yet even after D-Day German forces fighting the Red Army exceeded those in France by 210 percent. Indeed, 88 percent of the Wehrmacht’s casualties in World War II came on the Eastern Front. That the U.S. Army got by with just 90 combat divisions was testimony to the fact that the bulk of German and Japanese land forces were tied down fighting Soviet and Chinese armies, respectively. Helping the Allies to husband resources in the ETO was a synergistic Anglo-American alliance, manifested by joint staffs, sharing of intelligence, and (mostly) common goals.
The Air War in Europe
The air forces of all the major combatants, the USAAF and RAF excepted, primarily supported ground operations. U.S. and British air power theory, however, called for concerted strategic bombing campaigns against enemy industry and will. Thus these countries orchestrated a combined bomber offensive (CBO) as a surrogate second front in the air. While the USAAF attempted precision bombing in daylight, RAF Bomber Command employed area bombing by night. The CBO devastated Cologne (1942), Hamburg (1943), and Dresden (1945), but Germany’s will remained unbroken. The CBO succeeded, however, in breaking the back of the Luftwaffe during “Big Week” (February 1944) in a deadly battle of attrition. Eighty-one thousand Allied airmen died in the ETO, with the death rate in RAF Bomber Command alone reaching a mind-numbing 47.5 percent. Hard fought and hard-won, air supremacy proved vital to the success of Allied armies on D-Day and after.
Battle of the Atlantic
Nothing worried Churchill more than the Kriegsmarine’s U-boats (submarines). Surface raiders like the Bismarck or Graf Spee posed a challenge the Royal Navy both understood and embraced with relish. Combating U-boats, however, presented severe difficulties, including weeks of tedious escort duty in horrendous weather. Despite Allied convoys and fast merchantmen, U-boats sank an average of 450,000 tons of shipping each month from March 1941 to December 1942. In March 1943 the Allies lost 627,000 tons, which exceeded the rate of replacement.
Yet only two months later, the tide turned against Germany. Allied successes in reading the Kriegsmarine’s Enigma codes proved vital both in steering convoys away from U-boat “wolf packs” and in directing naval and air units to attack them. Centimetric radar and high-frequency direction finding helped the Allies detect U-boats; B-24 Liberators armed with depth charges closed a dangerous gap in air coverage; and escort groups (including carriers) made it perilous for U-boats to attack, especially in daylight. These elements combined in May 1943 to account for the loss of 41 U-boats, 23 of which were destroyed by air action. Faced with devastating losses of experienced crews, Grand-Admiral Karl Dönitz withdrew his U-boats from the North Atlantic. They never regained the initiative. Germany ultimately lost 510 U-boats while sinking 94 Allied warships and 1900 merchant ships. Because the Kriegsmarine pursued lofty ambitions of building a blue-water navy, however, Germany never could produce enough U-boats to cut Britain’s economic lifeline. Poor resource allocation and strategic mirror imaging ultimately doomed the Kriegsmarine to defeat.
The Rising Sun Ascendant, 1937-1942
By 1938 the Imperial Japanese Army (IJA) had 700,000 soldiers in China. In 1939 the IJA attempted to punish the Soviets for supplying China, only to be defeated at the battle of Khalkhin Gol. After this defeat, and spurred on by the Imperial Japanese Navy (IJN), Japanese leaders increasingly looked southward, especially as British, Dutch, and French possessions became vulnerable when Germany ran rampant in the ETO. Bogged down in an expensive war with China, and facing economic blockade, Japan decided to seize outright the oil, rubber, tin, bauxite, and extensive food resources of the Malay Peninsula, the Dutch East Indies, and Southeast Asia.
After concluding a non-aggression pact with Stalin in April 1941, Japan viewed Britain’s Royal Navy and the U.S. Pacific Fleet as its chief obstacles. To destroy the latter, Japan launched a surprise attack on Pearl Harbor on 7 December 1941, followed by attacks on British and Dutch naval units and invasions of Burma, Malaya, the Philippines, and other island groups using quick-moving, light infantry. Employing islands as unsinkable aircraft carriers, the Japanese hoped to establish a strong defensive perimeter as a shield, with the IJN acting as a mobile strike force or javelin. When the Allies confronted this “shield and javelin” strategy, Japan hoped their losses would prove prohibitive, thereby encouraging them to seek an accommodation that would preserve Japan’s acquisitions.
Japan’s key strategic blunder was that of underestimating the will of the United States, partly due to faulty intelligence that mistakenly stressed American isolationism. Pearl Harbor became for Americans a “date which will live in infamy,” one that permitted neither negotiation nor compromise. Japanese leaders knew they could not compete with U.S. industry (U.S. industrial capacity was nine times that of Japan’s), but they failed to develop feasible plans for ending the war quickly.
Nevertheless, until April 1942 the Japanese enjoyed a string of successes. Pearl Harbor was followed by attacks against the Philippines, where the United States lost half its aircraft on the ground. British attempts to reinforce Singapore led to the sinking of the battlecruiser Repulse and battleship Prince of Wales. At the Battle of the Java Sea in February 1942 the IJN destroyed the Dutch navy. For the Allies, disaster followed disaster. At minimal cost, Japan seized Hong Kong, Malaya, most of Burma, and Singapore. Singapore’s surrender on 15 February was psychologically catastrophic to the British since they had failed at what they believed they did best: mounting a staunch defense. From this shattering blow the British Empire never fully recovered. By May 1942 remnants of the U.S. Army at Bataan and Corregidor surrendered, and the Japanese were in New Guinea. To this point, not one of the IJN’s eleven battleships or ten carriers, nor any of its cruisers, had been sunk or even badly damaged.
Eclipse of the Rising Sun, 1942-1945
The IJN suffered its first setback in May 1942 at the Battle of the Coral Sea, where the USN stopped Japanese preparatory moves to invade Australia. The IJN next moved against Midway Island, hoping to draw out the U.S. Pacific Fleet and destroy it. The Japanese plan, however, was overcomplicated. It included coordination of eight separate forces and a diversionary assault on the Aleutians. Although Admiral Isoroku Yamamoto planned Midway as a battleship fight, the USN was forced instead to rely on carrier strike forces. Japanese indecision and American boldness, enhanced by effective code-breaking (known as MAGIC in the Pacific), led to the loss of four Japanese carriers. Midway was the major turning point in the Pacific theater. After this battle, the USN and IJN were equal in carrier strength, but the United States could build at a much faster rate. From 1942 to 1945 the USN launched 17 fleet carriers and 42 escort carriers, whereas the Japanese launched only four, two of which were sunk on their maiden voyage. Japan also lost its best admiral when U.S. code-breaking led, in April 1943, to the shooting down of Yamamoto’s plane.
The Japanese compounded defeat at Midway by failing to build an adequate merchant marine or to pursue anti-submarine warfare to defend what they had. Constituting less than two percent of USN manpower, American submariners accounted for 55 percent of Japanese losses at sea, virtually cutting off Japan’s supply of oil and reducing imports by 40 percent. By the end of 1944 U.S. submarines had sunk half of Japan’s merchant fleet and two-thirds of its tankers.
Much difficult fighting on land and sea remained. The United States adopted a Twin-Axis strategy designed to give the army and navy equal roles. While General Douglas MacArthur advanced through New Guinea in the southwest Pacific, neutralizing the major Japanese base at Rabaul to prepare for the invasion of the Philippines, Admiral Chester Nimitz island-hopped through the central Pacific. Guadalcanal (Operation Watchtower) in the Solomons turned into a bloody battle of attrition from August 1942 to February 1943 that ultimately favored U.S. forces. Tarawa in the Gilberts (Operation Galvanic) was the first test of the Fleet Marine Concept (FMC) that shortened the logistical tail of the U.S. Pacific Fleet. U.S. landings nearly proved disastrous, however, when Japanese defenders inflicted 42 percent casualties on the invading force. But the USN and Marines learned from their mistakes, and subsequent island operations had high yet sustainable casualty rates.
Battles such as Tarawa highlighted the astonishing viciousness and racism of both sides in the Pacific, with Americans depicting Japanese as “monkeys” or “rats” to be exterminated. Reinforcing the fight-to-the-death nature of warfare was the Japanese warrior code of Bushido, which considered surrender dishonorable. Jungle warfare on isolated islands left little room for maneuver or retreat and bred claustrophobia and desperate last stands. Ruthlessness extended to the U.S. air campaign against Japan that included the firebombing of major cities such as Tokyo, where firestorms killed at least 83,000 Japanese and consumed 270,000 dwellings.
The U.S. invasion of Saipan in June 1944 led to the “Great Marianas Turkey Shoot” in which U.S. pilots shot down 243 of 373 attacking Japanese aircraft while losing only 29 aircraft. Most devastating to Japan was the irreplaceable loss of experienced pilots. To pierce American defenses, Japan employed suicide pilots or Kamikazes at the Battle of Leyte Gulf in October and in subsequent battles. Leyte Gulf in the Philippines was a decisive if close-run victory for the USN, since the IJN missed a golden opportunity to crush Allied landing forces. Costly U.S. campaigns in 1945 led to the capture of Iwo Jima in March and Okinawa in June before the United States dropped atomic bombs on Hiroshima and Nagasaki in August. Together with Soviet entry into the war against Japan, these atomic attacks convinced the Japanese Emperor to surrender, with formal ceremonies being held on the USS Missouri on 2 September 1945.
Japan’s unconditional surrender highlighted what had been a fundamental, and ultimately fatal, schism between the IJA and IJN. Whereas the IJA had focused on the Asian continent to neutralize China and the Soviet Union, the IJN had identified the United States and Britain as its principal enemies. The IJA had been more influential in Japanese politics and dominated Imperial general headquarters. Interservice rivalry led to haphazard coordination and bureaucratic infighting that degraded the Japanese war effort. Like its nominal ally Germany, Japan had essentially engaged in a two-front war of exhaustion against foes possessing superior resources. IJA gains in the China-Burma-India theater had not been sustainable, especially as British, Chinese, and Indian forces learned to counter Japanese infantry tactics under the determined tutelage of William Slim, Orde Wingate, and “Vinegar Joe” Stilwell.
Technology and Medicine
World War II is known as the “physicist’s war” due to the success of the U.S./British/Canadian Manhattan Project that developed atomic bombs, as well as the invention and use of radar. Germany was especially innovative, developing the V-1 cruise missile and V-2 ballistic missile as “vengeance” weapons. While a remarkable technical achievement, the V-2 was ultimately a waste of precious resources. Its circular error probable (CEP) of 20 kilometers and small one-ton warhead made it little more than a deadly nuisance. Germany also developed the Me-262, the world’s first operational jet fighter, but its late deployment in small numbers had little impact on the air war. Less spectacular, but more telling, was the Allied emphasis on fielding large numbers of proven weapons, such as Soviet T-34 and U.S. M-4 Sherman tanks; aircraft such as P-51 long-range escort fighters and Lancaster four-engine bombers; and Higgins boats for amphibious operations.
Penicillin and DDT, both developed by the Allies, were the leading medical developments. Penicillin saved the lives of untold tens of thousands of wounded Allied troops, and DDT vastly reduced casualties due to mosquito-borne diseases in the Pacific. The Germans developed nerve gas but decided against employing it, apparently because they (wrongly) believed the Allies also had it. Unlike the previous world war, chemical weapons were rarely used. Finally, Allied code-breaking efforts such as ULTRA saw the development of primitive computers.
Legacies of the War
World War II saw the emergence of the United States and Soviet Union as superpowers. The resulting Cold War between them created a bi-polar world until the Soviet Union’s collapse in the early 1990s. With the end of the myth of Western superiority came the decline of colonial empires and the independence of countries such as India (1947). The war also resulted in the division of Germany (reunited in 1990) and the occupation and democratization of Japan; the creation of the United Nations and the state of Israel; and the rise of leaders formed in the crucible of war, such as Dwight D. Eisenhower and Charles de Gaulle. A vastly destructive war with tragic consequences, World War II nevertheless saw the demise of Hitler’s Third Reich, a regime based on mass slavery of “inferiors” and the categorical extermination of “undesirables” (Jews, Gypsies, the handicapped and mentally ill, etc.), as well as the overthrow of a Japanese regime that glorified militarism and justified slavery and racial discrimination on a massive scale.
Bartov, Omer. The Eastern Front, 1941-45: German Troops and the Barbarisation of Warfare, New York, 1986.
Churchill, Winston S. The Second World War, 6 vols, New York, 1948-53.
Costello, John. The Pacific War 1941-1945, New York, 1982.
Dear, I.C.B., ed. The Oxford Companion to World War II, New York, 1995.
Dower, John W. War Without Mercy: Race and Power in the Pacific War, New York, 1986.
Gilbert, Martin. The Second World War: A Complete History, New York, 1989.
Glantz, David M. and Jonathan M. House. When Titans Clashed: How the Red Army Stopped Hitler, Lawrence, KS, 1995.
Iriye, Akira. Power and Culture: The Japanese-American War, 1941-1945, Cambridge, MA, 1981.
Jones, R.V. The Wizard War: British Scientific Intelligence 1939-1945, New York, 1978.
Keegan, John. The Second World War, New York, 1990.
Lukacs, John. The Last European War, September 1939–December 1941, New Haven, CT, 1976, 2000.
Miller, Edward. War Plan Orange: The U.S. Strategy to Defeat Japan, 1897-1945, Annapolis, MD, 1991.
Murray, Williamson and Allan R. Millett. A War to be Won: Fighting the Second World War, Cambridge, MA, 2000.
Overy, Richard. Russia’s War: A History of the Soviet War Effort: 1941-1945, New York, 1997.
Overy, Richard. Why the Allies Won, New York, 1995.
Parker, R.A.C. Struggle for Survival: The History of the Second World War, New York, 1990.
Spector, Ronald H. Eagle Against the Sun: The American War with Japan, New York, 1985.
Stoler, Mark A. Allies and Adversaries: The Joint Chiefs of Staff, The Grand Alliance, and U.S. Strategy in World War II, Chapel Hill, NC, 2000.
Taylor, A.J.P. The Origins of the Second World War, New York, 1961.
Thorne, Christopher G. Allies of a Kind: The United States, Britain, and the War Against Japan, 1941-1945, New York, 1978.
Van der Vat, Dan. The Atlantic Campaign: World War II’s Great Struggle at Sea, New York, 1988.
Weinberg, Gerhard L. A World At Arms: A Global History of World War II, Cambridge, 1994.
Willmott, H.P. The Great Crusade: A New Complete History of the Second World War, New York, 1991.
In the 19th century, many people believed in polygenism, and others used the concept of “the races of man,” where by “race” they often meant species. At home, I have a framed copy of the races of man taken from an encyclopedia published in the 1890s. Here’s a photo of it:
Of course, there’s always an assumed hierarchy to the races of man concept. White Europeans are at the top, since it’s they who defined and ordered the hierarchy. Surprise!
In my photo, White Europeans take pride of place in the center, with some swarthy Italians at the top right (I’m half-Italian). Meanwhile, Polynesian (pink flowers in hair) and Indian (from South America) women are shown with bare breasts. “Primitives” are primitive precisely because they’re “immodest” in dress, a convention that allowed publishers to show nudity in illustrations and photos without being accused of pornography. You might call this the “National Geographic” dispensation for nudity.
My college students were often amazed when I told them that science shows that all of us — all humans — came out of Africa. Far too many people today still think of race as both definitive and as a rung on a ladder, and naturally they always place their own “race” on the top rung.
Even more disturbing is the resurgence of racialized (and often racist) thinking in the United States. The idea of the races of man and the “scientific” ordering of the same was debunked a century ago, yet it’s back with a vengeance in much of the U.S.
Naturally, those who promote racialized thinking always put their own perceived race at the top. In that sense, nothing whatsoever has changed since the 19th century and the “races of man” concept.
When I was in CCD and preparing to be confirmed at St. Patrick’s Church in the late 1970s, our teachers tried to teach us kids what “love” is. We were asked to give definitions. As teenagers, we came up with the usual definitions of romantic love, all valentines and holding hands and smooching.
No, our teachers explained, love should be selfless. It’s not about you. Love is about giving without expecting anything in return.
Throughout her long life, Aunt Mary demonstrated that kind of love. She gave to her own mother, caring for her as she aged. She gave to her sister Corrine through her struggles. She gave to her brother Gino. She gave to us all, and she did so with generosity and goodness and grace. She gave without expecting anything in return.
So, if my old CCD teachers asked me today for a definition of love, my answer would be a simple one. “Love,” I’d say, “is my Aunt Mary.”
Aunt Mary blessed our lives for 94 years. Let us give thanks to God that she was with us for so long. And let us all learn from her shining example the true meaning of love.
These are not exactly happy times. Americans have fewer safeguards for their jobs, financial well-being, and, ultimately, their very lives. Uncertainty and insecurity have become more prevalent than at any time I can remember. As a consequence, insomnia, depression, and angst seem to be characteristic of an increasing number of people across the country, almost as American as apple pie. Just as being a divorcee in California is nothing to write home about—you’re even considered odd if you have never been divorced—so the sense that something “bad” can happen at any time, without warning, has become commonplace. Sociologists—I am a sociologist—call this condition “anomie,” a concept formulated by one of the founders of sociology, Émile Durkheim. The Trump presidency, it may be argued, exacerbates anomie, since we seem to be moving closer to economic nightmares and possibly nuclear holocaust than in recent decades.
What exactly is anomie? Anomie literally means “without norms.” It’s a psychological condition, according to Durkheim, in which an individual member of a society, group, community, or tribe fails to see any purpose or meaning in his/her own life or in reality in general. Anomie is the psychological equivalent of nihilism. Such a state of mind is often characteristic of adults who are unemployed, unaffiliated with any social organization, or unmarried, who lack family ties, and for whom group and societal norms, values, and beliefs have no stabilizing effect.
The last situation can readily be the outcome of exposure to sociology courses in college, provided the student does not regularly fall asleep in class. I’ve always tried to caution my own students that sociology could be, ironically, dangerous to their mental health because of its emphasis on critical thinking regarding social systems and structures. Overcoming socialization or becoming de-socialized from one’s culture—when one begins to question the value of patriotism, for instance—can be conducive to doubt and cynicism, which may give rise to anomie. Of course, I also emphasize the benefits of the sociological enterprise to the student and to society in general. For example, a sociology major is perhaps less likely to participate voluntarily in wars that only favor special interests and that unnecessarily kill civilians.
Clinical depression is virtually an epidemic in the U.S. these days. Undoubtedly, anomie is a major factor, especially in a culture where meaningful jobs or careers are difficult to obtain. To a great extent, social status constitutes one’s definition of self. In Western societies the answer to the perennial philosophical question, “Who are you?” is one’s name and then one’s job or role in the social structure. Both motherhood and secure jobs or careers are usually antidotes to anomie. Childless women and unemployed or under-employed males are most susceptible to it.
What does postmodernism offer to combat the anomie of modern society and now the Trump era itself? An over-simplification of postmodernism, or the postmodern perspective, is that there is no fixed or certain reality external to the individual. All paradigms and scientific explanations are social constructs, no one model being more valid than another. A good example of the application of the postmodern perspective is the popular lecture-circuit guru Byron Katie. Ms. Katie has attracted thousands of followers by proclaiming that our problems in life stem from our thoughts alone. The clutter of consciousness—thoughts, feelings—can simply be recognized as such during meditation and then dismissed as not really being real. Problems gone.
An analogy here is Dorothy Day, the founder of the Catholic Worker movement, proclaiming in the 1960s that “our problems stem from our acceptance of this filthy, rotten system.” Postmodernists would claim that our problems stem from our acceptance of the Enlightenment paradigm of reality—the materialist world-view, rationality itself. Reality, postmodernists claim, is simply what we think it is. There is no “IS” there. Postmodern philosophers claim that all experiences of a so-called outside world are only a matter of individual consciousness. Nothing is certain except one’s own immediate experience. The German existential philosopher Martin Heidegger contributed significantly to the postmodern perspective with his concept of Dasein, or “there-being.” Dasein bypasses physiology and anatomy by implying that neurological processes are not involved in any act of perception, that what we call “scientific knowledge” is a form of propaganda, that is, what we are culturally conditioned to accept as real. There is no universal right or wrong, good or bad.
The great advantage of adopting the postmodern perspective as a way of overcoming anomie is the legitimacy, or validation, it gives to non-ordinary experiences. If the brain and nervous system are social constructs, then so-called altered states of consciousness such as near-death experiences (NDEs), out-of-body experiences, reincarnation, time travel, spirits, and miraculous healing become plausible. Enlightenment science and rationality are only social constructs, byproducts of manufactured world-views. The "new age" idea that we create our own reality, rather than being immersed in it, has therapeutic value for those suffering from anomie.
Sociologist Peter Berger employs the concept of "plausibility structures" to legitimize (make respectable) views of the "real world" that conflict with the presuppositions of Enlightenment science. Science then becomes "science": a social construct that may or may not have validity even though it is widely accepted as such. A good example is the postmodern practice of deconstructing, or calling into question, empirical science and rational thought itself, disregarding the brain as the source of all perceptions, feelings, desires, and ideas. Postmodernists maintain that only individual consciousness is real; the brain is a social construct that doesn't hold water (no pun intended) as the source of what it means to be human.
The postmodern perspective may work for a while in suppressing anomie and in dealing with the horrors of a hostile or toxic social and political environment. Sooner or later, however, existential reality intervenes. The question is: can postmodernism alleviate physical pain, the death of a loved one, personal injury and illness, the loss of one's home and livelihood? At this point in the evolution of my philosophical reflections, I would argue that postmodernism can reduce or eliminate the depression that inevitably comes from too much anomie, but only temporarily. The postmodern perspective is not up to the task of assuaging the truly catastrophic events in one's life. As much as I would like to believe otherwise, I'm afraid only political and social action can help us when the going really gets rough. That said, I don't recommend sacrificing the teaching of critical thinking about society's values and institutions, even though such thinking is a possible cause of anomie.
Richard Sahn, a professor of sociology in Pennsylvania, is a free-thinker.
Fans of George Orwell’s 1984 will recall Newspeak, the development of a new language that also involved the elimination of certain words and concepts. The method is clearly defined by the character of Syme in Orwell’s book:
“You think … our chief job is inventing new words. But not a bit of it! We’re destroying words—scores of them, hundreds of them, every day. We’re cutting language down to the bone … You don’t grasp the beauty of the destruction of words. Do you know that Newspeak is the only language in the world whose vocabulary gets smaller every year?”
“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thought-crime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten … Every year fewer and fewer words, and the range of consciousness always a little smaller… The Revolution will be complete when the language is perfect…”
Trumpspeak is America’s version of Newspeak. Whatever you choose to call it, the intent is clear: the control of thought through the elimination of certain words and concepts. Today at TomDispatch.com, Karen Greenberg documents the destruction of certain words and concepts within the Trump administration. These words and concepts include refugees, climate change, greenhouse gases, America as a nation of immigrants, and even the notion of science-based evidence.
The suppression or elimination of words and phrases is one big step toward thought control; so too is the parroting of certain pet phrases and concepts, such as "support our troops," "make America great again," or "homeland security." In an article about Alfred Döblin's Berlin Alexanderplatz that appeared in The Nation, Adam Kirsch writes of how Döblin recognized the "sinister" nature of "the colonization of the individual mind by parasitic discourses," the way in which reality itself is "cloaked" by "predigested phrases." As Döblin wrote: "The words come rolling toward you, you need to watch yourself, see that they don't run you over."
And I think something like this is happening in America today. We’re being run over by certain words and concepts, even as other words and concepts related to democracy and cherished freedoms are carefully elided or eliminated.
Of course, Orwell wrote about this as well. "Predigested phrases" are captured by Orwell's concept of duckspeak, in which people just quack like ducks when they speak, echoing the sounds fed to them by Party operatives. Quacking like a duck requires no thought, which is precisely the intent.
Pay attention, America, to the words you’re losing before they’re gone forever; and also to the words you’re using before they run you over.
Mercy has been on my mind since re-watching "The Lord of the Rings" trilogy. There's a nasty little character known as Gollum. Before he was seduced by Sauron's Ring (the One Ring of power), Gollum was known as Smeagol. Twisted and consumed by the Dark Lord's Ring, Smeagol becomes a shadow of himself, eventually forgetting his real name and answering to Gollum, a name derived from the guttural coughs and sounds he makes.
Gollum loses the Ring to Bilbo Baggins, a Hobbit of the Shire. The Ring extends Bilbo's life but begins to twist him as well. As Sauron returns to power in Mordor, he needs only to regain the Ring to defeat the combined might of the peoples of Middle-earth. Bilbo passes the Ring to his much younger cousin Frodo, who, together with a Fellowship drawn from men, elves, dwarves, and hobbits, as well as the wizard Gandalf, journeys toward Mordor to destroy the Ring and vanquish Sauron.
Early in his journey to Mordor, Frodo says he wishes Bilbo had killed Gollum when he’d had the opportunity. (Gollum, drawn by the Ring, is shadowing the Fellowship on its journey.) Gandalf sagely advises Frodo that Gollum may yet play an important role, and that mercy is not a quality to disparage. As the Fellowship is separated and Frodo has to journey to Mordor with only his faithful friend Sam beside him, Gollum soon becomes their indispensable guide, and Frodo begins to pity him. Frodo, by showing Gollum mercy, reawakens the good within him, calling him Smeagol and preventing Sam from hurting him.
But the corrupting power of the Ring overtakes Smeagol again, and Gollum reemerges. Even so, without Gollum's help, Frodo and Sam would never have made it to Mordor and the fires of Mount Doom. On the brink of destroying the Ring, Frodo too becomes consumed by its power, choosing to use it instead of casting it into the fire. Here again, Gollum emerges as an instrumental character. He fights Frodo for the Ring, gains it, but loses his footing and falls into the fires of Mount Doom, destroying himself as well as the Ring and saving Middle-earth.
It was Bilbo and Frodo’s mercy that spared the life of Gollum, setting the stage for Gollum’s actions that ultimately save Frodo and the rest of Middle Earth from Sauron’s dominance. Without Gollum’s help, Frodo and Sam would never have made it to Mount Doom; or, if by some miracle they had, Frodo in donning the Ring would have been ensnared by Sauron’s power and executed by him. If Frodo is the hero of the tale, Gollum is the anti-hero, as indispensable to Middle Earth’s salvation as Frodo and the Fellowship.
Another story about the role of mercy came in one of my favorite “Star Trek” episodes, “Arena.” In this episode, Captain Kirk has to fight a duel with an enemy captain of a lizard-like species known as the Gorn. It’s supposed to be a fight to the death, overseen by a much superior species known as the Metrons. When Kirk succeeds in besting the Gorn captain, however, he refuses to kill the Gorn, saying that perhaps the Gorn had a legitimate reason for attacking a Federation outpost. A Metron spokesperson appears and is impressed by Kirk, saying that he has demonstrated the advanced trait of mercy, something the Metrons hardly suspected “savage” humans were capable of showing.
Perhaps war between the Federation and the Gorn is not inevitable, this episode suggests. Diplomacy may yet resolve a territorial dispute without more blood being shed, all because Kirk had the courage to show mercy to his opponent: an opponent who wouldn’t have shown mercy to him if their fates had been reversed.
Mercy, nowadays, is not in vogue in the USA. America's enemies must always be smitten, preferably killed, in the name of righteous vengeance. Only weak people show mercy, or so our national narrative appears to suggest. But recall the saying that an eye for an eye soon leaves the whole world blind.
The desire for murderous vengeance is making us blind. The cycle of violence continues with no end in sight. Savagery begets more savagery. It's as if we've put on Sauron's Ring and become consumed by it.
Do we have the courage of Bilbo and Frodo Baggins, and even of that man of action, Captain Kirk? Can our toughness be informed by and infused with mercy?
I’ve never gotten excited about or interested in a particular sports team, whether professional or amateur. I don’t care whether a particular team wins or loses and I go out of my way not to watch games on TV or listen to a radio broadcast.
Prior to this year’s Super Bowl game, I listened to people chant, on the phone or in person, “Go Patriots” or “Go Eagles.” Even a Catholic priest at the end of a mass I attended recently couldn’t leave the altar before letting the parishioners know he was a Patriots fan.
Spectator sports have always been a secular religion in most developed countries, albeit one with no promise of salvation, an afterlife, or reincarnation. The most you can really expect from your team is winning a bet on the game. But spectator sports are, ultimately, a distraction with negative consequences for society and for the individual fan, such as leaving fans with no understanding of the actions of political parties.
And because each season of the year has its athletic contests, there is no letup. A fan is deluged all year round with games, as well as with incessant commentary on athletes and the points they score or might score. Athletic contests and players, even at the high school level, are a major topic of conversation, especially among adult males. I view such conversations as not only boring but irrelevant to my own life, to what I would call meaningful concerns.
In fact, I would argue that discussions of spectator sports have no lasting therapeutic value in dealing with the real "slings and arrows of outrageous fortune." Political philosopher Noam Chomsky recently said, probably somewhat sarcastically, that if as much mental energy were expended on solving the social and economic problems of the world as is expended on explaining why a given team wins or loses a game, much socially and politically induced suffering and death could be eliminated.
Eavesdrop on virtually any conversation, especially at World Series, Super Bowl, or NBA playoff time, and you'll hear talk that would make you believe you were in a think tank rivaling the Institute for Advanced Study in Princeton.
Now, as a sociologist, I realize that sports serve an important function in society. That function, of course, is distraction from life's existential problems and dilemmas: death, the loss of loved ones, nuclear war, global warming. And, most assuredly, being a spectator-sports fanatic is a far better alternative than being a drug addict or engaging in anti-social behavior. I also admit spectator sports have a limited psychotherapeutic effect on some people.
My quarrel is with the level of energy spent watching and then discussing sports events. I find even expressing a preference for one team or another disturbing, mainly because I feel there are more worthwhile causes to champion. Agonizing, as fans seem to do, over the prowess of individual players and their team's chances of winning playoffs or championships is a waste of time and energy. Simply put, I cannot empathize in the slightest with the sports fan. In that respect, I suppose I'm a type of sociopath, since sociopaths can't empathize with other human beings in general.
Arguably, spectator sports also contribute to the "us" versus "them" perspective on social life: the belief that life is not interesting or worthwhile unless "us" is always trying to defeat "them," whether "them" be a rival team or a rival country, in other words, anyone who is not "us."
The great former coach of the Green Bay Packers, Vince Lombardi, once proclaimed: "Winning isn't everything; it's the only thing." Could Lombardi's philosophy be applied to our current president, who is also an ardent sports fan? Could Donald Trump's insistence on America becoming "great again," with all the dire consequences for minority groups and the underclass, not to mention the world in general, be the by-product of his obsessive interest in spectator sports? At one time our president wanted to own an NFL team. What does that tell us?
Two psychological processes seem to account for the behavior of the typical sports fan: vicarious identification and reification. Vicarious identification is thinking that one "IS" the team he or she is watching; the team's victory or defeat is his or her victory or defeat. Being able to enjoy plays, movies, and novels entails the same process: for the moment, one is a character in a work of fiction. The ability of consciousness (mind, soul, brain, spirit, if you prefer) to immerse itself in a fictitious story or situation is, for sure, one of the great joys of life. From time to time I've watched certain films or videos multiple times and can still fool myself into thinking I don't know the outcome. Perhaps spectator sports allow male fans in particular to be the macho, alpha male they're not in everyday life, without having to perform in any way. There's no need to resort to violent behavior if one vicariously identifies with a football team or professional wrestlers.
Reification is psychologically treating an abstract concept or mental construct as if it were real, as if it were empirical or tangible reality. Semanticists will say "the word is not the thing," or "the map is not the territory." Nations, states, and cities do not exist as realities (sui generis); they are only abstract concepts, in other words, words. People exist, athletes exist, and games are played, but the sports fan wants his or her "team" to win because the name of the team itself is regarded as if it were a living person or group of people.
It doesn’t matter, usually, who the real life players are or even if there are any real life players. It’s the “team” itself—the word is the thing. I once asked my students who were fans of the Pittsburgh Steelers whether they would still want the Steelers to defeat the Dallas Cowboys if the teams’ executives exchanged players and coaches. The Steelers fans said they would still support or root for the Steelers over the Cowboys. I tried to point out the error in their thinking, that there is no such reality as the “Steelers” or the “Cowboys,” that only players and their coaches exist. No, the Steelers fans would remain Steelers fans and want the team to win because they are “The Steelers.”
Existence precedes essence, say the existentialists. Existence is what is tangibly real, for example, what could physically maim, hurt, or kill. Essence refers to words, ideas, concepts. (The "thoughts and prayers" offered to gun victims, which we hear so much about from our politicians in the wake of shooting violence, are essence.) Scoring a touchdown is existence. The team a fan roots for is essence, in other words, nothing but an idea with no more substance than the number "5." When one regards spectator sports existentially, it becomes difficult to be a fan, although one may still enjoy brilliantly executed plays on the field or in the arena.
My argument here, then, is that the serious spectator-sports fan is likely to be distracted from philosophical, political, aesthetic, and critical thinking and reflection. Now, I have no doubt that one could be a sports fan, even a fanatical one, and also a social activist, an artist, a scholar, a reflective person capable of deep meditation. I just see spectator sports as tending to obstruct or preclude intellectual and aesthetic development in the general population of a given country.
Professional and collegiate athletic events do benefit our economic system by creating all kinds of jobs and careers, and not just for the players. But spectator sports may also stand in the way of fans being exposed to, and contemplating, the vital social and political issues of the times. It is reasonable to ask whether being a serious sports fan erodes participation in the democratic process. Why are most universities known for their teams rather than for what their faculties teach? What's the first thing an American thinks of upon hearing "Ohio State" or "Notre Dame" or "Penn State"? Is it higher learning? Or football?
Richard Sahn teaches sociology at a college in Pennsylvania.