The USA No Longer Sees Freedom and Liberty as Core Strengths

The liberty tree: why are we so intent on chopping it down?

W.J. Astore

In the crusade against Communism, otherwise known as the Cold War, the U.S. saw “freedom” as its core strength.  Our liberties were contrasted with the repression of our chief rival, the USSR.  We drew strength from the idea that our system of government, which empowered people whose individualism was guided by ethics based on shared values, would ultimately prevail over godless centralism and state-enforced conformity.  An important sign of this was our belief in citizen-soldiers rather than warriors, and in a military controlled by democratically elected civilians rather than by dictators and strongmen.

Of course, U.S. foreign policy during the Cold War could be amoral or immoral, and ethics were often shunted aside in the name of Realpolitik.  Even so, morality and ethics were still treated as important.  They weren’t dismissed out of hand.

Fast forward to today.  We no longer see “freedom” as a core U.S. strength.  Instead, too many of us see freedom as a weakness.  In the name of defeating radical Islamic terrorism, we’ve become more repressive, even within the USA itself.  Obedience and conformity are embraced instead of individualism and liberty.  In place of citizen-soldiers, professional warriors are now celebrated and the military is given the lion’s share of federal resources without debate.  Trump, a CEO rather than a statesman, exacerbates this trend as he surrounds himself with generals while promising to obliterate enemies and to revive torture.

In short, we’ve increasingly come to see a core national strength (liberty, individualism, openness to others) as a weakness.  Thus, America’s new crusades no longer have the ethical underpinnings (however fragile they often proved) of the Cold War.  Yes, the Cold War was often unethical, but as Tom Engelhardt notes at TomDispatch.com today, the dirty work was largely covert, i.e. we were in some sense embarrassed by it.  Contrast this to today, where the new ethos is that America needs to go hard, to embrace the dark side, to torture and kill, all done more or less openly and proudly.

Along with this open and proud embrace of the dark side, America has come increasingly to reject science.  During the Cold War, science and democracy advanced together.  Indeed, the superior record of American science vis-à-vis that of the Soviet Union was considered proof of the strength and value of democracy.  Today, that is no longer the case in America.  Science is increasingly questioned; evidence is dismissed as if it’s irrelevant.  “Inconvenient truths” are no longer recognized as inconvenient — they’re simply rejected as untrue.  Consider the astonishing fact that we have a president-elect who’s suggested climate change is a hoax perpetrated by China.

Yesterday, I saw the following comment online, a comment that summed up the new American ethos: “Evidence and facts are for losers.”  After all, President-elect Trump promised America we’d win again.  Let’s not let facts get in the way of “victory.”

That’s what a close-minded crusader says.  That the truth doesn’t matter.  All that matters is belief and faith.  Obey or suffer the consequences.

Where liberty is eroded and scientific evidence is denied, you don’t have democracy.  You have something meaner.  And dumber.  Something like autocracy, kleptocracy, idiocracy.  And tyranny.

The U.S. Military in Science Fiction

W.J. Astore

Two weeks ago, I did an interview with TheoFantastique on the military in science fiction. I’d like to thank John Morehead, the site’s creator, for inviting me to answer a few questions on a subject near and dear to my heart.

TheoFantastique: Bill, thanks for making a little time to respond to a few questions related to the subject matter of your article. What are some general observations you have made about the shift in science fiction film depictions of the American military from the post-World War II period to the present?

Bill Astore: Thanks for inviting me, John. I grew up in the late 1960s and 1970s, in the immediate aftermath of the Vietnam War and Watergate. Films of that era were generally critical of the establishment, including sci-fi films. I fondly recall Planet of the Apes with its anti-nuclear message. Also Soylent Green with its warning about over-population, but even more dire was the way in which the authorities hid from the people the true nature of their new food source. Think also of Capricorn One, hardly a great film, but one which exposed a government conspiracy at the heart of the first manned mission to Mars. And Silent Running with Bruce Dern. The basic message was how humans were destroying planet earth, often due to nuclear war or environmental destruction, or both. Finally, Logan’s Run was a favorite of mine, but again the message was how the government of that world hid from the people the true nature of life outside of the bubble.

I remember seeing Alien in the theater and being blown away by the alien “birth” scene. But again the theme of that film was that you can’t trust the authorities, who wanted the “alien” at any cost, i.e. the crew was expendable. Think of Outland as well with Sean Connery: yet more corruption among the establishment, this time involving drugs and production quotas in space mining. Here the workers were expendable.

I know I’m digressing from your question, but my general point is this: Sci-Fi films (and stories) are generally questioning (or questing, perhaps). They are usually not pro-military or pro-authority. Put differently, for every Starship Troopers there’s a Bill the Galactic Hero as a counterweight.

Think of one of my all-time favorite films, The Day the Earth Stood Still. The military is completely ineffectual in that film. Worse: the military contributes to the problem. Similarly, in the 1950s lots of films were made about the dangers of nuclear war and radiation. The military usually didn’t emerge in a favorable light in those films, if I recall correctly.

I think this began to change with films like Star Wars and Close Encounters of the Third Kind. Star Wars could be read as apolitical (“a long time ago, in a galaxy far, far away”), even if that wasn’t George Lucas’s intent. In Close Encounters, a terrific film that I saw in the theater, the authorities actually know what they’re doing. They greet the alien mothership peacefully, and communicate with music and light instead of guns and nukes. Again, I don’t think Spielberg was making a pro-authority or pro-military film, but I believe he didn’t want to make a political film, a film like The Day the Earth Stood Still.

After these two films, Hollywood embraced space operas and feel-good movies. There were exceptions, of course. One of my favorite movies is Starman with Jeff Bridges. Again, the authorities only want the alien for the powers he brings with him. Think too of The Man Who Fell to Earth and the way in which his life is corrupted by human excess. Doesn’t he get addicted to television?

The movie that really changed it all was Independence Day, a perfect film in the aftermath of Desert Storm (the expulsion of Iraq from Kuwait). Here, of course, the militaries of various countries come together to defeat the aliens, led by an American president who climbs into the cockpit to lead the charge himself. This proved so popular that it’s no surprise George W. Bush tried to replicate the scene in the aftermath of the U.S. invasion of Iraq in 2003 (his infamous landing on an aircraft carrier, followed by his “Mission Accomplished” victory speech).

TheoFantastique: What represents much of the portrayal of the U.S. and its military, and what does this say back to us by way of reflection on American militarism around the world?

Bill Astore: I think many, if not most, Americans now want to see the U.S. military portrayed in a positive light in films. Since the 1980s, and especially since the 1990s, Americans have been told to “support our troops.” After 9/11, ordinary Americans were taught and told we live in a dangerous world filled with “alien” terrorists, and that we had to submit to authority to combat and defeat those “aliens.”

Some recent sci-fi films, I believe, have come to celebrate the military, its weaponry, and its can-do spirit of “warriors.” They’ve played it safe, in other words. In some cases, filmmakers may have curried favor with the Pentagon as a way of securing military cooperation in filming. For example, to secure access to bases, to advanced technologies such as the F-22 and F-35 jet fighters, and so on. It makes their films “sexier” to have such access.

I’m sure some would say, So what? What’s wrong with a summer blockbuster that portrays military action in a favorable light? To that I’d say: reel war is nothing like real war. The best science fiction films — or the memorable ones — inspire us to dream of bettering ourselves as individuals and as a species. And I think the best films still seek to challenge us to be more noble, more benevolent, more compassionate.

TheoFantastique: How do you feel as a retired Air Force officer about current science fiction’s perspective on the U.S. military?

Bill Astore: I have mixed feelings. On the one hand, I’m glad that films are not universally anti-military. On the other hand, I’m upset that many films tend to glorify battle and war. War often looks very sexy and exciting in today’s crop of sci-fi action flicks. We need to remember that war is bloody awful, and that lasers and light sabers would not make it any less awful.

Check out TheoFantastique, a meeting place for myth, imagination, and mystery in pop culture.

The Challenger Shuttle Disaster, Thirty Years Later

The Crew of the Challenger

W.J. Astore

When the Challenger blew up thirty years ago this January, I was a young Air Force lieutenant working an exercise in Cheyenne Mountain Command Center near Colorado Springs, Colorado.  I remember the call coming in to the colonel behind me.  I heard him say something like, “Is this real world?”  In other words, is this really happening, or is it part of the exercise?  The answer at the other end was grim, our exercise was promptly cancelled, and we turned on the TV and watched the explosion.

Our initial speculation that day was that an engine had malfunctioned (the explosion appeared to have occurred when the shuttle’s engines were reaching maximum thrust).  But it turned out the shuttle had a known technical flaw that had not been adequately addressed.  Something similar would happen to the Columbia in 2003: a known technical flaw, inadequately addressed, ended up crippling the shuttle.

When I taught a course on “technology and society” at the collegiate level, I had my students address the non-technical causes of the Challenger and Columbia disasters.  Here is the question I put to them in the course syllabus:

NASA lost two space shuttles: the Challenger in 1986 and the Columbia in 2003.  Tragically, both these accidents were preventable.  Both had clear technical causes.  In 1986, faulty O-rings on the solid rocket boosters allowed hot gas to escape, leading to the explosion of the external fuel tank.  In 2003, insulation foam that detached from the external tank upon liftoff damaged the heat protection on the shuttle’s wing, allowing superheated gases to penetrate the wing and break up the Columbia as it reentered the atmosphere.

Both accidents also highlighted wider issues involving risk management, institutional culture, and control of highly complex machinery.  Before each accident, NASA engineers had warned managers of preexisting dangers.  In the case of the Challenger, it was the risk of launching in low temperatures, as shown by previous data of gas leakage at O-ring seals when the air temperature was below sixty degrees Fahrenheit.  In the case of the Columbia, visual data suggested the shuttle had sustained damage soon after liftoff, a fact that could have been confirmed by cameras and/or a space walk.  In both cases, managers overruled or disregarded the engineers’ concerns, leading to catastrophe.

Question: What do you think were the key non-technical factors that interacted with the technical flaws?  What lessons can we learn from these accidents about controlling complex technical systems?

I wanted my students to focus on issues such as groupthink, on management concerns about cost and schedule and how those might cloud judgment, on the difficulty of managing risk, and on the possibilities of miscommunication among well-intentioned people operating under stress.
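To make the temperature argument concrete, here is a minimal sketch in Python of the kind of comparison the engineers were urging before the launch.  The flight records are hypothetical placeholders, not the actual pre-Challenger O-ring data (the real records appear in the Rogers Commission report); the point is simply that the cold-weather signal becomes visible only when incident-free flights are counted alongside the flights that showed O-ring distress.

# Toy illustration of the engineers' data argument: does O-ring distress
# correlate with cold launch temperatures?  The records below are hypothetical
# placeholders: (launch temperature in degrees F, 1 = O-ring distress on that
# flight, 0 = none).

flights = [
    (53, 1), (57, 1), (58, 1), (63, 1), (66, 0), (67, 0), (67, 0),
    (68, 0), (69, 0), (70, 1), (70, 0), (72, 0), (73, 0), (75, 1),
    (76, 0), (78, 0), (79, 0), (81, 0),
]

THRESHOLD_F = 60  # the temperature the essay cites as the warning line

cold = [distress for temp, distress in flights if temp < THRESHOLD_F]
warm = [distress for temp, distress in flights if temp >= THRESHOLD_F]

print(f"Below {THRESHOLD_F} F: {sum(cold)} of {len(cold)} flights showed O-ring distress")
print(f"At or above {THRESHOLD_F} F: {sum(warm)} of {len(warm)} flights showed O-ring distress")

# A managerial failure identified after the accident was reviewing only the
# flights that had incidents, which hides the temperature signal; including
# the incident-free flights, as above, is what makes the cold-weather risk visible.

The numbers are stand-ins, but the structure of the argument is the one the engineers made and the one managers discounted.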

I ended the lesson with a quote from Richard Feynman, the Nobel Prize-winning scientist who had served on the Challenger board of inquiry after the accident.  Feynman’s honest assessment of the critical flaws in NASA’s scheme of management was shunted to an appendix of the official report.  It’s available in his book, “What Do You Care What Other People Think?”

This is what Feynman had to say:

“For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

It was a devastating conclusion – a much-needed one then, and arguably even more needed today.

Charles Darwin Has Much to Teach Us About War

Charles Darwin

W.J. Astore.  Also at Huff Post.

America’s thinking about military action is impoverished. The U.S. military speaks of precision munitions and surgical strikes, suggesting a process that is controllable and predictable. Experts cite Prussian military theorist Carl von Clausewitz for his axiom that war is a continuation of political discourse with the admixture of violent means. Here, military action is normalized as an extreme form of politics, suggesting again a measure of controllability and predictability.

But what if war is almost entirely imprecise and unpredictable? What if military action and its impacts are often wildly out of line with what the “experts” anticipate? In fact, this is precisely what military history shows, time and time again, to include recent U.S. military actions in Iraq and Afghanistan.

U.S. military action essentially acts like hammer blows that upset the state of nature within the complex ecologies of societies like Iraq and Afghanistan. These blows ripple in unpredictable directions, creating new states of nature that change the ecologies of these societies in fundamental ways. They further generate fault lines that are often contrary to U.S. goals and interests.

Charles Darwin can lend a hand in explaining why this is so. Darwin is best known for his theory of evolution with its idea of “the survival of the fittest,” although Darwin did not use that term when he originally published The Origin of Species in 1859. Indeed, Darwin’s view of evolution was highly complex and multifaceted, as befits a man who studied the natural world in great detail for his entire adult life.

In an earlier, unpublished version of his masterwork, Darwin employed a complex image, known as the “wedge” metaphor, to explain interactions within the natural world that led to species extinction. Here is the way Darwin described “The Struggle for Existence” in his Notebook prior to The Origin of Species:

Nature may be compared to a surface covered with ten‐thousand sharp wedges, many of the same shape & many of different shapes representing different species, all packed closely together & all driven in by incessant blows: the blows being far severer at one time than at another; sometimes a wedge of one form & sometimes another being struck; the one driven deeply in forcing out others; with the jar & shock often transmitted very far to other wedges in many lines of direction: beneath the surface we may suppose that there lies a hard layer, fluctuating in its level, & which may represent the minimum amount of food required by each living being, & which layer will be impenetrable by the sharpest wedge.

In his model of the face of nature, Darwin showcases the interconnectedness of all species, together with the way in which changes to that face (the hammer blows) favor some species (wedges) while forcing out others. The hard layer, which represents the minimum amount of food for all, and which Darwin says cannot be penetrated, suggests an ecology that will continue to sustain life even as some species (wedges) are forced out and die off. The face of nature constantly changes, some species perish, but life itself endures.

How does Darwin’s wedge metaphor apply to military action? Consider, for example, U.S. airstrikes in the Middle East. They are the hammer blows, if you will, to the face of nature in the region. The wedges are various groups/sects/factions/tribes in the region. The U.S. believes its hammer blows will force out “bad” wedges, driving them toward extinction, which will ultimately improve the prospects of “good” wedges, such as so-called moderates in Syria. But what if U.S. blows (airstrikes and other violent military action) are driving radical sects (wedges) more deeply into the face of nature (in this case, the face of politics and society in the Middle East)? What if these radical sects, like Darwin’s driven wedges, are forcing out rival sects that are more moderate? What if the “jar & shock” of these U.S. military hammer blows is being propagated throughout Middle Eastern societies and Islam in ways that are as unpredictable as they are long-lasting?

Darwin’s complex wedge metaphor should make us think more deeply about the results of blows to complex, interconnected, and interdependent systems. Using military strikes in an attempt to destroy “bad” wedges may have the very opposite effect than the one intended. Instead of being destroyed, such wedges (such as the Islamic State) are driven deeper into the ecology of their communities, helping them to thrive, even as they send out vibrations “in many lines of direction” that harden the new ecology of the region against U.S. interests.
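Purely as an illustration of that reasoning about blows to interconnected systems, and not as anyone’s model of actual politics, here is a small Python toy with made-up parameters: repeatedly striking one wedge drives it deeper, while the transmitted shock displaces its neighbors, some of them all the way out.

import random

random.seed(42)  # reproducible toy run

N_WEDGES = 10
depths = [1.0] * N_WEDGES   # how firmly each wedge is driven into the surface
SURVIVAL_DEPTH = 0.2        # below this, a wedge has been "forced out"

def strike(target, force=0.5):
    """Drive one wedge deeper; the jar ripples outward with diminishing force."""
    depths[target] += force
    for offset in range(1, N_WEDGES):
        jolt = force * (0.6 ** offset) * random.uniform(0.5, 1.5)
        for neighbor in (target - offset, target + offset):
            if 0 <= neighbor < N_WEDGES:
                depths[neighbor] -= jolt

# Hammer wedge 4 repeatedly, as if trying to drive one group out by force.
for _ in range(12):
    strike(4)

forced_out = [i for i, depth in enumerate(depths) if depth < SURVIVAL_DEPTH]
print("final depths:", [round(d, 2) for d in depths])
print("struck wedge ends up deepest:", depths.index(max(depths)) == 4)
print("wedges forced out by the transmitted shock:", forced_out)

The specific numbers mean nothing; the point is that the blows aimed at one wedge leave it embedded more deeply than before, while the damage falls on whatever happens to be packed in around it.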

What, then, to make of Darwin’s “hard layer” in his wedge metaphor, which varies in its level but which persists in that no wedge may penetrate it? The “hard layer” represents that which all wedges can’t do without. All species depend on a source of food and energy, a source of sustenance sufficient to support reproduction. Darwin notes that the hard layer fluctuates, and though he doesn’t explicitly state it, those fluctuations must also act much like blows, displacing some wedges while favoring others, with effects that ripple across the face of nature.

Rise or fall, the “hard layer” persists, meaning life on earth persists, even as individual species perish. Darwin explicitly states that no wedge can penetrate the hard layer, but here his metaphor breaks down when we consider humans as a wedge, because humans can and do penetrate that layer. As a species, we do have the capacity to damage, even to destroy, the hard layer of nature upon which all species are dependent. We’re the killer wedge in the wedge metaphor.

Politically speaking, piercing that hard layer in the Middle East would be equivalent to igniting a new Crusade that leads to world war, one involving nuclear weapons or other forms of WMD. Devolution in place of evolution.

Of course, one shouldn’t push any metaphor too far. That said, Darwin’s “wedge” metaphor, in its imagery and subtlety, is more useful in understanding the complexity and unpredictability of military action than analogies that reduce war to exercises in precision surgery or power politics.

William Astore is a retired lieutenant colonel (USAF) and former professor of history who edits the blog The Contrary Perspective.

Technology and the Role of Scientists and Engineers in Society

Earth as seen from orbit by Apollo 11 in 1969

W.J. Astore

Twenty-five years ago, I wrote the following paper for a class in the history of technology.  Back then, chlorofluorocarbons (CFCs), acid rain, and global warming were the issues highlighting the drawbacks of technology.  CFCs were damaging the ozone layer, acid rain was poisoning our lakes and streams and damaging trees, and the buildup of greenhouse gases loomed as a future threat.  The future is now, of course, since we’ve done virtually nothing to address global warming.  If anything, the debate in 1989 was far more sober, since back then there were no “climate change deniers.”

Written at the tail end of the Cold War, my paper from 1989 is colored by the threat of nuclear annihilation, another threat (like acid rain and CFCs) that has abated in the last two decades.  Reason for hope, perhaps?

Yet in those 25 years, technology has only proliferated even as compassion for those less fortunate has declined.  I wrote this paper before there was an Internet and World Wide Web, before cell phones and smart phones became ubiquitous, before we had so much conclusive evidence of the dangers of man-accelerated global warming.  I was attempting to argue that scientists and engineers had an obligation to consider the larger impact of their work, to include the moral implications of their research.

I’ve made one major change to this paper as written 25 years ago.  Back then, I concluded with the idea that an ethics based on Christianity needed to inform the work of scientists and engineers.  Today, this argument seems far too parochial and limiting, so I have removed it.

Technology and the Role of Scientists and Engineers in Modern Society (1989)

What is the proper role of scientists and engineers in modern society?  This question is especially relevant today, as can readily be confirmed by opening the September 1989 special issue of Scientific American entitled “Managing Planet Earth.”  Technology, it seems, has spawned many monsters: chlorofluorocarbons that tear holes in our protective ozone shield, factory smoke that turns our rain acidic, carbon dioxide that threatens to convert our planet into one big greenhouse.  The contributors to Scientific American assert that humanity must regain control over technology before its monsters inflict irreparable damage to the earth.

Defenders of technology, not surprisingly, advance the opposite thesis.  Samuel Florman, an engineer and the author of Blaming Technology, counters that “technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”  In Florman’s opinion, engineers should dedicate themselves to doing works for the good of society, but they should not try to define what is good for society.  Their mission, Florman holds, is to achieve rather than to set society’s goals.

Florman does not exonerate engineers from all responsibility, however.  He asserts that engineers must be guided by their individual consciences, but he also suggests that society should not expect any “special compassion” from its engineers.  In fact he implies that society must resign itself to emotionally detached engineers: “If we accept the single-minded dedication of ballet dancers and other artists,” Florman analogizes, “we should be able to accept, however regretfully, the same characteristic in a number of scientists and engineers.”

But a serious flaw lies at the heart of Florman’s plea for the sanctity of the engineering profession.   He disregards the vastly different societal roles of artists versus scientists and engineers, as well as the serious dangers of a powerful technical elite.  The philosopher Hannah Arendt noted these dangers in the context of atomic experimentation:

     The simple fact that physicists split the atom without any hesitations … although they realized full well the enormous destructive potentialities … demonstrates that the scientist qua scientist does not even care about the survival of the human race on earth or, for that matter, about the survival of the planet itself.

Arendt makes an important point here.  Scientists and engineers sometimes pursue their interests even when they threaten the survival of humanity (or themselves for that matter).  Evidence from the Manhattan Project lends credibility to this argument.  Most scientists who worked on the project were too caught up in the technical challenges of building the atomic bomb to entertain moral qualms about the bomb’s purpose.  Robert R.  Wilson, the leader of the cyclotron group during the Project, observed that he never considered quitting:

We were the heroes of our epic, and there was no turning back.  We were working on a problem to which we were completely committed; there was little time to re-examine our moral position from day to day.

The atomic bomb was the grail for these knights of science; they focused on their pursuit and little else.  Perhaps they believed they could wash their hands clean of the stains of Hiroshima and Nagasaki, for they neither made the decision to drop the bombs nor did they pilot the planes.  Yet they could not deny that it was their expertise that brought humanity to the brink of its own destruction during the Cold War.

So what does our nuclear heritage teach us?  It teaches us that humanity needs a more humane technology and more humane engineers.  In sum, we need a new purpose for technology, one that is inspired by social and humanitarian concerns.

Jules Verne captured the risk of failing to do so.  “If men go on inventing machinery, they’ll end by being swallowed up by their own inventions,” Verne prophesied.  There are still some people, however, who continue to believe that technological advances themselves will eliminate technology’s harms.  Charles F. Kettering, a remarkably inventive General Motors executive and a quintessential company man, captured this idea.  In Paul de Kruif’s words, Kettering felt that

You cannot put the brakes on any discovery … you’ve got to go on with it even if we’re all blown to hell with it.  What you should do is step up the study of human nature, you may even find a chemical, a vitamin, a hormone, a simple pill to take the devil out of human nature….

Here one cannot help but be reminded of Aldous Huxley’s Brave New World, where another automotive engineer, Henry Ford, was god, morality was but a faint memory, and drugs were the panacea for human ills.

Elting Morison, in Men, Machines, and Modern Times (1984), suggests that since technology forces humanity into its categories, humanity has no choice but to create a new culture to accommodate it.  He proposes that a series of small experiments be performed worldwide, with “man as the great criterion” (or, perhaps more accurately, the great guinea pig).  Apparently, a successful experiment will be one in which humans thrive, while an unsuccessful one will be one in which they “break down.”  Rather oddly, Morison believes the military provides us with the paradigm of how to proceed.  In his words:

They [the military] have the nuclear weapon that has fulfilled the exaggerated extreme toward which the system always tends … But for practical purposes they have created around this extreme a whole arsenal of carefully graded instruments of limited destruction – old-fashioned armaments of lesser power and new weapons of modulated nuclear energy.

It’s shocking how Morison waxes nostalgic over those “old-fashioned” weapons, and his addition of “modulation” to atomic bombs makes them seem downright cozy.  As George Orwell observed in his famous 1946 essay entitled Politics and the English Language, “such phraseology is needed if one wants to name things without calling up mental pictures of them.”   Thus cluster bombs that send shrieking hunks of shrapnel through the air, napalm that sears lungs and burns human skin, and atomic artillery shells that annihilate armies (but not cities, we hope) become, for Morison, “modest examples of how to begin to proceed.”

A more pessimistic prospectus for the future of technology is held by Arnold Pacey in The Maze of Ingenuity (1980).  For Pacey, history reveals that technology cannot “easily accommodate the broad aims and the mixture of human and technical factors which a socially-orientated direction of progress in technology … require[s].  Thus the efforts made to encourage a more directly social form of technical progress … have been relatively ineffective.”

Pacey attributes this failure to the dominance of the mechanical world view.  Beginning with Galileo, Pacey maintains, scientists and engineers restricted their own view of the world, blinding themselves to the larger purposes of technology.

Pacey does more than lament, though.  He offers several potential solutions, all of which seem flawed.  He assumes that new, less destructive, technologies are needed to meet human needs, or to ease poverty, yet the world currently has enough resources to end poverty, and present technology could doubtless be used more constructively.  Pacey also unconsciously undermines his argument by citing education and medical care as “examples of how continuous improvement is possible without any large accompanying drain on material resources.”  Unfortunately for Pacey, both education and medical care are currently (and rightly) under siege in this country.  Despite large sums of money spent and countless reform proposals, education remains mediocre, while medical care remains compassionless and costly.

No wonder Pacey despairs.  He half-heartedly mentions other potential balms, e.g. critical science, which pursues “careful, rigorous researches into the relationship between technical innovation, nature and society,” and general systems theory, yet it is unclear from reading Pacey how critical science differs from general systems theory.   In the end, Pacey supplies the reader with little in the way of hope, for he despondently observes that systems theory is corruptible.

In the end, we’re left with today’s dehumanizing technological imperative, as noted by Carlo Cipolla, a noted historian of technology, in this passage:

Each new machine … creates new needs, besides satisfying existing ones, and breeds newer machines.  The new contrivances modify and shape our lives and our thoughts; they affect the arts and philosophy, and they intrude even into our spare time.

To prevent this dominance of the machine, science and technology need to serve social and humanitarian needs more directly.  In “Thinking about Human Extinction,” George Kateb holds that individuals must attach themselves first and foremost to existence.  This attachment “cannot be cultivated by way of a theology that bestows [from the outside] meaning or worth on existence,” and it must be able to withstand “all temptations to go along with policies that may lead to human and natural extinction.”

Existence is justified by a sense of beauty; specifically, Martin Heidegger’s wonderment at the very indefiniteness of existence.  For Kateb, “because there could have been earthly nothingness … one must finally attach oneself to earthly existence, whatever it is, and act to preserve it … [To this end] persons must be schooled in beauty to acquire the disposition to sustain wonder that there is earthly existence rather than none.”  In sum, we must learn to revel in the very fact of humanity’s existence against the longest of cosmic odds.

In a world that grows ever more fragile with each passing day, an appreciation for the fragility of our existence, as well as an abiding compassion for humanity, is exactly what we need from our scientists and engineers.

________________________

Sources in order of citation

Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981).

Hannah Arendt, “A Symposium on Space: Has Man’s Conquest of Space Increased or Diminished his Stature?”, The Great Ideas Today 1963 (Chicago: Encyclopedia Britannica, Inc., 1963).

Robert R. Wilson, “The Scientists who Made the Atom Bomb,” Science, Conflict and Society (San Francisco: W.H. Freeman, 1969).

Jules Verne, Five Weeks in a Balloon (1862), quoted in James R. Newman, “The History and Present State of Science Fiction,” Science, Conflict and Society (San Francisco: W.H. Freeman, 1969).

Paul de Kruif, Life Among the Doctors (New York: Harcourt, Brace, 1949), p. 445, quoted in William Leslie, Boss Kettering (New York: Columbia University Press, 1983).

Elting E. Morison, Men, Machines, and Modern Times (Cambridge, Mass: MIT Press, 1966, 1984).

Arnold Pacey, The Maze of Ingenuity: Ideas and Idealism in the Development of Technology (New York: Holmes/Meier, 1974, 1980).

Carlo M. Cipolla, Clocks and Culture 1300-1700 (New York: W.W. Norton & Co., 1978).

George Kateb, “Thinking about Human Extinction: (I) Nietzsche and Heidegger,” Raritan (Fall 1986), pp. 1-28.

Is the Digital World Too Ephemeral?

Give me hardcopy!

W.J. Astore

A concern I have about the new borderless digital world is its ephemeral nature.  Even though I keep a blog and write a lot online, I still prefer books and hardcopy.  I clip newspaper articles.  I file them away and then occasionally resuscitate them and use them in class when I teach.

Hardcopy has a sense of permanence to it.  A certain heft.  Whereas our new digital world, as powerful as it is for instant access and personal customization, seems much more ephemeral to me.

I know similar complaints have been made throughout history.  The proliferation of books was deplored as leading to the decline of visual memory skills.  Television was equated with the end of civilization, with the medium becoming the message.

Perhaps what I’m truly lamenting is the slow decline of context, together with the erosion of deep memory.  The digital world we increasingly inhabit seems to encourage an ephemeral outlook in which history just becomes one damn thing after another.

To switch metaphorical images, the dynamism and flash of the digital world are much like a landscape with lots of beautiful shiny leaves and glistening flowers to attract our attention.

Yet, at least in our minds, the landscape is rootless.  Our gaze is enraptured, our minds are intrigued, but the moment is fleeting, and we fail to act.  We fail to act because we are entertained without being nurtured.

Let’s take smartphones, for example.  With their instant access to data, they seem to make us very smart indeed.  But access to knowledge (data recall) isn’t intelligence.  There’s simply no substitute for deep-seated intellectual curiosity and the desire to learn.

Smartphones are useful tools — a gateway to a dynamic digital world. But they’re not making us any smarter.  Perhaps they’re helping us to connect certain dots a little faster.  But are we connecting them in the right way?  And are they the right dots to connect?

Those are questions that smartphones can’t answer.  Those are questions that require deep, contextual, thinking.  And group discussion. Think Socrates and his followers, debating and discoursing. And acting.

Sometimes it’s best to disconnect from the matrix, find a quiet place for reflection, sink down some roots, and hit the books.  Then find other informed people and bounce your ideas off them.  Collisions of minds in informed discourse.  Competing ideas can feed the completion of actions for the common good.

As the Moody Blues might say, it’s a question of balance. The astral planes of the digital world can open new vistas, but let’s not forget the need to return to earth and get things done.

Peter Medawar’s “The Limits of Science”

Owen Hannaway

W.J. Astore

Note to reader: I wrote this back in 1988 when I was a first-year graduate student in the history of science at Johns Hopkins University.  I took my first graduate seminar with Owen Hannaway, a distinguished professor of early modern science and alchemy.  He asked us to do a book review, and I chose Peter Medawar’s The Limits of Science.  I dedicate this article to the memory of Owen Hannaway (1939-2006), a distinguished scholar and a gallant man.

The Limits of Science is an intentionally short book dealing with topics in the history and philosophy of science. It consists of three different essays written in three different styles, yet it yields a general outlook on science which can be nicely summarized.  Sir Peter sees science as the most successful of man’s enterprises, but he is quick to observe that science has limits, although the growth of science itself is not self-limited.

Medawar first defines science.  Science, he says, is not a mere collection of facts but organized knowledge, knowledge that can be used to predict the behavior of the sensible world.  Medawar is careful to emphasize the difficulty of obtaining scientific knowledge, and the need for confidence based on trust within the scientific community.

Medawar then discusses whether there is such a thing as the scientific method and traces the development of different approaches.  Before the Renaissance, deduction in the form of the Aristotelian syllogism was used to advance science, while intuition and revelation were used to support science.  For philosophers in the Middle Ages, divine revelation guaranteed absolute certainty.  Francis Bacon lit a new path for enlightenment in the late sixteenth and early seventeenth centuries through the use of induction.  Bacon’s new method was the development of general premises through the use of experimentation and the collection of observations.  The frontispiece of Bacon’s Novum Organum summed up the new ideal of Plus Ultra (more beyond): it depicted the pillars of Hercules with a biblical inscription (Daniel 12:4) prophesying the advancement of knowledge.

Medawar next examines deduction and induction and finds them lacking.  The chief difficulty with deduction is that it begs the question; it can only discover something already contained in the major premise, and therefore it is not a way to new knowledge.  By comparison, a major premise arrived at through induction cannot contain more information than the sum of its known instances.  A theory consisting of a legion of facts summarized by an iterative inductive process can thus be overthrown by a solitary contradictory instance.  In sum, a deductive premise merely makes explicit information that is already present in the premise, while an inductive premise is no better than the sum of its parts.  Neither method leads to new knowledge.

Considering these arguments, Medawar sides with the conclusion of Bertrand Russell and Karl Popper that there is no scientific method.  The myth of induction as the method for scientific advancement, developed by John Stuart Mill and Karl Pearson in the nineteenth century, persists today mainly because it agrees best with the public’s conception of science and the scientist’s desire for a positive self-image.

What then is the catalyst for advances in science? Medawar adopts Shelley’s idea of poesis in poetry: creation through the act of imagination.  Scientific hypotheses spring from such flashes of vision, and it is these hypotheses that guide and limit further science.  Medawar clearly rejects the idea that scientific discovery can be premeditated, and he cites the role of luck in discovery.  He carefully qualifies the role of luck by showing how the scientist places himself in a certain mindset amenable to luck through his studies and associations with other scientists.

Medawar’s last essay discusses the limits of science. His fundamental assertion is that science does not yield absolute knowledge, and he quotes Kant as support: “Hypotheses always remain hypotheses, i.e., suppositions to the complete certainty of which we can never attain.” Science’s goal then is not the absolute but the nearest approximation possible; the nearer the approximation, the better its predictive capability.

Continuing the discussion, Medawar observes that there could be either a cognitive inadequacy or a restriction arising out of the nature of the human reasoning process that limits the growth of science, but since any such limitations would be present from conception we would never know of them (just as we could never perceive the Pythagorean celestial music due to its continuous presence in our lives). Are there then limits of science?  Not if science is understood as the art of the soluble.  If something is possible in principle, Medawar states, it can be done if the intention is sufficiently resolute and sustained.

The one limit to science as Medawar sees it is that it cannot answer ultimate questions, e.g. “Does God exist?” Medawar goes on to say he is not indicting science; rather he is recognizing that these questions require transcendent answers, which neither arise from nor require validation by empirical evidence.  He actually takes this argument one step further and asserts these questions have no possible answers. (Medawar recognizes that Immanuel Kant felt the opposite; since somehow man’s nature drives him to ask these questions, Kant felt that answers necessarily exist.)

According to Medawar, the question of whether God exists is outside the realm of science; the leap of faith required for a belief in God is one he himself is unwilling to make.  Although Medawar did not personally believe in transcendent answers, he did feel that these answers had a usefulness measured by the peace of mind they bring people.

I bought this book because as a Roman Catholic I was interested in what a scientist had to say about the limits of science in answering ultimate questions.  Medawar confirmed my suspicions that science can play at best only a subsidiary role with regard to these ultimate questions and the religious beliefs they help spawn.

For anyone looking for an introduction to what science is, how it advances, and what questions it can and cannot answer, Medawar’s book is excellent.  Perhaps the one idea I am always left with after reading this book is that, although science has limits, as long as man retains his ability to create imaginative hypotheses and his inclination to ascertain whether his guesses correspond to reality, there will always be “more beyond” for intrepid explorers in the realm of science.

Professor Hannaway appended the following note at the end of my review:

“What do you think your reaction would have been if you had read a book by a scientist less sympathetic to the claims of religion?  Perhaps you can find one, read it, and then critically assess the arguments of Medawar.”

“Why do you think a famous scientist like Medawar was so concerned by such questions to write about them in this way?  Could you find out something about his life that might explain this?  Try sources like the Times obituary columns, Nature, Notes and Records of the Royal Society.”

That was Owen: always generous with advice, and always trying to spur you to dig deeper, to learn more.

Bonus Anecdote: I’ll never forget this saying of Owen’s: “Scotch is for after dinner.” The last time I saw him in Denver at a conference, I was really pleased to track down a glass of single malt whisky for him.  He was a wonderful man.