Killer Robots

My money is on the killer robots

W.J. Astore

Killer robots!  How many “Terminator” movies do we have to see before we conclude this is not a good idea?

You guessed it: the U.S. military is at it again.  Awash in cash, it’s investigating killer robots in earnest, striving for ever more “autonomy” for its robots, thereby reducing the need for humans in the loop.  Part of this drive for robotic warfare comes from the Covid-19 pandemic, notes Michael Klare at TomDispatch.com.  America’s tech-heavy approach to warfare puts lots of people in close proximity in confined spaces, whether on ships and submarines or in planes and tanks.  “Social distancing” really isn’t practical even on the largest ships, such as the aircraft carrier Theodore Roosevelt, briefly put out of commission by the pandemic.  So why not build ships that need few or no people?  Why not build autonomous killer robot ships?

Obviously, the Pentagon thinks that movies like “The Terminator” and “The Matrix,” among so many others that warn about humanity’s overreliance on machinery and the possibility the machines themselves might become conscious and turn on their creators, are just that: movies.  Fantasies.  Because technology never has unpredictable results, right?

So, killer robots are on the horizon, making it even easier for the U.S. military to wage war while risking as few troops as possible.  I’m sure once America invests billions and billions in high-tech semi-autonomous or fully autonomous killing machines, we’ll keep them in reserve and use them only as a last resort.  Just like we do with our big bombs.

To read Michael Klare’s piece on killer robots, follow this link.

52 thoughts on “Killer Robots”

  1. In 1970 or maybe 1971 I had an assignment (TDY) to Nellis AFB. I was a geodetic computer and surveyor (1st GSS). We were there to put geodetic positions (latitude, longitude, elevation and azimuths to local target locations) on cinetheodolites used to track bomb drops and missile flights at the bombing range about 50 miles north of Las Vegas.
    A number of things happened there. For example, on our first day, pulling up to the blockhouse from the highway, we saw a large dust cloud and heard a loud chunk/kuwoom about 300 meters out. An F-111 (a swing-wing fighter-bomber based at Nellis) had just dropped a dummy bomb not far from us. Over the comms loudspeaker we heard a sarcastic voice say, “right on, only two and a half miles off target.” We looked at each other and said, “We’re losing in Vietnam.” (troop sarcasm)
    A lot of our work was at night (positions and azimuths using stars as targets). We were told in no uncertain terms to never move so much as a muscle without calling the blockhouse first. They had ongoing operations across the range. Up until then, I had thought of drones as target drones.
    Probably about two or three in the morning, one of our teams (observer and recorder) packed up their equipment and headed in but forgot to notify the blockhouse. Suddenly the blockhouse was yelling at them on the radio, “turn off your lights.” Just in time they heard it and turned off their headlights. Then, WHOOSH KABOOOOOM right across the hood of their pickup truck and into the side of the hill the two were coming around.
    They were doing tests with helicopter drones and missiles following targeting laser lights, which would also follow other lights. Bruce had to see the colonel the next morning to get chewed out for wasting his missile.
    That Bruce had to see the colonel, well, we thought that was hilarious. Of course it would not have been funny had either of them been hurt or killed. That was the first time I realized drones were deploying weapons.

  2. So … will they be lauded as heroes and presented with medals for doing what they will be programmed to do to preserve “our way of life” from the Third World? Or, as it’s likely they will have some kind of human controller/handler, will they both be presented with medals and be praised as “a great team”? The new “America’s best & brightest”?
    The military’s fascination with technology – the sole purpose of which is to kill more non-Americans faster – is boundless.
    In closing, here are two memorable quotes, though you only ever hear/read the last three words of the first:

    “I am tired and sick of war. Its glory is all moonshine. It is only those who have neither fired a shot nor heard the shrieks and groans of the wounded who cry aloud for blood, for vengeance, for desolation. War is hell.”

    “In our country … one class of men makes war and leaves another to fight it out.”

    —William Tecumseh Sherman

  3. Spot on. Disguised as ‘entertainment’, it’s all about killing, police, warfare and end times. I’m sick of it. I want to see an alternative future with hope, a cleaned-up planet, diplomacy, cooperation and collaboration, clean water, sanitation, a rudimentary standard of life, education, music, the arts, wellness . . . a new golden era. We’re better than this extinction model.

      1. Here’s The Formula: get Big Bucks for the technology to wipe a place out, then get Even Bigger Bucks to rebuild the new wasteland which can take God alone knows how long.
        It’s a twist on how the expressways (like the always hot & heavy Dan Ryan) were built in Chicago: a construction company gets the contract which includes maintenance & repair before final acceptance & conclusion of the contract. So, you use sub-spec concrete and other materials, then drive overloaded concrete trucks and other construction vehicles over them – while construction continues – which continually breaks down the sub-spec materials, so the original contract keeps getting extended. They were working on the Dan Ryan Expressway when I was 5 years old. I’m now 66, and they’re still working on the Dan Ryan. The grandkids of the original contractors are probably drawing paychecks on that boondoggle.
        So turn that around a bit … once you’ve devastated a country and decimated its population, how long can it take to rebuild a country and its infrastructure or, more likely, create a “modern” infrastructure where one didn’t previously exist? That used to be part of “exporting democracy.” And that’s money beyond the dreams of avarice, beyond even the dreams of Dick Cheney. Proof positive you can eat your cake and have it, too.
        And have it all policed by robots. Once their destructive capabilities have been displayed, they could simply be placed in strategic locations, sinister, silent, waiting. Like Gort, but without a semblance of Michael Rennie’s compassion and – yes – humanity for guidance.

        1. How well I remember that Cheney & Co. moved into Iraq with (we were told!) a COMPLETE plan for “rebuilding” the country, right down to designs for new postage stamps! Then there was the little matter of $6 billion in freshly printed currency going missing, oops! The Green Zone, with its medieval fortifications, is about the only infrastructure the US carried out….
          Here in bucolic Connecticut, we have the same issue with road construction: it is absolutely endless. I think “planned obsolescence” is definitely in the mix. When they repave, they never fix the road bed properly and the same potholes reappear after a couple of winters….
          As a great fan of “The Day the Earth Stood Still,” I have to come to Gort’s defense. “Sinister”? I think not. Klaatu (played so memorably by Englishman Michael Rennie; Spencer Tracy and Ronald Reagan [!!!] had been considered for the role) has come to Earth to deliver an ultimatum: agree to live in peace with the other nearby inhabited planets, and do not extend your atomic weaponry into space. He’s not interested in Earth’s petty squabbles and has little patience with human stupidity. Klaatu is companion, temporarily in human form, to the robot peacekeeper Gort. The robots are really the ones in charge, tasked with punishing aggression. Gort can wield enough firepower to destroy the whole freaking planet, Klaatu warns. That does not make him “sinister” unless you’re a bad guy out to commit aggression. He is neutral and objective. Klaatu departs Earth (taking Gort along), saying the decision–live in peace or perish–is up to us, the inhabitants.
          One of the things I love about this movie is that, in the thick of McCarthyism, it dared suggest that the Russkies are only part of the equation of nuclear menace to our planet. It sort of puts an equal sign between the US and the USSR; the latter is not specifically named in the dialogue, but it’s unmistakable who’s being referenced. [NOTE: If you’ve read this far, thanks for your patience. And your reward is this WARNING: Do not, repeat do NOT, bother with the truly dreadful 2008 “reimagining” of “The Day the Earth Stood Still,” with Keanu Reeves as Klaatu. Heed me, Earthlings!!]

  4. Thanks, Bill. DoD has been on a quest for autonomous target-ID and trigger-pull capability since at least the early 2000s, and really far earlier when one considers “Fail Safe” ideas like the 1960s “Launch on Warning” system that fortunately was never turned on (as far as I know). The complaints from up the chain down to us systems geeks centered on the non-integrated C2 systems in the Combined Air Operations Centers (CAOCs), the necessity of “sneaker nets” to overcome the lack of machine integration, and the (supposedly) laborious time that CAOC vetting added to the kill chain.

    I think that part of the goal is to have a global bug-zapping capability. Everyone recalls the civ/mil leadership frustration about those pesky mobile SCUDs from Desert Storm, vehicles moving around under the weather in OAF, and those elusive “High Value Targets” from the early days of the Drug War to today. But I also think that part of the goal is to diffuse accountability -> “Wasn’t me, was the autonomous system.”

    “Well, okay, who developed the system?”

    “Oh, it wasn’t the vendor, it was the software.”

    “Who wrote the software?”

    “It was a team, we don’t really know….”

    If anyone is interested, I provided a bit of background on the course of the killer robot concept in DoD in an article last year – https://original.antiwar.com/Dave_Foster/2019/04/09/of-course-the-pentagon-is-pursuing-autonomous-killer-robots/

    Best/Dave

  5. I skimmed the original article on TomDispatch. I can make two brief statements on this topic with great confidence: 1.) Dept. of Perpetual Warfare is, indeed, very interested in and desirous of robot killing machines (research is done under rubric of DARPA); 2.) if you think these devices would ONLY be used on foreign soil, you’re dangerously naive. A crude device on wheels, part of Bomb Squad arsenal, was used to blow up a suspect in cop shootings in Dallas (I think that was the location). Right now, Martial Law is getting a dress rehearsal in Portland, Oregon. As Al Jolson used to say, “You ain’t seen nothin’ yet, folks!”

  6. Oh, and I should add that I have no doubt many private-enterprise ops are eager to get contracts to provide robotic killing machines. Indeed, the gov’t will surely express a preference for the private sector to take the lead, as those folks are usually way ahead of gov’t agencies. As the shareholders in those companies watch their share price soar on the stock market, do you think they’ll give a hoot about how the end product may ultimately be deployed??

    1. Always loved this scene: who cares if the robot-killer works when we have a government contract for 25 years, including spare parts!

  7. As we computer programmers used to say back in the engineering department of the former Hughes Aircraft Company (or “Huge Aircrash,” as my Mom used to call it): “When a man makes a mistake, he makes a mistake. When a machine makes a mistake … makes a mistake … makes a mistake … makes a …”

    Or, as US military doctrine expresses itself in Beginner’s All-purpose Symbolic Instruction Code (BASIC):

    START: GO TO START

    Sometimes, what you don’t tell a machine to do — like stop or do something else — will simply free the machine to go on doing what it just did until it exhausts some input resource like time, money, lives, or all of these.

    One Friday afternoon, I had to write up a simple program to produce a print-out of a small text file that some engineer wanted. No problem. We had this big laser printer over in another building where we had all of our printing done so I just had to write, compile, and execute a few lines of code and someone would deliver the printed document to our office the following Monday morning. Easiest programming assignment I ever received.

    Monday morning came around and we got a delivery of five (or more, I can’t remember just how many) boxes of printout, each page containing one line — the same line — of text (the first line of text in the file). Oops. The delivery guy said that my print job had caused a “paper out” alarm to go off on the big laser printer, the only time he had ever heard of that happening in all the years he worked at Hughes. As it happened (fortunately in my case), an electrical storm Friday afternoon had caused a temporary glitch in the building’s power supply and the lights had flickered a bit, so this must have caused the problem, he guessed. I suspected otherwise and quickly re-checked the code I had (I thought) written:

    Open the file
    Read a line from the file
    Do (While Not End of File)
    Print the line
    Read a line from the file
    End Do
    Close File

    Except I hadn’t written that. What I had written caused me to suspect that my career as a self-taught computer programmer at Hughes just might not flourish:

    Open the file
    Read a line from the file
    Do (While Not End of File)
    Print the line
    End Do

    Since I had left out the command to read another line of text (inside the DO-loop) the program, in effect, told the printing machine to print the first line of text in the file and then just keep on doing that, which the machine did. Noticing this, I quickly edited the program to insert the necessary line, hoping that no one had checked up on my code in the meantime. Thank goodness for afternoon electrical storms and flickering lights.
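
    For the curious, here is the same runaway loop reconstructed in Python (a sketch only: the original wasn’t Python, and the file name here is invented):

    # Buggy version: the read happens once, outside the loop,
    # so the first line gets printed forever.
    f = open("report.txt")
    line = f.readline()
    while line:
        print(line, end="")
        # line = f.readline()  # the forgotten statement; restore it and the bug is gone
    f.close()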

    Sometimes when a man makes a mistake his machines go on replicating that mistake. Sometimes this only results in a few boxes of wasted paper. Sometimes the waste devastates entire countries, economies, populations, societies, cultures and even planetary environmental systems. Men and women need to take much greater care in what they tell themselves and their machines to do (or not do).

    1. Ha. Fun code times. At one spot where I worked, my immediate boss, Ken, would test our entry programs by throwing ANYthing at them. Too often testing means testing only for what you expect to be entered and assuming it is data. It could be an injection designed to be seen as code or just (more likely) the wrong stuff. Ken would throw pretty much anything on a keyboard at it, plus expected attacks. If our code lasted 5 minutes with Ken pushing it, we figured it was (probably) robust.
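
      In modern terms, Ken was fuzz-testing by hand. A rough sketch of the idea in Python (parse_quantity is a made-up stand-in for one of our entry routines):

      import random
      import string

      def parse_quantity(text):
          # Hypothetical entry-field routine under test.
          return int(text.strip())

      for _ in range(1000):
          junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
          try:
              parse_quantity(junk)
          except ValueError:
              pass  # rejected cleanly: that counts as a pass
          # Any other exception, hang, or silent corruption flunks "Ken testing."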

      Which brings me to what I call AS, Artificial Stupidity. A few years ago, I put in several job applications which requested resumes as doc files. My resume is not exactly standard, and there are no formal standards for resume layout, nor any explicit or implicit field syntax, so any parser has to look for a small library of expected text matches. Lord knows I’ve written enough parsers over the years.
      Well, besides programming, I’ve taught a course in animating choreography in 3D for the conservatory (5 years in the mid-2000s [note this date and the next]), shot photos and video for years starting in journalism and specializing in dance the last 20-25 years, and been a reporter, a baker, a massage therapist, waiter, bartender and geodetic/astronomic surveyor in the Air Force (1968-72).
      Obviously the job site used a parser (which seemed to be the same parser for several such sites) and it came up with me as a professor of ballet (remember the dates? before that I was in the Air Force). I went right over to the dance division and told them I was ready to take my position and lead classes, now that a computer said (and a computer is always right) I was a professor of ballet. I even made up a name tag as a professor of ballet. We laughed and they said to step right in and teach (joking, of course). The job site was wrong, but having written parsers I could figure out where it got the tidbits it put together wrongly, scrambling dates and positions and qualifications.
      I had fun with it. The dance division had fun with it. And it is funny. Since then I’ve been sent programmer positions (ok, sensible), photographer (ok) and reporter (ok), as well as jobs ranging from nurse (hunh?) to cardiothoracic surgeon (hunh?) to over-the-road CDL driver (hunh?), even stable boy (hunh?). All the “hunh?”s are ones where I can’t even imagine how their parser(s) got so far afield.
      But behind it, I realized immediately, is a much darker problem. Not funny at all. These AI (AS is more like it) programs make simple mistakes a 5-year-old would catch or not make at all. And they are more and more being put in charge of not only weapon systems but life-controlling actions such as sentencing judgments, credit approvals, or even approval to be a full citizen (China).
      Funny LOL and NOT funny not LOL.

      1. As you no doubt understand utterly: “GIGO” (or Garbage In, Garbage Out). Once I had occasion to do some programming for the Fabrication Facility over in building 607, where mechanics bent and drilled metal into the fixtures that would support the electronics assemblies that other departments would attach to them. The software shop’s most important output: a telephone-book-sized print-out, listing all the many vendors and subcontractors whose parts, services, and materials (and the cost of these) went into the fabrication process. The project managers and engineers considered that print-out their “Bible” and worshipped it accordingly.

        One day a group of these very important persons came into our little office, foaming at the mouth in apoplectic rage, demanding to know how we had managed to royally fuck up their sacred scripture. One project manager threw a copy of the now-desecrated holy book on someone’s desk and screamed at us something like: “What have you done?! This piece of shit print-out isn’t fit to wipe a monkey’s ass — or yours either!” Or words to that effect, only more profane.

        We insisted that none of us programmers had touched the software that produced that print out. Not in quite a long time. Not the least bit satisfied with that truthful answer, our visitors issued instructions (i.e., demands) for us to (1) find out what had happened; (2) fix it, fast; or (3) start looking for employment elsewhere (if we could find someone idiotic enough to hire such morons as us in the first place). Message received and assimilated, we had no idea where to start but decided, logically, that if the software hadn’t changed but the output had, then most likely something had gone wrong with the input data. GIGO.

        Specifically, the input data to our software came from various sections of the company — engineering, manufacturing, business, management — which utilized different computer systems manufactured by different companies: Digital Equipment (DEC), IBM, DataBus, etc. Every project had a Cost Account Number which the various department computers used in common as a sort-key to organize their outputs to us. As it happened, someone in business management decided (for whatever reason) to start using letters in the Cost Account data field as well as the usual numbers. This didn’t matter when dealing with pencil marks on paper that humans can read (at least some of them). But when dealing with the “1”s and “0”s that machines read, . . . well . . .

        Anyway, the various department computers and their software systems went on sorting their output data to us just as before, only not all the computer systems sorted numbers and letters the same way. Most computers used the ASCII binary digit data representation standard, but IBM — wouldn’t you know it — used a proprietary digital data representation called EBCDIC (Extended Binary Coded Decimal Interchange Code). These two digital coding schemes do not sort numbers and letters in the same way, so when the combined inputs to our software mixed together, we got insane garbage of cosmic proportions (at least in our little cosmos).

        The Solution? The computers that used internal ASCII binary code couldn’t do anything to change how they sorted data but the IBM computers could sort data according to the ASCII data format like everyone else. So we asked the programmers of the IBM systems to do that for us. They did. The input data again merged seamlessly. We produced a “New Testament” scripture satisfactory to engineering and management theologians. And we kept our jobs. (At least until Raytheon bought up the company, sold it off in pieces, and sent most of us packing.)
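
        The collation clash is easy to demonstrate today; a sketch in Python, using its built-in EBCDIC codec (cp500), with invented cost account numbers:

        keys = ["1234", "A234", "9ZZZ", "Z111"]  # invented cost account numbers

        ascii_order = sorted(keys)                                    # ASCII: digits sort before letters
        ebcdic_order = sorted(keys, key=lambda k: k.encode("cp500"))  # EBCDIC: letters sort before digits

        print(ascii_order)   # ['1234', '9ZZZ', 'A234', 'Z111']
        print(ebcdic_order)  # ['A234', 'Z111', '1234', '9ZZZ']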

        You can’t always blame the machines for what their human “masters” feed them.

    2. My programming days go back to cathode ray tubes (text screens) and memory so limited that we divided bytes up into individual bits. One of my routines caused the screen to periodically and unpredictably go blank for several seconds. Did no apparent damage, program continued normally afterward, but the blank screen had to be explained. I debugged for months and so did two other guys that worked with me. Big mystery.

      I had written a routine that would measure the length of input and then erase the remaining underlines that depicted the input field. Simple. Take the permitted length, subtract the inputted length, and print that many spaces. Unfortunately, I was assigning the result of the subtraction to an “unsigned integer” variable. You can’t put negative numbers in that type of memory reservation. What the computer does is think it is looking at a very large number: it treats a -1 as a +65,535. So if the user typed in one character MORE than was allowed, the routine would print 65,535 spaces to the screen, which makes the screen full of spaces (blank) and takes about two seconds.

      I was, to say the least, very embarrassed. Making the mistake took a couple of seconds. Finding it took a couple thousand man-hours.
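
      The arithmetic of that bug, sketched in Python (masking with 0xFFFF mimics a 16-bit unsigned variable; the field width is invented):

      FIELD_LEN = 10  # permitted input length
      typed_len = 11  # the user typed one character too many

      remaining = (FIELD_LEN - typed_len) & 0xFFFF  # -1 stored unsigned becomes 65,535
      print(remaining)  # 65535 -> that many spaces get printed, blanking the screen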

      1. In the mid-90s I was programming a utility which converted AutoCAD files to PostScript. I had an error in the way blocks were displayed in one particular client example. (A block is a sort of sub-drawing; for example, a fixture that gets used multiple times can be drawn once, as a block, and then the block inserted where needed.)
        Anyway, I debugged and “fixed” it. But it broke for another client. So, debugging again, I went back and undid my first fix, then looked deeper. It was a sign on a block insertion. Either a plus sign that should have been a minus or vice versa; I don’t remember for sure anymore, but it all came down to a single character. Once I changed the sign (+ or -) correctly, every customer sample worked fine.

        Also, about the same time, I had our new Windows version, written in Visual Basic, ready for the first beta testers. Great, they said: it had all the items they had been requesting. It all worked great. EXCEPT it was about 1/10th the speed of the DOS version in Borland C. That meant they couldn’t use (buy) it.
        So, I retooled and came up with a specialized file read/write API in assembler. It took another two months but it was about 10-15% faster than the C version in DOS. That one we put out.
        It was also the last time I used a super-duper, fastest I could get, fanciest computer just because I was the hot-shot developer. I changed all my development to the slowest, junkiest machine I could expect my customers to be using. I wanted to feel their pain before they had to. Saved me a lot of work and re-work later and kept me on target.

  8. I’d like to make a plea for robo-cops, but not the violent movie kind. Not armed robots, but ones that could behave free of emotion, as human cops cannot. Spock-like, with death not an issue, these robots could be used to walk up to the door of a stopped vehicle, up to the door of a suspect’s house, to approach an armed person or a mentally ill person. There would be no weaponry, so no “I was afraid” incidents, and a robot can take being hit by fists or weapons, incurring only a repair, not a funeral. This is something for the near future. Immediately, I would stop taking vets of the armed forces into police work. We don’t need people trained for combat, to see an enemy and take that enemy out. We recruit too many police from a group trained to do what we don’t want police to do and then wonder why we get such poor results from police work.

    1. Interesting concept, but a “benign peacekeeping robot” is NOT what the Powers That Be are shopping for. Salivating for, indeed.

    2. I will state a differing opinion regarding veterans. I do research in this area and am not aware of any evidence that veterans who become police officers are a particular problem. The veterans I know who have become police officers have been quite committed to public service and making a positive difference in their community.

      My sense is that one issue is lack of training, not just training in de-escalation but all training. Most people do not realize that there is no standard training program for police officers in the US. Furthermore, some departments require only 12 weeks of training. That is very little for someone to be given a gun and a license to kill with little accountability.

      Contrast that to Finland where all police recruits have to complete three years of training, effectively a college degree. De-escalation is part of the curriculum. If we had that kind of training requirement here in the US, we would probably get much different results.

      1. Finland, which I was honored to visit in 2005 (didn’t wander beyond Helsinki, though, as I was there for an international sports event), comes in near the top of opinion polls for overall contentedness of its citizens. There’s a very simple reason the USA won’t invest that much time and money in training LEOs (though police departments don’t turn away college grads): the LIVES of the people against whom lethal force is most likely to be wielded are not VALUED as a Scandinavian country (indeed, any civilized society) would value them. That US Navy veteran who received “the John Lewis treatment”–gassed and beaten–certainly got an answer to the question he said he wanted to pose to the unidentified uniformed thugs now trying to “dominate” (as Trump would say) the streets of Portland, OR. He told the media he simply wanted to ask these people in their combat gear if they thought they were upholding the Constitution to which they presumably swore an oath of defense. I say “presumably” because they are anonymous personnel with zero accountability to the citizenry. I’d call that a bit of a problem.

          1. It is entertainment. Just a bit more than 43 years ago I was driving in to my new gig as a bartender at the Royals Stadium Club, wearing my blue Royals bartender vest and everything. I didn’t know where to park, and the police were directing all the cars, but as an employee I knew I had to park in a particular spot. So I pulled over at the intersection leading to the parking lots to ask the officers where I needed to go. Instead of helping, the front cop grabbed my head and pulled it up into the driver’s side window frame, bouncing me off the top inside of the frame a couple of times, then shoved me back into the car and told me to get, don’t bother them. As I left, the lot of them laughed like crazy. I was so much fun for them. Amusing. Were I black they might have had even more “fun.”
            Years later, my partner told me that when she was a lab supervisor at UMKC, the cops would visit her (as “god’s gift” to women) and would boast of games they played on civilians. I realized then what it was those cops were doing when they bounced my head up into the window frame in spring 1977. What I now call “cop-tainment.”

            1. Those coppers couldn’t take the stress of directing traffic into a baseball game?? Had to “let off a little steam” at your expense? Imagine how quick they’d’ve been with their trigger fingers in a real threatening situation! Nowadays, of course, they say they always feel like their lives are in danger on the job–sure, there’s a kernel of truth to that, but being a LEO is far from the most dangerous occupation for injury/death. In my state, it’s become popular for cops to claim a motorist they’d stopped tried to run them over and they “had to” unleash lethal force. “Funnily enough,” there’s never any video evidence to prove this claim.

      2. JPA: Lack of training may well be a part of the problem with US policing. It alarms me that Israel provides training for American police (for free; it is privately funded) when the behavior of Israeli police and the IDF toward Palestinians (well documented on YouTube) is precisely the thing we don’t want practiced here, but are seeing more and more. Jewish Voice for Peace (JVP) has a campaign against this “training” called Deadly Exchange that I encourage Americans to support. I have written my town’s police chief on this issue.

        1. Yes, this “exchange” of training methods on how to ruthlessly suppress dissent/resistance has been going on for many years. The Trump administration is conducting an experiment in ruthlessness on the streets of Portland, OR right now. MSM are suggesting the presence of these “anonymous” goons is actually ratcheting up the level of resistance. I can only hope this is true, but I hope the major-injury and (pretty well inevitable, under the circumstances) death tolls do not climb too much higher.

        2. Yes. JVP, along with B’Tselem, are two of a mere handful of Jewish organizations standing up for the rights of Palestinians. Here on their site is an account of how the IDF uses teachers’ housing for live training exercises. Should be all too familiar and recognizable, looking as if we are headed to this spot with these un-labeled city invaders:
          https://www.btselem.org/routine_founded_on_violence/20200610_military_repeatedly_commandeers_housing_complex_near_nablus_for_training
          We still don’t know who they are. But with the federals using contractors (mercenaries) to pretend that the government isn’t doing what the government is doing, I wouldn’t be surprised if Erik Prince has a share of this. It sounds like something he was proposing: offering to be a private army hired by the US instead of regular (accountable, supposedly) troops.

          1. Heard a soundbite with the “Acting” Director or maybe Deputy Director (Trump’s musical chairs Cabinet!) of DHS reassuring the public that of course these guys are accountable, they’re working for his department! Neatly sidestepping the issue that they wear no name tags or badge numbers, rendering each individually EXEMPT from accountability for beating someone half to death or spraying irritants directly into their faces at close range. They didn’t invent the latter technique, of course. I remember years ago–the details of the confrontation escape me–cops were filmed spraying CS or similar material directly into faces of demonstrators who were sitting on the floor totally peacefully at some building they’d occupied. Would’ve been difficult for the Forces of Law & Order to claim they were being pelted with rocks or bottles. I think the cops in this old incident were Federal Marshals, come to think of it.

  9. Warbots. Hmmm. Yesterday I tried to upload some new playlists from my Mac to my iPhone. Not only did the process fail, but it also wiped my playlists off my Mac. I have never had that happen before. Apple support couldn’t explain this. I was able to restore my playlists on my Mac from a backup.

    My point is that Apple’s Music app runs perfectly well on millions of computers and iOS devices. It ran well on mine too. Except yesterday when it failed completely and destroyed data in the process.

    If that happens with something as simple as uploading a new playlist from one computer to another, then it will certainly happen with a warbot.

    Except when a warbot makes a mistake, it won’t just destroy data or playlists. You can’t restore lives from a Time Machine backup.

  10. I have smart readers who make telling comments. Thank you!

    For me, only half this blog is what I write. The other half is my readers, especially those willing to share thoughts and experiences that relate to my post, however tangentially.

    Which makes me think of a colleague who, when he evaluated teachers, told me: I can forgive most things, but I cannot forgive a teacher who’s consistently boring.

  11. Most readers of this blog no doubt have heard of Isaac Asimov’s Three Laws of Robotics:

    (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    How quaint. In reality, the world now has new robotic directives conceived and executed by a Nobel Peace Prize-winning “constitutional law” lecturer promoted to President of the United States. No kidding. Perhaps we should call these psychotic permissions:

    Obamanable’s Three Homicidal Absolutions:

    (1) A US robot may murder any human being on the US President’s Tuesday morning kill list — or just any random person who happens to find themselves within the blast radius that defines “militant activities.”

    (2) A robot may murder American Muslim preachers for protesting the wanton murders of Muslims, and may murder, as well, the preacher’s teenage sons and even younger daughters on the grounds that “they should have had a more responsible father.”

    (3) Humans murdered by the US robot’s bombs may — if they object to the manner of their deaths — return from the Afterlife and “posthumously” prove their innocence in an American court.

    Hard to imagine that robots, if left to develop by themselves, wouldn’t develop more humane rules for themselves to follow rather than the insane directives that US presidents and generals have devised for them.

    1. Many thanks for the Asimov reference, Mike. A favorite of mine is Asimov’s Foundation trilogy.

      Of course, humans are still in the loop, so they say, to control robotic drone killings. How long before we get rid of the humans and go “full robot” — or the robots get rid of us?

      Humans in the loop imply humaneness, but we haven’t seen much of that lately. If robots really did follow Asimov’s Three Laws, perhaps we really should step aside. I for one welcome our humane robot overlords! 🙂

      1. I loved how the candidate for president was rumored to be a robot, and there were people upset with the possibility of a robot president. The candidate proved he was not a robot by punching someone who dared him to. Of course that “person” was another robot. The robot candidate won the election and with the robots in charge, as I recall, things got a lot more peaceful.

      2. “Replicants”–essentially robot slaves, with a pretty realistically human outer layer–wishing for independence from their overlords is, of course, at the heart of “Blade Runner.” In “The Day the Earth Stood Still,” which turns 70 years old next year (!), our more advanced galactic neighbors have developed a “race” of robots to patrol the planets and maintain peace. They have the ability to objectively determine who is the aggressor and who the victim and are programmed to ruthlessly suppress the aggressor. I imagine had such an emissary landed on Earth during the American War in Viet Nam, the US would have pleaded until (red, white and) blue in the face that WE were the victims of “Communist aggression”! It should not have taken the robot long to discover that the Vietnamese had NEVER attacked the USA and our national leadership was a pack of lying hyenas! Oops, there goes the White House, vaporized in a flash!

    2. Ah, yes … Isaac’s “Three Laws”: I have occasionally found myself in a discussion with people who believe robots could never be used to harm people or could never have a “systems glitch” that made them dangerous “because of those Laws about robots (!). That’s just Hollywood stuff (!!!).” Any objection on the grounds that “those Laws” are a literary plot device is invariably dismissed out of hand.

      1. Looking at the state of the world today, we must hold all robots innocent. The major malfunctions (a phrase I first encountered in the US Army) are entirely in the heads of humans.

  12. Shia groups were able to hack drones flying over Iraq more than 10 years ago. Granted, they were only able to get at the video feed rather than take actual control of the aircraft – but the intelligence harvesting from the video feed was likely significant.

    What happens when a more sophisticated hacker is able to penetrate a semi- or fully autonomous robotic weapons platform and turn it back against those who deployed it? Anything one group can program, another group can hack.

    We seem to be living in the reality of futures predicted by dystopian movies of the past, where machines displace human values and judgment. Whether it is Terminator 2: Judgment Day, Colossus: The Forbin Project, Blade Runner, the Alien series, or many others, it sure feels as if the majority of humanity will be rendered irrelevant and unneeded.

    1. Good point. Reading this I recall something from Tolkien’s response to those who wondered if The Lord of the Rings was written as an allegory for WWII. He replied that it was not, and that if it had been, the One Ring would have been seized and used against Sauron, who would not have been annihilated but enslaved. Saruman would eventually have made his own Great Ring and challenged the wielder of the One Ring. “In that conflict both sides would have held hobbits in hatred and contempt; they would not have survived long even as slaves.”

    2. How will robots do as CONSUMERS?? Humans must be retained to absorb marketing messages and shop ’til they drop to keep the economy going! Industrial robots, of course, have been in factories for decades now. They just hum away (aside from occasional breakdowns) contentedly, don’t take vacations and don’t try to strike for higher wages. But human consumers are required to help the manufacturer realize the profit from sale of the end product!

      1. Greg, that’s a good point. But with the financialization of most aspects of the economy and the excessive use of synthetic shares (i.e., derivatives), it seems as if the actual economic relationship between producers and consumers has been shredded. Even now, the stock market is held up as the state of the economy and it goes up as millions go unemployed.

        I really hope you’re right.

        1. Oh, certainly, the US “economy” is a web of gossamer, ectoplasmic delusions these days. When this house of cards finally collapses, look out! But nevertheless, it’s still a consumer-driven economy, to the extent of two-thirds of GDP, economists have been telling us for decades. And I think they’re actually right about that. Economics has long been called “the Dismal Science” (Carlyle’s coinage, not Keynes’s), and with manufacturing outsourced to cheaper labor overseas (or over land, in the case of Mexico and Latin America farther south), consumers are about all we’re left with here to spark what life there is economically.

  13. Not even close. There are not going to be any autonomous robots in the foreseeable future, and probably not ever. Read Gary Smith’s “The AI Delusion” or Gary Marcus’s “Rebooting AI.” There are some funny examples of not-so-great “military” applications, but the larger point (and it should convince anyone) is that we’re nowhere near. The only harm is the wasted resources, along with the usual lack of accountability.

    1. Personally, I definitely believe all these hopes being pinned on AI = hype, hype, hype. The damage human activity has done to Mother Nature is NOT a problem AI can “fix.” Of that I’m very certain.

  14. What a fascinating thread!
    Commenting on the original post, and the dangers of automating military applications and placing them under the control of AI, the real question – from long before Mary Shelley’s Frankenstein till years from now, in the future (I hope) – is: what is humanity going to do about its technological creations when, if ever, they develop consciousness? Self-awareness? Individuality? Feelings! What will we do if they become like people?
    Of course, that’s not going to happen any time soon…
    A reassuring thought, given Stephen Hawking, Bill Gates, Elon Musk and other luminaries’ concerns about the ‘existential risk’ posed by AI.
    Or is it? Take a look at this report, Evolving Robots Learn to Lie to Each Other. https://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other/
    In 2008, researchers in Lausanne, Switzerland programmed ‘good’ robots to find and freely share a desirable ‘food’ resource. Within 50 generations their digital offspring had realized that there wasn’t enough to go round, and had learned to conceal their discovery of the ‘food’ by broadcasting false information.
    Their digital ‘genome’ was minute, only 264 bits in size.
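    For flavour, here is a toy version of that selection pressure in Python. It is nothing like the actual Lausanne setup: just a one-bit ‘signal honestly?’ gene with invented payoffs, where broadcasting a find attracts competitors and so costs fitness:

    import random

    POP, GENS, MUTATION = 100, 50, 0.02
    FOOD, SHARE_COST = 1.0, 0.6  # invented payoffs: honest broadcasters share their find

    genes = [True] * POP  # start all-honest, like the 'good' robots above

    for _ in range(GENS):
        fitness = [FOOD - (SHARE_COST if honest else 0.0) for honest in genes]
        genes = random.choices(genes, weights=fitness, k=POP)  # selection
        genes = [g if random.random() > MUTATION else not g for g in genes]  # mutation

    print(f"honest signalers left after {GENS} generations: {sum(genes)}/{POP}")
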
    Well (you’ll probably object) learning to flash the wrong colour light when you find ‘food’ hardly amounts to consciousness. They didn’t evolve into persons.
    Uh, huh… But their behaviour displayed some of the baseline characteristics of sentience: they were motivated, they were selfish (they had a ‘sense of self’) and they modelled their own behaviour and that of their competitors in order to achieve winning strategies. (OK, that’s a huge assumption: their behaviour was probably more akin to inherited instinct (like insect behaviours) than primate social modelling and self-awareness.
    But, can we be sure? Nobody knew then, and nobody knows now, what’s actually going on inside those little AI neural networks – the programmers know how the system works, but they don’t know the actual chain of digital decisions the network makes. So how can we be sure that the little robot isn’t learning to think like us?
    Or, alternatively, how to think like something completely alien?)
    And, to speculate further, how do we know they don’t have ‘feelings’? Contemporary (and classic) thought about emotion recognises motivation as the prime mover of feelings. Humans have an elaborate plumbing system that squirts hormones in response to motivational challenge, and the result of that usually occupies centre stage in our meat-ware mediated awareness of our consciousness. But it’s just a mechanism. Without hormones, a thinking entity will still have motivations (like these little robots) and will ‘feel’ them. Perhaps strongly. Just in a very different way from animals and mammals, and us.
    And that is scary.
    We understand each other. We even (imperfectly) understand psychopathology – as in Hitler, or Idi Amin.
    What about reptiles? Say we’re chatting to a lizard over coffee (this is SF, OK) – can we understand its reptile emotions?
    And what about an intelligence far quicker than ours, and of much greater capacity, that has motivations and feelings that have no biological underpinning at all?
    OK, I know my speculations are quick and dirty – I’m no scientist – but my common-sensometer is red-lining. You make something that is selfish and knows how to lie, and you let it loose in the world. Sure, there are safeguards coded in, but…
    Chernobyl!
    Just one word. One technology.
    Sure, we know that Covid-19 was not a human lab error. Or a deliberate bio-weapon.
    Do we?
    The point is, those who promise safeguards…
    Can’t really promise anything.
    Serious stuff. And, getting back to AI, I suggest it’s already real, and getting realler. We (will) interact with it every second of the day over our computers and phones. And AR glasses, and whatever internet and communications technology the future brings.
    I think Hawking, Gates, Musk, et al. are right to be very, very concerned.

    1. It’s funny: I was just watching “Ex Machina” last night. The moral of that story (and so many others): Our AI will outpace us, or at the very least it will find ways to perpetuate itself, to subvert the purposes of its human creators. Yet we humans keep plugging away, as if we’re in complete control.

    2. I enjoyed your thoughtful and thought-provoking comments. AI in its current nascent phase is being used primarily to market to us junk we don’t really need, and of course to spy on private citizens (and foreign entities viewed as “enemies” of US Global Hegemony). The damage humans have managed to do to our only planet’s environment, an ongoing catastrophe, is a far greater existential threat to human life than the possible evolution of malevolent Machine Intelligence. And it’s my view that any hullabaloo about AI “saving us” from our own foolishness is about as valuable as a “promise” from Donald J. Trump.
