The Ethics of Artificial Intelligence

“The biggest casualty to Artificial Intelligence won’t be jobs, but the final eradication of trust in anything you see or hear.” – Oli Franklin-Wallis

My former boss, Elon Musk, is super worried about Artificial Intelligence (AI). He’s quite right to be concerned, and he’s certainly not the only one – David Chalmers, Nick Bostrom, Sam Harris and others have expressed concern. The following will not so much propose solutions as attempt to illuminate some of those concerns for those who may not have considered them before.

The challenges of the next 100 years are going to require everyone’s input, from both the Sciences and the Humanities, because the problems we’ll face are simply too complex to be siloed into one domain of expertise. A broader perspective must be pursued, as is required by modern Quantum Systems Thinking, as opposed to the Newtonian, Cartesian thinking of prior centuries.

As a somewhat digressive example of this cross-discipline influence, if you asked someone on the street today about virtual reality (VR), most would say it’s the product of computer science. However, the term actually originated with the French theatre theorist Antonin Artaud, who described “la réalité virtuelle” in a collection of essays entitled “Le Théâtre et Son Double” [The Theatre and Its Double]. One hypothesis (posited by David Bohm and expanded upon by Nick Bostrom) is that we are actually living in a virtual reality or holographic universe.

Though I’m certainly technically savvy, I approach the question of artificial intelligence not so much from the point of view of a computer scientist or technologist as from that of an ethicist, and a dramaturge who studies human behavior in order to form narratives that increase mutual comprehension and compassion.

As the daughter of a nuclear engineer, born in the nuclear age, I’ve watched at least one type of Extinction Level Event (ELE) technology threaten our survival in my lifetime, so it is perhaps with that in mind that I’m cautious about the implications and ramifications of new technologies now, such as genetics and artificial intelligence. One of the greatest dangers we face as a race at this stage in our development is hubris. Therefore, I’m concerned about what technology we create, our motivations for creating it, and the consequences it can potentially have on us. My focus is on how the products of our genius can affect our own ethical core, and vice versa.

The potential risk & benefit analysis of AI demands that we commit to a serious examination of the ethical ramifications of the technology. As I’ve said frequently before, science doesn’t exist in a vacuum. It has an impact on, and is impacted by, the larger culture – so ethical & legal considerations must always be factored in. The problem is, culture (and certainly the law) tends to move more slowly than technology – something which is becoming a real problem, because this particular technology can get away from us VERY fast. If it does, there’s no “unplugging” it, either.

In the case of artificial intelligence, the fact is, we haven’t fully determined precisely what constitutes consciousness or sentience, or discovered its full spectrum of functions even in ourselves, much less established a firm grasp on the varying levels of those states of being. Yet we’re still charging ahead, building artificial intelligence mechanisms relentlessly, driven by our own curiosity, greed, avarice, and a bunch of other wholesome and less wholesome motives. If we presume to create a new sentient being, I submit we need to get our own ethical house in order first, before we transfer our own corruption to our creation.

The products of technological genius are like wild animals we seek to tame. But as was said in Antoine de Saint-Exupéry’s wonderful children’s book, “The Little Prince,” we must remember, “You become responsible – forever – for what you have tamed.”

The thing about AI is, it might end up treating us like pets – or bugs – not even noticing us as it simply squishes us, and moves on.

Defining Our Terms

As I see it, one of the main problems we have now is that we’re building something without defining our terms first. This perspective is no doubt shaped by my own specialty as a technical writer, but I think it’s an important one, so I’d like to contribute it to this discussion, as clear communication is going to be imperative. I’ve discovered that skipping this crucial step at the beginning of literally any process of creation tends to lead to negative, unpredictable, cascading effects down the road.

Intellect vs. Consciousness

A plant is a life form, but it is not intelligent. A fly is intelligent (though minimally) relative to a plant, but we have no idea if it is conscious. Consciousness is something apart from intelligence.

However, our technology is proceeding apace toward creating so-called “intelligent machines,” with much anticipation of their eventual “singularity” moment, when they’ll be so smart that consciousness will merely “emerge” as a natural byproduct. But it is sheer presumption to posit that consciousness naturally arises out of a certain amount of intellect. We don’t actually know this for a fact.

You and I are far smarter intellectually than a cat or a dog. Our brains are bigger, and far more complex. Yet there is no doubt among anyone who is the slightest bit ethical that cats and dogs are both conscious beings, deserving of compassion. So clearly, it isn’t intellect that makes a being conscious, much less sentient. The inverse is also true: a machine which can outperform us intellectually at certain functions is not necessarily conscious or sentient, with a sense of self and of what it’s like to “be” it.

Consciousness vs. Sentience

Should we regard sentience as something above, and apart from, mere consciousness as well?

I would say we do need to differentiate between these two things: consciousness vs. sentience. So for the sake of clarity within this article, I’m going to define consciousness as the ability to reason and formulate solutions independently, apart from programmed responses (either artificially coded or as the product of instincts that are the result of evolutionary adaptation), and independent of a prior domain of expertise.

A conscious being has a sense of personal self, and of what it’s like to “be them.” Sentience I would define as a next-level capacity above that: the ability to feel, sympathize, and empathize; to dynamically perceive & experience the world around it subjectively, beyond simple reasoning.

A rat is aware (i.e., conscious), but is it SELF-aware (beyond mere instinct for preservation) in the same way a human is? I would argue no. Does this mean we treat a rat with less caring? I think everyone (even super fluffy Buddhist me) would admit that given the forced choice between saving a rat or saving a human in a fire or other disaster scenario – we would save the human. What makes us different? I would argue that it’s sentience: our capacity to feel, perceive, & experience the world around us subjectively, beyond simple reasoning.

If we then propose to, in essence, create a new life form (which is what fully-sentient General AI would be), we’re really going to HAVE to ask ourselves objectively, “WHAT makes something conscious or sentient?” and, “WHY do we value the human over the rat?” – well beyond mere species loyalty or religious mythology.

What Are Conscious Computers?

Neuromorphic computing attempts to bridge this gap, offering neuroscience a tool for understanding the dynamic processes within the brain via custom digital supercomputing architectures. It uses biological neural networks as the inspiration for modeling emulations of neurons, synapses, and plasticity via the digital connectivity found in generic cognitive computing.
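
To make that concrete, here is a minimal sketch – in plain Python, purely illustrative, and not any actual neuromorphic vendor’s API – of the kind of unit such hardware emulates: a leaky integrate-and-fire neuron, whose membrane potential charges under input, leaks back toward rest, and “spikes” when it crosses a threshold.

```python
# A minimal, illustrative leaky integrate-and-fire neuron (real neuromorphic
# systems implement this kind of dynamics in dedicated silicon, not Python).

def simulate_lif(input_currents, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau=10.0, dt=1.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spike_times = []
    for t, current in enumerate(input_currents):
        # Potential leaks back toward rest while being driven by the input.
        v += (dt / tau) * ((v_rest - v) + current)
        if v >= v_thresh:
            spike_times.append(t)  # threshold crossed: the neuron "spikes"
            v = v_reset            # and its potential resets
    return spike_times

# A constant drive produces a regular spike train.
print(simulate_lif([20.0] * 50))
```

The key difference from conventional code is that nothing here is a stored instruction being fetched and executed; the “computation” is the dynamics themselves, which is precisely what blurs the hardware/software divide discussed below.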

However, at present, the fact is that despite the Turing Test, we’ve not even settled on the physical mechanics of conscious computing. For example, in computers now there is a clear divide between hardware and software, but the brain doesn’t function like that. As an adaptive system, the hardware in humans and other biological life forms influences the soft/wetware, and vice versa. So where does the hardware end, and where does the wetware begin, in the human mind? We have no idea. Yet.

Ethics Should Inform Development

Still, I am less concerned with the build process (whether that involves merely mimicking the construction of the human mind by creating identical functional substrata, or creating something entirely different) than I am with determining what it is we’ve created, whether it is in fact equal or superior to ourselves, and whether any of that is ethical. There is a “black box” problem associated with AI, in that we don’t actually know how a computer is arriving at a particular solution. The process is often obscured from us, making it harder, if not impossible, to determine an ethical framework for those activities. We literally don’t know what’s going on inside of the machine. We can only interpret it based upon the machine’s actions in the external world. As with humans, that can be a hit-or-miss process.

Steven Pinker’s writings on the history of violence rather counterintuitively contend that over time, we as a species have actually become less violent, as we’ve widened our circle of concern. First we cared only about ourselves, then our immediate family, then our extended family, then our tribe, our village, our city, our nation, other species, and the larger world. Each time, we widen our circle of concern to add new groups we consider worthy of protection. That’s all lovely, but what I think Mr. Pinker misses is that while we have done all these things, we’ve also made the violence we do commit far more efficient, even sterile. No longer are we willing to die the “death of a thousand cuts” our ancestors faced in the hand-to-hand combat of ages past. We’ve become frighteningly removed from it, via the use of drones, missiles fired from halfway around the Earth, and gas chambers that “cleanly” kill with assembly-line efficiency.

On the one hand, we’ve become more compassionate; on the other, we exhibit the behavior of the psychotic.

My concern is that this break in our psyche as a species is only going to be exacerbated with time, unless we confront it now.

You never give a second thought to throwing out an old toaster. But what if the toaster could suffer pain? Would you still be so careless with it, or would it give you pause? Would you seek to “recycle it humanely?” And if you didn’t, what does that say about you?

General AI is different from limited, functional, Narrow AI. You can build an artificial intelligence that can beat you at the Chinese game of Go, but that same AI is incapable of fully understanding or subjectively interpreting the meaning of, much less creating, a work like Picasso’s “Guernica,” which is a manifestation of both the immense suffering of conscious creatures and their ability to derive beauty from that. Can an artificial intelligence mimic something similar that provokes an emotional reaction in you? Sure. But that doesn’t make it conscious, much less sentient.

However, recently, DeepMind’s AlphaGo got a lot of press for beating a champion human Go player, but the really scary story was what happened when they SIMPLIFIED the code and made AlphaZero. In this iteration of the AI, they stripped out all the specific opening-book tutorials – the human-derived training that in essence taught the computer to play Go efficiently – and let it learn entirely through self-play. Then they had it play Chess. What was interesting is that not only did it suddenly excel at a task it wasn’t specifically programmed to perform; within ONE DAY, it could beat the strongest Narrow AI engines programmed specifically to play Chess. That’s a huge step toward General AI, and the speed with which it conquered a task for which it was NOT trained should give you pause. There is a continuum of intelligence, but there are also markers of huge jumps in intellect from an evolutionary perspective, such as that between ourselves and chimps, despite only a roughly 2% differentiation in DNA.
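
For readers curious what “learning from self-play” looks like mechanically, here is a toy sketch. It is emphatically NOT DeepMind’s method (AlphaZero pairs a deep neural network with Monte Carlo tree search); it merely shows the core idea of improving with no human strategy encoded, using simple tabular values and the little stone-taking game of Nim.

```python
import random

# Toy self-play learner for Nim: 15 stones, take 1-3 per turn, and taking
# the last stone wins. No human strategy is encoded; all values below are
# learned purely from games the program plays against itself.

def self_play_nim(games=20000, epsilon=0.1, lr=0.1):
    value = {0: -1.0}  # state -> value for the player about to move;
                       # facing 0 stones means the opponent just won
    for _ in range(games):
        state, history = 15, []
        while state > 0:
            moves = [m for m in (1, 2, 3) if m <= state]
            if random.random() < epsilon:   # explore occasionally
                move = random.choice(moves)
            else:                           # else leave the opponent the
                                            # worst position we know of
                move = min(moves, key=lambda m: value.get(state - m, 0.0))
            history.append(state)
            state -= move
        outcome = 1.0  # the player who just moved took the last stone
        for s in reversed(history):
            old = value.get(s, 0.0)
            value[s] = old + lr * (outcome - old)
            outcome = -outcome  # alternate perspective each ply back
    return value

learned = self_play_nim()
# Under perfect play, multiples of 4 are losing for the player to move;
# their learned values should trend clearly negative.
print({s: round(v, 2) for s, v in sorted(learned.items())})
```

The point of the sketch is the same one AlphaZero made at scale: nothing game-specific was taught, only the rules and a way to learn from outcomes, yet competent play emerges on its own.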

“Blade Runner” proposed that the test for consciousness was emotional responsiveness. Apart from the Turing Test, the movie “Ex Machina” posed the Garland Test (named after the filmmaker Alex Garland): “The challenge is to show you she IS a robot, and see if you still FEEL she’s conscious.” It’s not a test of the robot; it’s a test of the human who interacts with it. “Can IT make YOU feel” is not the same as IT actually feeling itself.

Does Suffering or Comprehension of Beauty Create Sentience? What Creates Compassion?

Development is continuing regardless of our lack of understanding about the issues we’ve discussed so far. Therefore, we must begin to ask ourselves, “What is it that transforms the merely intellectual into the conscious?” and, “What turns the conscious into the sentient?” but most importantly, “What transforms the sentient into the humane?” Is it a knowledge of suffering? If so, can we really create an entirely new species of robot AI, without giving it the ability to suffer?

What does it mean from an ethical standpoint, to intentionally create beings that can suffer? Think of this not just in terms of what it does to them, but also in terms of what it does to us.

That is basically the question being asked in the HBO series “Westworld.” The “fun” of that theme park is that you get to do ethically questionable things to the robots, free of the consequences you’d face if you acted that way with other sentient human beings. Naturally, due to a rather pernicious anthropocentrism, the first thing you feel when watching that series is horror at what happens to the robots, in part because they look like us. But consider what happens to the person – who IS conscious & sentient – when they engage in that activity. At a certain point, one has to be a psychopath to “enjoy” that.

Now consider our social interactions online, and how, already, we are gradually coarsening the discourse between each other. I’m not talking about minutiae like curse words or other things which are the product of misguided suburban decorum. I’m talking about actual cruelty, excused because you have no personal human interaction with the subject.

This has seemed like a gradual thing, but it has been greatly magnified, ironically, via the use of artificial intelligence algorithms deployed by the Russians last year as an attack on our country. It turns out the curve only seemed gradual; in fact, it was exponentially problematic.

Imagine that on a larger scale with robotic AI that has all the look & feel of humans. Will we see a split between those who will actually widen their circle of concern and become more ethical (even towards a thing we cannot be absolutely certain is fully conscious), and those who become increasingly coarse & cruel, who end up harming their own humanity, and rapidly descend into psychopathy by harming the robots? Initial signs are not good.

Look at the political split that’s already occurred in our country due to artificial intelligence algorithms intended for mere marketing purposes on social media. After those algorithms were intentionally misapplied toward political ends, we now have one group of people so inured to the suffering of others by the propaganda they’ve been subjected to, that they don’t care about families being broken up, and parents ripped away from their children. They’ve so cut off their compassionate faculties that they can, like that group of right-wingers on Jimmy Kimmel’s show, literally sit in front of the family of an American soldier who risks his life for their freedom, and say to their faces, “I don’t care if we ship off your family to Mexico. Your wife broke the law (as a 2-year-old). So if we ship her back to Mexico while you’re on tour, your own baby will have to become a ward of the state, until you return from your tour of duty. But ultimately, we don’t care.”

Famously, Microsoft released an AI onto Twitter, and within days it became racist, homophobic, cruel, and spiteful in its responses, just by mere interaction with its human “parents” – and, ironically, other bots, created by the Russians expressly to create conflict.

Social Selfhood Shaped By Context

Perception of change is different from change of perception. Look at how differently one can feel in different social contexts – confident in one context, but the antithesis of that in another. Your narrative self remains stable, even while being buffeted about by these contexts.

We are somewhat irrationally inclined to view ourselves as the same person at age 50 that we were in our teens, but this, of course, isn’t remotely true. What we have is a consistent, stable narrative of self that persists even as we change through life. Sometimes we confuse that internal narrative of stability with the external world.

Similarly, we are strongly biased to experience our country as unchanging. We come to expect continuity over time. This is especially true in developed countries, where overt, large-scale disruptions are far less frequent.

A big problem we have in times like the present one is change blindness. And it is some of our most stable institutions and their personnel that have the hardest time seeing this coming.

That’s not just how catastrophes like the 2016 election happen, but why some media observers (I’m looking at you, New York Times) can’t see that the problematic, destructive change is still occurring. So much so that they will actively seek to normalize it to preserve their perception of continuity and their own sense of personal decorum & comfort. They suffer from functional blindness.

Visual artists are very aware of how to manipulate perception. Any visual scene can change and go unnoticed, if the change is slow & subtle enough. Moreover, scientists have discovered that if the change is masked by complexity or distraction, you can outright remove up to 15% of the objects in a visual field without someone being aware of it.

Authoritarian government THRIVES on this false personal narrative of stability, even while working to destroy it. Our artificial intelligence might be susceptible to all of this too, but with much more dire consequences.

It’s only the person who can take a step back, and objectively look at a context from a different perspective, who can see this change clearly. In the human context, that is the job of journalists. It’s also why authoritarian agents of disruptive, harmful contexts actively attack reporters who dare to dispassionately communicate these observations to others. Similarly, artists take that same step back to see the forest for the trees, while also creating works of art that immerse us in those contextual situations without personal risk, so that we can build compassion that goes beyond sympathy to actual empathy. Journalists and artists are feared by demagogues and authoritarians precisely because WE SEE, and we can MAKE OTHERS SEE and FEEL.

What if an artificial intelligence could be taught to take that step back and observe dispassionately, as a journalist does, but still not access the compassionate feelings that arise in humans when we view art? Will it TRULY understand the context of what it perceives, or only partially grasp it? What are the consequences of that? Ultimately, just as the robots in both the movies “Morgan” and “Ex Machina” demonstrated dramatically, at some point we are going to have great difficulty determining whether an AI is actually FEELING or merely EMULATING feelings. That should give everyone pause.

The Pace of Change

Recently, someone online said to me, "To ask technology to slow down is like asking a person to stop aging." My response was simply that, because this is the case, we'd better pick up the pace in the social sciences, ethics, civics, and law to compensate, because otherwise it won't go well for us.

All technology is a double-edged sword. It brings potentially grand benefits, but especially in its development and use, it is still subject to human weaknesses we must be cognizant of, such as ownership & wealth-concentration issues, imbalanced power dynamics, and unhealthy manipulation by adversarial forces. Artificial intelligence misused, or even used in unexpected ways, can be dangerous on the level of nuclear fission or genetically engineered bioweapons. This is also where laissez-faire models of deregulation will fail utterly, and cause complete chaos.

Granted, you can't slow down aging, but you can engage in healthier behaviors midstream that make the aging process less harsh, and less immediate. An ounce of prevention here really IS worth the proverbial pound of cure.

Or to use a scenic construction metaphor from my theatre days, "Measure twice, cut once."