
Reverse Engineering the Brain and the ELIZA Effect: Is Believability Ethical?


Pet Society, Tamagotchi, Milo

Over winter break this past year, I went to a conference in Chicago for graduate student and faculty Christians. I found myself having to choose between the Engineering track and the Math track (I went with Engineering). At the conference were some well-known researchers, such as Fred Brooks and Francis Collins. It seemed to me (at least) that this conference would be quite the unique experience (…and I can now say that I've sung hymns with a room full of engineers). I mean, how often do we encounter a large gathering at the intersection of Christians and professors? … I digress; however, within this community of Christian "intellectuals," there were some interesting presentations on non-religious research. In particular, there was a talk titled "Discerning Technology or Hippocratic Engineering."

In his introduction, the speaker uses Spore as an example to demonstrate how we've managed to recreate life within technology, quoting, "SPORE isn't a game for re-educating the intelligent design proponents of the present; it's a game for inspiring the intelligent designers of the future." At such an unusual conference, I gladly found myself at a session where 5 of the first 7 slides were celebrating video games. This leads the speaker into a discussion of "Technology Assessment, an implicit mandate." He asks: should we be creating technologies just because we can? He gives quite a number of interesting cases and scenarios to consider, and concludes with a few under-explained tables and figures ("Base vectors of technological progress" and the "Environmentally Responsible Product Assessment Matrix," for example). Overall, there was one point that remained unsettling for me…

In his discussion of technological progress, he uses the NAE's "Grand Challenges for Engineering" as examples for consideration. These challenges supposedly advance technology and benefit mankind, and all seem straightforwardly good except, the speaker points out, the challenge to "reverse-engineer the brain." He suggests that it might be irresponsible for researchers to work toward making technology human-like. More simply put, the potential bad outweighs the potential good in such a technology, and we shouldn't try things just for the sake of trying them.

I sought clarification on this, because "reverse-engineering the brain" is a bit obscure. Does this involve advancements in human cognition, neuroscience, biotechnology, or the whole area of AI? Surely, he does not think that the whole area of AI could be unethical.

Russell and Norvig divide approaches to AI into four quadrants:

  1. acting humanly (the Turing test approach)
  2. thinking humanly (the cognitive modeling approach)
  3. acting rationally (the rational agent approach)
  4. thinking rationally (the "laws of thought" approach)

On the NAE's page, advancements in medicine and other practical applications are listed under "reverse-engineering the brain." At the very least, these contributions are more likely ethical than not. For clarification, the speaker argues it is not apparent that creating computers to be indistinguishable from human beings in thought (and, potentially, action) is a positive contribution. Like the speaker, Russell and Norvig, although for different reasons, are also less inclined toward human-based AI within computer science:

The study of AI as rational agent design therefore has two advantages. First, it is more general than the "laws of thought" approach, because correct inference is only a useful mechanism for achieving rationality, and not a necessary one. Second, it is more amenable to scientific development than approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still may be far from achieving perfection.

Upon clarification, it sounds like the speaker is skeptical of believable AI but probably okay with other aspects of reverse-engineering the brain. So, how is a person who just celebrated the advancements in video games able to say that believability may not be ethical?

I take away two conclusions from his puzzling stance: (1) the speaker is not familiar with research in game AI, and (2) the speaker does not know enough about the positive applications of believability.

Still, he raises a good point in asking what "Hippocratic engineering" is. In particular, what concerns face believability and its future? Perhaps the speaker was wary of the post-apocalyptic futures prophesied by science fiction, like The Matrix or, more recently, Shane Acker's movie 9. The fear that machines will outdo us and take over may be a bit far-fetched, but there is evidence that our relationships with technology have gone sour in the past.


The most famous example was with a program called ELIZA, named after Eliza Doolittle from George Bernard Shaw's Pygmalion. The Wikipedia article describes the ELIZA effect as:

The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors. In its specific form, the ELIZA effect refers only to “the susceptibility of people to read far more understanding than is warranted into strings of symbols — especially words — strung together by computers”

The concept of "the ELIZA effect" was based on the experience of ELIZA's creator, Joseph Weizenbaum, with his chatterbot's reception. Famously, he is quoted as saying, "I had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
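The program really was that simple. Here's a minimal sketch, in Python, of the kind of keyword-and-template matching ELIZA relied on. The rules and pronoun swaps below are invented for illustration; Weizenbaum's actual DOCTOR script was larger, but it worked on the same principle:

```python
import random
import re

# Illustrative ELIZA-style rules: a pattern that matches the user's input,
# and templates that reflect the captured fragment back as a question.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

# Pronouns are swapped so the reflection reads naturally
# ("my job" becomes "your job").
SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are", "your": "my"}

def reflect(fragment):
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I am worried about my job"))
# e.g. "How long have you been worried about your job?"
```

There is no understanding anywhere in that loop, only string substitution; yet responses like these were enough to convince people that something was listening.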

At some point between the ELIZA effect and total machine domination (inclusively), we are no longer solely using the technology; rather, we are also being used by the technology. To a smaller extent, in this age, some people develop unhealthy dependencies on, or addictions to, our current virtual worlds. So, am I going into an area that will eventually become a great detriment to society? Aside from the scenarios that science fiction presents, I don't find myself concerned with the ethics of believability. Maybe if I were researching how to make weapons of mass destruction, I'd be more thoughtful about what it is I'm doing.

Whether technology can become its own intelligence or have its own individual cognitive processes is an ongoing discussion in the philosophy of mind. Regardless, believability, whether substantial or faked, has taken many forms in application. For instance, preteens take their $20 giga-pet eggs around with them, feeding, playing with, and cleaning up after their virtual pets. If mistreated, they (the pets) even die. These days, I'm playing the cuter-than-ever Facebook version of the Tamagotchi, Playfish's Pet Society. In the age of instant messaging, chatbots have become mainstream. My favorites were the ones created for the unknowing Turing-test evaluator.
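For a sense of how little machinery that kind of faked believability needs, here is a toy model of the giga-pet care loop. The needs, decay rates, and death threshold below are invented for illustration; real toys presumably tune these numbers for emotional effect:

```python
from dataclasses import dataclass

@dataclass
class VirtualPet:
    hunger: int = 0    # rises over time; feed() lowers it
    boredom: int = 0   # rises over time; play() lowers it
    mess: int = 0      # rises over time; clean() clears it
    alive: bool = True

    def tick(self):
        """One step of simulated time: every need gets a little worse."""
        if not self.alive:
            return
        self.hunger += 1
        self.boredom += 1
        self.mess += 1
        # Mistreatment has consequences: enough neglect and the pet dies.
        if max(self.hunger, self.boredom, self.mess) > 10:
            self.alive = False

    def feed(self):
        self.hunger = max(0, self.hunger - 5)

    def play(self):
        self.boredom = max(0, self.boredom - 5)

    def clean(self):
        self.mess = 0

pet = VirtualPet()
for _ in range(11):  # eleven ticks of total neglect...
    pet.tick()
print(pet.alive)     # False: the pet has died
```

A few counters and a death condition, and children grieve. The believability lives in the caretaking loop, not in any intelligence inside the toy.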

Since ELIZA, a good public understanding of a technology seems to have made it less likely for that technology to be misused. What counts as ethical may then be contingent on whether a society is mature enough to use a technology wisely.

So, society, are we mature enough to handle Milo?

If all goes well, we'll someday find ourselves informed like the stick figure in this xkcd comic, and not like Weizenbaum's secretary…
