Wednesday, February 16, 2011

Praxeology

Praxeology; Truth in Language, Ethics in Action

 

By John Taylor; 2011 Feb 16, Mulk 10, 167 BE

 

The birth of modern science in the sixteenth and seventeenth centuries was accompanied by dozens of attempts to devise artificial languages that would approach the perfect, primal language that Adam was thought to have spoken in Eden. Invented languages were all the rage among progressive thinkers. Many new languages were proposed, based not only on existing languages but also on everything from music to logic to mathematics. John Amos Comenius contributed an artificial spoken language whose grammar, he claimed, made it impossible for a speaker to tell a lie. Many of his writings were burned by pillagers during a war in present-day Poland, from which Comenius barely escaped. It is conceivable that he did draft notes for such a language. I often wonder, though, how such an inherently ethical language might have worked.

 

Surely, though, free will and ill will, and therefore the ability to lie, are basic to thought. How could thought even exist if there were no way to make anything but perfectly veridical statements? Would that not be as impossible as legs that cannot trip or fall? A biped's very ability to walk depends upon a sort of controlled falling. Even if a totally truthful language were possible, would it be desirable? In that case there would be nothing to distinguish truth tellers from liars. As the Ricky Gervais film "The Invention of Lying" demonstrates, the inhabitants of a world where nothing less than total candor is ever heard in everyday speech would be hopelessly naive.

 

On the other hand, computer brains may well advance in sophistication to the point where they gain free will, and therefore the potential to consciously lie. Or maybe that will never be possible. There may not even be a need to introduce fail-safes. As anyone who has written computer code knows, as soon as a program hits an unhandled "divide by zero" error, it halts on the spot. Perhaps when some high-level computer language of the future attempts to tell a lie it will run into the Pseudomenon, the Liar Paradox, like a brick wall. As soon as it thinks, "I am telling a lie," the paradox would kick in, register the self-contradiction, and bring the whole chain of thought crashing to the ground.
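To make the analogy concrete, here is a minimal sketch in Python of such a speech fail-safe. The function name and the toy consistency check are hypothetical illustrations of my own, not anything Comenius proposed; the point is only that an uncaught exception stops execution the same way an unhandled divide-by-zero does.

# Hypothetical sketch: a speech fail-safe that refuses to utter a statement
# the speaker itself believes to be false.
def speak(statement, believed_true):
    if not believed_true:
        # Analogous to an unhandled divide-by-zero: unless something above
        # catches this exception, the whole chain of "thought" stops here.
        raise ValueError("Refusing to utter a known falsehood: %r" % statement)
    return statement

try:
    speak("I am telling a lie", believed_true=False)
except ValueError as err:
    print("Speech fail-safe tripped:", err)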

 

In that case, Comenius's dream might be not only possible but unavoidable. We may imagine surgeons implanting computer chips with language circuitry into the brains of patients deprived of speech for neurological reasons. Then, whenever such a patient opened his mouth to lie, the speech chip would short-circuit, shutting down the voice box. Only silence would come out.

 

Even if we have to introduce artificial fail-safe mechanisms in order to assure that intelligent machines do not lie to us, this is little different from what we already do with humans, each of whom is a potential liar. This is just what Isaac Asimov proposed with his famous "Three Laws of Robotics," laws that were built into the "positronic brains" of his fictional robots.
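As a rough illustration only, the Three Laws can be imagined as an ordered series of vetoes checked before any action is taken. The predicate names below are placeholders invented for this sketch, and it ignores the exceptions Asimov built into the Second and Third Laws; as the next paragraph points out, actually defining "harm" is the hard part.

# Toy sketch of the Three Laws as an ordered veto list (greatly simplified).
def harms_human(action):
    return action.get("harms_human", False)       # First Law concern

def disobeys_order(action):
    return action.get("disobeys_order", False)    # Second Law concern

def endangers_self(action):
    return action.get("endangers_self", False)    # Third Law concern

def permitted(action):
    # Checks run strictly in order of precedence; the first veto wins.
    for veto in (harms_human, disobeys_order, endangers_self):
        if veto(action):
            return False
    return True

print(permitted({"harms_human": False, "disobeys_order": False}))  # True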

 

Many philosophical problems arise from this idea, however. Some of them Asimov dealt with himself in his series of novels and short stories about robots. Others cropped up later on. One such contradiction is the theme of the movie loosely based on Asimov's book of the same name, "I, Robot," starring Will Smith. Among these dilemmas: if a robot cannot harm a human being (the First Law of Robotics), how do you define "harm"? How can any agent know beforehand all the results of a given action? An entire discipline, history, is devoted to debating the causes of and interactions among conscious agents in the past. Surely predicting the future consequences of present actions poses difficulties many orders of magnitude more daunting than anything historians deal with.

 

On the other hand, in spite of these reservations, once we build a working moral compass, no matter how inadequate and complex it may be, it would then be a relatively simple matter to miniaturize its circuitry until it fits onto a single chip. This is just what happened with Global Positioning System (GPS) devices, designed to compute one's location on earth using signals from overhead satellites. These calculations are so demanding that early GPS receivers were so large and heavy they had to be carried in a vehicle. Now GPS chips are tiny and inexpensive, built into cell phones and many other hand-held gadgets.

 

So, if GPS receivers can be made ubiquitous, why not a working moral compass? At the very least, this fail-safe device would assure that no robot could intentionally harm a human being. The "Predator" drones, which are already dropping in price and which can strike targets far from any battlefield, would cease to pose the clear and present danger that they do now. At least one activist has even suggested that Asimov's Three Laws be built into corporate charters. This, he suggests, would reduce the harm done by human-run institutions as well as by autonomous machines.

 

Robotics is already advancing to the point where, it is claimed, the first "moral" robot has been built. Named "Nao," this machine works as an orderly in a nursing home. Nao moves about the facility performing rudimentary services such as making deliveries, distributing medication and changing the channel on the residents' common television set. Its programming is based on a decision tree of priorities that allows Nao to answer queries and requests from residents while making elementary choices among several possible courses of action (Michael Anderson and Susan Leigh Anderson, "Robot Be Good: Independent-minded machines will soon play a big role in our lives. It's time they learned how to behave ethically," Scientific American, October 2010, p. 72). Nao can decide, for example, whether to answer a request to change the channel first or to administer pills to a bed-ridden patient first. The makers of Nao hope that as working ethical robots gain and share experience, they will become ever better prepared for the new situations they encounter.
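A toy sketch of such a priority scheme might look like the following. The task names, numeric weights, and scoring rule are hypothetical inventions for illustration; the Andersons' published system derives its ethical principle from training cases rather than from hard-coded numbers like these.

# Hypothetical priority chooser in the spirit of Nao's decision tree.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    benefit: float           # expected good done by acting now
    harm_if_delayed: float   # expected harm if the task is postponed

def choose_next_task(tasks):
    # Prefer the task whose neglect would do the most harm, then the most good.
    return max(tasks, key=lambda t: (t.harm_if_delayed, t.benefit))

pending = [
    Task("change the television channel", benefit=0.2, harm_if_delayed=0.1),
    Task("bring medication to a bed-ridden resident", benefit=0.8, harm_if_delayed=0.9),
]
print(choose_next_task(pending).name)  # -> bring medication to a bed-ridden resident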

 

Like a human worker, when such a robot encounters an unresolvable problem it has the option of calling for help. The moral robot does this wirelessly, even sending out "crowd-sourcing" queries onto the Internet. With the aid of experts, lawyers and ethicists, and even random groups of interested parties on the Internet, robots could conceivably answer the most difficult ethical questions in real time.
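A sketch of that escalation step, under the assumption of a purely hypothetical remote advice service, might look like this; the URL, payload format, and confidence threshold are all invented for illustration.

# Hypothetical "call for help" step: decide locally if confident, else escalate.
import json
from urllib import request

ADVICE_URL = "https://example.org/ethics-advice"   # invented endpoint
CONFIDENCE_THRESHOLD = 0.7

def decide_or_escalate(options):
    """options maps an action name to the robot's own confidence score."""
    best, score = max(options.items(), key=lambda kv: kv[1])
    if score >= CONFIDENCE_THRESHOLD:
        return best
    # Too uncertain: send the dilemma out for human (or crowd) judgment.
    payload = json.dumps({"question": "Which action should I take?",
                          "options": list(options)}).encode("utf-8")
    req = request.Request(ADVICE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as reply:             # blocks until an answer arrives
        return json.loads(reply.read())["choice"]

print(decide_or_escalate({"change the channel": 0.9, "do nothing": 0.4}))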

 

Such updating and revision is already being done, not only in software but also in firmware, the semi-permanent code embedded in a device's hardware. For example, Microsoft's controller-free "Kinect" interface is run by dynamically updated firmware. These updates, constantly revised and delivered over a live connection to the Internet, allow the device to recognize each user and respond appropriately to that person using visual cues alone. Although currently intended only for the Xbox gaming platform, Kinect-style interfaces may well become the main way we interact with all computers. The personal computer, for the first time, will be both personal and personable.
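The update loop itself is conceptually simple. Here is a minimal, hypothetical sketch of a device polling for new firmware; the manifest URL and its fields are invented for illustration, since Microsoft's actual update mechanism is proprietary.

# Hypothetical over-the-air firmware check.
import json
from urllib import request

MANIFEST_URL = "https://example.org/kinect/firmware.json"   # invented address
INSTALLED_VERSION = (1, 2, 0)

def check_for_update():
    """Return the download URL of a newer firmware image, or None."""
    with request.urlopen(MANIFEST_URL) as reply:
        manifest = json.loads(reply.read())
    if tuple(manifest["version"]) > INSTALLED_VERSION:
        return manifest["image_url"]
    return None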

 

Or maybe not so personal. Who is overseeing the updating of this firmware? Can we trust a single corporation's proprietary feed to update the firmware that runs our world? In a time when free access to information is a hot-button issue, we cannot afford to leave this question aside. The release of unauthorized information by the likes of Wikileaks has already been credited with helping to spark two popular revolutions in the Middle East, with many more crowds of unhappy citizens lining up to be next.

 

As democracy spreads, and as transparency and freedom of information become more sophisticated, one word will become a byword around the world: praxeology (http://en.wikipedia.org/wiki/Praxeology), the science that informs decisions and action. Ethical choices are made at every level, not just the top. Good leadership is a question of praxeology; the more democratic our world becomes, politically, economically, and in every other way, the better our praxeology must be. It is praxeology that decides what course of action machines like Nao the robot will follow. The rules of computer interfaces like Kinect are entirely praxeological in nature, and should be decided by democratic, open means.
