13 May 2009

My thought processes

...shudder. Here are three pages of my rambling thoughts as I desperately tried to develop a thesis for my Human Event paper.


My paper will discuss the topic of artificial intelligence (and really, artificial humanity) with perspective provided by our later readings (especially Jonas). It will include discussion of the nature of thought, the inner self, and the essence of humanity. Essentially, I plan to answer the question "What would it mean for a machine to possess human intelligence?" [this was the topic I assigned myself.]

Alan Turing, [This was my attempt to begin writing the paper before I gave up and began brainstorming.]


Artificial intelligence does not mean artificial humanity
Def intelligence: ability to solve the same problems humans do: engineering, proofs, etc. Still deterministic in some sense
Humanity would be the soft things. Conversation, art, music. Is it distinguishable? Is that the issue? I think not. The question is ultimately whether the computer experiences the same thing as a human while producing it. It may be possible to program a computer to produce output indistinguishable from Bach’s, using a set of rules. This is not essentially different from passing the Turing test. The question is the computer’s inner life. And since it’s almost definitionally impossible to tell…

But perhaps we could have a circumscribed Turing test, where the bounds of conversation are set. We might require that the computer answer every question truthfully, and simply ask it, “Are you self-aware?” If it doesn’t understand, it’s not aware; otherwise, it is.

Why should we want our computers to be indistinguishable from humans in order to be intelligent anyway? They should be their own form of intelligence.

Does intelligence = sentience? How do we tell if something is sentient? Ask it? Can we say computer sentience is ultimately untestable? In fact, the sentience of others is ultimately untestable. Reasonable assumptions -- if it is constructed like me and acts like me, it’s self-aware? Build biological computers? Err on the side of caution? Does it come in degrees? Is it possible in a quantized mechanism?

But if the computer is completely deterministic, how can we say it has free will?

The sentience of another being is ultimately a matter of faith, as we could never experience being them.

Thesis: The question of a machine’s possessing a human intelligence is ultimately untestable. While we may indeed produce

-intelligence/problem-solving is easier than sentience, which is easier than humanity.
-can a machine be human? Without emulating the biological components?
-is something sentient? Must use heuristics; leaves room for doubt.
-if it acts sentient without being programmed so, a pretty good guess is that it’s sentient.
-Turing test not sufficient for humanity or really even sentience.
-we can analyze human conversation as a stochastic sequence of linguistic events, but that doesn’t mean that each element doesn’t have a reason
-my sentience, at least, cannot be reduced to a physical explanation, since no physical explanation generates consciousness; no physical explanation is being me.
-since I have sentience, which is nonphysical, it’s not a big logical leap to allow for free will apart from physical determinism.
-if I can’t tell whether a machine has sentience or not, a good ethical rule of thumb would be to treat it as if it did.
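The stochastic-sequence idea in the list above can be made concrete with a toy sketch (the corpus and every name here are invented for illustration, not from the original notes): a bigram Markov chain emits plausible word sequences purely by sampling successors, with no “reason” behind any particular word.

```python
import random

# Toy bigram Markov chain: conversation modeled as a stochastic
# sequence of linguistic events. (Corpus is a made-up example.)
corpus = ("the machine answers the question "
          "the machine solves the problem "
          "the human asks the question").split()

# Map each word to the words observed to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def babble(start, n, seed=0):
    """Generate n words by repeatedly sampling a successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        # Fall back to the whole corpus if a word has no recorded successor.
        out.append(rng.choice(follows.get(out[-1], corpus)))
    return " ".join(out)

print(babble("the", 8))
```

The output looks locally like English, which is exactly the point of the bullet: statistical plausibility of each element says nothing about whether any reason, or any experience, lies behind it.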

Can a computer think? It’s unclear: any position must be flexible with regard to new evidence.
From a reductionist perspective: yes. From a radical science/Jonas perspective, maybe not.
But there’s always room for denial.

Nature of thought/sentience: a feedback loop? Creativity?

Can a strange loop (as Hofstadter calls it) really demonstrate sentience? Things are not merely their outward manifestations. Scientific theories are, but I. Experience. Reality. That is not an outward manifestation. Whatever it is, it’s a mystery; I wouldn’t be surprised if there were never a scientific answer. But to make a computer sentient, we need an answer to exactly what sentience is (unless it happens by accident). But then the question of whether a computer is sentient is the same as whether another human being is sentient.

a) can I imagine myself in their shoes?
b) Do they seem to have external motivation?
c) Do they seem to have internal motivation?
d) Do they seem to have volition?
e) Do they attest to their sentience? (Note here sentience would be separate from communicative intelligence.)

Volition and sentience seem to go hand in hand, but they are separable. Specifically, volition requires sentience.

Indeed, (e) is so great an indicator that it may override the others, as in a Turing test. But with a computer we have a few more things to check:
a) has the computer been programmed to imitate human action? At some level imitation becomes reality, but if it merely makes probabilistic conversational choices…
b) Does the creator understand how it works? If so, that should make us a bit more skeptical.
c) Deprived of input, can it still act humanlike? (sensory deprivation chamber?)

It could be sentient and intelligent without being indistinguishable from a human, though (stochastically, at least, just as Shakespeare and Milton are distinguishable for the most part).
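A minimal sketch of that statistical distinguishability (the snippets and the scoring scheme are my own toy example, not anything from the paper): crude word-frequency profiles are often enough to tell two authors’ voices apart.

```python
from collections import Counter

# Stand-in snippets for two distinct "voices" (invented for illustration).
author_a = "thou art more lovely thou art more temperate"
author_b = "of mans first disobedience and the fruit of that forbidden tree"

def profile(text):
    """Relative word frequencies for a text."""
    words = text.split()
    counts = Counter(words)
    return {w: c / len(words) for w, c in counts.items()}

def distance(p, q):
    """L1 distance between two frequency profiles."""
    vocab = set(p) | set(q)
    return sum(abs(p.get(w, 0) - q.get(w, 0)) for w in vocab)

pa, pb = profile(author_a), profile(author_b)
unknown = profile("thou art lovely")
guess = "A" if distance(unknown, pa) < distance(unknown, pb) else "B"
print(guess)
```

Here the unknown snippet sits closer to author A’s profile, so the sketch attributes it to A — distinguishable for the most part, as the parenthetical says, without either text needing an inner life.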

THE TURING TEST IS A RED HERRING!

Intelligence is ultimately creative, not communicative. And it does require either society or proof of volition. Is intelligence inherently sentient?

But ultimately this is a matter of faith, just as the acceptance of science is a matter of faith.

Could a computer be human?


Characteristics of humanity

Problem solving
Sentience
Socialization
Volition
Creativity (included in volition?)

If all of these are possessed by a machine, then we may say that that machine’s intelligence is at a human level. Unfortunately, several are unverifiable unless one is the machine. (Ooh. Then in theory we could have a sentient being made up of sentient beings by having a huge group of humans follow the program of the computer.) How do we know if a computer is truly self-aware? Or if it has volition? We don’t. All we have are heuristics. And we can guess. And have faith. Same as science. Booyah.

This brainstorming is actually two-thirds of the length of my paper.
