The Perfect Companion
Many people are haunted by something which seems like a dead memory.
I hear ya back there. If a memory is a memory then how can it be dead?
Ehh ... you know what I mean. It's a memory, it's there, you can sense it somehow rotting away in your brain, but maybe you can't describe it completely or something like that.
No, really, it has nothing to do with the whiskey. This is the good stuff, trust me.
Ok, very well, then it's not really dead. It's just slumbering, patiently waiting for the brain to excrete a precise combination of neural juices to suddenly, naturally re-energize itself. Such a reposing memory might be like the original Rip Van Winkle, the old man who when roused awake was surprised to learn that he had slept quietly under a tree for twenty years.
When encountered through the curious process of vivid reminiscing, these Van Winkle visions may come back to us as if they happened yesterday and excite heartfelt emotions of regret, self-doubt and longing, even prodding us to shed a tear of joy or sadness for what was once and what might have been.
It might be easy to assume these resting memories are hidden in the folds of the brain like yellowed photos of childhood, layered by an endless dance of images and mature experiences and only revived in case of urgent emergency.
But that wouldn't be true.
These memories aren't like lost credit cards, rarely used left-handed scissors in the back of a knick-knack drawer or ancient oil lanterns buried in the basement. They are with you constantly, even when you're asleep, behind everything that you see and do, whether you actively conjure them up or not. They are waiting there in a dark hole just to guide you and nobody else, sometimes only partially revealed by unconscious brain activity or when you bump into half-familiar surroundings.
Maybe while digging around these sorta things the wiser energy saver does not turn on the light down there. But the less fortunate, the vast majority and the more energy-challenged among us may have no choice but to just softly, carefully step over any dangerously undisturbed reminders like these. We know that once tripped over they may offer some consolation, perhaps even encouragement regarding what difficulties may be overcome.
But often, typically when least expected, they awake only to fearlessly barge in our house, plant their wide butts on the sofa in our living rooms, and boldly declare that they'll be spending the entire summer with us.
As Old Rip could surely attest, it's hard to get any decent sleep when that shit happens, man.
So, then, this is not about belief. You can believe whatever you want and whatever you want may or may not have anything to do with what you truly remember. This is more about the certain things that you know, that you have experienced and for some reason covertly forgotten. You can spend every waking day trying to deny them but part of my duty is to inform you that these memories of things and events, no matter how brilliantly concealed, no matter what you do, will nag you as long as you live.
In that way, these distant, sleeping memories are our perfect companions. They are the mothers of our minds. They're part of our evolving consciousness and they've been with us since birth. They are our secret counsels, our loud naggers, our mute defenders and our silent keepers. They warn us of old storms and disasters, feed us with nourishment from what we previously digested and, when the cold comes as it always does, they plead with us to wear those scratchy winter mittens that we packed away so long ago.
When we do suddenly awake to recall those nasty old mittens, the blanketed memories that are truly far from dead, I think it's only natural that we fear what other curious recollections may be folded, packed away and desperately hidden among them.
Fear them or not, we are programmed by our memories in strange ways. Yes, we are.
If you remember, that guy Pavlov, while watching hungry dogs drool, decided that we all have what he called conditioned reflexes. Well, he may have been right. Memory may be just a reflex, after all. My question is: a reflex to what?
Events? Thoughts? Intelligence?
What comes first, the tadpole or the frog?
That bit of Cartesian nonsense signals it's time to refuel for the next leg of the trip, boys. I definitely need some more ice here. Everyone please stay on the bus.
Ahhh ... thank you.
Now, I admit, as a life-long student of the Chinese language, I keep getting lost in the translation aspects here. But as a one-time student of Japanese, I do remember an instructor who scolded me with the following admonishment: “You may learn Japanese ... but you will never BE Japanese.”
I suppose Pavlov in his own way reminds me that while I may drool, I will never be a dog. Likewise, while I may shake out an odd recollection from the alcohol-stained recesses of my brain every now and then, I may never be intelligent. For an even better example, Searle argued with his Chinese Room thought experiment that the computer, with all its fantastic memory, may communicate in a way that ultimately resembles the human fashion, but it will never BE human.
That all seems so easy for me to agree with but I honestly can't remember why.
It's true that I and the computer may be able to use a book or a developed program to help form the syntax of Chinese (or any other known language) into a sentence that says “The moon is bright tonight.” But, today, only I remain able to place those words into a human context, or semantic meaning, that results in my happiness, sadness or fear, and only I can use those words to imagine what the moonlit sky must look like from the darkness inside my little box.
I believe Searle is right in a way. Linguistic syntax alone will not transfer the complete context of the human experience. Perhaps computers will one day combine that ability with the power of complex semantics. But I would argue that until computer programs can replicate total human awareness, assuming we can even break that code, Strong Artificial Intelligence will remain “bai ri zuo meng” (literally, having dreams in the bright light of day ... or, as they say in Texas, a daydream).
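For anyone on the bus who'd rather see the room than hear about it, here's a toy sketch of Searle's setup, nothing more than a bare lookup table (the rulebook and its phrases are my own invented examples): symbols go in, symbols come out, and nothing inside understands a word of Chinese.

```python
# A toy Chinese Room: the "person inside" follows a rulebook
# (a plain dictionary) that maps incoming symbols to outgoing symbols.
# Nothing in here understands a word of Chinese.

RULEBOOK = {
    # roughly: "How's the moon tonight?" -> "The moon is bright tonight."
    "今晚月亮如何?": "今晚月色很亮。",
}

def chinese_room(message: str) -> str:
    # Pure syntax: look up the incoming string, hand back the matching one.
    return RULEBOOK.get(message, "听不懂。")  # fallback: "I don't understand."

print(chinese_room("今晚月亮如何?"))  # the room "answers" without comprehending
```

Swap in a bigger rulebook and the room gets more convincing, but the point survives: the lookup never acquires semantics, only more syntax.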
Let me start again ... not because you will listen but because I find this topic fascinating. BTW, I think this thing called intelligence is perfectly philosophical but I find it impossible to point at and shoot rapid fire like some others ... just need more time than most to swirl it around a bit, I guess.
What is Strong AI?
I believe the definition of general AI was once compared to an easy human standard. But today I'd argue that there's more to Strong AI than just language translation or even the appearance of language comprehension. Although one can argue that historically it didn't come quickly or easily to us, today language translation is closer to displayed AI than to our evolving vision of Strong AI. That's why I don't completely accept Searle's argument even if I do find it inventive and curiously attractive.
Does Strong AI match the maximum limits of human intelligence and consciousness?
I don't believe that it does. Certainly a debatable topic but I think even humans regularly fail this standard. Have we met anyone who reached this goal lately?
Does Strong AI match the minimum limits of human intelligence and consciousness?
That's probably closer to the truth. Perhaps we know of a few brave humans who have crossed this lofty threshold.
Somewhere back there, beyond the digressing frog's croak, I recall an intruding reference to Turing. Sixty years ago this Turing guy tried to describe intelligent machinery (brilliantly leading future discussion into artificial neurons, etc.). His test called for a computer-like machine that can fool a human with human-like answers to random questions.
Has anyone called your insurance company lately? If so, were you fooled by the mechanical answers to your questions? Please press 3 if you were satisfied. Yet, as a parent, any computer that answers my questions with a droning song of “I dunno” might fool the hell outta me!
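Boiled down to its skeleton, Turing's imitation game is just this: a judge types questions, something types back, and the judge must guess whether a human is on the other end. A deliberately dumb sketch (every name and canned phrase here invented for illustration), playing my teenager's strategy:

```python
import random

# A toy imitation game: the judge sees only the text that comes back,
# never who (or what) produced it.

EVASIONS = ["I dunno.", "Why do you ask?", "Maybe."]

def machine_answer(question: str) -> str:
    # Teenager strategy: evasion fools more judges than cleverness does.
    return random.choice(EVASIONS)

def imitation_game(questions, answerer):
    # Collect a transcript; the judge must then guess: human or machine?
    return [answerer(q) for q in questions]

transcript = imitation_game(
    ["What is love?", "Is the moon bright tonight?"], machine_answer
)
for reply in transcript:
    print(reply)
```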
But there ya go: What is the cutoff point for the minimum limits of human intelligence?
Well, I can remember back to events when I was about 2 years old. But the routine ability to store and recall data does not define the minimum limit of our intelligence today, does it? If so, then my automatic coffee pot displays Strong AI (however randomly ... it's a cheap one).
On the other end of the scale, how about when I became cognizant of my self, not just in a superficially sensory way or by recognizing my physical reflection, but down into my apparently non-physical consciousness, aware of what some would call my soul?
Does sentience occur at the same point of growth for everyone, everything? If no, then maybe that's not an equally or even fairly reachable standard for everyone in every circumstance.
But doesn't a deep awareness of one's self, regardless of how or when it occurs, indicate that a brain is sufficiently, if not magically, human-like?
Yes, drooling dogs and amoebas may somehow “feel” themselves ... but do we know of any other being or organism that truly reaches this sentient human standard? If consciousness is not part of the minimum standard for acceptance in the intelligent human club and for Strong AI, then what is?
Can a non-human (artificial brain, computer, tadpole, whatever) match even the minimum limits of human intelligence and consciousness or, in other words, if we accept that confined but still vague definition, can a non-human display Strong AI?
I sense that the proper answer to that question is not detected by memory or language translation but well hidden by our poor definition of human consciousness. Consciousness, IMHO, is profoundly related to intelligence, this remarkable, and maybe up until about 10-15 years from now, unique capacity of the human brain. We may not be unique in the universe, but we are still certainly unique beings on Earth.
So, regardless of how it communicates, if a tadpole can sense its own existence with the same depth and understanding as a human, then I would say even it is more intelligent than the most modern computer ... as far as I know.
But, in Searle's favor, he may have inadvertently defined a system sufficiently complex to develop something that approaches the capacity of a human mind. (I choose to say that then leave that drooling dog lie right there because I'm going elsewhere ... you can thank me later.)
That being the case, then the universe, if “sufficiently complex”, may have a mind of its own?
Eh? What could possibly prevent it?
In terms of intelligence, artificial or ... natural? ... organic? ... if men and computers can both be programmed to say “I calculate, therefore I am”, or if asked for the reason behind their existence both recite, “I am here to do my master's will”, or if asked about fundamental reality, they both reply, “I do not know what I do not know”, then are they not equals and is AI not achieved to some degree?
If not, what is the difference?
Does the system of planets, the Chinese Room, or the modern machine have any instincts at all? Is there any fuel for a competitive fire? Does it want to communicate or feel a need to survive? Does it dream? Does it hate what it fears? Does it evolve with every new bit of information? Does it invent and apply tools to explore uncharted waters? After such rapid advances in memory capacity as we've seen in the last 50 years, is the computer still anything more than an in/out processor?
What is still missing from the otherwise “intelligent” machine?
It is not inconceivable to me that man will one day create what seems like the perfect human companion machine. Perhaps this companion will converse in any language, recall every historical fact and look just like Elizabeth Hurley.
(My humble tribute to her beauty ... please don't take it the wrong way, Elizabeth.)
The machine, “sufficiently complex”, may be developed with an artificial instinct, calculating and anticipating every known human desire, fooling even Searle into believing that the machine actually cares. (We are, after all, most easily fooled by our emotions.)
This twist on Mary Shelley's nightmare may at first prove that artificial companionship is at least as agreeable if not more desirable than the unpredictable human version.
But, then again ...
“Frightful must it be; for supremely frightful would be the effect of any human endeavour to mock the stupendous mechanism of the Creator of the world...”
Hmm ... mmmm ... yes, well, let's move on as quickly as we can here.
Being the cautious animal that he is, man may eventually find this perfect companion more bothersome than he ever dreamed. Artificial instinct, imagine it built on a backbone of Boolean queries and extended algorithms, endlessly sifting 1s and 0s for the next logical response, may evolve into an annoyingly demanding toy with an undisciplined appetite for affection and no reliable off switch.
Man is as fickle as the day is long. He always wants more but sometimes wants less. He is never satisfied and he has a habit of changing his desires for no logical reason. Even humans do not always understand the curious behavior of other humans. Thus this artificial devotion may very well be overloaded and end up breeding nothing but life-like models of blue screens of death and utter human contempt!
(Again, Ms. Hurley, I sincerely apologize.)
But if survival of mankind is somehow important, which, as you can probably detect, I'm not so sure it is, then I would find that frightful and contrary outcome naturally explainable. What's missing is obvious. The machine may have a remarkable brain but it still has nothing as unpredictable or even comparable to a human heart. And we just can't live without that, can we?
Still, as long as man doesn't find his “perfect” machine sitting alone on the beach at night complaining of a migraine, looking out into the dark sky and asking “WHY?” over and over to a universal mind that refuses to clearly answer, then I suppose man can thank his lucky stars that AI has its limits. Ya know, when ya boil it all down, who am I to question the steady advance of modern technologies?
Yeah, thank the Gods for good whiskey. It was at the heart of what I was trying to remember after all.
Cheers,
Mb