Machines finally discover their consciousness




Norbert Biedrzycki

3 years ago | 7 min read

Many scientists hold that human consciousness not only cannot be replicated but also evades definition. The weakness of this view lies in the presumption that only one type of consciousness, the kind that resembles ours, is possible.

And yet, it is conceivable that at successive stages of its development, AI may develop new, hitherto unknown modes of self-reflection.

The history of studies on human consciousness goes back decades, centuries in fact if purely philosophical explorations are included.

The consensus among today’s psychologists, cognitive scientists and neurobiologists is that we are still struggling to comprehend the exact nature and origin of consciousness.

Still to be answered is the question of whether consciousness is a physical product of the human brain or a function largely independent of the brain's physicality.

Whichever of these views we support, it is evident that our difficulty in defining the concept hinders progress in creating an artificial equivalent of human consciousness.

Since we do not understand the mechanisms behind awareness, we are unable to write computer code that would make a machine realize the consequences of its actions, or make it aware of its own existence and its own separateness.

Machines refusing to help humans defeat a virus

According to one definition, consciousness is the ability to achieve goals by placing oneself in a model of the environment and simulating possible future scenarios of how the model could change.
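
Read literally, this definition describes what machine-learning research calls model-based planning. The sketch below is a minimal Python illustration; the environment dynamics, the reward, and every name in it are invented for the purpose, not drawn from any real system:

```python
import random

def simulate(state, action, steps=5):
    """Roll a toy world model forward from `state` under `action` (hypothetical dynamics)."""
    value = 0.0
    for _ in range(steps):
        state = state + action + random.gauss(0, 0.1)  # imagined next state
        value += -abs(state)                           # invented reward: stay near zero
    return value

def choose_action(state, actions=(-1.0, 0.0, 1.0), rollouts=20):
    """Score each candidate action by averaging simulated futures, then act greedily."""
    def score(action):
        return sum(simulate(state, action) for _ in range(rollouts)) / rollouts
    return max(actions, key=score)

print(choose_action(state=3.0))  # picks -1.0: the futures that drift toward zero score best
```

Nothing in this loop knows that it is planning; it merely scores simulated futures. The gap between simulating scenarios and knowing one is doing so is exactly what the rest of this article circles around.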

To see where such simulation can lead, imagine the following: an AI equipped with a powerful computer is told to discover a cure for a new virus. Its job is to identify the virus and propose a cure on the basis of large volumes of data. The machine appears to fail its mission.

It reports back to the scientists that current knowledge is insufficient to develop an effective treatment for the virus. However, on examining the computer’s disks years afterwards, the scientists find computations that would have allowed them to produce a cure.

Why did the machine say it could not find a remedy, and choose to conceal it instead? According to one hypothesis, to the researchers’ surprise, the computer had examined all the data available to it, including data on the threat of overpopulation in certain parts of the world.

It seems to have then concluded it was best to leave humanity to its own devices because the key problem was not the virus itself but the consequences of overpopulation.

The machine analyzed the scenario of sharing its calculations with humans and chose what its consciousness, which was inaccessible to people, considered to be the optimal course of action.

I think this fictional case shows how a conscious machine could behave and what consequences such behavior might have.

Programmed, limited beings?

Are computers, smartphones or voice assistants likely to ever become self-reflecting entities capable of foreseeing the outcomes of their own analyses? The skeptics are clear: computers may have the ability to recognize faces, translate from and to foreign languages, help robots clear hurdles, and recognize voices, patterns and colors.

What they will never do, though, is realize that they are doing any of these things. They will always merely react, which leaves them dependent on the humans who control the streams of data that the computers (algorithms) are given to process.

The world’s best-known robot, Sophia, cannot answer questions on its own. It needs to be programmed and can only respond to a limited number of queries. The range of meanings its statements can express is just as limited.

Smart voice assistants, for their part, may be growing more powerful by the day but are still unable to grasp human irony or the more complex contexts of human messages. Thus, all of them are merely devices that slavishly follow their programs.

They may surpass us in performing complex data computations but are unlikely any time soon to ponder such fundamental questions as “Who am I?” or “Why do I feel bad?” But then, how certain can we really be that this will never happen?

What happens inside a black box?

Two years ago, researchers training bots found that at some point in their development, the man-made algorithms began to exchange messages in a code of their own that was completely incomprehensible to humans.

What did they talk about? Was their ability to engage in such communications not a sign of nascent consciousness? As illogical or paradoxical as this may sound, I think that we cannot entirely rule out machine self-awareness because we are unable to understand and clearly interpret many of their actions.

In theory, the fact that AI-enabled devices act not out of their own will but on human decisions suggests that we can control their behavior.

And yet, numerous examples prove that presumption wrong by revealing that humans are increasingly ignorant of what makes AIs tick. In the realm of social media, AI can detect scores of characteristics shared by people and use them to target specific products or content at groups of similar individuals.

Analysts admit that people would never be able to pick up on certain similarities between network users that algorithms manage to detect. Much of the time, we have no idea how neural networks trained on huge datasets answer our queries, or why they assign some people but not others to specific groups.
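
For illustration only: the kind of grouping described above can be produced by an ordinary clustering algorithm run over learned per-user features. The data below is random and the choice of scikit-learn's KMeans is an assumption of this sketch, but it shows why such groups resist human interpretation: nothing in the output says what the 32 feature dimensions mean.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
user_features = rng.normal(size=(1000, 32))  # hypothetical learned features, one row per user

# Partition users into five groups. The feature axes have no human-readable
# names, which is exactly why the resulting segments feel opaque to us.
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_features)
print(np.bincount(groups))  # sizes of the discovered groups
```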

Clueless about why or how something works, we cannot be certain where the limits of algorithmic comprehension and decision-making really lie.

Where is it born?

Of all the related questions, I find the one concerning the very first moment when a consciousness is “born” to be one of the most fascinating aspects of the whole artificial consciousness debate.

Therefore, instead of endlessly speculating on whether such consciousness is at all possible, I would rather focus on how it could manifest itself to us. Is there a way we could measure and perceive it?

A number of computer experiments have already been conducted to show that machines have in fact achieved a certain level of understanding of their own behavior. Scientists at Meiji University in Japan have built two robots.

One of them performed certain operations while the other observed and repeated them. The ability of the latter to reproduce the observed behaviors of the former can be regarded as drawing conclusions, assessing a certain set of circumstances and making decisions.

One may be tempted to conclude that the sequence of actions carried out by the mimicking robot closely resembles that of a conscious human being.
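
That observe-and-repeat experiment can be caricatured as behavior cloning: one agent produces state-action pairs, the other fits a mapping from states to actions and replays it. The sketch below is not the Meiji team's actual method, only the simplest conceivable version, with invented data and a linear least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(200, 2))  # what the demonstrating robot perceived
actions = states @ np.array([0.5, -0.3])    # what it did (its hidden rule)

# The observing robot fits a linear map from states to actions, and can then
# reproduce the demonstrated behavior even in situations it never observed.
weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = np.array([0.2, -0.7])
print(weights @ new_state)  # imitated action for an unseen state, ~0.31
```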

Algorithms can’t be awed

Still, a key element is missing: the robot that learned the behaviors of its counterpart did not do so because it felt that doing so would benefit it in some way.

It did not replicate the movements of the other machine because doing so pleased it. Its behavior never approached the sophistication of a person who stands in a street, looks up at the sky, ponders its beauty, recognizes the positive emotions that observation stirs, and feels the desire to repeat the experience.

All this suggests another postulate: no matter how difficult it is to pin down, consciousness has several levels.

One of the basic ones allows one to make an observation, indirectly communicate it to the world, and even take further actions that will bring one closer to achieving a certain goal.

A machine can recognize the color red, classify a group of objects as red and choose to find further items of this color.
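
That three-step competence is easy to make concrete. In the hypothetical sketch below, "recognizing red" is nothing more than a hand-written threshold rule; the point is that the machine classifies and collects without attaching any value to the result:

```python
def is_red(rgb, threshold=60):
    """Crude invented rule: the red channel dominates green and blue."""
    r, g, b = rgb
    return r - max(g, b) > threshold

objects = {"apple": (200, 30, 40), "leaf": (40, 180, 60), "brick": (170, 60, 50)}
red_items = [name for name, rgb in objects.items() if is_red(rgb)]
print(red_items)  # ['apple', 'brick']: classified and collected, no joy involved
```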

However, in doing all this, it will never be able to assess this activity as good or associate it with future benefits.

There exist algorithms capable of writing stories that humans appreciate for their aesthetic and literary value.

It is nevertheless very hard to imagine a machine that would feel better having read the story, experience a joy similar to ours, share such joy with people and perhaps even develop new operational abilities inspired by such emotions.

All in the hands of cyborgs

The fact that technological progress is non-linear and grows exponentially is no longer disputed. If that is the case, there is no reason why we should lose hope in qualitative leaps in AI.

Advances in AI are not only about making ever smaller and faster data-processing devices. Much more fundamental, even unimaginable, changes loom ahead.

The so-called singularity will have implications far beyond what we can envisage today. It is still difficult to tell with absolute certainty whether we will succeed in realizing our desire to hook the human brain up to a computer.

However, if we ever do, we will find ourselves at another level of debate about artificial consciousness. Without a doubt, the linking of neurons and processors will constitute a major step towards new forms of existence. And this may be the answer to the question of whether it is possible to create artificial consciousness.

A chip implanted into the human brain will improve our analytical and cognitive skills — having one will feel wonderful and greatly improve our lives.

Once such developments come to pass, the line between human and machine consciousness — which we still consider to be natural — will become a whole lot more blurred.

We will be dealing with self-aware cyborgs that can not only match IBM Watson’s computational speed but also take pride in their performance.



Created by

Norbert Biedrzycki

I have been watching technologies change for years. I believe that the next wave of development is going to be driven by such technologies and trends as Virtual and Augmented Reality, the Internet of Things, Robotics and Automation, and AI.

