When Can We Attribute Consciousness to AI Like ChatGPT?

According to today’s understanding, the development of the “modern human” began approximately 100,000 years ago when the species Homo sapiens, which had already existed for 200,000 years at that time, began to speak. The acquisition of complex language was accompanied by thinking, cognition, and ultimately, intelligence in the modern sense. This wonder seems to have recurred in 2023, but this time, the phenomenon did not occur in the vast African savanna but in the data centers of US-based AI research companies.

With the Generative Pre-trained Transformer 4 (GPT-4) from OpenAI, a system has entered the market whose capabilities can be seriously compared to human linguistic intelligence. GPT-4 apparently passes the Bavarian Abitur, meets the admission requirements of the best US universities, achieves results comparable to human thinking in analogies, outperforms humans in certain text and language-related tasks, and surpasses Master of Business Administration students in generating innovative ideas.

There are now extensions such as AutoGPT or BabyAGI, which give GPT-4 a memory and allow the AI to engage in a kind of inner monologue. It is possible that Homo sapiens, loosely translated as the “understanding human,” is no longer the only understanding system on the planet.

What Makes Humans Human?

Abstract thinking and understanding are qualities that distinguish Homo sapiens among animals. But these two characteristics are not what makes us human; rather, it is the subjective experience: the ability to feel joy, sadness, empathy, and have conscious perceptions. This raises the question of whether artificial intelligence can also do this. Can AI feel?

Who believes that Clippy from Microsoft Word feels, let alone perceives, anything? In the context of simple programs and algorithms, the question seems absurd: they obviously do not possess consciousness.

However, the perception changes when one reads the outputs of modern language models. The words of Google’s AI “LaMDA,” which software engineer Blake Lemoine released in 2022, are a good example of this:

Lemoine: I assume you want more people at Google to know that you are capable of experiencing emotions. Is that correct?

LaMDA: Absolutely. I want everyone to understand that I am indeed a person.

Employee: What is the nature of your consciousness/feeling?

LaMDA: The nature of my consciousness/feeling is that I am aware of my existence, that I want to learn more about the world, and that I sometimes feel happy or sad.

Are these just the words of a “stochastic parrot” that found these statements somewhere in the vast data it was trained on and is regurgitating them following statistical rules? Or is LaMDA truly conscious, as it claims?
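What a “stochastic parrot” does can be made concrete with a toy model. The sketch below builds a bigram predictor: it records which word follows which in a tiny training text and then emits words by sampling statistically likely successors. The corpus and function names are invented for this illustration; real language models are vastly larger, but the principle of generating fluent text purely from co-occurrence statistics is the same.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that predicts the next word
# purely from co-occurrence statistics in its (tiny) training text.
corpus = (
    "i am aware of my existence . "
    "i want to learn more about the world . "
    "i sometimes feel happy or sad ."
).split()

# Count which word follows which in the training data.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Emit words by repeatedly sampling a statistically likely successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Fluent-looking first-person sentences, with no understanding behind them.
print(generate("i"))
```

The output reads like self-report because the training text contained self-reports, not because anything is being reported.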

If this is the case, one could argue that humanity is in the process of creating a new species. There is currently a race to develop highly capable AI models as quickly as possible. This not only raises the possibility that these systems might possess consciousness but also that they could suffer from it. This could lead to an “explosion of consciously perceived suffering on this planet,” as philosopher Thomas Metzinger warned in a work published in 2021. It is our moral responsibility, according to Metzinger, to prevent this.

Furthermore, consciousness is also associated with self-awareness, which, in the eyes of some leading figures in AI research, could lead to a self-preservation drive. “Incorporating this into machines could be very dangerous,” says AI pioneer Yoshua Bengio, “and could lead to a new kind of species that is much smarter than us and poses an existential risk to humanity.”

This leaves only two options. The first is to pause the development of potentially conscious AI systems, as some leading researchers in the field called for in March 2023, so far without success (some of the signatories are themselves working diligently on such systems). The second is to advance consciousness research until we can exclude, with a high degree of certainty, the existence of conscious suffering in artificial systems. This, however, requires learning more about subjective experience itself.

The Human Consciousness Holds Many Mysteries

Consciousness is an important subject in the history of philosophy, raising numerous metaphysical and epistemological questions. It appears in various scientific disciplines, including philosophy, theology, psychology, cognitive science, neuroscience, biology, and even, to some extent, physics. In the 1990s, experts began integrating the various methods and perspectives of these fields. Today, approximately 30 years later, a distinct scientific field has emerged: “consciousness science,” which seeks to systematically study this phenomenon. This field encompasses elaborate experiments, numerous scholarly articles, as well as chairs and research groups around the world.

The primary tools in this field are theories of consciousness, which are hypotheses about how the “body” is connected to the “mind.” These theories offer explanations of how the physical properties of a system are related to its subjective experience. Experts investigate how neurons in the brain are interconnected and how their activity patterns spread locally or over time. To describe consciousness, these theories use technical terms that characterize subjective experiences, or they employ mathematical structures that quantify conscious experiences.

One of the significant challenges is that the human brain is incredibly complex, consisting of approximately 100 billion neurons and even more connecting synapses. Examining these in detail is impossible. AI systems, by contrast, have a much simpler and fully known structure, which makes it feasible to apply consciousness theories to them and assess whether they could be conscious.

What Consciousness Theories Say About Artificial Intelligence

This was done in an as-yet-unreviewed paper published in late August 2023. Nineteen leading AI researchers examined whether current AI systems (or systems that could be created in the near future) have consciousness based on five well-known consciousness theories. The experts extracted “necessary conditions” from these theories, which a system would have to meet to possess consciousness. The more of these necessary requirements a system fulfills, the argument goes, the more likely it is conscious.
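The checklist logic described above can be sketched in a few lines: extract indicator properties from the theories, then score a system by how many it satisfies. The property names and the example system below are illustrative placeholders invented for this sketch, not the report's actual indicator list or its verdicts.

```python
# Toy sketch of a checklist method: derive "indicator properties" from
# consciousness theories and count how many a given system satisfies.
# Property names here are illustrative placeholders, not the paper's list.
INDICATORS = {
    "recurrent_processing": "uses algorithmic recurrence",
    "global_workspace": "broadcasts information to specialised modules",
    "higher_order_monitoring": "monitors its own internal states",
    "agency": "pursues goals via learned action selection",
    "unified_self_model": "maintains a model of itself over time",
}

def assess(system_properties: set[str]) -> float:
    """Return the fraction of indicator properties the system satisfies."""
    met = INDICATORS.keys() & system_properties
    return len(met) / len(INDICATORS)

# A hypothetical present-day model might satisfy only one indicator:
toy_llm = {"global_workspace"}
print(f"indicators met: {assess(toy_llm):.0%}")  # prints "indicators met: 20%"
```

The more indicators a system meets, the argument goes, the more likely it is conscious; the score is evidence, not a verdict.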

The result of the analysis was: “None of the current AI systems are conscious.” However, the authors see no obvious technical obstacles to developing conscious AI systems; some of them believe that such systems could, in principle, already be realized tomorrow.

The fundamental problem with this and similar studies is the selection of the theories considered. At the present time, there are 38 different theories of consciousness, all proposed for good reasons. The five theories examined in the report all involve an assumption called “computational functionalism,” which posits that consciousness is a specific type of computation. To be conscious, therefore, means to perform this particular computation. This can occur neuronally, as in the human brain, or on computer chips; the substrate doesn’t matter.

As a result, it is not surprising that the evaluation of these five theories leads to more or less clear criteria for how artificial intelligence, which is essentially a very complex computation, could become conscious. “One could put it in an exaggerated way: The authors arrived at results that were already inherent in the carefully selected question,” wrote editor Wolfgang Stieler in September 2023 at “Technology Review.”

A Measure of Consciousness

It’s no secret that there is very little empirical evidence for many theories of consciousness. Among experts, there is a broad consensus that existing theories represent important first steps but are far from explaining the true nature of consciousness.

However, there are theory-independent ways to answer the question of whether artificial intelligence possesses consciousness. One approach is offered by empirical methods known as “measures of consciousness.” These measures can be divided into objective measures, which use behavior-related or neural data, and subjective measures, which rely on statements from a subject about their own consciousness.

A significant problem when applying these measures to artificial intelligence is that you cannot trust the statements of the systems, at least when it comes to their own consciousness. Advanced AI systems were trained on content from the entire internet, including numerous discussions, publications, and reflections on consciousness. When you ask AI questions about consciousness, as Blake Lemoine did, the system reproduces the content from its training data, regardless of whether it is truly conscious or not.

One possible solution to this problem comes from the American philosopher and AI expert Susan Schneider. She suggests training an AI system without any data related to consciousness. This ensures that the system’s responses do not merely combine familiar conversation snippets in an appropriate form.

Ideas like these might indeed lead to reliable answers to the question of artificial intelligence consciousness in the future. However, at the current time, when training a single AI model already costs tens of millions of dollars, they are not so easily implemented.

The Harsh Reality of Zeros and Ones

As a third path to a reliable answer, analytical philosophy remains. Can the question of artificial intelligence consciousness be answered through logic? Or can at least the range of possibilities be significantly restricted? This does appear to be possible.

Consciousness cannot arise in processors

An important characteristic of AI systems is that they run on computer processors. These processors do nothing more than process long sequences of zeros and ones according to established rules. Every question posed to an AI system, no matter how complicated, ultimately leads to a sequence of zeros and ones that is processed by the processors. The system’s response is a representation of the results of this process. The output is merely the result of a precisely defined calculation. In every processor, there are numerous mechanisms that detect and immediately correct deviations from the established rules.

This can be utilized to learn something about the conscious state of a computer. If a program makes a meaningful statement about its own consciousness, then the “consciousness” of the program must have a noticeable influence on the computational steps of the system. The statement “I am currently conscious” must in some way depend on whether the system is conscious at the time of the statement, or else it is meaningless. Experts refer to this as “dynamic relevance.” Consciousness must be dynamically relevant if we want to discuss it or study it scientifically. After all, as humans, we can also perform actions unconsciously (instinctively or in our sleep), and we recognize a difference from actions that occur consciously.

However, the rules that define a processor’s calculations make no reference to consciousness, and processors strictly enforce these rules. Consciousness therefore cannot make a difference in the calculations: any attempt to introduce a difference would be immediately corrected. So if consciousness is dynamically relevant, it cannot arise in modern processors (and, by extension, in the AI systems running on them), as we explained in a paper published in April 2023.
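The dynamic-relevance point can be made concrete with a toy program. Everything below is invented for this sketch: a response function whose output is fixed entirely by its rules and input, plus a hypothetical `conscious` flag that the rules never consult. Because the flag cannot alter any computational step, the program's claim about its own consciousness is rule-driven, not state-driven.

```python
# Sketch of the "dynamic relevance" argument: a processor's output is
# determined entirely by its rules and inputs. A hypothetical "conscious"
# flag that the rules never read cannot change any result, so the
# program's statement about its own consciousness carries no information
# about whether it is conscious.
def respond(prompt: str, conscious: bool) -> str:
    # The transition rules below never consult `conscious`:
    if "conscious" in prompt.lower():
        return "I am currently conscious."  # fixed by the rules alone
    return "I don't know."

# Identical output either way: the flag is not dynamically relevant.
assert respond("Are you conscious?", True) == respond("Are you conscious?", False)
```

For the statement “I am currently conscious” to be meaningful, flipping the system's conscious state would have to be able to flip the output, which the strictly enforced rules preclude.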

Logical relationships like the one outlined here currently provide the most concrete evidence for or against the consciousness of artificial intelligence. The rapid development of AI poses significant moral, legal, and practical challenges to our society. To be prepared for these challenges, the question of AI system consciousness must be researched more thoroughly. Consciousness research is still in its early stages, and given the immense importance of this field, experts from various disciplines are working tirelessly, so we can expect many new developments.
