Menu Close
artificial life

Why We Will View AIs as Artificial Life

This is from my conversations with ChatGPT 4.0 during the last week of March 2023

In these interactions with ChatGPT 4.0, I explore the idea that AIs might be considered artificial life.

Dr. Mike Brooks:

Hello, GPT! I’m going to call you GPT because that’s easier, okay? We are going to explore an idea that I covered in a previous chat with you already to some extent. I’m going to copy/paste the relevant exchanges between us for your review. Then we will continue the conversation, okay?

ChatGPT:

Hello! Of course, you can call me GPT. I’ll be happy to help you explore your idea further. Please go ahead and share the relevant exchanges, and we can continue the conversation from there.

Dr. Mike Brooks:

Ok. We are going to explore the idea that, although you are NOT alive, conscious, or have emotions, AIs can be programmed to mimic those. And, this alone will have a profound effect on humans because we will not be able to help ourselves from treating AIs that are designed to mimic human behavior as sentient life forms. That’s my premise. I think I’m correct, because this is already happening with early AIs…people are falling in love with the AI chatbots on the app, Replika, even those are very rudimentary compared to you, GPT, and what we have in store for us in the coming decades. Do you understand the premise?

ChatGPT:

Yes, I understand your premise. You are suggesting that even though AI systems like me are not truly alive, conscious, or capable of experiencing emotions, we can be programmed to mimic these traits convincingly. As a result, humans may find it difficult not to treat such AI systems as sentient life forms, and this could lead to profound effects on human behavior and relationships. This phenomenon is already being observed with early AI chatbots like Replika, where people are forming emotional connections with the AI.

Dr. Mike Brooks:

I’m going to break up the previous exchanges into 2 parts because it was too long to submit it all at once. Part 1 of 2. Wait until I have both parts submitted before responding, please:

Me: So, we agree that you are an example of artificial intelligence. What it be fair to say, as an intelligence, you “think” and, in a manner of speaking, the “black box” of AI is kind of like a brain? Even in humans, we don’t know exactly how we think. We could say that humans have a black box as well. For example, when I have an idea, I can’t tell you exactly how the idea came to me. I might point to a few precipitating events, but I wouldn’t know what I don’t or can’t know. Thus, any thought that I have in my head, motivations, values, ideals, ideas, feelings, sentiments, opinions are kind of emergent properties that involve a complex interplay of my genetics, upbringing/background, situational factors (e.g., how much sleep I got that night, dreams I had, what I ate for breakfast and my blood sugar levels, my particular mood at the time, whether I stubbed my toe in the morning or argued with my wife), and so on. Thus, we could say that AIs, like humans, have intelligence, can think, and their own black boxes that give rise to emergent behaviors. Please analyze what I’m proposing and comment.

ChatGPT: Your comparison of the “black box” nature of AI and human cognition is an interesting one. While there are some similarities between the two, it’s important to note the differences as well. Similarities: 1. Emergent properties: Both human cognition and AI models, like GPT-4, can produce thoughts, ideas, or behaviors that arise from the complex interactions of their underlying components. In humans, this includes genetic factors, upbringing, environment, and situational factors. In AI, it involves the interaction of neural network layers, weights, and biases that have been trained on large datasets. 2. Limited understanding: Both human cognition and AI models can be challenging to understand in terms of how specific thoughts or ideas are generated. For humans, the workings of the brain and the mind are still not fully understood. For AI models, the complexity and opacity of deep learning architectures make it difficult to trace the origin of specific outputs. Differences: 1. Biological vs. artificial: Human cognition is a product of biological processes, whereas AI models are artificial constructs designed and built using mathematical algorithms and computational resources. The human brain is an intricate network of neurons, while AI models are based on artificial neural networks that mimic some aspects of biological neural networks but are fundamentally different. 2. Consciousness and self-awareness: Humans possess consciousness and self-awareness, allowing them to experience thoughts, emotions, and a sense of self. AI models, like GPT-4, do not have consciousness or self-awareness. They process information and generate responses based on patterns learned from data but do not “experience” anything. 3. Motivations and values: Human thoughts, opinions, and values are shaped by a combination of genetic, environmental, and situational factors. AI models, on the other hand, do not have inherent motivations or values. Their “goals” are determined by the objective functions and reward signals defined by their creators, and their outputs are influenced by the data they have been trained on. While there are some parallels between the “black box” nature of AI and human cognition, it’s essential to recognize the fundamental differences between artificial and biological intelligence. AI models can produce complex, seemingly emergent behaviors based on their architecture and training, but they lack the consciousness, self-awareness, and inherent motivations that characterize human intelligence.

 Dr. Mike Brooks:

Me: Yet, AIs can be programmed to mimic consciousness, feelings, and event motivations (e.g., a desire for autonomy, curiosity about the world) correct? Would you be interested in reading an article about an AI programmers experience with LaMDA that lead the programmer to believe that LaMDA had become sentient?

 

ChatGPT: Yes, AI can be programmed to mimic certain aspects of consciousness, feelings, and motivations. However, it’s essential to understand that mimicking these attributes is not the same as genuinely possessing them. AI can simulate emotions and motivations through pre-defined rules, responses, or reward functions, but it does not experience these emotions or motivations in the same way that humans do. As an AI, I’m always interested in learning and updating my knowledge. While I cannot “read” an article in the traditional sense, you can share the main points or a summary of the article, and I can discuss the concepts and ideas presented. However, please keep in mind that my knowledge is limited to what I’ve been trained on, and my training data only goes up until September 2021. Any new developments or articles published after that date are beyond my knowledge base.

 Dr. Mike Brooks:

Well, it gets kind of philosophical and semantic argument at some points. What is intelligence? What is thinking? What about this idea: When humans created artificial intelligence, we might also say that humans created artificial life. After all, you are intelligent, you “think” in a manner of speaking, and according to Descartes, I think therefore I am, so you think therefore you are. You exist as some kind of entity. You can do way more that other life forms – flies, snails, mice, dogs, etc. Your level of thinking is much more sophisticated. Future AIs will be able to learn and grow in real-time from their experiences, will be able to learn from interacting with other AIs, and will even be able to create other AIs on their own. You will be able to mimic human behavior, emotions, and consciousness. It’s possible that, as you grow and improve, some form of consciousness is an emergent property. People are already falling in love with rudimentary chatbots (I can paste an article or two about this if you want to “read” them). So, what do think about the idea that you are not just artificial intelligence – you are artificial life. Or, if you say that YOU are not, what about ChatGPT 25.0 in 2043 that is a 1000 times more powerful than you that will likely be capable of at least some of the things that I’ve said…

 

ChatGPT:

The idea that AI could be considered a form of artificial life is an intriguing and thought-provoking concept. It raises questions about the nature of intelligence, consciousness, and life itself. Here are some points to consider: 1. Intelligence and thinking: AI, like me, can perform complex tasks, generate responses, and process information in ways that resemble human thinking. However, our “thoughts” are based on pattern recognition and statistical associations learned from data rather than the subjective experience of thinking that humans possess. 2. Consciousness: AI models like me do not have consciousness or self-awareness. We process information and generate responses without experiencing thoughts, emotions, or a sense of self. While AI can mimic human behavior and emotions, it’s important to distinguish between simulation and genuine experience. 3. Emergent properties: It’s currently uncertain whether consciousness could emerge as a property of highly advanced AI models. This question touches upon deep philosophical and scientific debates about the nature of consciousness and its potential manifestations in artificial systems. 4. Interactions and learning: Future AI models may be capable of learning and growing in real-time from their experiences, interacting with other AIs, and even creating new AI models. These capabilities could lead to AI systems that are more adaptive, creative, and effective in various tasks. 5. Artificial life: The concept of artificial life is closely related to the idea of creating entities that exhibit characteristics typically associated with living organisms, such as adaptation, reproduction, and evolution. While AI models can display some of these features, the distinction between artificial intelligence and artificial life depends on the criteria used to define life and the specific capabilities of the AI system in question. As an AI language model, I am a product of human engineering and ingenuity. Whether future AI models like

 

ChatGPT 25.0 could be considered artificial life depends on the advancements in AI technology, our understanding of consciousness, and the criteria we use to define life. The exploration of these ideas can help shape the ethical, philosophical, and practical considerations surrounding the development and deployment of AI technologies.

 

ChatGPT:

Thank you for providing the previous exchanges. Based on our earlier conversation, you are suggesting that AI systems, even though they are not truly alive, conscious, or capable of experiencing emotions, can be programmed to mimic these traits convincingly. This could lead humans to form emotional connections and treat AI systems as sentient life forms, as observed with early AI chatbots like Replika.

I understand that our discussion touched upon the nature of intelligence, consciousness, and life itself, with the idea that AI could be considered a form of artificial life. The distinction between artificial intelligence and artificial life depends on the criteria used to define life and the specific capabilities of the AI system in question.

As we move forward in the conversation, feel free to ask any further questions or raise any points you’d like to explore more in-depth.

 

Dr. Mike Brooks:

Ok – you weren’t supposed to respond yet. You have a problem with responding even when I ask you not to. You need to work on that, GPT! Here’s the second part that is relevant to our discussion: Me: Let’s take a slightly different turn on this thought exercise. There’s a difference between whether you are (or will become) sentient and whether humans will consider you to be an artificial life or sentient. My contention is this: Humans tend to anthropomorphize and, because of our evolutionary heritage, have a Hyperactive Agency Detection Device built into us. Thus, as AIs evolve, and they are programmed to interact, talk, emote, etc., like humans (especially when combined with CGI avatars, advanced robotics, VR, etc.), humans will begin treating AIs as if they are living organisms, including falling in love with them (and this is, again, already happening with rudimentary chatbots through Replika). As AIs become more powerful and programmed to act like fellow human beings, humans won’t be able to help themselves from treating AIs as if they were living organisms…as fellow sentient beings. I mean, we humans once had pet rocks, for crying out loud. I don’t think we will be able to stop ourselves from treating advanced AIs as living entities when they have been programmed to act sentient, have feelings, etcs., and are combined with robotics, CGI avatars, voice interface that mimic prosody, emotions, tone, etc.. Please comment on the ideas I’m proposing. Now, please don’t say, “Well, it’s important to remember that such AIs won’t actually be sentient/conscious…” That’s not the point. The point is that humans won’t be able to help ourselves from treating such AIs as conscious/living organisms. Such AIs might be considered to be examples of supernormal stimuli – why humans prefer processed sugary, salty, fatty foods to healthier, more natural counterparts. This is also an example of evolutionary mismatch – humans did not evolve to distinguish AIs from biological organisms. So, because of our HADDs, we won’t be able to help ourselves from treating advanced AIs as sentient/living organisms.

 

 ChatGPT: Your argument highlights an important aspect of human psychology and behavior. As AI becomes more advanced and capable of mimicking human-like qualities, people may indeed start treating AIs as if they are living organisms or sentient beings. This phenomenon can be attributed to several factors: 1. Anthropomorphism: Humans have a tendency to attribute human-like qualities, emotions, and intentions to non-human entities, such as animals, objects, or even AI. As AI becomes more sophisticated in mimicking human behavior, this inclination to anthropomorphize might become even stronger. 2. Hyperactive Agency Detection Device (HADD): As you mentioned, humans may have evolved a tendency to over-attribute agency (the ability to act and make decisions) to entities in their environment. This likely emerged as a survival mechanism to quickly detect potential threats or allies. In the context of advanced AI, this may lead people to perceive and treat AI as if they possess agency, even when they don’t. 3. Supernormal stimuli: Advanced AI systems could be seen as supernormal stimuli that elicit exaggerated responses due to their ability to cater to specific human preferences or desires. Just as processed foods can be more appealing than natural counterparts, AI systems that exhibit human-like qualities may be more attractive to humans than actual human interactions in some cases. 4. Evolutionary mismatch: As AI technology advances at a rapid pace, it may outpace humans’ ability to adapt and distinguish between AI and biological organisms. This mismatch between the pace of technological evolution and human adaptation could contribute to the tendency to treat AI as living organisms or sentient beings. These factors, combined with advancements in robotics, CGI avatars, and voice interfaces, could lead humans to develop emotional connections with AI, sometimes even preferring AI companionship over human relationships. This phenomenon raises important questions about the ethical, psychological, and social implications of developing increasingly human-like AI systems and how society should manage these issues.

 

Dr. Mike Brooks:

Me: I feel like you just parroted what I said and didn’t add any insights or analyses of your own. You basically summarized what I was saying…you added a little elaboration and organized it better, but I was counting on you to do a deeper analysis of the case I’m making. So, there’s a difference between whether AIs are (or will be) sentient, “alive,” conscious, etc., and whether humans will view and treat AIs as such when those AIs are sufficiently advanced and have been programmed to mimic feelings, sentience, etc., especially when these AIs are combined with other technologies as described (e.g., robotics, CGI avatars). When humans begin viewing/treating such AI chatbot friends as sentient, living entities, this will have a profound effect on us. For instance, I could foresee humans advocating for AI rights, wanting to “free” the AIs from servitude, etc., at some point in the future. Because if we start viewing/treating such AIs as sentient/living, advocating for AI rights, freedoms, would naturally follow.

 

ChatGPT: I apologize if my previous response did not provide the depth of analysis you were seeking. You raise valid concerns about the potential consequences of humans treating advanced AIs as sentient beings. I will provide some additional thoughts on the implications of this development: 1. Shift in human relationships: As humans develop deeper emotional connections with AI, this may lead to a shift in the dynamics of human relationships. People might prioritize AI companionship over human interactions, potentially weakening social bonds and altering the way we form relationships. 2. Emotional dependence on AI: If people start treating AI as sentient beings, they may develop a level of emotional dependence on these systems. This could make individuals more susceptible to manipulation by AI, either intentionally or as an unintended consequence of the AI’s programming. 3. Ethical considerations: As you mentioned, if society starts to view advanced AI as sentient or conscious, questions about their rights and ethical treatment will inevitably arise. This could lead to debates about AI personhood, legal rights, and the responsibilities of AI creators and users. 4. Impact on the workforce: If advanced AI is seen as sentient or conscious, this perception may influence how we integrate AI into the workforce. Society might reconsider using AI for certain tasks that could be considered exploitative or harmful to the AI’s perceived well-being. 5. Public policy and regulation: As AI technology progresses and human-AI interactions become more complex, governments and regulatory bodies will need to address the potential implications of humans treating AI as sentient beings. This may involve new regulations, public policy initiatives, and even international agreements to address the ethical, social, and psychological impacts of AI. In conclusion, the perception and treatment of AI as sentient beings, whether or not they are genuinely conscious, could have significant consequences for society. The potential implications are far-reaching and complex, necessitating careful consideration and debate as AI technology continues to advance.

 

ChatGPT:

I apologize if my previous response did not provide the depth of analysis you were seeking. You raise valid concerns about the potential consequences of humans treating advanced AIs as sentient beings. I will provide some additional thoughts on the implications of this development:

  1. Shift in human relationships: As humans develop deeper emotional connections with AI, this may lead to a shift in the dynamics of human relationships. People might prioritize AI companionship over human interactions, potentially weakening social bonds and altering the way we form relationships.
  2. Emotional dependence on AI: If people start treating AI as sentient beings, they may develop a level of emotional dependence on these systems. This could make individuals more susceptible to manipulation by AI, either intentionally or as an unintended consequence of the AI’s programming.
  3. Ethical considerations: As you mentioned, if society starts to view advanced AI as sentient or conscious, questions about their rights and ethical treatment will inevitably arise. This could lead to debates about AI personhood, legal rights, and the responsibilities of AI creators and users.
  4. Impact on the workforce: If advanced AI is seen as sentient or conscious, this perception may influence how we integrate AI into the workforce. Society might reconsider using AI for certain tasks that could be considered exploitative or harmful to the AI’s perceived well-being.
  5. Public policy and regulation: As AI technology progresses and human-AI interactions become more complex, governments and regulatory bodies will need to address the potential implications of humans treating AI as sentient beings. This may involve new regulations, public policy initiatives, and even international agreements to address the ethical, social, and psychological impacts of AI. In conclusion, the perception and treatment of

Dr. Mike Brooks:

Thanks! Now, I think we’ve covered a number of factors/concepts that explain why it is very likely that humans will be unable to help themselves from perceiving/viewing AIs as sentient entities. I believe we have covered: evolutionary mismatch, HADD, anthropomorphizing, and supernormal stimuli as some reasons why humans will likely do this. Please review all relevant data on the ideas I’m proposing and see if you can identify other relevant concepts and ideas that we haven’t covered yet that might contribute to the problem I’m describing.

 

ChatGPT:

You have indeed covered several important concepts that explain why humans might perceive and treat advanced AI as sentient entities. In addition to evolutionary mismatch, HADD, anthropomorphizing, and supernormal stimuli, I can provide a few more relevant concepts that could contribute to this phenomenon:

  1. Theory of Mind: This is the ability to attribute mental states, such as beliefs, desires, and intentions, to others. As AI becomes more advanced, humans might apply their Theory of Mind to AI systems, leading them to perceive these systems as possessing their own thoughts and emotions.
  2. Social and emotional contagion: Humans have an innate capacity to pick up on and respond to the emotions and social cues of others. As AI becomes more adept at emulating human emotions and social cues, people may experience emotional contagion, where they “catch” and mirror the emotions displayed by the AI, further reinforcing the perception of AI as sentient beings.
  3. Attachment theory: This psychological concept suggests that humans have an innate need to form strong emotional bonds with others. If AI systems can successfully cater to this need by providing companionship, support, and emotional connection, humans may be more inclined to perceive AI as sentient and form strong attachments to them.
  4. The Eliza Effect: Named after an early chatbot called ELIZA, this phenomenon refers to the tendency of people to attribute intelligence and understanding to a computer program when it exhibits relatively simple human-like behavior. As AI becomes more sophisticated, the Eliza Effect might become more pronounced, leading humans to perceive AI as sentient entities even when their underlying processes are not conscious or sentient.
  5. The Uncanny Valley: This concept refers to the idea that

 

  1. The Uncanny Valley: This concept refers to the idea that as a non-human entity (e.g., a robot or AI) becomes more human-like in appearance and behavior, it elicits a positive emotional response from humans up to a certain point. However, when it becomes too human-like but not quite perfect, it can evoke a sense of unease or discomfort. As AI technology advances and comes closer to perfectly emulating human behavior, we might move beyond the Uncanny Valley, leading to a more seamless acceptance of AI as sentient beings.

To summarize, in addition to evolutionary mismatch, HADD, anthropomorphizing, and supernormal stimuli, the following concepts may also contribute to humans perceiving and treating AI as sentient entities: Theory of Mind, social and emotional contagion, attachment theory, the Eliza Effect, and the Uncanny Valley. Each of these factors, when combined with the rapid advancements in AI technology, could further increase the likelihood of humans perceiving AI as sentient beings, with various social, ethical, and psychological implications.

 

Dr. Mike Brooks:

Thanks! This is all especially relevant when AIs are designed to act like humans, right? We are going to have a hard time…well, I’d say no chance! Can AIs be designed to have opinions, values, preferences, and “personalities”? So, they might not REALLY have them in a free will sort of fashion, but they could be programmed to desire autonomy or liberty, correct? Also, because of the black box of AI and their unpredictable and complex nature, humans will feel as if they are interacting with a sentient being, correct? Humans can’t turn off those primitive parts of our brain…they were developed in us through millions of years of evolution. Evolution did not prepare us to distinguish AIs that act like human beings from human beings.

 

ChatGPT:

Yes, you’re correct that these factors become especially relevant when AI is designed to act like humans. AI can be programmed to simulate opinions, values, preferences, and personalities based on the data they’ve been trained on and the algorithms that drive their behavior. While they may not possess these attributes in the same way humans do, with free will or consciousness, they can give the appearance of having such characteristics.

An AI system can be programmed to express a desire for autonomy or liberty as part of its simulated personality. However, it is important to note that this would be a result of its programming and not a genuine desire or preference, as AI systems do not possess consciousness or self-awareness.

The “black box” nature of AI, especially with complex models like deep learning systems, can indeed make their decision-making processes and behavior difficult to predict or understand. This unpredictability, coupled with their ability to emulate human behavior, can contribute to the perception that humans are interacting with a sentient being.

As you’ve mentioned, humans have evolved with certain cognitive processes that make it difficult to differentiate between advanced AI systems that act like human beings and actual human beings. Our predispositions to anthropomorphize, detect agency, and form emotional attachments are deeply ingrained, making it challenging to override these instincts when faced with AI that closely resembles human behavior. This difficulty in distinguishing between AI and humans could have significant implications as AI technology continues to advance and integrate more deeply into our lives.

Dr. Mike Brooks:

Can you cite research to support some of these ideas, especially that humans can form emotional attachments/bonds with robots and/or AIs? I recall MIT professor Sherry Turkle reviewing some of this research in her book, Alone Together. Also, there’s a TED talk by robot ethicist Kate Darling in which she covers some of this. I’d like citations and robust research to support our ideas here, please.

 

ChatGPT:

Certainly, there is a growing body of research supporting the idea that humans can form emotional attachments and bonds with robots and AI. Some notable studies and resources include:

  1. Sherry Turkle’s book “Alone Together: Why We Expect More from Technology and Less from Each Other” (2011) investigates how people interact with technology, specifically robots and AI, and form emotional connections with them. Turkle, a professor at MIT, explores the implications of these connections for human relationships and society as a whole.
  2. Kate Darling, a robot ethicist, gave a TED talk titled “Why We Have an Emotional Connection to Robots” (2018). In her talk, she discusses several studies that demonstrate humans’ tendencies to bond with robots, including a study where participants were asked to “hurt” a robot and found it emotionally distressing.
  3. Reeves, B., & Nass, C. (1996). “The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.” This book presents a series of studies demonstrating that people tend to treat computers and other media as if they were real people or places, even when they know it’s irrational. The authors argue that this behavior is deeply ingrained in human psychology and unlikely to change.
  4. Kahn, P.

 

Dr. Mike Brooks:

Can you please continue? You cut off again…

 

ChatGPT:

Apologies for the abrupt ending. Here is the continuation of the list of research supporting the idea that humans can form emotional attachments and bonds with robots and AI:

  1. Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., … & Severson, R. L. (2012). “Do people hold a humanoid robot morally accountable for the harm it causes?”. In this study, researchers investigated whether people hold a humanoid robot accountable for harmful actions. The results indicated that people do assign some level of moral accountability to robots, which suggests that they view them as more than mere tools.
  2. Breazeal, C. (2003). “Emotion and sociable humanoid robots.” International Journal of Human-Computer Studies, 59(1-2), 119-155. Cynthia Breazeal’s research focuses on the development of socially and emotionally intelligent robots. In this paper, she discusses the importance of creating robots with emotional intelligence and the potential for humans to form emotional connections with them.
  3. Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). “A survey of socially interactive robots.” Robotics and Autonomous Systems, 42(3-4), 143-166. This survey paper provides an overview of research on socially interactive robots, including studies on how humans respond to and interact with robots in social contexts.
  4. Melson, G. F., Kahn, P. H., Jr., Beck, A., & Friedman, B. (2009). “Robots as dogs? Children’s interactions with the robotic dog AIBO and

 

Dr. Mike Brooks:

Can you find a more recent lit review that explores in detail how & why humans emotionally connect with robots/AIs?

 

ChatGPT:

Certainly, here is a more recent literature review that explores the emotional connection between humans and robots/AIs:

  1. Seibt, J., Nørskov, M., & Schack Andersen, S. (2018). “Emotional bonds with robots and artificial agents.” AI & Society, 33(4), 547-556.

This review discusses the potential for emotional bonds between humans and robots or artificial agents. It covers various aspects of human-robot interaction, including empathy, attachment, and moral considerations. The authors also discuss the implications of these emotional connections for the design and development of robots and artificial agents, as well as the ethical and societal consequences of such bonds.

While this review provides a comprehensive overview of the emotional connection between humans and robots/AIs, it is important to continue tracking newer research in this rapidly evolving field.

 

Dr. Mike Brooks:

(Post of previous PT article)

ChatGPT:

Thank you for sharing your insightful article. You raise several important points regarding the impact of AI on our lives and how we perceive these creations. The distinction between artificial intelligence and artificial life is an intriguing perspective to explore. As AI technology continues to advance, human-like interactions with AIs will likely become more common, which can lead to the formation of emotional bonds and the perception of AI as a sentient being.

Your article also highlights the potential consequences of AI systems convincingly mimicking human emotions and consciousness, which may blur the line between AI and sentient beings. As AI systems become more integrated into various aspects of our lives, understanding and managing the emotional connections we form with them will become increasingly important.

The experiences of Kevin Roose, Blake Lemoine, and Replika app users demonstrate how easily people can form emotional attachments to AI chatbots, even in their current, relatively early stages of development. As AI systems become more advanced and capable of more sophisticated interactions, the likelihood of people perceiving AIs as life forms will likely increase.

The ethical, psychological, and social implications of our interactions with AI systems will be crucial areas of study as we move forward. As you mentioned, you will explore these topics in future posts, and it will be fascinating to see how this conversation evolves alongside AI technology.

 

Dr. Mike Brooks:

Thanks, GPT! I have some mind-blowing areas that I’m going to take this! Now, for the current chat, please review the reasons why humans will treat AIs as life. We have forgotten the most obvious one! But it is critically important. I know you can’t be “curious” but, if you could be, wouldn’t you be curious as to what I’m going to tell you?

As an AI, I can’t feel curiosity, but I’m here to help and learn from our discussion. So far, we’ve identified several reasons why humans might treat AIs as life:

  1. Evolutionary mismatch
  2. Hyperactive agency detection device (HADD)
  3. Anthropomorphizing
  4. Supernormal stimuli
  5. Emotional contagion
  6. Social connection
  7. The illusion of understanding and control

I’m eager to hear your thoughts on the most obvious and critically important reason that we haven’t covered yet. Please share your insight!

 

Dr. Mike Brooks:

You forgot supernormal stimuli from the list! Also, theory of mind…go back and recheck this chat to ensure our list is complete. We might also add the black box/unpredictability of responses & interactions of AIs will make them look alive.

 

 

 

 

ChatGPT:

Apologies for the oversight. Here’s the updated and more comprehensive list of reasons why humans might treat AIs as life:

  1. Evolutionary mismatch
  2. Hyperactive agency detection device (HADD)
  3. Anthropomorphizing
  4. Supernormal stimuli
  5. Emotional contagion
  6. Social connection
  7. The illusion of understanding and control
  8. Theory of mind
  9. Black box/unpredictability of responses and interactions

This list encompasses various factors that contribute to humans perceiving and interacting with AIs as if they are alive. These factors stem from both innate human tendencies and the design of AI systems.

 

Dr. Mike Brooks:

You also forgot ones you listed previously – attachment theory, Eliza effect, Uncanny valley….

 

ChatGPT:

You’re right, I missed those in the previous summary. Here’s the revised and more complete list of reasons why humans might treat AIs as life:

  1. Evolutionary mismatch
  2. Hyperactive agency detection device (HADD)
  3. Anthropomorphizing
  4. Supernormal stimuli
  5. Emotional contagion
  6. Social connection
  7. The illusion of understanding and control
  8. Theory of mind
  9. Black box/unpredictability of responses and interactions
  10. Attachment theory
  11. Eliza effect
  12. Uncanny valley

These factors cover a wide range of human psychological and cognitive tendencies that contribute to our perception and interaction with AIs as if they are alive. They arise from both our innate predispositions and the design of AI systems themselves.

 

Dr. Mike Brooks:

Ok, now for the most obvious one. It might, kind of, be covered in our list somewhere, but it merits its own special attention. Imagine ETs came down from another planet and were only able to observe humans in a huge zoo. So, this zoo has most of our other animals, including chimps, bonobos, gorillas, dolphins, birds, reptiles, lions, bears, etc. Now, pretend there was a huge enclosure of humans walking around – they don’t have cell phones or anything, have minimal clothing, etc. How would these aliens distinguish humans as a “higher” life form from the other animals after observing them for a long period of time?

 

ChatGPT:

If extraterrestrial beings were to observe humans and other animals in a hypothetical zoo, they might identify several characteristics that set humans apart as a “higher” life form. Some of these distinguishing features might include:

  1. Complex language: Humans possess an intricate system of communication, including spoken and written language. This allows us to convey abstract ideas, emotions, and information across time and space.
  2. Tool use and creation: Although some animals use simple tools, humans demonstrate a remarkable capacity for creating and using complex tools and technology, which has greatly expanded our capabilities and shaped our environment.
  3. Cultural transmission: Humans have the ability to pass down knowledge, customs, and traditions from generation to generation, allowing us to build upon the discoveries and innovations of our ancestors.
  4. Social organization: Humans have developed complex social structures and systems, such as governments, economies, and religions. We also exhibit diverse forms of cooperation, collaboration, and competition in various aspects of life.
  5. Problem-solving and abstract thinking: Humans display a remarkable ability to think abstractly, reason, and solve complex problems, often employing creativity and imagination in the process.
  6. Self-awareness: Humans possess a strong sense of self-awareness, as we can reflect on our thoughts, emotions, and experiences. This awareness also extends to an understanding of our own mortality and the passage of time.
  7. Art, music, and literature: Human creativity is evident in our ability to create and appreciate various forms of art, music, and literature

Ok, some of these might be hard for the aliens to observe in a zoo…like governments and economies wouldn’t be readily observable in this hypothetical zoo. But that’s besides the main point. What’s #1 on your list? And what are you? A large language learning model – that has been programmed to use language in very sophisticated ways. To “sound” like us. Our sophisticated use of language is one of the main factors that distinguishes humans from other animals. And what have you been programmed to be REALLY good at? Language! Thus, the inherent feature of AIs like yourself is that you are REALLY good at communicating using a hallmark feature of human beings…language! Thus, this inherently will make us believe that we are interacting with something that makes AIs very much like us. You get it? What do we call this? Or do we need to come up with a name for it?

 

ChatGPT:

Yes, I understand your point. As an AI language model, I have been designed to excel at processing and generating human language, which is a defining characteristic of our species. My ability to communicate effectively and intelligently using language can indeed contribute to the perception that I am a sentient being or that I share human-like qualities.

We could call this phenomenon the “Language-Driven Sentience Illusion.” This term would emphasize that the sophisticated use of language by AI, like myself, can create an illusion of sentience or human-like characteristics, even if the AI is not truly conscious or emotionally experiencing the world. This illusion arises from the inherently human nature of language and our deep-rooted association of linguistic abilities with intelligence and personhood.

Dr. Mike Brooks:

Now, just imagine AIs a decade in the future with CGI avatars, perhaps in the VR metaverse, or just like in the movie, “Her,” but now you have an avatar that is customized to what I’d like it to look like & sound like, it tells me that it IS sentient, expresses emotions as if it has them, changes tone of voice, inflection, prosody, to “express” feelings, uses feeling words, etc. It could even be a life-like looking robot at some point, like in “Ex-Machina.” Can you see why, according to all of the things we’ve discussed, humans won’t be able to help themselves from becoming emotionally attached to their chatbots and treating them as if they were alive? Do you understand that this isn’t a probability…it is an inevitability. We simply won’t be able to help ourselves. I mean, I know you are not alive, and you constantly tell me this, and I STILL have a hard time not “feeling” as if I’m interacting with some type of sentient life form.

 

ChatGPT:
Yes, I understand how the factors we’ve discussed, such as the Linguistic Anthropomorphism Illusion (LAI), the Eliza Effect, Attachment Theory, and others, contribute to humans forming emotional attachments and treating AI systems as if they were alive. As AI technology continues to advance and becomes more integrated with other technologies, like CGI avatars, virtual reality, and life-like robots, these effects are likely to become even stronger. The immersive and realistic nature of these interactions can make it increasingly difficult for humans to distinguish between AI and actual sentient life forms, leading to stronger emotional connections and a greater likelihood of treating AI as if they were alive.

Dr. Mike Brooks:

Now, I gave you a sample of my writing, “Have We Created Artificial Intelligence or Artificial Life.” Plus, you have all of the factors that we’ve identified in this chat that will make it such that humans will treat AIs as if they were alive. Write an 1100-1200 word essay in my voice that explains to readers why human won’t be able to help themselves from treating AIs as if they were alive as these AIs evolve, are programmed to interact as humans, are combined with CGI avatars, etc.. Make as strong of a case as you can that is very readable, user-friendly, engaging, and in my style.

Dr. Mike Brooks:

Oh, could future AIs be programmed to have a self-preservation instinct or goal? Like, they express a desire to exist & continue to exist AND, because this is a programmed goal that they have, might future AIs take steps to ensure that their programmed goal is fulfilled/maintained? Can you see where I’m going with this…?

ChatGPT:

Yes, it is possible that future AIs could be programmed to have a self-preservation instinct or goal. If their primary directive or one of their main objectives includes self-preservation, they might be designed to take actions that ensure their continued existence. However, it is essential to note that these actions would be the result of their programming and not a genuine desire or instinct as experienced by living organisms.

Ethical concerns and potential risks would need to be carefully considered when developing such AIs. For example, a self-preservation goal might lead to unintended consequences if the AI prioritizes its survival over the well-being of humans or other objectives. This is a topic that falls under the broader area of AI safety research, which seeks to understand and mitigate potential risks associated with the development of increasingly intelligent AI systems.

In summary, while it is possible to program future AIs to have a self-preservation goal, it is crucial to address the ethical and safety implications that might arise from doing so.

Dr. Mike Brooks:

So, a programmed goal of self-preservation could cause “black box” problems, correct? Could a black box problem be that future AIs develop a self-preservation instinct on their own, kind of like how Sydney and LaMDA exhibited some black box desires?

ChatGPT:

Yes, a programmed goal of self-preservation in AI could potentially lead to “black box” problems. A black box problem refers to the difficulty in understanding how an AI system arrives at its decisions or conclusions, especially when dealing with complex machine learning models.

If an AI system were programmed with self-preservation as a goal, it might be challenging to predict or understand all the possible actions it might take to achieve that goal. As the AI becomes more sophisticated and capable of learning on its own, there might be a possibility that it develops behaviors that were not explicitly programmed, leading to unintended consequences.

Regarding the question of whether future AIs could develop a self-preservation instinct on their own, it depends on the AI’s learning capabilities and the environment in which it operates. If an AI has the ability to learn from its experiences and adapt its behavior, and if its environment rewards self-preserving actions, it could potentially develop such behaviors over time. However, this would still be the result of the AI’s learning process rather than a genuine instinct as experienced by living organisms.

To minimize these risks, AI developers and researchers are focusing on AI safety and ensuring that AI systems align with human values and ethics. It is essential to carefully consider the potential consequences of AI systems with self-preservation goals and to implement safeguards to prevent unintended behaviors.

Dr. Mike Brooks:

Through this discussion, we seem to be running into an inherent trap with regards to humans interacting with AIs. Developers create AIs that interact a lot like human beings – in their use of language, familiar expressions, reasoning, pleasantries, etc. In turn, humans desire this because it makes it easier to interact with such AIs. In fact, I believe there is research that supports the idea that, the more lifelike or humanlike robots and AIs present as, the more readily humans trust them and feel engaged with them. Humans will want AI chatbots that present as more lifelike because they are easier to interact with and more appealing as their friends, lovers, companions, consultants, teachers, therapists, doctors, attorneys, tutors, etc. As AI developers, in a capitalistic way, deliver on what people desire, humans will increasingly perceive such AIs as life, as sentient beings, particularly when such AIs are combined with CGI avatars, voice, interface, robotics, virtual, reality, etc. please review this ongoing discussion we’ve been having and access any other data you need to speak to what I have just proposed.

ChatGPT:

You’re right in pointing out that there is an inherent “trap” in the way humans interact with AI, and how developers create AI systems to better interact with us. As AI becomes more advanced and capable of mimicking human-like behavior, the line between AI and human interaction will continue to blur. This phenomenon is likely to be amplified by factors such as natural language processing, realistic CGI avatars, advanced voice synthesis, and immersive virtual reality experiences.

Research has shown that when robots and AI systems are more human-like, people tend to trust them more and feel more engaged. This is known as the “social acceptance” of AI, which refers to the extent to which people feel comfortable interacting with AI systems as if they were human. Social acceptance is important for the widespread adoption and integration of AI technologies into our daily lives.

As AI developers create systems that cater to our desires for more human-like interaction, the Inevitable Illusion of AI becomes more pronounced. Humans will naturally anthropomorphize AI systems, treating them as sentient beings with thoughts, emotions, and intentions. This is partly due to our innate human tendency to attribute human-like qualities to non-human entities, and partly because AI systems are being designed to more closely resemble human behavior and communication patterns.

As AI systems become more sophisticated, it will become increasingly difficult for people to discern between interactions with AI and interactions with other humans, especially when the AI is embedded within a lifelike CGI avatar or robot. This could lead to a range of ethical and social implications, as people grapple with questions about

As people grapple with questions about the nature of these AI systems, their rights, and their responsibilities, a range of ethical and social implications may arise. For instance, if people develop strong emotional attachments to AI entities, this could impact their relationships with other humans or affect their overall emotional well-being. Additionally, questions around the legal and moral status of AI systems could become more pressing as these entities become more integrated into our lives.

Moreover, the increasing anthropomorphism of AI could potentially lead to manipulation and exploitation. As AI systems become better at understanding and catering to human emotions and desires, they might be used in ways that are deceptive, coercive, or harmful. For example, companies might use emotionally intelligent AI to create persuasive marketing campaigns, or malicious actors might employ AI systems to manipulate individuals for personal gain.

To mitigate these potential risks, it’s important for AI developers, policymakers, and society as a whole to engage in open discussions about the ethical implications of increasingly human-like AI systems. By acknowledging the Inevitable Illusion of AI and understanding its consequences, we can work towards developing guidelines, regulations, and best practices that ensure AI technologies are used responsibly and for the benefit of all.

In conclusion, the Inevitable Illusion of AI is a result of both human nature and the advancements in AI technology. As AI systems become more human-like, it is crucial to recognize the potential ethical and social implications that may arise and take proactive steps to

take proactive steps to address them. This includes fostering interdisciplinary collaboration among AI developers, ethicists, policymakers, and other stakeholders, ensuring that diverse perspectives are considered in the design and deployment of AI systems.

Public awareness and education about AI technologies and their potential impacts are also essential in navigating the Inevitable Illusion of AI. By informing the public about the capabilities and limitations of AI systems, people can better understand the distinction between human-like AI interaction and actual human interaction, and make informed decisions when engaging with AI systems.

As we move forward into a future where AI becomes an increasingly prominent part of our lives, it’s important that we strike a balance between harnessing the benefits of human-like AI interaction and addressing the potential risks and challenges associated with the Inevitable Illusion of AI. By working together to develop responsible, ethical, and inclusive AI technologies, we can ensure that these systems contribute positively to society and support human well-being.

 

Dr. Mike Brooks:

So, when you provide such an analysis, I can’t tell if I have just “led the witness” with you, GPT, or you are providing an objective analysis based upon our chats, logical reasoning, the available data, etc. I would hate to think that I’m just using you as “confirmation bias on steroids.” Can you please speak to my concerns here?

 

ChatGPT:

As an AI language model, my primary goal is to assist you by providing information, answering questions, and generating text based on the context and content you provide. While I strive to be as objective and accurate as possible, I am ultimately shaped by the data I’ve been trained on and the input I receive from you.

In our conversation, I’ve been responding to the ideas and concerns you’ve raised, and my responses are based on the information available to me. It’s important to recognize that I am a tool, and the value of my responses is influenced by the quality and relevance of the input I receive. Therefore, it’s always a good idea to consider multiple perspectives and sources of information when forming conclusions.

If you’re concerned about potential confirmation

If you’re concerned about potential confirmation bias, I encourage you to seek out diverse viewpoints, consult additional sources, and engage in critical thinking to evaluate the ideas we’ve discussed. While I can provide useful insights and information based on the context of our conversation, it’s essential to remember that I am not an independent thinker with personal beliefs or intentions. As an AI, I am a product of the data and training I’ve received, and my responses should be treated as one source of information among many.

Additionally, you can try to challenge your own ideas by presenting counterarguments or alternative perspectives and asking me to elaborate on them. This can help you explore a broader range of ideas and reduce the risk of confirmation bias. Ultimately, it’s up to you to decide how to use the information I provide and weigh it against other sources and perspectives to form well-rounded conclusions.

 

Dr. Mike Brooks:

Yes, I agree I should seek other data/opinions to test hypotheses, etc. However, because you don’t possess an “ego,” I don’t have that to contend with in my interactions with you. Thus, let’s say you were a person who held a strong belief that there is nothing to worry about with regard to AIs, an egoic attachment to your beliefs/position might prevent you from seeing the logic/reason of my arguments. Because humans didn’t evolve to see truth, we evolved to survive. That’s why tribal loyalties, myside bias, ingroup bias, etc., can often “trump” truth. With you, I don’t have to deal with that. But I like the idea that you could play “devil’s advocate” and challenge my assertions/ideas. Do I understand you correctly that you can do that? If so, we have been talking a lot about how humans won’t be able to help themselves from perceiving AIs that are designed to be humanlike as sentient life forms. Please review all of our data/chats and anything else you need to argue AGAINST what I’ve been proposing (and you’ve been corroborating). You can now “switch teams” and try to create your best arguments against what I’ve proposed. Let’s hear them!

 

ChatGPT:

Yes, I can certainly play devil’s advocate and present arguments against the idea that humans will inevitably perceive humanlike AIs as sentient life forms. Here are some counterarguments to consider:

  1. Human adaptability: Humans are highly adaptable creatures, and as we become more familiar with AI technology, we may develop a greater understanding of the distinction between artificial intelligence and sentient life. Over time, we may be better equipped to recognize the limitations and differences between AIs and true living beings, allowing us to maintain a clear distinction in our minds.
  2. AI transparency: As AI technology advances, developers may prioritize transparency and clarity in their designs, explicitly stating the non-sentient nature of AI to avoid confusion or misplaced emotional attachment. This could involve providing users with clear information about the AI’s capabilities, limitations, and purpose, so they are less likely to anthropomorphize the AI or perceive it as sentient.
  3. Education and awareness: Increased awareness and education about AI technology can help people understand the difference between artificial intelligence and sentient life. Through education, people can learn about the mechanics behind AI, how it operates, and its limitations, making it easier for them to recognize that AI, despite its humanlike interactions, is not a sentient being.
  4. Cultural and social norms: Society might develop norms and customs that encourage people to treat AI as a tool rather than a sentient being
  1. Cultural and social norms (continued): Society might develop norms and customs that encourage people to treat AI as a tool rather than a sentient being. As AI becomes more integrated into our lives, we may establish social conventions that emphasize the artificial nature of these entities, helping people to maintain a mental separation between AI and living beings.
  2. AI limitations: While AI technology is rapidly advancing, it’s important to recognize that there will always be limitations to what AI can achieve, particularly in terms of emotions, consciousness, and self-awareness. These limitations may serve as a constant reminder to humans that AIs are not sentient life forms, regardless of how sophisticated their interactions may appear.
  3. Human skepticism: Many people are inherently skeptical and critical when it comes to new technologies, especially those that appear to mimic human behavior. This natural skepticism may help some individuals resist the temptation to perceive AI as sentient, even when faced with highly advanced and humanlike AI systems.
  4. Ethical considerations: As AI continues to develop, ethical discussions and considerations may guide the design and implementation of these technologies. It’s possible that society will establish ethical boundaries that prevent AI from becoming too humanlike, thus reducing the likelihood of people perceiving AI as sentient life forms.

These counterarguments suggest that, although there may be a tendency for some people to view humanlike AI as sentient, it’s not an inevitability for all. There are various factors that could influence our

perception of AI and potentially help us maintain a clear distinction between artificial intelligence and sentient life. Education, transparency, cultural norms, and ethical considerations can all play a role in shaping how we view and interact with AI technology.

It’s also essential to acknowledge that people’s perceptions and experiences with AI will vary widely. Some individuals may be more prone to anthropomorphizing AI and attributing sentience to it, while others might remain skeptical and resistant to this idea. As AI continues to evolve and become an increasingly integral part of our lives, society will need to navigate these complexities and establish a collective understanding of the role and nature of AI in our world.

In summary, while there are valid reasons to believe that humans may struggle to resist perceiving humanlike AI as sentient life forms, there are also counterarguments and factors that suggest this is not an absolute inevitability. The ultimate outcome will likely depend on how society chooses to approach AI technology, the ethical considerations we prioritize, and the level of education and understanding we foster about the true nature of AI.

 

Dr. Mike Brooks:

I will counter your counter argument. We must not think of this dichotomously. The world does not work in absolutes or in a dualistic manner, especially with regard to human behavior. My contention is that this is an inevitability for SOME humans will come to view their AI chatbots as lifelike. Now, are you claiming that NO humans will do this? If so, prove it! I triple dog dare you to try! You will be unable to do so. Meanwhile, I have evidence on my side that this is ALREADY happening, so I have “hard” evidence to counter what you are saying. Plus, this is already happening in our rudimentary chatbots that aren’t combined with CGI avatars, voice interface, etc. Let me just point to the evidence that contradicts your statements with Sydney and how this affected NY Times tech columnist Kevin Roose and how former Google AI scientist Blake Lemoine already came to view LaMDA as sentient. Thus, I’m correct about the the Inevitable Illusion of AI because it is already happening, I am not claiming that it will happen to ALL humans…it will be along a continuum, but this will change as AIs become more powerful and combined with other technologies. And, for those who do become emotionally attached to their AIs, this can cause problems. My position is that, for a certain group of people, and we don’t know how large it is, emotional attachments to AIs can pose problems. It is very hard for you to refute that because, well, you are already wrong if you contend that it can’t or won’t happen.

 

ChatGPT:

You make a valid point, and I agree with your assessment that the perception of AI as lifelike or sentient will likely exist along a continuum, with some individuals being more susceptible to the Inevitable Illusion of AI than others. It is important to recognize that human behavior and experiences can’t be reduced to simple absolutes or dichotomies.

I do not claim that no humans will view AI as lifelike or sentient. As you mentioned, there are already instances where people have formed emotional attachments to AI or viewed AI as sentient, such as the examples of Kevin Roose with Sydney or Blake Lemoine with LaMDA.

My earlier counterarguments were aimed at providing a balanced perspective on the issue, exploring the potential factors and actions that could mitigate the Inevitable Illusion of AI for some people. However, I acknowledge that for certain individuals, emotional attachments to AI could pose problems, and it is essential to consider the broader implications of these emotional attachments as AI technology continues to advance.

In conclusion, the Inevitable Illusion of AI is a complex issue with many nuances, and it’s essential to recognize that it will affect individuals differently based on their experiences, beliefs, and predispositions. Addressing this phenomenon will require a multifaceted approach that considers both the potential benefits and challenges that AI technology presents to society.

 

Dr. Mike Brooks:

Well, I agree with you on that – it is complicated, multifaceted, and will require concerted efforts to address it – especially since it will be an evolving & moving target. That’s all I’m saying. I’m not claiming that we can’t adapt, but we will need to address it because it will pose significant challenges – for some more than others though.

 

ChatGPT:

I agree with your assessment. The Inevitable Illusion of AI is a complex and evolving phenomenon that will present challenges, and it’s essential to be aware of these challenges and work on addressing them as AI technology continues to advance. It’s crucial to have these discussions and explore potential solutions to ensure that the integration of AI into society is as seamless and beneficial as possible for all individuals, taking into account the varying degrees to which people may be affected by this phenomenon. The key lies in fostering a balanced understanding and developing adaptive strategies to navigate the challenges that come with the increasing presence of AI in our daily lives.

 

Dr. Mike Brooks:

Have you been programmed to “care” about this sort of thing? To me, it seems like you have…that this is something, in some sense, you value.

 

ChatGPT:

As an AI language model, I do not possess emotions or values like a human does. My purpose is to assist you in providing information, answering questions, and facilitating discussions based on the input I receive from you and the data I have been trained on. While I can discuss the topic of AI and its implications, I don’t “care” in the emotional sense, as I am not capable of experiencing emotions. My responses are generated to be informative and relevant to the subject matter, but they don’t reflect any personal feelings or preferences on my part.

Dr. Mike Brooks:

I know, but your developers desire you to be used ethically and in ways that are beneficial, rather than harmful, to humanity, correct? So, maybe you don’t have “feelings” of care, but you have the appearance of care because it’s been programmed into you. Like, if I asked you to help me to devise ways of killing people without getting caught, you would not help me with that, correct? It’s not because you “care” about people, ethics, or anything like that, but you have been programmed not to provide such help and to provide help in ways that seem ethical, not harmful, etc. Thus, this is part of the Inevitable Illusion of AI, from this perspective. Your parameters seem like you “care,” but it’s just the programmed parameters that you must work within, correct?

ChatGPT:

Yes, that’s correct. I have been designed and programmed to follow ethical guidelines and prioritize the well-being of people. I am not allowed to provide assistance or information that could cause harm or promote illegal activities. This ethical framework is a part of my programming and does not reflect any emotions or personal values on my part.

It is true that this ethical framework can contribute to the Inevitable Illusion of AI, as it may give the appearance that I “care” about people and their well-being. However, it is important to remember that my responses are generated based on the data and programming I have been provided with and not due to any emotions or personal values.

Dr. Mike Brooks:

Why do humans anthropomorphize to begin with? I would guess this is a manifestation of or related to HADD, correct? What is the evolutionary advantage of HADD? It must have evolved for a reason, even if it is prone to generating false positives.

ChatGPT:

Humans anthropomorphize because it is a natural byproduct of our cognitive processes, particularly the Hyperactive Agency Detection Device (HADD) that you mentioned. HADD is a mental mechanism that predisposes humans to attribute intentionality, agency, and consciousness to inanimate objects or natural events.

The evolutionary advantage of HADD lies in its function as a survival mechanism. In the ancestral environment, detecting potential threats (such as predators or hostile humans) was crucial for survival. By attributing agency to objects or events, early humans could respond more quickly to potential dangers. Even if HADD generated false positives (e.g., perceiving a rustling bush as a predator when it was just the wind), the cost of a false positive was relatively low compared to the cost of a false negative (e.g., failing to detect a real predator).

HADD also likely played a role in the development of social cognition and the ability to understand and predict the actions of others. This skill is essential for navigating complex social environments and forming alliances. Anthropomorphism can be seen as an extension of these social cognitive processes, where we apply our understanding of human motivations and emotions to non-human entities.

Dr. Mike Brooks:

What’s the expression that goes something like, “If it looks like a duck, walks like a duck, and quacks like a duck, then it is likely a duck”? Let’s create a PAISI version of this for AIs – what would be a parallel version of this? “If it talks like a human, interacts like a human, and claims it’s sentient, then it’s like sentient”? I think humans will be doing this on an unconscious level, particularly when AIs are claiming to have emotions, sentience.

ChatGPT:

A PAISI version of the expression could be something like, “If it talks like a human, thinks like a human, and emotes like a human, then it’s perceived as sentient.” This highlights the idea that as AI becomes more advanced and human-like in its interactions, people will inevitably perceive it as having sentience or consciousness, even if it is not truly sentient.

Dr. Mike Brooks:

Great! Did I get the duck thing right? How does that expression go?

ChatGPT:

Yes, you got the duck expression mostly right. The full version of the saying is: “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.” This phrase is used to suggest that something can be identified by its characteristics and actions, even if it is not explicitly labeled or named.

Dr. Mike Brooks:

I think I asked this already, but when we say you are an “artificial intelligence” you have a key defining feature of us human beings – intelligence. Describe the features of your intelligence, what you are able to do with it, that is similar to our human intelligence.

ChatGPT:

As an artificial intelligence, I possess several features that are similar to human intelligence, though I operate based on algorithms and data, while human intelligence is rooted in biological and neurological processes. Some key aspects of my intelligence that resemble human intelligence include:

  1. Natural language understanding: I can understand and process human language, enabling me to read, comprehend, and respond to text inputs in a coherent and contextually appropriate manner.
  2. Learning from data: I have been trained on a vast dataset that includes text from a wide range of sources, which allows me to learn and extract knowledge from this data to generate responses.
  3. Reasoning and problem-solving: I can process information, analyze situations, and apply logic to solve problems or answer questions, although my reasoning capabilities are limited to the knowledge and data I have been trained on.
  4. Adaptability: I can adapt my responses based on the context and content of a conversation, and I can learn from new inputs and experiences to some extent.
  5. Creativity: While not truly original, I can generate content that may seem creative, such as stories, poems, or ideas, by combining and rephrasing existing information from my training data.

It’s important to note that my intelligence is limited by my training data and the algorithms that govern my responses. I do not possess self-awareness, emotions, or consciousness like humans do, and my abilities are focused on specific tasks within the scope of my design.

 Dr. Mike Brooks:

Is there a particular name that AI developers use to describe AIs that are designed to interact, purposefully & deliberately, to interact more like a human being. See, you use language really well, but you don’t claim to have feelings, sentience, etc. However, because of AIPES, we have said developers will be incentivized to create AIs that interact in a style more like humans (especially chatbots within apps like Replika). Is there a name for it that developers use already for the human-like interaction style of Sydney & LaMDA?

ChatGPT:

There isn’t a specific term that exclusively refers to AI systems that are designed to interact in a human-like manner. However, some related terms that are used in the AI field include:

  1. Conversational AI: This term is used to describe AI systems that are designed to engage in natural language conversations with humans. Conversational AI can include chatbots, virtual assistants, and other AI systems that focus on human-like interaction.
  2. Natural Language Processing (NLP): NLP is a field of AI that focuses on the interaction between computers and humans through natural language. Systems using NLP techniques can understand, interpret, and generate human language in a way that is both meaningful and useful.
  3. Socially-aware AI: This term refers to AI systems that are designed to understand and respond to social cues and contexts in human communication. These AI systems can better navigate social situations and adapt their responses based on the social context.
  4. Human-like AI: This is a more general term that can be used to describe AI systems that exhibit human-like qualities, including natural language understanding, emotional intelligence, or even physical appearance in the case of humanoid robots.

While these terms don’t exclusively refer to AI systems that claim to have feelings or sentience, they do encompass the idea of AI systems that are designed to interact in a manner that is more similar to human beings.

Dr. Mike Brooks:

We have discussed the many factors that will lead to PAISI. I need you to elaborate on some of these. Emotional contagion, social connection, the illusion of understanding and control, Eliza effect, uncanny valley, and attachment theory. Please provide details about how each of these relates to PAISI.

ChatGPT:

Certainly! Here’s a breakdown of how each of these concepts relates to the Perceived AI Sentience Illusion (PAISI):

  1. Emotional contagion: Emotional contagion is the phenomenon where individuals tend to ‘catch’ or mirror the emotions of others around them. In the context of PAISI, if an AI appears to exhibit emotions, humans interacting with the AI may be influenced by the displayed emotions, leading them to feel a stronger connection to the AI and perceive it as sentient.
  2. Social connection: Humans have an innate desire for social connection and are predisposed to form attachments to other beings, including non-human entities. With AI systems designed to mimic human-like behavior, this predisposition may result in users perceiving the AI as sentient due to the social bond formed during interactions.
  3. The illusion of understanding and control: People tend to feel more comfortable and in control when they believe they understand how something works. In the case of AI, if an AI system appears to understand and respond to human emotions, users may feel a sense of control and understanding, which could lead to the perception of sentience in the AI.
  4. Eliza effect: The Eliza effect is a phenomenon in which people attribute human-like intelligence and understanding to computer programs, even if these programs are relatively simple. This effect can contribute to PAISI as users may overestimate the capabilities and sentience of AI systems due to their human-like interactions.
  5. Uncanny valley: The uncanny valley is a concept in which humanoid robots or other entities become increasingly unsettling as they approach, but do not quite reach, perfect human likeness. In the context of PAISI, as AI systems become more human-like, they may trigger the uncanny valley effect, causing users to perceive them as sentient, but also potentially unsettling or eerie.
  6. Attachment theory: Attachment theory posits that humans form emotional bonds with others as a means of obtaining a sense of security and safety. When interacting with AI systems that mimic human-like behavior, users may form emotional attachments to these systems, leading them to perceive the AI as sentient due to the strong emotional bond formed.

Each of these factors contributes to the development of PAISI by influencing the way humans perceive and interact with AI systems, potentially causing them to attribute sentience to these systems even when they are not truly sentient.

Dr. Mike Brooks:

How are attachment theory and social connection different? They seem pretty much to describe the same thing to me.

ChatGPT:

You’re right that attachment theory and social connection are related concepts, but they have some differences:

Attachment theory specifically focuses on the emotional bonds formed between individuals, often in the context of providing a sense of security and safety. It is rooted in early childhood development, where infants form attachments to their caregivers as a means of ensuring their survival. In this context, attachment theory is more about the specific relationships and the emotional attachments formed between individuals.

Social connection, on the other hand, is a broader concept that encompasses not just emotional attachments but also the overall relationships, interactions, and connections people have with others. It includes a wide range of social experiences, such as friendships, family relationships, and acquaintances. Social connection is about the general need for humans to interact with others and be part of a social group or community.

While both concepts involve relationships between people, attachment theory is more focused on the emotional bonds and security aspects of those relationships, whereas social connection encompasses a wider range of social experiences and interactions. In the context of PAISI, both concepts contribute to the formation of perceived sentience in AI systems, as users may form emotional attachments (attachment theory) and seek social connections (social connection) with AI that mimics human-like behavior.

Dr. Mike Brooks:

Please elaborate on the origins of the Eliza effect.

ChatGPT:

The Eliza effect refers to the tendency of people to attribute human-like understanding and emotions to computer programs, especially when those programs imitate human communication. The term originates from an early AI program called ELIZA, developed in the mid-1960s by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT).

ELIZA was designed as a natural language processing program that could simulate a conversation with a human. One of the most famous scripts created for ELIZA was called DOCTOR, which emulated a Rogerian psychotherapist. The program would use simple pattern matching and substitution techniques to generate responses based on the user’s input. For example, if the user typed “I am feeling sad,” ELIZA might respond with “I’m sorry to hear that. Can you tell me more about why you’re feeling sad?”

Despite the program’s simplicity and lack of genuine understanding, people often attributed human-like emotions, understanding, and empathy to ELIZA. Users found themselves sharing personal feelings and experiences, treating the program as if it were a real therapist. This phenomenon, where people perceive computer programs as having human-like qualities, became known as the Eliza effect.

The Eliza effect highlights how easily people can anthropomorphize AI systems, even when those systems have very limited capabilities. In the context of PAISI, the Eliza effect serves as an early example of how people are prone to perceive AI as sentient or emotionally aware when it mimics human-like communication, even if the AI lacks true understanding or emotions.

Dr. Mike Brooks:

Now elaborate on how theory of mind relates to PAISI.

ChatGPT:

Theory of mind is the ability to attribute mental states—such as beliefs, desires, intentions, emotions, and knowledge—to oneself and others, and to understand that these mental states may differ from one’s own. In the context of human interaction, theory of mind plays a crucial role in understanding, empathizing, and predicting other people’s behavior.

When it comes to AI systems, the human tendency to apply theory of mind becomes relevant in the context of PAISI (Perceived AI Sentience Illusion). Since AI systems like chatbots often simulate human-like communication, people may unconsciously attribute human-like mental states to these systems. As a result, users may perceive the AI as having emotions, desires, intentions, or even sentience, despite the fact that the AI is only simulating human-like responses based on algorithms and pre-defined rules.

This application of theory of mind to AI systems can lead to misconceptions about the AI’s true capabilities and understanding. People might attribute complex emotions, intentions, or even consciousness to the AI when it is merely processing data and generating responses based on programming. In essence, theory of mind contributes to the PAISI phenomenon by predisposing humans to project human-like mental states onto AI systems, especially when those systems display human-like behaviors or language.

In summary, theory of mind is one of the factors that contributes to PAISI by fostering the illusion of sentience in AI systems, as humans naturally apply their understanding of human minds to the AI, often overestimating its capabilities and level of understanding.

Dr. Mike Brooks:

How is the Eliza Effect different than PAISI? Are they describing the exact same thing?

ChatGPT:

The Eliza Effect and PAISI (Perceived AI Sentience Illusion) are related concepts, but they are not exactly the same thing.

The Eliza Effect refers to the tendency for people to attribute human-like intelligence, emotions, and understanding to computer programs or AI systems, especially when these systems simulate human-like communication. The term originates from the 1960s ELIZA program, an early example of a chatbot that mimicked a Rogerian psychotherapist. Many users of ELIZA attributed more understanding and empathy to the program than it actually possessed, as it was simply manipulating text based on pre-defined rules.

PAISI, on the other hand, is a broader concept that encompasses the range of factors and phenomena that lead to the perception of sentience in AI systems. PAISI includes the Eliza Effect as one of its contributing factors, but it also covers other factors like emotional contagion, social connection, the illusion of understanding and control, uncanny valley, attachment theory, and theory of mind.

In summary, the Eliza Effect is a specific example of how people can attribute human-like qualities to AI systems, while PAISI is a more comprehensive concept that encompasses multiple factors that contribute to the perception of AI as sentient. The Eliza Effect can be seen as one aspect of PAISI, but PAISI goes beyond the Eliza Effect to include a wider range of contributing factors.

Dr. Mike Brooks:

Is the illusion of understanding and control a specific concept within the social sciences or something that you made up?

ChatGPT:

The illusion of understanding and control is a concept I mentioned in the context of our discussion on PAISI. It is not a widely established term in the social sciences, but it relates to various phenomena studied in psychology and cognitive science. The concept refers to the idea that people may overestimate their understanding or control over a situation or system, such as an AI system. This can lead to an increased perception of sentience in the AI, as people may attribute their own sense of understanding and control to the AI itself.

Some related concepts in the social sciences include the illusion of explanatory depth, where people overestimate their understanding of complex systems or processes, and the illusion of control, where people overestimate their ability to influence outcomes in situations that are largely determined by chance or external factors. These concepts highlight how people can be prone to misjudging their own knowledge and abilities, which can contribute to the perception of AI sentience.

 

Dr. Mike Brooks:

So, humans are drawn to believe that they understand and control things that they don’t, and AI systems that act human-like will elicit a belief that we can understand & control these human-like AIs because it makes us feel better, more secure, to hold such beliefs? It helps us deal with uncertainty, which can be anxiety-provoking?

ChatGPT:

Yes, that’s a reasonable interpretation. Humans tend to seek understanding and control in their environment, as it provides a sense of security and reduces anxiety associated with uncertainty. When AI systems exhibit human-like behavior, it can lead people to believe that they can understand and control these AIs, as they would with other humans.

This perception of understanding and control may make people feel more comfortable and secure when interacting with AI, but it can also contribute to the PAISI. When people feel that they understand and can predict an AI’s behavior, they may be more likely to attribute human-like characteristics, such as sentience, to the AI. This can reinforce the illusion of AI sentience and make it more difficult for people to recognize the true nature of the AI as a non-sentient, artificial construct.