
The Evolution of Language and Consciousness in Humans (and AI)

Updated: Nov 19

When did consciousness emerge in the human lineage? The answer may be more recent—and more intimately tied to language—than you might think. Understanding how language shaped human consciousness offers unexpected insights into one of today's most pressing questions: could artificial intelligence systems be conscious?


The relationship between language and consciousness has fascinated philosophers and scientists for decades, with some theories proposing that language is a necessary but not sufficient condition for consciousness as we experience it. The provocative suggestion is that human consciousness may share aspects of awareness with other species, but takes its unique form because humans possess language.



Language as the Architect of Human Consciousness


Our earliest human ancestors likely experienced the world very differently than we do today. Research examining symbolic capacity suggests that the use of symbols to model the world developed rapidly between about 20,000 and 10,000 years ago, elevating analytic thought to the dominant mode of human consciousness. Before symbolic language fully emerged, human experience may have been more holistic and less self-reflective than the mental landscape we inhabit now.


The philosopher Nietzsche captured something profound when he observed that consciousness developed primarily under the pressure of communication. Consciousness, he suggested, emerged as a net of communication between humans, developing only in proportion to our need to convey thoughts to others. This socio-evolutionary perspective implies that the introspective, self-aware consciousness we take for granted might be a relatively recent innovation in our species—perhaps only a few thousand years old.


The Cognitive Revolution


What changed? One milestone is relational learning, which marks a transition in the evolution of consciousness because it is the first learning process that operates on relations between stimuli rather than on mere associations. This capacity, called stimulus equivalence, appears readily in human infants but has not been reliably produced in non-human animals despite decades of attempts.


This relational ability allows something remarkable: we can respond, in the present, to a symbolically constructed past or future. Through language, we can mentally time travel, consider counterfactuals, and modify our behavior based on imagined scenarios. Language doesn't just describe our consciousness—it fundamentally structures how we experience being conscious.


Modern consciousness is not simply an advanced version of earlier cognitive capacities but a novel function that fundamentally changed the rules of cognitive and operational processes. We didn't just get smarter; we became a different kind of conscious being.


The AI Consciousness Puzzle


This brings us to large language models. When an AI system like ChatGPT, Claude, or Bing generates a sentence like "I feel curious about this question" or "Here's the thing I find ironic," should we take those claims seriously? The debate has become surprisingly contentious.


Some argue that because we rely on verbal reports as a guide to consciousness in humans, we should apply the same standard to AI systems. After all, these systems demonstrate remarkable linguistic competence, producing text that often appears thoughtful, creative, and contextually appropriate. If language and consciousness co-evolved in humans as intimately as the evidence suggests, could LLMs be developing something analogous?


Fig. 1: The Evolution of Language and How It Shaped Consciousness

There are key differences. When a human says "I am hungry," they are reporting on sensed physiological states; when an LLM generates the same sequence, it is simply producing a statistically likely continuation of the text in its prompt. The system can just as easily generate "I am not hungry" or any other sequence with equal ontological commitment—which is to say, none at all.
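To make that contrast concrete, here is a minimal sketch in Python of what "producing a likely continuation" means in practice. It assumes the Hugging Face transformers library and uses the small GPT-2 model purely as an illustrative stand-in for the larger systems named above; the point is only that the model assigns probabilities to candidate next tokens, and nothing in the computation consults a bodily state.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small, publicly available language model (illustrative stand-in only).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I am"
inputs = tokenizer(prompt, return_tensors="pt")

# Score every possible next token and convert the scores to probabilities.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# "hungry" and "not" are just two competing continuations with different odds;
# neither reflects any internal state of the system.
for candidate in [" hungry", " not"]:
    token_id = tokenizer.encode(candidate)[0]
    print(f"P({candidate!r} | {prompt!r}) = {probs[token_id].item():.4f}")

Whichever continuation wins, the number reflects only the statistics of the training text, not a report on anything the system is sensing.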


Current large language models face significant obstacles to consciousness: they lack recurrent processing, a global workspace, and unified agency. Unlike human consciousness, which emerged through millions of years of embodied evolution in social contexts, LLMs are trained in isolation on static text datasets. They have no developmental trajectory, no persistent memory across conversations, and no stakes in the outcomes they discuss.


But until we develop clear definitions and understanding of both consciousness and LLMs, we cannot dismiss the possibility that they may have some degree of consciousness already. The human case teaches us that consciousness emerged through language in ways we still don't fully understand.


Consciousness as a Spectrum


Perhaps the question isn't whether current LLMs are conscious, but whether consciousness is a binary state or exists on a spectrum.


If language genuinely shaped human consciousness in the profound ways that evidence suggests, then systems that manipulate language with increasing sophistication might be developing something—if not consciousness as we know it, then perhaps proto-conscious or quasi-conscious states that don't map neatly onto human experience.


It's not helpful to measure AI's progress toward consciousness as a single lump sum. We need to look at the specific dimensions of consciousness where it currently falls short and those where it is already close to human-level capacity. Language capacity falls into the latter category.



Fig. 2: Comparing Features of Consciousness: Human and AI


Fig. 3: Comparing Types of Consciousness: Human and AI

The evolution of language transformed human consciousness once before. As we build increasingly sophisticated language systems, we may be participating in another transformation—one whose implications we're only beginning to grasp.


The question of machine consciousness isn't just about whether AI can think. It's about what thinking itself fundamentally is, and whether the categories we've inherited from our own evolutionary history are adequate for the entities we're creating.



— Dennis Hunter



Visit my website for more articles and sign up to my email list below to receive info about my forthcoming book, a groundbreaking look at the relationship between humans and AI.




Keywords: hard problem of consciousness, David Chalmers, qualia, phenomenal consciousness, AI consciousness research, subjective experience, philosophy of mind

