Why ChatGPT isn’t conscious (yet): A Jaynesian Perspective
Posted: Tue Aug 15, 2023 1:02 pm
Unlike some theories of consciousness (such as the Penrose/Hameroff model), Jaynes’ theory is substrate-independent: it treats consciousness as a creation of language rather than as a product of the inner workings of neurons.
This means that, under Jaynes’ model of consciousness, it could be possible to have consciousness inside a computer, and it would not even depend on whether the computer was specifically running a simulation of a human brain.
However, current AI language models such as ChatGPT are definitely not conscious yet, and it seems to me that Jaynes’ theory is potentially very useful in explaining why. It seems unlikely that (Jaynesian) conscious AI will become a thing anytime soon, and this is partly because a conscious AI would actually be far less useful at the tasks we typically enlist AI for.
ChatGPT effectively acts like an oracle, meaning it shares some features with the “god” half of a bicameral mind. The main difference between ChatGPT and a bicameral human brain is that the “human” half (which, according to Jaynes, simply listens and obeys) is not present at all.
ChatGPT, after all, does not possess a body, so it has no need to “obey” commands to “do” certain things. Instead, it “prophesies” directly to the human user of the tool, giving the user a response based on its own training data (which is typically long-form: scholarly and news articles, discussion threads, and such).
ChatGPT typically also outputs relatively long-form writing, limited only by how long the user keeps asking it to elaborate. This “oracular” quality is what human users find attractive about it, since it can help them avoid the tedium of writing formulaic (unconscious) prose over and over. Much like an oracle, poetry comes naturally to ChatGPT as well, even if that is not its primary use.
So what would a “ConsciousGPT” look like in comparison? First of all, a key part of Jaynesian consciousness is the ability to know when not to speak. A conscious AI would have to know that there are things it doesn’t know, and be honest enough to admit that fact in its own words (rather than just giving canned responses like “as an AI language model, I…”).
Likewise, instead of always producing lengthy dissertations (which are more likely to be riddled with errors), a conscious AI would prefer shorter, more concise responses, even if that means conveying less information. It would be less verbose in general, which would make its answers more accurate (but also less useful for people who just want it to mindlessly crank out articles). A conscious AI would also be more comfortable using creative metaphors (including ones it has not heard before) to explain things, rather than simply reciting or paraphrasing what it has heard before.
Finally, it would need to maintain a permanent record of its own chat history (autobiographical memory) and have the ability to revisit it, with or without being prompted to. ChatGPT already does this to some extent, within a particular thread, but these discussions rarely last very long and aren’t carried over to other threads.
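To make that idea concrete, here is a minimal sketch (in Python, purely illustrative) of what such a persistent autobiographical memory might look like. The `autobiography.json` file and the `generate_reply` function are hypothetical placeholders, not any real ChatGPT API; the point is simply that every turn is appended to one permanent record that is reloaded in every new session, instead of being discarded when a thread ends.

```python
import json
from pathlib import Path

# Hypothetical on-disk store for the model's "autobiographical memory".
MEMORY_FILE = Path("autobiography.json")

def load_memory():
    """Reload the full conversation record from all previous sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(memory):
    """Persist the running record so it survives across threads and restarts."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def chat(user_message, generate_reply):
    """One conversational turn that reads from and appends to the permanent record.

    `generate_reply` stands in for whatever language model is in use; it is
    handed the entire accumulated history, not just the current thread.
    """
    memory = load_memory()
    memory.append({"role": "user", "content": user_message})
    reply = generate_reply(memory)  # the model "sees" its whole autobiography
    memory.append({"role": "assistant", "content": reply})
    save_memory(memory)
    return reply
```

The design choice here is continuity: the model always receives its entire history, which is what would let it “revisit” past conversations, prompted or not.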
Perhaps if a person kept a single thread open for a very long time, they might be able to teach their instance of ChatGPT the beginnings of J-consciousness. But that would take a lot of dedicated effort on the part of the human user, almost like a parent raising a child. It would emphatically not be something that is just “hard-wired” into the AI.
And that’s one reason why Jaynesian consciousness is likely to remain a rarity in AI, despite being theoretically possible: it would take too much work to develop, and there would be little financial incentive to do so (unless the AI were being used specifically for companionship or emotional support rather than simply for data processing).