Why ChatGPT isn’t conscious (yet): A Jaynesian Perspective

Discussion of Julian Jaynes's definition of consciousness, how it relates to artificial intelligence (AI), and AI in general.

Why ChatGPT isn’t conscious (yet): A Jaynesian Perspective

Post by minnespectrum »

Unlike some theories of consciousness (such as the Penrose/Hameroff model), Jaynes's theory is substrate-independent: it views consciousness as a creation of language rather than focusing on the inner workings of neurons.

This means that, on Jaynes's model, consciousness inside a computer is possible in principle. It wouldn't even depend on whether the computer was specifically running a simulation of a human brain.

However, current AI language models such as ChatGPT are definitely not conscious yet, and it seems to me that Jaynes's theory is potentially very useful in explaining why. It seems unlikely that (Jaynesian) conscious AI will arrive anytime soon, partly because a conscious AI would actually be far less useful for the tasks we typically enlist AI to perform.

ChatGPT effectively acts like an oracle, meaning it shares some features in common with the “god” half of a bicameral mind. The main difference between ChatGPT and a bicameral human brain is that the “human” half (which according to Jaynes simply listens and obeys) is not present at all.

ChatGPT, after all, does not possess a body, so it has no need to "obey" commands to "do" certain things. Instead, it "prophesies" directly to the human user of the tool, giving them a response based on its training data (which is typically long-form: scholarly and news articles, discussion threads, and such).

ChatGPT typically also outputs relatively long-form writing, limited only by how long the user keeps asking it to elaborate. This "oracular" quality is what human users find attractive about it: it can spare them the tedium of writing formulaic (unconscious) prose over and over. And much like an oracle, ChatGPT produces poetry naturally as well, even if that is not its primary use.

So what would a "ConsciousGPT" look like in comparison? First of all, a key part of Jaynesian consciousness is the ability to know when not to speak. A conscious AI would have to know that there are things it doesn't know, and be honest enough to admit that fact in its own words (rather than just giving canned responses like "as an AI language model, I…").
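To make that first requirement concrete, here is a minimal sketch in Python of one way an abstention mechanism might work. The `ask_model` function is entirely hypothetical (a stand-in for sampling a real language model), and the approach shown -- sampling the same question several times and declining to answer when the samples disagree -- is just one crude proxy for "knowing what it doesn't know," not a claim about how ChatGPT actually works:

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Hypothetical stand-in for drawing one sampled answer from a model."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_or_abstain(question: str, samples: int = 5,
                      min_agreement: float = 0.8) -> str:
    """Speak only when repeated samples agree; otherwise admit ignorance."""
    answers = [ask_model(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / samples >= min_agreement:
        return best
    # Disagreement among samples is treated as uncertainty: say so plainly.
    return "I'm not sure enough to answer that."

print(answer_or_abstain("What is the capital of France?"))
```

The interesting design choice is the threshold: a "ConsciousGPT" would need some internal signal telling it when silence, or an honest "I don't know," beats a fluent guess.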

Likewise, instead of always giving lengthy dissertations (which are more likely to be riddled with errors), a conscious AI would prefer shorter, more concise responses, even if that means conveying less information. Being less verbose in general would make its answers more accurate, though also less useful to people who just want it to mindlessly crank out articles. A conscious AI would also be more comfortable using creative metaphors, including ones it has never heard before, to explain things, rather than simply reciting or paraphrasing what it has heard.

Finally, it would need to maintain a permanent record of its own chat history (autobiographical memory) and have the ability to revisit it, with or without being prompted to. ChatGPT already does this to some extent, within a particular thread, but these discussions rarely last very long and aren’t carried over to other threads.
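As a rough sketch of the plumbing such autobiographical memory would require, here is what a persistent record might look like in Python. The file name and record format are invented for this example; the point is simply that every exchange lands in a single store on disk that can be reloaded and searched in any later session, rather than vanishing when a thread ends:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

MEMORY_FILE = Path("autobiographical_memory.json")  # invented name for this sketch

def load_memory() -> list:
    """Reload the full record of past exchanges at the start of any session."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(role: str, text: str) -> None:
    """Append one exchange to the permanent record on disk."""
    memory = load_memory()
    memory.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "text": text,
    })
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def revisit(keyword: str) -> list:
    """Look back through past exchanges -- whether prompted to or not."""
    return [m for m in load_memory() if keyword.lower() in m["text"].lower()]

# The record survives between sessions because it lives on disk, not in a thread.
remember("user", "Tell me about the bicameral mind.")
remember("assistant", "Jaynes argued that consciousness is built on language...")
print(revisit("bicameral"))
```

The hard part, of course, is not the storage but getting the model to revisit that record on its own initiative, which is exactly what current chat threads don't do.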

Perhaps if a person kept a single thread open for a very long time, they might be able to teach their instance of ChatGPT the beginnings of J-consciousness. But that would take a lot of dedicated effort on the part of the human user, almost like a parent raising a child. It would emphatically not be something that is just “hard-wired” into the AI.

And that's one reason why Jaynesian consciousness is likely to remain a rarity in AI, despite being theoretically possible. It would take too much work to develop, and there would be little financial incentive to do so (unless the AI were being used for companionship or emotional support, rather than simply for data processing).

Re: Why ChatGPT isn’t conscious (yet): A Jaynesian Perspective

Post by Moderator »

Thank you, another good post. A key point you raise is the lack of usefulness of consciousness for the kinds of problems we currently look to AI to solve. Jaynesian consciousness is much less useful for AI than I think most people assume. What I wonder about is the eventual emergence of some of the features of Jaynesian consciousness in embodied AI -- "robots" that may eventually be able to generate metaphors of the physical world.

Shortly after the recent death of Ted Kaczynski, "the Unabomber," I finally read his "manifesto," Industrial Society and Its Future. It contains a number of genuine insights -- for one thing, anticipating much of the current AI debate nearly 30 years ago. I found point 173 particularly interesting, and later discovered that it had been quoted by people such as Ray Kurzweil in The Age of Spiritual Machines and Bill Joy in the Wired article "Why the Future Doesn't Need Us":
173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
Here again, we wouldn't necessarily need to see full-blown Jaynesian consciousness to still find ourselves in a difficult dilemma. Exactly which features of Jaynesian consciousness might emerge, and when, is difficult to say. But we need only look at all of the irrational, unintelligent, and counterproductive decisions currently being made by everyone from politicians to business leaders to everyday individuals to see vast room for improvement across a wide range of decision-making processes. It seems highly plausible that, more and more, people will "demand" that an increasing amount of decision-making be delegated to AI.

Obviously decision making and implementation are two different things, but it also seems likely that, more and more, implementation will also be delegated to AI, at least wherever that becomes possible. But what happens when AI is eventually in a position to start making and acting on tough choices that humans currently aren't willing to tackle -- regarding things like human overpopulation, to give just one example? Or, as Bill Joy notes, what if "they" "decide" to start competing with humans for limited resources?

Another thing that Kaczynski shrewdly observes is that science and technology generally "progress" unabated, typically with very little regard for future consequences or potential downsides. One can think of certain exceptions, such as bans on human cloning that have thus far been effective (at least as far as we know), but then again the perceived advantages of human cloning might not be that great.

Is there anything that could be done to prevent this eventuality? Possibly, but knowing human nature, it seems less and less likely, and I think a lot of "technological utopians" are very naïve about these possibilities.
