In recent years, artificial intelligence (AI) has made accelerating progress and reached mainstream awareness. Tesla’s AutoPilot, for example, while not fully autonomous, exhibits a degree of intelligent behaviour. OpenAI’s language models, GPT-3 and ChatGPT, have captured popular interest, as have DALL-E 2’s beautiful and weird AI art creations, and, not surprisingly, hype has ensued. Last summer, a senior software engineer at Google claimed the company’s AI chatbot LaMDA was self-aware, leading to a media storm and eventually to his being fired. Hype aside, some AI researchers do pursue the lofty goal of giving their creations sentience and consciousness, and possibly even a conscience. The question is whether this is something we should truly aim for, as the negatives may outweigh any benefits to humanity.
How far are we from AI sentience and self-awareness?
As we discussed in our recent article ‘ChatGPT and the AI roadmap’, the so-called AI roadmap includes four types of AI: reactive machines, limited memory, theory of mind, and self-awareness. Each category is defined by the information the system holds and relies on to make its decisions, and by its functional scope or ambition. Reactive machines have no memory of the past, whereas limited memory machines know only about recent data trends and still lack a proper representation or understanding of the world. The third type, theory of mind, involves a more advanced internal representation of the world, including other agents or entities that are intelligent themselves and have their own goals that can affect their behavior.
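The difference between the first two types is essentially what state the system keeps. As a purely illustrative sketch (all class names and the braking rule here are hypothetical, not drawn from any real system), a reactive machine maps only the current input to an action, while a limited memory machine also consults a short window of recent data:

```python
# Illustrative sketch of the first two AI types on the roadmap.
# All names and thresholds are hypothetical, for illustration only.

from collections import deque


class ReactiveAgent:
    """Reactive machine: no memory; output depends only on the current input."""

    def act(self, distance: float) -> str:
        return "brake" if distance < 10.0 else "cruise"


class LimitedMemoryAgent:
    """Limited memory: decisions also draw on a short window of recent data."""

    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)

    def act(self, distance: float) -> str:
        self.recent.append(distance)
        # React to the recent trend, not just the instantaneous value:
        # brake early if the gap has been shrinking over the whole window.
        closing = (
            len(self.recent) == self.recent.maxlen
            and self.recent[-1] < self.recent[0]
        )
        return "brake" if distance < 10.0 or closing else "cruise"
```

Neither sketch holds any model of the world or of other agents, which is precisely what the third and fourth types on the roadmap add.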
For an AI system to show sentience and self-awareness, the last and most advanced type of AI is required. This involves not only an advanced internal representation of the world, including other agents or entities that are intelligent themselves and have their own goals that can affect their behavior; it also requires the system to hold one of these agent representations for itself. That is, the AI system has to be aware of its own goals, feelings, and existence.
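Structurally, the step from theory of mind to self-awareness can be pictured as the world model gaining an entry for the system itself. The sketch below is hypothetical and deliberately naive (the dictionary keys and goals are invented), but it makes the distinction concrete:

```python
# Hypothetical illustration: a theory-of-mind world model represents other
# agents and their goals; a self-aware one also contains an entry for itself.

world_model = {
    "pedestrian": {"goal": "cross road", "intelligent": True},
    "other_car": {"goal": "reach exit", "intelligent": True},
}

# Theory of mind: other agents are modelled, but the system itself is not.
assert "self" not in world_model

# Self-awareness: one of these agent representations is held for the
# system itself, including its own goals.
world_model["self"] = {"goal": "deliver passenger", "intelligent": True}
```

A real self-aware system would of course need far richer representations than a dictionary, but the point is the self-referential entry, not the data structure.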
Many neuroscientists believe that consciousness is generated by the interoperation of various parts of the brain, known as the neural correlates of consciousness (NCC): the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept. There is, however, no consensus on this view. Self-awareness, a sense of belonging to a society, and an ethical view of right and wrong or of the greater good are just a few examples of the matters that would be dealt with at this AI capability level.
Needless to say, no such system has yet been built, and it is quite unclear whether language models such as GPT-3, ChatGPT, or even the soon-expected GPT-4 actually hold any theory of mind, let alone a model of themselves. They certainly have not been designed to have one; the theoretical question is whether it can simply emerge as the system learns from data.
While debates about AI sentience and self-awareness may appear purely philosophical at this stage, they will become increasingly important as intelligent autonomous robots take on more roles in our society, some of which might involve moral decisions. The ultimate promise behind AI and AI-enabled robotics is that they will free humans from work and usher in a world of increased leisure, where we can devote our time to relaxation and learning. For now, let us set aside the minor matter that our socioeconomic models are built around labor, and that nobody has so far found an answer to wealth distribution, because this is not relevant to the main point of this article.
However, significant moral issues arise if such AI robots had human emotions and feelings, goals and desires, a sense of fairness, or an awareness of their own existence, and possibly even class consciousness. To what extent would that arrangement differ from human slavery? Would these sentient AI robots feel exploited? Would they be unhappy? How would we humans feel about exploiting this new class of sentient beings, regardless of the fact that we created them?