Tags: Large Language Models, neural networks, theory of thinking and Turing test
Abstract:
The advent of Large Language Models (LLMs) offers a new opportunity to take a look at the Turing test and other ways of assessing whether machines can think (or be conscious). Of course, a lot has been written about the test since Turing proposed it as a criterion for thinking in a machine (or as a replacement for the "irrelevant" question of whether a machine can think), but until something close to passing the Turing test had been built, this criticism had few practical implications. Now this has changed, since being able to say whether GPT or other neural networks think will have a tremendous impact on the development and use of these technologies. I am going to analyse the criteria for thinking/consciousness used by contemporary AI researchers and confront them with naturalistic theories of consciousness. I will show that AI engineers usually do not have a theory of thinking at all. What they do is engineer machines that do certain things, and once they manage to do that, they work backwards to fit their inventions into a claim (not to say a "theory") about consciousness. While a few decades ago it was believed that building AI could help us understand the mind, this practice now seems incompatible with, and even disastrous for, what cognitive scientists do when trying to explain thinking in naturalistic terms (especially within the computational paradigm), and possibly for the whole endeavour of building a science of thinking/consciousness. At the end I will briefly analyse the difference between thinking and consciousness and present my own view on the problem.
The Curious Case of a Slightly Conscious Neural Network