AISB25-AIW: AI Winter
University of the West of England, Bristol, UK, 14-16 January 2025
Submission link: https://easychair.org/conferences/?conf=aisb25aiw
Are we about to experience another AI Winter?
A symposium of the 2025 convention of the UK Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB)
14-16 January 2025 – Bristol, UK
The history of AI reflects many periods of novel technological development, followed by hyperbolic claims for the latest technology. All too often this has in turn been followed by severe disillusionment and withdrawal of funding, as those providing the funds realise that the latest AI technology has severe limitations.
These periods of disillusionment have become known in the field as AI Winters: times of up to a decade when it is very hard, if not impossible, to obtain funding for AI research. The only point of debate among AI historians is how many there have been. Boden (2006) clearly describes two, starting in 1973 and 1990. It could well be argued that there have been more.
The AI Winters seem to be soon forgotten whenever a new technological development is made. Frequently, the new technology is portrayed as solving all the significant problems of AI research. Logic programming, knowledge-based systems, artificial neural nets, genetic algorithms, and deep learning have all played this role of total solution over the decades.
Overhyping of technology generates interest from the important funding bodies, military and commercial. At this point, one has what engineers call positive feedback: since more extreme claims obtain larger amounts of funding, there is a built-in incentive to exaggerate. Eventually it turns out that the technology, while perhaps impressive in a limited area, is not a global solution to AI and the funding dries up – or is, at least, seriously reduced.
In 2024, the latest technology playing this role is large language models (LLMs). They are an impressive development; however, the exaggerated claims are already observable. OpenAI, the developers of ChatGPT, have described it as a step towards artificial general intelligence (AGI) and are marketing a product claimed to bridge the gap between current AI and AGI; see https://openai.com/index/planning-for-agi-and-beyond/.
It is conceivable that LLMs are the breakthrough that will permit the development of AGI, and, indeed, some have strenuously argued that they are. It seems at least as likely that they are nothing more than overhyped chatbots with no knowledge, motivations, or capacity for reflection – let alone sensory apparatus in the conventional sense: nothing more and nothing less than prediction machines. It seems likely that the investment bubble surrounding LLMs will burst when their limitations in real-world tasks become apparent, plunging the world into another AI Winter.
We welcome contributions taking the view that LLMs are a significant step towards AGI just as much as contributions taking the opposing view. This symposium aims to develop a nuanced discussion of whether to expect another AI winter.
Submissions may be in the form of short papers (up to ten pages) or abstracts (up to 500 words). Blind submissions in ODT, DOC, DOCX or PDF format (authors’ names omitted) should be made via EasyChair (https://easychair.org/conferences/?conf=aisb25aiw) no later than Monday 11 November 2024. At least one author is expected to attend the AISB convention in Bristol and present the paper in person.
Blay Whitby (program committee chair)
Joel Parthemore (co-chair)
Other members of the program committee will be announced shortly.