
Reasoning AI (RAI), Large Language Models (LLMs) and Cognition

EasyChair Preprint no. 13633

5 pages
Date: June 11, 2024

Abstract

Do Large Language Models have cognitive abilities? Do Large Language Models have understanding? Is the correct recognition of verbal contexts or visual objects, based on pre-training on a large dataset, a manifestation of the ability to solve cognitive tasks? Or is any LLM just a statistical approximator that compiles averaged text from its huge training dataset, close to the given prompt? Answering these questions requires rigorous formal definitions of the cognitive concepts of "knowledge", "understanding" and related terms.

Keyphrases: AGI, AI, cognition, HLAI, intelligence, knowledge, LLM, RAI, Reasoning, understanding

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:13633,
  author = {Victor Senkevich},
  title = {Reasoning AI (RAI), Large Language Models (LLMs) and Cognition},
  howpublished = {EasyChair Preprint no. 13633},
  year = {EasyChair, 2024}}