
Ask HN: What can we learn about human cognition from the performance of LLMs
2 points by abrax3141 | 1 comment on Hacker News.
Some hypotheses (adapted from other posts):

* We have learned that Spreading Activation, when applied through a high-dimensional non-symbolic network (the network formed by embedding vectors), may be able to account for abstraction in fluent language. (A toy sketch of this idea appears at the end of this post.)

* We have learned that "fluent reasoning" (sometimes called "inline" or "online" reasoning), that is, the shallow reasoning embedded in fluent language, may be more powerful than usually thought.

* We have learned that "talking to yourself" (externally, in the case of GPTs, and potentially also internally, in the case of humans "hearing themselves think") can successfully maintain enough short-term context to track naturally long chains of argument (via contextually guided fluent reasoning, as above).

* We have learned that, to some extent, powerful "mental models" that support (again, at least fluent) reasoning can in effect be (functionally) represented and used in a highly distributed system.

* We have learned that meta-reasoning (which the LLMs do not do) may be important in augmenting fluent reasoning, and in tracking extended "trains of thought" (and thus extended dialogues).

* We have a new model of confabulation that fits into the fluent language model as implemented by LLMs.

* We have learned that people's "knowledge space" is quite amazing, given that they have ~10x current LLM parameter size (LLMs are ~10T, whereas an individual has potentially ~100T cortical parameters -- depending on what you count, of course), yet a given individual only encodes a small number of languages and a small number of domains to any great depth (in addition to the standard operating procedures that almost all people encode). [That is, vs. the LLM encoding the whole damned internet in ~10 different languages.]

What else? (And, of course, it goes w/o saying that you'll argue about the above :-)
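For concreteness, here is a minimal, purely illustrative sketch of the spreading-activation idea from the first bullet. Everything in it is an assumption for illustration -- the concept list, the random embeddings standing in for learned ones, the decay constant, and the update rule -- and it is not a claim about how any particular LLM actually works:

    # Toy spreading activation over an embedding space (illustrative only).
    # Concepts are random unit vectors; edge weights are cosine similarities;
    # activation spreads to neighbors with decay each step.
    import numpy as np

    rng = np.random.default_rng(0)
    concepts = ["dog", "cat", "pet", "loyalty", "tax law"]  # hypothetical nodes
    E = rng.normal(size=(len(concepts), 16))
    E /= np.linalg.norm(E, axis=1, keepdims=True)           # unit embeddings

    sim = E @ E.T                                            # cosine similarity
    np.fill_diagonal(sim, 0.0)                               # no self-loops
    sim = np.clip(sim, 0.0, None)                            # keep excitatory edges only

    DECAY, STEPS = 0.5, 3                                    # illustrative constants
    act = np.zeros(len(concepts))
    act[concepts.index("dog")] = 1.0                         # seed one concept

    for _ in range(STEPS):
        act = act + DECAY * (sim @ act)                      # spread and accumulate
        act /= act.max()                                     # renormalize

    for name, a in sorted(zip(concepts, act), key=lambda p: -p[1]):
        print(f"{name:8s} {a:.3f}")

With real embeddings in place of the random vectors, semantically related concepts light up together after a step or two, which is the intuition behind treating the embedding network as a non-symbolic substrate for spreading activation.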
