
What is ChatGPT doing - Stephen Wolfram



A useful short book on the inner workings of ChatGPT. Most of the technical explanation I had seen and understood before, but there are some good visualizations here that explain the fundamentals more clearly.


The biggest takeaway from this book, though, is summarized in this quote: "The specific engineering of ChatGPT has made it quite compelling. But ultimately (at least until it can use outside tools) ChatGPT is “merely” pulling out some “coherent thread of text” from the “statistics of conventional wisdom” that it’s accumulated. But it’s amazing how human-like the results are. And as I’ve discussed, this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc."


ChatGPT's view on this:


The efficacy of models like ChatGPT in generating human-like text has indeed brought forward questions about the nature of human language and cognition.

A few points of consideration:

  1. Patterns in Language: The fact that models like ChatGPT can generate coherent and contextually relevant responses indicates that there are identifiable and learnable patterns in human language. These patterns allow for the prediction of what comes next in a sequence of words or phrases, which is the fundamental mechanism behind models like GPT.

  2. Human Cognition and Neural Networks: The architecture of neural models like GPT, while inspired by biological neural networks, is vastly simplified compared to the human brain. If such simplified structures can capture a significant portion of human language's complexity, it does lead to the question: Are our brains using some inherent patterns or "laws" that we haven't yet fully identified?

  3. Formal Linguistics: Efforts to discover and define the underlying structures of language aren't new. From Noam Chomsky's generative grammars to the exploration of semantic roles in language, linguists have long sought to identify the "rules" governing human speech and comprehension.

  4. The Gap Between Generation and Understanding: While ChatGPT can generate human-like text, understanding the deeper meaning, context, or nuances, as humans do, is a different challenge. Generating text based on patterns doesn't equate to comprehension or true understanding.

  5. Potential for Discovering "Laws": If there are more explicit "laws" or rules underlying human language and cognition, the success of models like GPT might pave the way for their discovery. By analyzing the patterns these models identify and the ways in which they generate language, researchers might get closer to uncovering such underlying principles.
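The next-token prediction mentioned in point 1 can be illustrated with a toy sketch: a bigram model that counts which word follows which in a corpus, then predicts the most frequent successor. GPT models do something far richer (learned representations over long contexts rather than raw counts), and the corpus and function names here are illustrative assumptions, not anything from the book.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (an assumption for this sketch, not from the book).
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, how often each other word immediately follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The same "what usually comes next" idea, scaled up to billions of parameters and whole contexts instead of single preceding words, is the mechanism the quoted passage describes as pulling a coherent thread from the statistics of accumulated text.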

In summary, the success of models like ChatGPT has indeed reopened discussions about the nature of human language, cognition, and the potential existence of discoverable patterns or "laws" governing them. As AI and linguistics evolve, it will be fascinating to see how these discussions shape our understanding of both human intelligence and artificial intelligence.




©2022 by Secondlife. Proudly created with Wix.com
