AI as a fiction machine



It cannot be denied that modern AI systems have achieved remarkable fluency in language. Regardless of the words we choose to express ourselves, they seem to understand what we tell them, and they respond with the same fluency with which we interact with people. But we must remember that LLMs are not designed to be truthful; they are designed to make the story "make sense" in a given context. Given a context, LLMs are trained to predict the next events in an evolving narrative of our world. Confabulations (plausible distortions or fabrications) are part of their repertoire, regardless of whether they are true.
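To make the "predict what comes next" idea concrete, here is a minimal sketch of autoregressive sampling. The toy vocabulary and probabilities below are invented for illustration; a real LLM computes its next-word distribution with a trained neural network over a huge vocabulary, but the generation loop has the same shape.

```python
import random

# Toy "language model": for each context word, a distribution over next words.
# These words and probabilities are invented for illustration only.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.5), ("moon", 0.3), ("theory", 0.2)],
    "cat": [("sat", 0.7), ("slept", 0.3)],
    "moon": [("rose", 0.6), ("glowed", 0.4)],
    "theory": [("works", 0.5), ("fails", 0.5)],
}

def sample_next(context_word, rng):
    """Pick the next word in proportion to its probability, true or not."""
    words, probs = zip(*NEXT_WORD_PROBS[context_word])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, n_words, seed=0):
    """Autoregressively extend the story: each sampled word becomes context."""
    rng = random.Random(seed)
    story = [start]
    for _ in range(n_words):
        word = story[-1]
        if word not in NEXT_WORD_PROBS:
            break  # no learned continuation for this word
        story.append(sample_next(word, rng))
    return " ".join(story)

print(generate("the", 3))
```

Nothing in this loop checks the story against the world; it only asks what plausibly follows, which is exactly why confabulation is built in.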

One of the main functions of language is to imagine and express ideas that have never been expressed before. LLMs do this with ease, even when the context bears little resemblance to anything they saw in training. The story will make sense in almost any context because the machine has learned general structures of language that transfer to new situations. One of these is "compositionality": the principle that the meaning of a complex expression is determined by the meanings of its parts and the way they are combined. AI has learned several such useful patterns.
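Compositionality can be illustrated with a toy interpreter: the meaning of a whole phrase is computed from the meanings of its words plus a rule for combining them. The mini-grammar and word meanings below are invented for illustration; they are not how any real LLM represents meaning.

```python
# Toy compositional semantics: a phrase's meaning is built from the
# meanings of its parts and the way they are combined.
# This mini-grammar is invented for illustration.
WORD_MEANINGS = {
    "one": 1, "two": 2, "three": 3,
    "plus": lambda a, b: a + b,
    "times": lambda a, b: a * b,
}

def meaning(phrase):
    """Evaluate 'NUMBER OP NUMBER (OP NUMBER ...)' left to right."""
    tokens = [WORD_MEANINGS[w] for w in phrase.split()]
    result = tokens[0]
    for op, arg in zip(tokens[1::2], tokens[2::2]):
        result = op(result, arg)  # combine the meanings of the parts
    return result

# Novel combinations never seen before still have a determined meaning:
print(meaning("two plus three"))        # 5
print(meaning("two times three"))       # 6
```

Because meaning is assembled rule by rule, even word sequences the system has never encountered get a well-defined interpretation, which is the transfer-to-new-situations property the paragraph describes.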

In a recent episode of my podcast, machine learning researcher Léon Bottou points out that LLMs are essentially fiction machines that can be very good at talking about new situations far removed from their training data. In fact, what impresses me is how often LLMs are truthful and correct, given that they were never designed to be. One reason may be intensive reinforcement learning from human feedback (RLHF): LLM operators employ armies of human raters who judge whether responses are correct or socially acceptable.

Given its prowess in content creation, could AI write novels? Could it develop future theories of physics that are unknown to humankind and absent from its training data?

Arguably, it should be easy for AI to invent new plots and write novels. After all, if LLMs are fiction machines, they should have no problem producing stories, whatever their quality. As Bottou puts it, an ideal language model with perfect reasoning ability and encyclopedic knowledge is best described as a machine printing fiction onto a tape. As new words are pressed onto the tape, creativity follows new twists and turns, drawing facts from the training data and filling the gaps with plausible confabulations.

But can AI discover new theories?

If we already have a set of candidate models defined and the task is only to identify the correct one, this is a straightforward search for AI. But if a theory requires new concepts to describe it, it may demand giving new meanings to existing words or inventing entirely new ones, and that is a much taller order for a machine. Einstein's theory of relativity gave new meanings to existing words such as time, gravity, and force. Similarly, thermodynamics and quantum mechanics created new concepts that required the introduction of new words: photon, quark, quantum, and entropy.

Theories usually go a step further. Beyond symbols and the concepts they represent, theories typically require a causal structure and a mathematical formulation. Causality means the phenomenon must be understandable to people in terms of the symbols used. This raises the deeper question of whether intelligence can be fully captured by symbols at all. Are symbols grounded in experiences such as feeling, vision, and motor control? If not, we cannot understand a new theory the machine produces unless it can explain it to us in terms we do understand. This reminds me of Geoff Hinton's metaphor of artificial intelligence as an alien so different from us that we may not always understand each other.

It's a strange brave new world: an intelligent alien living among us, albeit of our own making, that we don't fully understand. At least until we learn its language.


