AI could be capable of ‘learning by thinking’, say researchers



NEW DELHI: Artificial intelligence (AI), too, could be capable of ‘learning by thinking’, a process thought to be central to many great discoveries, according to a review. Cognitive scientists have long documented learning by observation, in which one acquires knowledge from input from the external world.
However, another, relatively neglected way of learning is ‘learning by thinking’, in which one gains knowledge without input from the external world, such as through thought experiments or self-explanations, according to Tania Lombrozo, a professor of psychology at Princeton University and the author of the review, published in the journal ‘Trends in Cognitive Sciences’.
In a thought experiment, one hypothetically explores a theory or a principle by thinking through its consequences, whereas in learning through self-explanation, one makes sense of new information by relating it to what one already knows.
Albert Einstein and Galileo Galilei are known to have used thought experiments in developing the theory of relativity and insights about gravity, respectively.
Lombrozo’s review showed that this process of thinking may not be exclusive to humans and that AI too is capable of correcting itself and arriving at new conclusions through ‘learning by thinking’.
“There are some recent demonstrations of what looks like learning by thinking in AI, particularly in large language models,” Lombrozo said.
In the review, Lombrozo noted that when asked to elaborate on a complex topic, an AI system may correct or refine its initial response based on the explanation it provides.
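To make this pattern concrete, here is a minimal sketch, not taken from the review itself, of prompting a model to explain and then revise its own answer. It assumes the OpenAI Python client; the model name, question, and prompts are illustrative assumptions, not details from the article.

```python
# Minimal sketch (assumption: OpenAI Python client; model name and prompts are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


question = "Does a heavier object fall faster than a lighter one in a vacuum?"

# Step 1: get an initial answer.
history = [{"role": "user", "content": question}]
first_answer = ask(history)

# Step 2: ask the model to explain its own answer in detail.
history += [{"role": "assistant", "content": first_answer},
            {"role": "user", "content": "Explain your reasoning step by step."}]
explanation = ask(history)

# Step 3: invite revision in light of that explanation; the model sometimes
# corrects its initial response at this point, without any new external input.
history += [{"role": "assistant", "content": explanation},
            {"role": "user", "content": "Given that explanation, would you revise your answer?"}]
revised_answer = ask(history)

print(revised_answer)
```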
She added that in the gaming industry, simulation engines are used to approximate real-world outcomes, and AI models can use the outputs of these simulations as inputs for learning.
Further, asking a language model to draw analogies can lead it to answer questions more accurately than it would when queried directly, Lombrozo said. A large language model is a form of AI trained on massive amounts of text data that can, therefore, respond to users’ requests in natural language.
Prompting AI to engage in step-by-step reasoning can lead it to answers it would fail to reach with a direct query, the author added.
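As a simple illustration of that contrast, the sketch below compares a direct query with a step-by-step prompt. It is not from the review; it again assumes the OpenAI Python client, and the model name, question, and prompt wording are illustrative assumptions.

```python
# Minimal sketch contrasting a direct query with a step-by-step ("chain-of-thought") prompt.
# Assumption: OpenAI Python client; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

question = ("A bat and a ball cost $1.10 together. "
            "The bat costs $1.00 more than the ball. How much does the ball cost?")


def answer(prompt):
    """Return the model's reply to a single-turn prompt."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


# Direct query: the model answers immediately.
direct = answer(question)

# Step-by-step query: the model is asked to reason before answering,
# which surfaces intermediate steps and can change the final answer.
step_by_step = answer(question + " Think through the problem step by step before giving the answer.")

print("Direct:", direct)
print("Step by step:", step_by_step)
```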
“Sometimes ChatGPT will correct itself without being explicitly told. That’s similar to what happens when people are engaged in learning by thinking,” Lombrozo said.
In humans, the author explained, knowledge can be acquired through explanation, simulation, analogy, and reasoning.
Giving examples of ‘learning by thinking’ in humans, Lombrozo said that explaining how a microwave works to a child might reveal gaps in our understanding.
Another example is rearranging furniture, which often involves picturing different layouts in one’s mind before physically moving anything in the room.
The comparison of ‘learning by thinking’ examples in humans and AI “poses the question of why both natural and artificial minds have these characteristics,” according to Lombrozo.
“What function does ‘learning by thinking’ serve? Why is it valuable? I argue that ‘learning by thinking’ is a kind of ‘on-demand learning’,” Lombrozo said.
When one learns something new, one may not know exactly how that information will be useful in the future, and so it is stashed away in the mind until the context makes it relevant, the author said.
The review also raised questions as to whether AI systems are actually ‘thinking’ or merely mimicking outputs of such processes.




