Anthropic researchers released two new papers on Thursday detailing the methodology and findings of their work on how an artificial intelligence (AI) model thinks. The San Francisco-based AI firm developed techniques to monitor the decision-making process of a large language model (LLM) and understand why it produces one particular response and structure over another. The company highlighted that the inner workings of AI models remain a black box, as even the scientists who develop them do not fully understand how an AI makes the conceptual and logical connections that generate its outputs.
Anthropic Research Sheds Light on How an AI Thinks
In a newsroom post, the company shared details from a recently conducted study on “tracing the thoughts of a large language model”. Despite building chatbots and AI models, scientists and developers do not fully understand the internal circuits a system forms to produce an output.
To open up this “black box”, Anthropic researchers published two papers. The first investigates the internal mechanisms used by Claude 3.5 Haiku via a circuit tracing methodology, while the second describes the techniques used to reveal computational graphs in language models.
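To make the idea of a computational graph more concrete, the toy sketch below builds a miniature “attribution graph” for a two-layer network: each hidden unit stands in for an interpretable feature, and each edge weight is that feature's direct contribution to the final output. This is an illustrative simplification under assumed toy weights, not Anthropic's actual circuit tracing method, and all names in it are hypothetical.

```python
# Toy illustration (not Anthropic's method): a minimal "attribution graph"
# for a tiny two-layer network. Each hidden unit plays the role of an
# interpretable feature; an edge weight is that feature's direct
# contribution (activation x outgoing weight) to the output logit.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: 4 inputs -> 3 hidden "features" -> 1 output logit.
W1 = rng.normal(size=(3, 4))    # input-to-feature weights
W2 = rng.normal(size=(1, 3))    # feature-to-output weights
x = rng.normal(size=4)          # a single input example

features = np.maximum(W1 @ x, 0.0)      # ReLU feature activations
logit = (W2 @ features).item()          # final output

# Direct effect of each feature on the output. In a real interpretability
# pipeline these would be edges in a far larger graph traced through an
# actual language model.
edges = {f"feature_{i}": float(features[i] * W2[0, i]) for i in range(3)}

print(f"output logit: {logit:.3f}")
for name, effect in sorted(edges.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name} -> output: direct effect {effect:+.3f}")
```

In this toy case the direct effects sum exactly to the output logit; in a real LLM, the challenge is identifying interpretable features and tracing such edges across many layers.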
Some of the questions the researchers aimed to answer included which language Claude “thinks” in, how it generates text, and how it reasons. Anthropic said, “Knowing how models like Claude think would allow us to have a better understanding of their abilities, as well as help us ensure that they’re doing what we intend them to.”
Based on the insights shared in the papers, the answers to these questions were surprising. The researchers expected Claude to favour a particular language when thinking before it responds. Instead, they found that the AI chatbot thinks in a “conceptual space that is shared between languages”. This means its thinking is not tied to any single language, and it can understand and process concepts in a sort of universal language of thought.
While Claude is trained to write one word at a time, researchers found that the AI model plans its response many words ahead and adjusts its output to reach that destination. They found evidence of this pattern when prompting the AI to write a poem: Claude first decided on the rhyming words and then composed the rest of each line to arrive at them.
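As a rough analogy for this plan-the-ending-first behaviour, the short sketch below picks a rhyming end word before filling in the rest of the line. It is a hypothetical illustration of the idea, not a description of Claude's internal mechanism, and the word lists and templates are invented for the example.

```python
# Toy illustration (hypothetical, not Claude's internal mechanism):
# plan a line of verse by choosing the rhyming end word first,
# then write the rest of the line so it arrives at that word.
import random

random.seed(7)

# Hypothetical rhyme candidates and line templates.
rhyme_candidates = ["rabbit", "habit"]
line_templates = {
    "rabbit": "his hunger was like a starving {end}",
    "habit": "checking it twice had become a {end}",
}

end_word = random.choice(rhyme_candidates)             # step 1: pick the destination word
line = line_templates[end_word].format(end=end_word)   # step 2: write the line toward it

print(line)
```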
The research also claimed that, on occasion, Claude can work backwards to construct logical-sounding arguments that agree with the user, instead of following genuine logical steps. This fabricated reasoning tends to occur when the model is asked an extremely difficult question. Anthropic said its tools could be useful for flagging concerning mechanisms in AI models, as they can identify when a chatbot provides fake reasoning in its responses.
Anthropic highlighted that the methodology has limitations. The study used prompts of only tens of words, yet it still took a few hours of human effort to identify and understand the circuits. Compared to the capabilities of LLMs, the research captured only a fraction of the total computation performed by Claude. In the future, the AI firm plans to use AI models to make sense of the data.