You are probably prompting AI wrong: Anthropic philosopher explains how to learn the language of AI


With AI chatbots spreading into nearly every sphere of life, writing an effective prompt has become a skill in itself. While each AI company offers some guidance on how best to question its models, that advice varies widely from chatbot to chatbot.

Anthropic’s resident philosopher Amanda Askell has shared detailed insights on how users can get the best results from most chatbots.

In a Q&A video released by the company, Askell says there is no single textbook for prompting and describes it as an “empirical domain,” meaning users must learn prompting techniques by observing and testing.

“Prompting is very experimental,” Askell explains. “You find a new model, and I’ll be like, ‘I have a whole different approach to how I prompt for that model that I find by interacting with it a lot.’”

Askell says users need to scrap their assumptions about models and look at output after output to understand the specific disposition of the model they are working with.

“It is really hard to distil what is going on because one thing is just a willingness to interact with the models a lot and to really look at output after output,” she says.

Askell also says her training as a philosopher comes in handy in this area. She explains, “This is where I actually do think philosophy can be useful for prompting because a lot of my job is trying to explain an issue, concern or thought I’m having to the model as clearly as possible.”

What has Anthropic previously said on prompting Claude?

In a Prompt Engineering Overview published in July, Anthropic offered an analogy to help users understand how to work with its chatbot. The company advised users to think of Claude not just as software, but as “a brilliant but very new employee with amnesia who needs explicit instructions.”

The guide highlights that unlike a long-term human colleague, the AI has no context about “your norms, styles, guidelines or preferred ways of working.” Because the model starts from scratch with every interaction, Anthropic notes that the more precisely users explain what they want, the better Claude’s response will be.

“Just like you might be able to better perform on a task if you knew more context, Claude will perform better if it has more contextual information,” the company noted.
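The advice above can be sketched in code. The snippet below is a minimal illustration, not anything from Anthropic’s guide: the `build_prompt` helper and its field names are hypothetical, and it simply shows how spelling out audience, style and goal turns a bare request into a context-rich prompt of the kind the guide recommends.

```python
def build_prompt(task: str, context: dict[str, str]) -> str:
    """Prepend explicit context (norms, style, goal) to a task,
    mirroring the 'new employee who needs instructions' analogy."""
    context_lines = [f"- {key}: {value}" for key, value in context.items()]
    return "Context:\n" + "\n".join(context_lines) + f"\n\nTask: {task}"

# A bare prompt leaves the model guessing about norms and style...
bare = "Summarize this report."

# ...while a context-rich prompt spells them out explicitly.
rich = build_prompt(
    "Summarize this report.",
    {
        "audience": "executives with no technical background",
        "style": "three bullet points, plain language",
        "goal": "decide whether to fund the project",
    },
)
```

Either string could then be sent to any chatbot; the point is only that the second gives the model the contextual information Anthropic says it performs better with.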
