Because AI does not have the same processing capabilities as humans, being as clear a communicator as possible is paramount in any AI prompt.
That means providing as many relevant details as possible, Askell said, including anything you might assume the AI already knows or takes for granted, such as the intended purpose, tone, recipient, and end goal of a task.
Take email writing, for example. If you prompt the AI with, “Draft an email to my boss,” there are clearly details left out. Instead, be specific: “Draft an email to my boss, [Name], following up on [Topic] from [Time], reminding them about the documents they need to send by [Time]. We have a friendly relationship, so don’t be afraid to be casual.”
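If you are prompting Claude through the API rather than the chat window, the same principle applies. Below is a minimal sketch using the Anthropic Python SDK, with a made-up boss, topic, and deadline standing in for the bracketed placeholders, and a model name that is only an assumption (use whichever Claude model you have access to):

```python
import anthropic

# The client reads the ANTHROPIC_API_KEY environment variable by default.
client = anthropic.Anthropic()

# Hypothetical details ("Jordan", the Q3 budget review, Friday) fill in the
# [Name], [Topic], and [Time] placeholders from the example above.
detailed_prompt = (
    "Draft an email to my boss, Jordan, following up on the Q3 budget review "
    "from Tuesday, reminding them about the documents they need to send by "
    "Friday. We have a friendly relationship, so don't be afraid to be casual."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=500,
    messages=[{"role": "user", "content": detailed_prompt}],
)

print(message.content[0].text)
```

The vague version, “Draft an email to my boss,” would run just as happily; the difference shows up entirely in the quality of the reply.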
“Show your prompt to a colleague, ideally someone who has minimal context on the task, and ask them to follow the instructions,” Anthropic’s website says. “If they’re confused, Claude will likely be too.”
As a result, it’s more important than ever to thoroughly read what an AI provides as a response. Simply copying and pasting an AI’s answer is how you can land in murky waters in the workplace.
“One thing that people will do is they’ll put ‘think step by step’ in their prompt, and they won’t check to make sure that the model is actually thinking step by step because the model might take it in a more abstract or general sense,” Witten said.
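One rough way to guard against that, if you are calling the model from a script, is to ask for explicitly numbered steps and then check that the reply actually contains them. The question, model name, and check below are all illustrative assumptions, not a prescribed workflow:

```python
import re

import anthropic

client = anthropic.Anthropic()

# Ask for numbered steps rather than the vaguer "think step by step".
prompt = (
    "Before giving a final answer, reason through the problem in numbered "
    "steps (1., 2., 3., ...), then state your conclusion.\n\n"
    "Question: Which of our three office leases (details pasted below) is "
    "cheapest per square foot? ..."  # hypothetical question, details elided
)

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=800,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Don't assume the instruction was followed; look for the numbered steps.
if not re.search(r"^\s*1\.", reply, flags=re.MULTILINE):
    print("No explicit steps found; tighten the prompt or follow up.")
print(reply)
```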
Having thorough conversations with AI, rather than a one-and-done transactional exchange, can help improve your output, too. Askell admits there are times she goes back and forth with Claude hundreds of times in a 15-minute span, iterating further or pointing out mistakes.
For example, if you ask an AI chatbot for a list of the 50 U.S. state capitals and it makes an error, you can respond just as you might with a colleague: “You’ve mistaken Augusta, Maine for Augusta, Georgia—are you sure your response is correct?”
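Over the API, that kind of pushback is simply another user turn appended to the same conversation. A minimal sketch, again with an assumed model name:

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # assumed model name

conversation = [
    {"role": "user", "content": "List the capital cities of all 50 U.S. states."},
]

first = client.messages.create(model=MODEL, max_tokens=1000, messages=conversation)
answer = first.content[0].text

# Read the list yourself; if something looks wrong, challenge it in the same thread.
conversation.append({"role": "assistant", "content": answer})
conversation.append(
    {
        "role": "user",
        "content": (
            "You've mistaken Augusta, Maine for Augusta, Georgia - are you sure "
            "your response is correct?"
        ),
    }
)

second = client.messages.create(model=MODEL, max_tokens=1000, messages=conversation)
print(second.content[0].text)
```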
Moreover, many generative AI tools now offer a helpful research feature: they will tell you where on the internet they found their information. Just be sure to add, “And provide me with your sources.”
AI is often compared to technological tools like the calculator. However, just as with the calculator, becoming proficient with the tool does not happen overnight.
Working with AI constantly is what makes the best prompt engineers, Askell said.
“Do it over and over again, give your prompts to other people. Try to read your prompts as if you are like a human encountering it for the first time,” she advised.
Trying to get the model to do something you don’t think it can do can be a great way to learn the potential of AI, Hershey added.
“I think a lot of prompt engineering is actually much more about pressing the boundaries of what the model can do,” he said.
One area where this is especially fascinating is image generation. Say you ask a chatbot, “Draft a flyer for a workplace training session I am hosting on harassment.” Because the prompt sets so few parameters, the options for what it might produce are endless. You might get something completely different if you are more specific and prompt, “I work in HR, and I need a professional flyer for a workplace training session for [Audience] on harassment hosted on [Date].”
As AI models improve, the learning curve for prompting will become less steep. In fact, the Anthropic team said the roles may soon be reversed.
“Maybe prompting becomes something where I explain what I want, and it is kind of prompting me,” Askell said.
And we’ve already started to see this come to fruition. Public versions of Claude and other LLMs will already ask follow-up questions when a prompt doesn’t provide enough information. Simply saying “Write an email to my boss” will now yield more questions than answers.
“I’d benefit from knowing a bit more,” Claude said.