When prompting—ask, don’t tell

Getting the results you want from an LLM is more about collaborating than commanding.

Newcomers to prompting are often disappointed by the results they get. Based on the tweets, LinkedIn posts, and hype they've read, they've likely arrived expecting that a brief description of what they want will magically make it come alive on the screen.

But the LLM (Large Language Model) you're working with doesn't know what it doesn't know. So, without the full context of your anticipated solution, it cannot respond accurately the first time. And your prompt almost certainly lacks that context.

It’s not human

This disappointment and confusion often come from our own built-in expectations about communication.

An LLM is not a human and shouldn't be treated like one. But because the interface for working with LLMs is predominantly a chat interface, you're subconsciously wired to interact with it as if it were human.

You're typing (or dictating) messages in your language. Having a conversation. Something you've only ever done with humans before. This creates a false expectation about how the interaction should work.

(I talk to my dog constantly, but she doesn’t answer my questions.)

Creating a craftsperson

When we work with a skilled human, we assume we are working with an expert in their field: someone with established opinions, taste, craft, and a point of view.

When you're working with a tradesperson, such as a painter or a plumber, you begin with a high level of trust that they have the specific skills to solve your particular problem.

Outcome: A loose description of your situation should be enough for them to produce a sufficient solution.

This is similar in creative fields. If you are working with a graphic designer, you have probably selected them because of the examples of their work that you have seen. Their taste resonates with yours.

Outcome: You give them a brief of what you would like, and you trust the result will be some combination of what you want with what you know they're capable of.

In these two examples, we do not anticipate having to spell out in great detail what they are experts in, or what their taste is, in addition to the solution we need.

And this is what makes working with LLMs difficult when we don't give them context: they are designed to always give you an answer, and they rarely interrogate you for more context first.

They don't have a specific skill set, style, or taste. They know and understand everything. They have been trained on all skills and all tastes. They are capable of responding to literally anything.

But you don't want anything; you want something.

The most successful interactions I've had with LLMs have been when I resisted the urge to make demands and expect answers at the beginning of a project. Instead, I worked with the LLM to investigate the problem space and talk it through together.

Through this conversation, we establish a baseline of what I want and what I expect it to do.

Explore, then execute

When I remember to ask-not-tell, one of my favorite prompts to capture the magic of working with AI is:

Create a summary of our conversation so far
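
To make that concrete, here is a minimal sketch of the pattern, assuming the OpenAI Node SDK (the model name, the example questions, and the placeholder replies are all illustrative assumptions, not part of the original workflow): let the exploratory turns happen first, then close with the summary request and keep the reply.

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The exploratory turns: questions first, demands later.
const conversation: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  { role: "user", content: "I want to add search to my blog. What are my options?" },
  { role: "assistant", content: "(the LLM's survey of the options)" },
  { role: "user", content: "Which trade-offs matter most for a small static site?" },
  { role: "assistant", content: "(the LLM's follow-up answer)" },
];

// Close the exploration by asking for a summary to carry forward.
const response = await openai.chat.completions.create({
  model: "gpt-4o", // placeholder model name
  messages: [
    ...conversation,
    { role: "user", content: "Create a summary of our conversation so far" },
  ],
});

// Keep the summary: it is the baseline the next session can build on.
const summary = response.choices[0].message.content;
```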

Depending on the application you're using, it is possible to scope an LLM's context. For example, it's a typical pattern to tell the LLM what it is capable of, or what it specializes in, before getting to outcomes.
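
As a sketch of that pattern, continuing the snippet above (`openai` and `summary` carry over from it, and the persona below is an assumed example, not a prescription), a system message can describe the specialist before you ask for an outcome:

```ts
// Scope the LLM before asking for outcomes: describe the specialist
// you want, then hand it the context you built up while exploring.
const scoped = await openai.chat.completions.create({
  model: "gpt-4o", // placeholder model name
  messages: [
    {
      role: "system",
      content:
        "You are a front-end developer who specializes in fast, accessible " +
        "static sites. You prefer simple, dependency-light solutions.", // assumed persona
    },
    { role: "system", content: `Background from our earlier exploration:\n${summary ?? ""}` },
    { role: "user", content: "Given that background, recommend an approach and outline the next steps." },
  ],
});

console.log(scoped.choices[0].message.content);
```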

In this way, working with LLMs is different from selecting a human craftsperson: you are creating the craftsperson you would like to work with before hiring them.

By having a conversation before issuing commands, and by exploring the problem together rather than expecting instant solutions, you're showing the AI what you need and where you want to go.