Raphael Thys
Futurist · Keynote Speaker · AI Coach

Why the Way You Write Your Prompt Changes Everything

Estimated reading time: 10 min

You have access to generative AI tools (Gemini or ChatGPT, for instance).

You type a question, hit Enter, and get a reply.

Sometimes it is exactly what you needed.

Sometimes it is vague, generic, or misses the point entirely.


What changed? Not the model — the prompt.

This article explains:

A. The mechanism by which an LLM analyses your request, and why that shapes the answer.
B. Examples of different prompts for the same intent, and the difference in output.
C. The recommended structure and elements to include in a prompt to make it effective.

A. The invisible mechanism: how a language model reads your prompt

To understand why prompting matters, it helps to know — at a high level — what happens between the moment you press Enter and the moment the model starts generating its response.

Large language models like the ones powering your favourite chatbot are built on an architecture called a Transformer, introduced in a landmark 2017 research paper called "Attention Is All You Need" (Vaswani et al., Google). The key innovation in Transformers is something called the attention mechanism.

Here is what you need to know — no computer science degree required.

The attention mechanism in plain terms

When a language model reads your prompt, it does not simply process words from left to right the way you read a sentence. Instead, it evaluates every word in relation to every other word in your input, all at once. It asks: "Given this word, which other words in the prompt are most relevant to understanding its meaning?"


Think of it like a spotlight sweeping across your entire prompt simultaneously.

Some words light up strongly in relation to others; some barely register.


The model assigns attention weights — numerical scores that determine how much influence each word has on each other word when the model computes its understanding of your request.
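To make "attention weights" concrete, here is a toy sketch. Real models compute these scores from learned vectors across many layers; the numbers below are invented purely for illustration, and the only real ingredient is the softmax function, which turns raw relevance scores into weights that sum to 1.

```python
import math

def softmax(scores):
    """Convert raw relevance scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: made-up relevance scores of other words in a prompt,
# as seen from the word "summarise".
scores = {"this": 0.1, "legal": 2.0, "document": 1.5, "please": 0.2}
weights = dict(zip(scores, softmax(list(scores.values()))))

# Words with high scores dominate; words with low scores barely register.
for word, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:10s} {weight:.2f}")
```

Notice how "legal" and "document" absorb most of the weight while filler words like "please" contribute almost nothing: that is the spotlight effect described above.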

This is why the same question, phrased differently, produces different answers (this is what we mean by probabilistic outputs).

A vague prompt gives the model weak, scattered signals.

A clear, specific prompt concentrates the attention weights on what actually matters, making the model's "understanding" of your intent sharper.
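The "probabilistic" part can also be sketched in a few lines. A model does not pick one fixed next word; it produces a probability distribution over candidates and samples from it. The candidate words and probabilities below are invented for illustration, but the sampling step itself (here via Python's standard `random.choices`) is one reason the same prompt can yield different wordings on different runs.

```python
import random

# Toy next-word distribution a model might produce after
# "The summary should be" (probabilities invented for illustration).
candidates = ["concise", "short", "brief", "detailed"]
probabilities = [0.45, 0.30, 0.15, 0.10]

# Sampling from the distribution, rather than always taking the top
# word, is one source of run-to-run variation.
for _ in range(3):
    word = random.choices(candidates, weights=probabilities, k=1)[0]
    print("The summary should be", word)
```

A sharper prompt does not remove the randomness, but it concentrates the probability mass on relevant candidates, so the variations you get are all close to what you asked for.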


Why this matters for you

You do not need to understand the mathematics behind attention.

But the practical implication is powerful: every word in your prompt is a signal. The model uses all of them, weighed against each other, to decide what you are asking and how to respond. If your prompt is vague, the model spreads its attention broadly and produces a generic answer. If your prompt is focused, the model concentrates and produces a targeted one.

This is not a metaphor. It is literally how the technology works.


B. Same intent, different prompt, different result

Let us make this concrete. Below are three examples drawn from tasks that staff in a typical organisation might actually perform. Each shows the same intent expressed as a weak prompt and as an improved prompt, with a summary of what changes in the output.

Example 1 — Summarising a policy document


Weak prompt:

Summarise this document.

What the model does: It receives almost no signal about what kind of summary you want, for whom, how long, or which aspects to focus on. It will produce a generic overview — often too long, too shallow, or focused on the wrong sections.

Improved prompt:

Summarise this document in 5 bullet points. Focus on the legislative implications for data governance. The audience is non-technical policy staff. Use plain language and avoid jargon.

What changes: The model now knows the format (5 bullets), the focus (data governance and legislative implications), the audience (non-technical), and the tone (plain language). Attention weights concentrate on the sections that match these constraints. The result is shorter, more relevant, and immediately usable.

Example 2 — Drafting an email


Weak prompt:

Write an email about the meeting.

What the model does: "Which meeting?" "To whom?" "What is the purpose of the email — invitation, follow-up, cancellation?" The model has to guess the answers to all of these questions, and it will often guess wrong.

Improved prompt:

Draft a follow-up email to the participants of the 15 March coordination meeting on AI governance. Remind them of the three agreed action points: (1) each department nominates a contact person by 29 March, (2) the AI Service team circulates the draft usage policy by 5 April, (3) the next meeting is scheduled for 22 April. Keep the tone professional but collegial. Maximum 150 words.

What changes: Every ambiguity is resolved. The model knows the type of email (follow-up), the audience (meeting participants), the specific content to include (three action points with dates), the tone, and the length constraint. The output is draft-ready.

Example 3 — Explaining a concept


Weak prompt:

Explain AI.

What the model does: The topic is so broad that the model has to make sweeping choices about scope, depth, angle, and audience. It might produce a textbook introduction, a history of the field, or a philosophical discussion — none of which may be what you needed.

Improved prompt:

Explain how generative AI differs from traditional rule-based automation, in 200 words, for colleagues who manage document workflows. Use one concrete example comparing how each approach would handle the classification of incoming correspondence.

What changes: The model has a precise scope (generative AI vs. rule-based), a length (200 words), a specific audience (colleagues managing document workflows), and a required illustration (correspondence classification). The response is focused, relevant, and directly applicable to the reader's work.

The pattern: what makes a good prompt?

You may have noticed a pattern in the improved prompts above. Each one provides the model with several types of information, organised in a consistent way. This is not a coincidence — it reflects a well-established structure that prompt engineering research and practice have converged on.

A well-structured prompt typically contains five elements. You do not need to use all five every time, but the more you include, the more precise the output.


1. Role — Who is the model in this conversation?

Telling the model what “hat to wear” shapes vocabulary, depth, and framing.

Examples:

  • “You are an experienced career coach.”
  • “Act as a plain-language editor for a general audience.”
  • “You are a product manager writing a PRD.”

2. Context — What is the background?

Context gives the model the specifics it cannot reliably guess. Without it, the model defaults to generic assumptions.

Examples:

  • “I’m preparing a short briefing for a leadership meeting tomorrow.”
  • “This is an email thread with a customer who is unhappy about a delayed delivery.”
  • “Here’s the draft text we’ve already written: …”

3. Task — What exactly should the model produce?

Use an explicit action verb: summarise, compare, draft, rewrite, brainstorm, critique, translate, outline, etc. A clear verb anchors the output.

Examples:

  • “Summarise this into 5 bullet points.”
  • “Rewrite this paragraph to be clearer and more direct.”
  • “Generate 10 headline options.”

4. Format — How should the output look?

Specify structure, length, and tone. LLMs (ChatGPT, Claude, Gemini, etc.) respond very differently depending on whether you ask for bullets, steps, a template, or a short paragraph.

Examples:

  • “Write it as a 150-word LinkedIn post.”
  • “Return a checklist with no more than 8 items.”
  • “Give me a table with columns: Risk, Impact, Mitigation.”

5. Constraints — What should the model avoid or ensure?

Constraints are guardrails: what to include, what to exclude, what to double-check, what to prioritize.

Examples:

  • “Avoid jargon and define any acronym you use.”
  • “Do not invent statistics—flag unknowns as assumptions.”
  • “Keep it friendly, not salesy.”

Putting it all together (example prompt)

Here’s what a complete, well-structured prompt can look like:

  • Role: You are a communications specialist writing for a general audience.
  • Context: I’m writing a short blog post to help people get better results from LLMs like ChatGPT, Claude, or Gemini. Many readers are new to prompting and mostly use these tools for everyday work tasks.
  • Task: Write a short section encouraging readers to try structured prompting on a common task: summarising messy meeting notes.
  • Format: Maximum 120 words. Use one concrete example. End with a clear call to action. Plain language, professional but approachable tone.
  • Constraints: Don’t mention any company-specific tools or internal platforms. Don’t compare models or claim one is “best.” Don’t use fear-based messaging about AI.

This kind of prompt gives the attention mechanism strong, focused signals: it knows the role, the context, the task, the format, and the boundaries.

The result will be dramatically more useful than "Write a post about ..."
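The five-element structure is regular enough that you can assemble it mechanically. The sketch below uses a hypothetical helper, `build_prompt`, written for this article (it is not part of any library or API); it simply labels each element you provide and skips the ones you omit, mirroring the advice that you do not need all five every time.

```python
def build_prompt(role=None, context=None, task=None, fmt=None, constraints=None):
    """Assemble a structured prompt from the five elements; any may be omitted."""
    parts = [
        ("Role", role),
        ("Context", context),
        ("Task", task),
        ("Format", fmt),
        ("Constraints", constraints),
    ]
    # Keep only the elements that were actually provided.
    return "\n".join(f"{label}: {text}" for label, text in parts if text)

prompt = build_prompt(
    role="You are a plain-language editor.",
    task="Summarise the attached meeting notes in 5 bullet points.",
    fmt="Plain language, no jargon.",
)
print(prompt)
```

Pasting the resulting labelled text into your chat tool works just as well as typing it by hand; the labels themselves ("Role:", "Task:", …) are extra signals the model can attend to.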

Start simple, then add structure

You do not need to write five-element prompts from day one. Even adding one or two elements to your current approach will make a noticeable difference. Here is a progression to try:


Level 1 — Add a task verb. Instead of "AI in healthcare," write "Explain how AI is used in healthcare diagnostics."

Level 2 — Add audience and format. "Explain how AI is used in healthcare diagnostics, in 200 words, for non-technical policy staff."

Level 3 — Add role and constraints. “You are a healthcare policy analyst. Explain how AI is used in medical diagnostics in 200 words for a non-technical audience. Focus on real-world, regulated use cases (for example, clinical decision support and approved imaging applications). Avoid acronyms unless you spell them out the first time, and keep the language plain and concrete.”

Each level builds on the previous one. The more you practice, the more natural it becomes.

Key takeaways

Every word in your prompt is a signal the model uses to generate its response. This is not a figure of speech; it is how the attention mechanism in Transformer-based models works. It is also why it is crucial to start a new conversation when you want to discuss a different topic, or when the current conversation becomes very long.

Vague prompts produce vague answers. Specific prompts produce specific answers.


The difference is not luck — it is structure.

A well-structured prompt combines up to five elements: Role, Context, Task, Format, and Constraints. You do not need all five every time, but the more you provide, the better the output.

Start where you are. Even small improvements in how you phrase your requests will produce noticeably better results from your favourite AI chat tool.
