
Avion Investigates: Does AI perform better when you're rude?

  • Brenna Watson-Paul
  • Apr 8
  • 7 min read


When GPTs first launched, it quickly became clear that everyone communicated with the tool differently: some curt, some friendly, some in between.

 

If you’ve experimented with Large Language Models such as ChatGPT, you know that what you write (your prompt) impacts the quality of the response. What you may not know is that recent studies show the tone of your prompt also influences the response.

 

AI thought leader David Shapiro explains that LLM misbehaviour (odd responses, errors, cheekiness) often results from user error, like vague or assumptive prompts. He stresses the need for precise, fluff-free instructions to help AI grasp intent and context.


What are Large Language Models?


At their core, LLMs are machine learning tools. ChatGPT, Claude, and Gemini are all examples of LLMs. They analyse vast amounts of data to predict and generate words based on context. It’s like an advanced version of predictive text on your mobile.

 

  • Machine Learning operates on algorithms, working with structured data to draw insights. It’s similar to your phone learning common phrases you type and suggesting them.

  • Deep Learning takes it further. Like how your phone recognises your unique texting style, tone, and slang. It uses neural networks to detect patterns in unstructured data, making its predictions more nuanced.

  • Natural Language Processing (NLP) helps LLMs understand inputs and respond conversationally. Similar to your mobile’s voice assistant (hey Siri!), it doesn’t just match keywords but deciphers the intent of your prompt to deliver a relevant response.

 

Remember, all GPTs are LLMs, but not all LLMs are GPTs (say that 10 times!). GPTs, including ChatGPT, are a particular implementation of the broader category of LLMs.


How to choose the right generative-AI tool for your organisation


Choosing the right LLM is crucial for your organisation’s success with generative AI. Popular models like OpenAI's GPT series, Google's BERT, and Microsoft’s Turing-NLG offer diverse capabilities, including:


  • Text generation: creating coherent and contextually relevant text from prompts.

  • Language translation: converting text between different languages.

  • Sentiment analysis: determining the emotional tone behind a piece of text.

  • Reading time and level: predicting average read time and reading age level for comprehension. *Personal favourite

  • Rewriting: rephrasing text while retaining its meaning.

  • Keyword and phrases: identifying significant terms or phrases in a text.

  • Spell checking: detecting and correcting spelling or grammatical errors in text.

  • Code generation: writing and explaining programming code based on requests.

  • Spreadsheet formulas: providing the correct formulas for Excel inputs.


There isn’t always one single best tool – what matters is selecting the right model for each of your use cases. Here are our three key considerations:


Step 1: Identify your primary use cases and usage


Define how your organisation plans to use LLMs. The needs may vary across departments, such as content creation, customer support, or data analysis. Some companies opt for custom LLMs for security reasons, while others give teams more flexibility and free rein in their tool choices.


It’s best to consider how you’ll manage and support engagement within your team, because when leaders ask employees about their AI use, the reply is often, ‘I don’t really use it.’ Yet data shows that almost 30% of employees use AI for work in any given week (PDF). This hidden, unreported use is known as shadow AI.


Why in the shadows? Employees don’t want to get in trouble or create their own replacements. Here’s how we try to avoid shadow AI use:


“At Avion, we encourage our team to explore different options and share their findings at our monthly AI lunch-and-learns. As we gain a deeper understanding of the landscape, we focus on identifying the right tools for each use case and sharing those insights across the team, ensuring everyone benefits.” – Natalie Khoo, Founder of Avion Communications.


Step 2: Consider a governance framework


Make sure users are aware of the risks and potential pitfalls associated with the model you choose. Train staff to critically assess responses for biases, misinformation or mistakes. It’s important to remember that GPTs can generate inaccurate or fabricated information, known as ‘hallucinations’. You might have heard of funny LLM hallucinations, but some have serious implications…


We’ve recently added an AI usage policy for our team that covers Avion's approach to using artificial intelligence (AI) tools in our copywriting processes. It provides guidelines for our staff to ensure consistent, ethical and transparent use of AI technology while maintaining our high standards of service delivery.


Step 3: Test and evaluate performance


Don’t overlook critical factors like accuracy, response time, and scalability. Test your use case by running your prompts across multiple tools to assess how easily you can achieve the desired output. Compare the performance of each tool to determine which best meets your requirements.
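
To make the comparison concrete, here’s a minimal sketch, assuming the openai Python package and an API key in the OPENAI_API_KEY environment variable; the model names are illustrative only, so substitute whichever tools you’re actually testing. It sends the same prompt to two models and records the response time – one of the factors worth comparing:

```python
# Minimal evaluation sketch: same prompt, two models, compare response time.
# Assumes the openai package and OPENAI_API_KEY are set up; model names are placeholders.
import time
from openai import OpenAI

client = OpenAI()
prompt = "Summarise the key 2025 social media trends in three bullet points."

for model in ["gpt-4o-mini", "gpt-4o"]:
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    answer = response.choices[0].message.content
    print(f"{model}: {elapsed:.1f} seconds, {len(answer)} characters")
```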


Tip: If you find a tool that’s widely adopted across multiple team members, you may be able to reduce costs by consolidating subscriptions into a workplace package.


Finding the balance for better prompt engineering


“The use of generative AI can altogether succeed or fail based on the prompt that you enter.” – Dr. Lance B. Eliot, renowned AI scientist.


What is prompt engineering?


Prompt engineering is the art of crafting inputs that guide an LLM to deliver the best possible response.


A basic grasp of how LLMs process information can sharpen your approach to prompt engineering. Instead of understanding meaning the way humans do, LLMs break text into tokens: small units that can be whole words or fragments. They don’t judge intentions – they analyse the numbers and patterns behind words.


In simple terms, LLMs convert your input into numbers, match them against patterns learned from vast datasets, and generate responses that align with context. The better you engineer your input, the more precise and useful that response becomes.
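
To see what that looks like in practice, here’s a minimal sketch using the open-source tiktoken library (our choice of tokeniser for illustration – it’s the one used with OpenAI models). It splits a sentence into tokens and the integer IDs the model actually works with:

```python
# Tokenisation sketch using tiktoken (pip install tiktoken).
# "cl100k_base" is the encoding used by many recent OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Instead of understanding meaning, LLMs break text into tokens."

token_ids = enc.encode(text)                       # text -> integer IDs
pieces = [enc.decode([tid]) for tid in token_ids]  # each ID -> the fragment it represents

print(token_ids)  # the numbers the model sees
print(pieces)     # a mix of whole words and fragments
```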


Reduce fluff to improve efficiency


Streamlining your inputs helps the model draw on its training across vast datasets, delivering sharper, more relevant outputs. Take phrases like "can you please" or "would you be able to." These polite additions increase the token count – essentially the workload ChatGPT has to process – impacting both efficiency and performance.


To get the best results, focus on being clear and specific. It’s important to provide enough detail to remove ambiguity, without overloading your prompt with unnecessary fluff. The ideal approach strikes a balance: concise yet informative, cutting through the noise while giving the model exactly what it needs to respond effectively.
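
As a rough illustration of that overhead, here’s a small sketch (again assuming the tiktoken library) comparing the token count of a fluffy request with a concise one:

```python
# Token-count comparison: polite padding versus a concise instruction.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

fluffy = "Hi there! Would you be able to please check my blog post for any potential mistakes when you get a chance?"
concise = "Proofread this blog post for grammar and sentence flow."

for label, prompt in [("Fluffy", fluffy), ("Concise", concise)]:
    print(f"{label}: {len(enc.encode(prompt))} tokens")
```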


Finesse your prompts for better answers


When you craft a detailed prompt, you’re setting the stage for the AI to deliver its best work. Here are the basics to keep in mind when engineering your prompt: 


  • Be specific: specify the topic, format, or details you need.

  • Add context: include background or examples to help AI understand your vibe.

  • Ask for structure: want bullet points or a summary? Say so.


Flowchart: building a detailed prompt. Each iteration showcases ways to add specificity.
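
One way to turn those basics into a repeatable habit is a simple prompt template. This is our own illustrative sketch – the field names simply mirror the checklist above:

```python
# Illustrative prompt builder mirroring the checklist: specificity, context, structure.
def build_prompt(topic: str, context: str, structure: str) -> str:
    return (
        f"Write about {topic}.\n"
        f"Context: {context}\n"
        f"Format the response as {structure}."
    )

prompt = build_prompt(
    topic="2025 social media trends for small retail brands",
    context="The audience is time-poor marketing managers; keep jargon to a minimum.",
    structure="five bullet points, each with one practical tip",
)
print(prompt)
```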


Layer prompts for research and ideation


Breaking your goals into smaller, bite-sized prompts surfaces more detailed and relevant results.


Let’s say you want to write an article about 2025 social media trends. Instead of asking one broad question, break it down into several focused prompts:

Broad prompt example: 

“Can you tell me about social media trends?”

 

Think of this like sending an email with no context. Sure, they’ll answer, but they might come back with a heap of irrelevant info and more questions!

Segmenting research prompts: 

  • Prompt 1: “Provide an in-depth overview of 2025 social media trends."

  • Prompt 2: “Explain the role of these social media trends when planning content for 2025.”

  • Prompt 3: “Discuss challenges associated with social media trends and provide insights on how organisations can mitigate these.”

  • Prompt 4: “Provide real-world examples or statistics of how following these trends can improve an organisation's brand awareness.”

By segmenting your queries and sticking to the facts, you encourage the model to deliver more granular insights tailored to your specific needs.
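
If you’re running these through an API rather than a chat window, the same layering works by keeping each answer in the conversation history. Here’s a minimal sketch, assuming the openai Python package, an API key and an illustrative model name:

```python
# Layered prompting sketch: each prompt is sent in the same conversation,
# so the model keeps earlier answers in context.
from openai import OpenAI

client = OpenAI()
prompts = [
    "Provide an in-depth overview of 2025 social media trends.",
    "Explain the role of these social media trends when planning content for 2025.",
    "Discuss challenges associated with these trends and how organisations can mitigate them.",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # carry context forward
    print(answer, "\n---")
```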


Tailor the output to your brand’s voice


Keep getting generic copy from LLMs? You may need to ‘teach’ the model to use your brand’s voice. At Avion, we use what’s called an N-shot prompt to flavour the copy with a specific voice.


N-shot prompting generates on-brand copy by using previous or supplied examples. For instance, if you’re looking to generate a social media post, you could craft a prompt like this:


  • N-shot prompt framework: Generate [task] and base your output’s [structure/tone/length/themes] on the following example/s. Include [qualifying information]

  • Example N-shot prompt: “Generate a 20-word caption for LinkedIn. Base your output, including the structure, length, and tone, on the following two examples. Don’t directly replicate language.”


This approach ensures AI-generated responses align with your brand’s voice and messaging while making the most of the details you provide. But it’s not a set-and-forget solution. You need to review and refine each piece, as AI can occasionally drift off course.
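
If you’d rather assemble N-shot prompts programmatically, here’s a small sketch following the framework above. The example captions are placeholders, not real Avion copy – swap in posts from your own channels:

```python
# N-shot prompt assembly following the framework above.
# The example captions are placeholders; substitute real on-brand posts.
examples = [
    "Big ideas, small words: how we turned a 40-page report into a story people actually read.",
    "Good copy sounds like a person. Meet the editors who keep ours human.",
]

task = "Generate a 20-word caption for LinkedIn about our new AI usage policy."
example_block = "\n".join(f"Example {i + 1}: {text}" for i, text in enumerate(examples))

n_shot_prompt = (
    f"{task}\n"
    "Base your output's structure, length, and tone on the following examples. "
    "Don't directly replicate their language.\n"
    f"{example_block}"
)
print(n_shot_prompt)
```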


Control the way it edits


A clear structure helps guide the model, ensuring the output aligns with your expectations. Here’s a simple formula to craft effective proofreading prompts:


  1. Context: briefly explain the purpose of your text – whether it’s an email, blog, or report.

  2. Content: include the text you want to proofread.

  3. Instructions: specify areas of focus, such as grammar, style, coherence, or readability.


For example:


  • Vague prompt: “Refine.”

  • Fluffy prompt: “Can you please check my blog post for any potential mistakes?”

  • Engineered prompt: “Proofread this blog post draft for a general audience interested in wellness tips. Focus on correcting grammar and enhancing sentence flow.”


While all three prompts will return results, the engineered prompt provides clear guidance, reduces guesswork and enables more precise feedback.
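
To make the formula repeatable, you can assemble the three parts the same way every time. Here’s a minimal sketch – the draft text and focus areas are purely illustrative:

```python
# Proofreading prompt assembled from the context / content / instructions formula.
def proofreading_prompt(context: str, content: str, instructions: str) -> str:
    return (
        f"Context: {context}\n\n"
        f"Text to proofread:\n{content}\n\n"
        f"Instructions: {instructions}"
    )

prompt = proofreading_prompt(
    context="A blog post draft for a general audience interested in wellness tips.",
    content="Getting enough sleep are one of the simplest ways to improve you're focus.",
    instructions="Correct grammar and enhance sentence flow. Suggest edits; don't rewrite wholesale.",
)
print(prompt)
```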


You can also control how the LLM approaches editing. For example:


  • Request clarity-focused edits: “Change this text for clarity while preserving the original meaning.”

  • Ask for suggestions only: “Review this text and provide suggestions to improve readability without altering the content.” 

  • Base the review on a preferred style/example: “Provide suggestions as an edit for the cut.”


The secret? It’s all about guiding the model with sharp, tailored instructions – so you get exactly what you want. These tailored prompts tell the tool to focus on your priorities, whether that’s improving sentence flow, refining tone, or ensuring grammatical accuracy.


Remember, always review suggestions and results critically.

So, does AI perform better when you’re rude? Okay, maybe not ‘rude’, but there’s clear research that specificity in prompts significantly improves the relevance and accuracy of responses.


Using AI effectively requires a clear, repeatable process for crafting prompts. The goal is to fine-tune your approach over time, ensuring consistent and reliable results. This means optimising prompts, executing them thoughtfully, and staying updated on AI advancements to maximise their potential.


As an aside, Dr. Lance B. Eliot, in a Forbes article, explored how politeness can subtly influence AI’s tone. I find this balance between precision and tone fascinating. While directness is often the best approach, experimenting with politeness might improve responses in unexpected ways. AI, much like human communication, responds to the way we phrase our words.


In the end, getting the most out of AI comes down to precision. So, whether you’re crafting content, conducting research, or developing strategies, embrace specificity – AI thrives on clear, detailed input and a human touch.



Image credits: Illustration by pch.vector on Freepik. Custom GIF by tetarahublot on Tenor.


