
How conversation designers use data to power agentic AI

  • Patrick Ibsen
  • Nov 28, 2025
  • 5 min read


In 2025, chatbots are falling short of user expectations. Users now want assistants that can handle complex requests, remember context, and actually get things done. The problem is, most conversational AI still struggles with these basics.


To build AI agents that stay truly useful, conversation designers need to design agentic AI that goes beyond responding to prompts and following mapped flows. These AI agents need to become data practitioners themselves. How? This involves the agent:


  • pulling responses from multiple sources

  • remembering past interactions

  • making informed autonomous decisions, and

  • continuously learning from real usage patterns.


This shift transforms conversation design from a creative discipline into a data-driven one. Designers must now craft systems and conversations that adapt, reason, and act on behalf of users – with speed and accuracy.


Why your data setup suddenly matters


Two years ago, companies were learning to manage two types of data for conversation design:


  1. Structured: data sits in neat databases with rows and columns. 

  2. Unstructured: data is messier. Think customer service transcripts, social posts, and support tickets.


This step of data collection and organising still holds weight. But new tools have made it possible to search and process both types together.


Now, though, AI agents need access to clean, organised data to make good decisions and take actions such as:


  • check your order status

  • remember your postage preference

  • compare prices across warehouses, and

  • suggest products based on your local weather.


All of this is possible only because someone organised the data properly. 



What AI agents actually do differently


AI agents don't follow simple "if this, then that" rules anymore. They combine four abilities that work together:


1. Retrieval 


They grab information from multiple sources at once, such as your:


  • customer relationship management (CRM)

  • inventory system

  • customer history, and

  • real-time updates.


All of these sources feed into one conversation.
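Here's a minimal sketch of that retrieval step. The function names and stub data are hypothetical stand-ins for real CRM, inventory, and order-history APIs:

```python
# Illustrative sketch only: each fetch_* function stands in for a real
# system call (CRM lookup, inventory query, order history).

def fetch_crm_profile(customer_id):
    # Stand-in for a CRM lookup
    return {"name": "Alex", "tier": "gold"}

def fetch_inventory(product_id):
    # Stand-in for an inventory-system query
    return {"product_id": product_id, "in_stock": 12}

def fetch_history(customer_id):
    # Stand-in for order-history retrieval
    return [{"order": "A-1001", "status": "shipped"}]

def build_context(customer_id, product_id):
    """Merge every source into one context object for the conversation."""
    return {
        "profile": fetch_crm_profile(customer_id),
        "inventory": fetch_inventory(product_id),
        "history": fetch_history(customer_id),
    }

context = build_context("cust-42", "boot-7")
```

The design point: the agent reasons over one merged context object, not over each source separately.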


2. Reasoning 


They understand what to do with that information. When a user says, "I need hiking boots for Tasmania in winter," the agent uses data on weather, terrain, and product specs to inform its decisions without you scripting that exact scenario.


3. Memory 


They recall context from this conversation and previous ones. Short-term memory tracks what you've discussed today. Long-term memory knows you bought running shoes last month and you're allergic to synthetic materials.
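One way to picture the short-term versus long-term split is a toy memory store where only "durable" facts survive the end of a session. This is an illustrative sketch, not any particular framework's API:

```python
class AgentMemory:
    """Toy memory store: short-term entries expire with the session,
    long-term entries (preferences, allergies) persist across sessions."""

    def __init__(self):
        self.short_term = {}   # cleared when the session ends
        self.long_term = {}    # persists between sessions

    def remember(self, key, value, durable=False):
        (self.long_term if durable else self.short_term)[key] = value

    def recall(self, key):
        # Today's conversation wins over older stored facts
        return self.short_term.get(key, self.long_term.get(key))

    def end_session(self):
        self.short_term.clear()

mem = AgentMemory()
mem.remember("topic", "hiking boots")                        # today only
mem.remember("allergy", "synthetic materials", durable=True) # long-term
mem.end_session()
```

After the session ends, the topic is forgotten but the allergy is still recallable.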


4. Tool use

 

AI agents take action. They can:


  • access booking systems to reschedule appointments

  • use payment tools like Stripe to process refunds

  • search your knowledge base for policy details, or

  • update customer records.
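Under the hood, tool use often looks like a registry that maps an intent to a callable. The tool names and handlers below are hypothetical stand-ins, but the shape is typical:

```python
# Illustrative tool registry: the agent maps a chosen tool name to a
# callable. Handlers here are stand-ins for real booking/payment APIs.

def reschedule_appointment(args):
    return f"Appointment moved to {args['new_time']}"

def process_refund(args):
    return f"Refund of {args['amount']} issued"

TOOLS = {
    "reschedule": reschedule_appointment,
    "refund": process_refund,
}

def run_tool(name, args):
    if name not in TOOLS:
        # Unknown tool: fail safely rather than guess
        return "Sorry, I can't do that yet."
    return TOOLS[name](args)

result = run_tool("refund", {"amount": "$25.00"})
```

Note the fallback for unknown tools: an agent that admits "I can't do that yet" is safer than one that improvises.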


This isn't just talk. The conversational AI market is growing from US$13.2 billion in 2024 to US$49.9 billion by 2030. A PwC survey, conducted in May 2025, found that 79% of senior executives say their companies are already using AI agents, with two-thirds reporting measurable productivity gains.


How designers are actually working with data now


Your team is deciding what gets remembered


AI agents now remember customer interactions, but you decide what and for how long. A health app might retain medication schedules indefinitely. An e-commerce chatbot might only recall the current session.




What your teams should do


Collaborate across content, design, and legal teams to establish memory rules:


  • Critical to remember – accessibility preferences, account settings.

  • Helpful to retain – style preferences, communication preferences.

  • Immediate deletion – browsing history, sensitive queries after session ends.
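Those memory rules can be written down as an explicit retention policy that the agent consults before storing anything. The categories and durations below are illustrative, not recommendations:

```python
# Sketch of the memory rules above as a retention policy.
# Durations are placeholder values your legal team would set.

RETENTION_POLICY = {
    "accessibility_preferences": "indefinite",   # critical to remember
    "account_settings": "indefinite",            # critical to remember
    "style_preferences": "90_days",              # helpful to retain
    "communication_preferences": "90_days",      # helpful to retain
    "browsing_history": "session_only",          # delete when session ends
    "sensitive_queries": "session_only",         # delete when session ends
}

def should_persist(data_type):
    """True only if policy allows storage beyond the current session.
    Unknown data types default to session-only: delete by default."""
    return RETENTION_POLICY.get(data_type, "session_only") != "session_only"
```

Defaulting unknown data types to session-only deletion keeps the safe behaviour as the fallback.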


The business challenge


Customers appreciate personalised service but find it creepy when AI references old browsing history or makes emotional assumptions. Get this wrong and you risk customer trust and privacy violations.


Manage multiple data sources


Modern AI agents access dozens of data sources per conversation. You must decide which sources get accessed and when. A customer service agent handling billing might access:


  • account history

  • transactions

  • support tickets

  • product details, and

  • policies.


Define the hierarchy:


  • Start with account verification.

  • Pull transaction details only when needed.

  • Escalate to specialists above certain thresholds.
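That hierarchy can be sketched as a simple gate: verify first, fetch details only when needed, and escalate past a threshold. The threshold value is a made-up example:

```python
# Sketch of the access hierarchy above. The threshold is hypothetical;
# a real one comes from business rules.

ESCALATION_THRESHOLD = 500.00  # illustrative dollar limit

def handle_billing_query(verified, disputed_amount):
    if not verified:
        return "verify_identity"          # start with account verification
    if disputed_amount > ESCALATION_THRESHOLD:
        return "escalate_to_specialist"   # above threshold: hand off
    return "fetch_transactions"           # pull details only when needed

step = handle_billing_query(verified=True, disputed_amount=120.00)
```

The agent never touches transaction data until the earlier gates pass.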


Your strategic decision


Just because an agent can access data doesn't mean it should. Set boundaries that protect privacy while enabling effective service.


Designing handoffs between specialised agents


Multiple AI agents often collaborate on one customer request: a triage agent identifies needs, a specialist solves problems, a billing agent processes payments.


Define which agent handles which scenarios and how they transfer information.


Answer these key questions:


  • When does escalation happen?

  • What context transfers?

  • How does the customer experience the transition?
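Those questions can be made concrete in a small handoff sketch: the triage agent packages context so the specialist never re-asks for it. Agent names and fields here are hypothetical:

```python
# Illustrative handoff: triage classifies the request and passes its
# context forward so the customer doesn't repeat themselves.

def triage(message):
    """Classify the request and package context for the next agent."""
    intent = "billing" if "charge" in message.lower() else "general"
    return {
        "intent": intent,
        "summary": message,   # what the customer already said
        "verified": True,     # work already done, carried forward
    }

def billing_specialist(context):
    # The specialist reuses triage's context instead of re-asking
    if not context["verified"]:
        return "Please verify your account."
    return f"Looking into: {context['summary']}"

handoff = triage("Why was I charged twice?")
reply = billing_specialist(handoff)
```

From the customer's side this is one conversation; the transfer of `summary` and `verified` is what keeps it feeling seamless.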


Your strategic challenge 


Three agents might work behind the scenes, but customers should experience one seamless conversation. Making transitions invisible requires rigorous testing between content, design, and development teams.


Setting decision-making principles


You're establishing principles that guide autonomous decisions across thousands of conversations:


  • When uncertain, acknowledge it and offer alternatives.

  • Never make assumptions without asking first.

  • Prioritise clarity over personality in financial transactions.



The governance challenge


AI agents will encounter unexpected edge cases. Your principles must be robust enough to handle unusual situations. This requires ongoing collaboration between content teams and developers.


The skills you need to build an efficient AI agent


Prompt basics


Learn how language models understand instructions. Vague prompts create random behaviour. Specific prompts, with examples, create consistent results.
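To make the contrast concrete, here are two instructions for the same task. Both prompts are invented examples; the point is what the specific one pins down that the vague one leaves to chance:

```python
# Two instructions for the same task. The vague prompt leaves tone,
# length, and format to chance; the specific one pins them down and
# includes an example (a "few-shot" hint).

vague_prompt = "Reply to the customer."

specific_prompt = (
    "Reply to the customer in two sentences, plain English, "
    "friendly but direct. If the order has shipped, include the "
    "tracking number. Example: 'Good news: your order shipped "
    "today. Track it with TRK-123.'"
)
```

Everything the specific prompt states (length, tone, required fields, an example) is one less thing the model has to guess.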


Data knowledge


Know the difference between a CRM (customer relationship management) and a CDP (customer data platform). Understand API limits to prevent server overload. You're working with engineers now. Learn enough of their language to work together well.


Testing methods 


How do you know if an agent is working well? Task completion is one way. But what about conversation quality, user happiness, and whether people trust it? You need numbers (like response time and error rates) and opinions (like how natural conversations feel).



What to do on Monday morning


Check your data setup


Meet with your engineering team and ask:


  • What data sources do we have?

  • How clean are they?

  • How fast can we access it?

  • What are the privacy rules?


Find one problem to solve 


Don't try to fix everything at once. Pick one interaction that frustrates users right now. Start with something specific like:


  • a recurring customer service question

  • password resets, or

  • booking appointments.


Design for failure first 


What happens when the agent doesn't have the information it needs? When it misunderstands someone? When it can't finish a task? Design solutions to these situations before the happy path.
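Failure-first design can be sketched as an answer function whose fallbacks come before the happy path. The knowledge base and confidence threshold below are made up for illustration:

```python
# Sketch of failure-first design: every gap has an explicit fallback
# before any happy-path logic runs. Threshold and data are illustrative.

def answer(query, knowledge_base, confidence):
    if query not in knowledge_base:
        # Missing information: admit it and offer a way forward
        return "I don't have that information. Want me to connect you to a person?"
    if confidence < 0.7:
        # Possible misunderstanding: check before acting
        return f"Just to confirm, you're asking about {query}?"
    return knowledge_base[query]

kb = {"refund policy": "Refunds within 30 days with receipt."}
```

Only the last line is the happy path; the two failure branches were designed first.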


Map your memory rules 


Talk with legal and privacy teams. What can be remembered? For how long? What needs permission? What should never be stored?


Decide how to measure success 


How will you know this is working? Don't just count completed tasks. Instead track:


  • conversation quality

  • user happiness

  • whether people come back, and

  • how well errors get fixed.


Start small 


Agents that suggest actions for human approval are safer than agents that act on their own. Build trust in the system first.
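A minimal sketch of that suggest-then-approve pattern, with hypothetical action names: the agent only ever proposes, and nothing runs until a human signs off.

```python
# Illustrative human-in-the-loop gate: the agent proposes an action
# and a human decision determines whether it executes.

def propose_action(action, params):
    return {"action": action, "params": params, "status": "pending_approval"}

def apply_if_approved(proposal, approved):
    if not approved:
        proposal["status"] = "rejected"
        return proposal
    proposal["status"] = "executed"   # only now would the action run
    return proposal

proposal = propose_action("refund", {"amount": "$25"})
outcome = apply_if_approved(proposal, approved=True)
```

Once the approval log shows the agent's suggestions are consistently right, you can widen its autonomy step by step.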




The hard truth about trust


Users lose trust fast when things go wrong. An agent that works 95% of the time sounds great. But that 5% failure rate means one in twenty users has a bad experience, and bad experiences spread faster than good ones.


Your AI agent needs to handle that 5%. How does the agent admit when it's not sure? When does it ask a human for help? How does it keep the user's dignity when something fails?


Honesty is the best policy. Users know AI isn't perfect. They respect transparency about what agents can and can't do.


What this means for you


The field of conversation design has changed vastly in two years. Designers who understand how to work with data, manage agent behaviour, and design for independent decision-making will lead their teams. Those who keep designing static flows will fall behind.


At Avion, we're working with companies going through this change right now. We're seeing what works, what fails, and what makes agent experiences good instead of frustrating.


If your team is building AI agents or wondering where to start, book a consultation. We'll talk through your specific challenges, your data setup, and what's realistic for your company.



Image credits: Illustration by pch.vector on Freepik. Custom GIFs by ThisGuyFawkes, jcampos, Stay Close, Chimpers, and kristinag983 on Tenor.


