
Build a Customer Support AI Agent with Memory

LLM App using GPT-4o and vector database in less than 100 lines of Python code (step-by-step instructions)

Building AI tools that can handle customer interactions while retaining context is becoming increasingly important for modern applications.

In this tutorial, we’ll show you how to create a powerful customer support agent using GPT-4o, with memory capabilities to recall previous interactions.

The AI assistant’s memory will be managed using Mem0 with Qdrant as the vector store. The assistant will handle customer queries while maintaining a persistent memory of interactions, making the experience seamless and more intelligent.
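The pattern behind this memory layer is simple: every exchange is stored per user, and the entries most relevant to a new query are retrieved before each reply. As a rough mental model only (a toy keyword-match stand-in, not Mem0's real API, which uses embedding similarity over a vector store):

```python
import re

# Toy stand-in for a memory layer (NOT Mem0's real API): store entries
# per user, then retrieve the ones relevant to a new query. Real systems
# use embedding similarity; this sketch uses naive keyword overlap.
class ToyMemory:
    def __init__(self):
        self.entries = {}  # user_id -> list of stored texts

    def add(self, text, user_id):
        self.entries.setdefault(user_id, []).append(text)

    def search(self, query, user_id):
        words = set(re.findall(r"\w+", query.lower()))
        return [t for t in self.entries.get(user_id, [])
                if words & set(re.findall(r"\w+", t.lower()))]

memory = ToyMemory()
memory.add("Order #1234: noise-cancelling headphones, delivered May 2", user_id="alice")
memory.add("Customer prefers email contact", user_id="alice")

# Only the order entry shares keywords with the new query
print(memory.search("where is my headphones order?", user_id="alice"))
```

Mem0 and Qdrant replace the naive keyword match above with embeddings and approximate nearest-neighbor search, but the add/search shape of the workflow is the same.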

🎁 $50 worth of AI Bonus Content at the end!

What We’re Building

This Streamlit app implements an AI-powered customer support agent for a fictional online electronics store, with synthetic customer data generated using GPT-4o. The agent uses OpenAI's GPT-4o family of models and maintains a memory of past interactions using the Mem0 library with Qdrant as the vector store.

Features:

  • Chat interface for interacting with the AI customer support agent

  • Persistent memory of customer interactions and profiles

  • Synthetic data generation for testing and demonstration

  • Utilizes OpenAI's GPT-4o model for intelligent responses

Prerequisites

Before we begin, make sure you have:

  1. Python installed on your machine (version 3.9 or higher is recommended)

  2. Basic familiarity with Python programming

  3. Your OpenAI API key

  4. A code editor of your choice (we recommend VSCode or PyCharm for their excellent Python support)

Step-by-Step Instructions

Step 1: Setting Up the Environment

First, let's get our development environment ready:

  1. Clone the GitHub repository:

git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
  2. Go to the ai_customer_support_agent folder and install the dependencies:

cd awesome-llm-apps/ai_customer_support_agent
pip install -r requirements.txt
  3. Ensure Qdrant is running: The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.

docker pull qdrant/qdrant

docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
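If your Qdrant instance lives somewhere other than localhost:6333 (a remote server or a non-default port), the only change needed is in the Mem0 config dict used in Step 2. A sketch, where the host and port below are placeholders for your own setup:

```python
# Hypothetical example: point Mem0 at a Qdrant instance that is NOT on
# the default localhost:6333. Replace host/port with your own values.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "qdrant.internal.example.com",  # your Qdrant host
            "port": 6333,                           # your Qdrant port
        }
    },
}
```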

Step 2: Creating the Streamlit App

Now that the environment is set, let’s create our Streamlit app. Create a new file customer_support_agent.py and add the following code:

  • Import Required Libraries: At the top of your file, add

    • Streamlit for building the web app
    • OpenAI for using GPT-4o
    • Mem0 for personalized memory layer

import streamlit as st
from openai import OpenAI
from mem0 import Memory
import os
import json
from datetime import datetime, timedelta
  • Set up the Streamlit App:
    • Add a title to the app using 'st.title()'
    • Add a description for the app using 'st.caption()'
    • Set the OpenAI API key

st.title("AI Customer Support Agent with Memory 🛒")
st.caption("Chat with a customer support assistant who remembers your past interactions.")

openai_api_key = st.text_input("Enter OpenAI API Key", type="password")

if openai_api_key:
    os.environ['OPENAI_API_KEY'] = openai_api_key
  • Initialize OpenAI client and Mem0 with Qdrant:

    • Set up the OpenAI client with the provided API key
    • Configure Mem0 to use Qdrant as the vector store

    class CustomerSupportAIAgent:
        def __init__(self):
            # Configure Mem0 to use a local Qdrant instance as the vector store
            config = {
                "vector_store": {
                    "provider": "qdrant",
                    "config": {
                        "host": "localhost",
                        "port": 6333,
                    }
                },
            }
            self.memory = Memory.from_config(config)
            self.client = OpenAI()
            self.app_id = "customer-support"
  • Implement the handle_query method:
    • Retrieves relevant memories for context
    • Generates a response using OpenAI's gpt-4o-mini model
    • Stores the interaction in memory

        def handle_query(self, query, user_id=None):
            relevant_memories = self.memory.search(query=query, user_id=user_id)
            context = "Relevant past information:\n"
            for mem in relevant_memories:
                context += f"- {mem['text']}\n"

            full_prompt = f"{context}\nCustomer: {query}\nSupport Agent:"

            response = self.client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "You are a customer support AI agent for TechGadgets.com, an online electronics store."},
                    {"role": "user", "content": full_prompt}
                ]
            )
            answer = response.choices[0].message.content

            self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id, "role": "user"})
            self.memory.add(answer, user_id=user_id, metadata={"app_id": self.app_id, "role": "assistant"})

            return answer
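The prompt-assembly step inside handle_query is plain string building, so it can be isolated and checked without any API calls. A sketch (the memory dicts below are made-up stand-ins for Mem0 search results):

```python
def build_prompt(relevant_memories, query):
    # Mirrors handle_query: prepend retrieved memories, then the new query.
    context = "Relevant past information:\n"
    for mem in relevant_memories:
        context += f"- {mem['text']}\n"
    return f"{context}\nCustomer: {query}\nSupport Agent:"

# Hypothetical retrieved memories for illustration
memories = [
    {"text": "Order #7841: 4K monitor, shipped June 3"},
    {"text": "Prefers refunds to store credit"},
]
prompt = build_prompt(memories, "Has my monitor shipped yet?")
print(prompt)
```

Ending the prompt with "Support Agent:" cues the model to continue in the agent's voice, while the bulleted context gives it the retrieved facts to ground its answer.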
  • Add methods for memory retrieval and synthetic data generation:
    • Allows retrieval of all memories for a user
    • Generates synthetic customer data for testing and demos

        def get_memories(self, user_id=None):
            return self.memory.get_all(user_id=user_id)

        def generate_synthetic_data(self, user_id):
            today = datetime.now()
            order_date = (today - timedelta(days=10)).strftime("%B %d, %Y")
            expected_delivery = (today + timedelta(days=2)).strftime("%B %d, %Y")

            prompt = f"""Generate a detailed customer profile and order history for a TechGadgets.com customer with ID {user_id}. Include:
            1. Customer name and basic info
            2. A recent order of a high-end electronic device (placed on {order_date}, to be delivered by {expected_delivery})
            3. Order details (product, price, order number)
            4. Customer's shipping address
            5. 2-3 previous orders from the past year
            6. 2-3 customer service interactions related to these orders
            7. Any preferences or patterns in their shopping behavior

            Format the output as a JSON object."""

            response = self.client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "You are a data generation AI that creates realistic customer profiles and order histories. Always respond with valid JSON."},
                    {"role": "user", "content": prompt}
                ],
                response_format={"type": "json_object"}
            )

            customer_data = json.loads(response.choices[0].message.content)

            # Add generated data to memory
            for key, value in customer_data.items():
                if isinstance(value, list):
                    for item in value:
                        self.memory.add(json.dumps(item), user_id=user_id, metadata={"app_id": self.app_id, "role": "system"})
                else:
                    self.memory.add(f"{key}: {json.dumps(value)}", user_id=user_id, metadata={"app_id": self.app_id, "role": "system"})

            return customer_data

    support_agent = CustomerSupportAIAgent()
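The flattening loop at the end of generate_synthetic_data turns the nested JSON profile into one memory entry per fact, so each fact can be retrieved independently later. Isolated as plain Python (with a made-up profile):

```python
import json

def flatten_profile(customer_data):
    # Mirrors the loop in generate_synthetic_data: lists become one entry
    # per item, scalars become a single "key: value" entry.
    entries = []
    for key, value in customer_data.items():
        if isinstance(value, list):
            for item in value:
                entries.append(json.dumps(item))
        else:
            entries.append(f"{key}: {json.dumps(value)}")
    return entries

# Hypothetical generated profile
profile = {
    "name": "Dana Liu",
    "previous_orders": [
        {"product": "Wireless mouse", "price": 49.99},
        {"product": "USB-C hub", "price": 89.00},
    ],
}
print(flatten_profile(profile))
```

Storing each order as its own entry (rather than one big JSON blob) matters for retrieval quality: a query about the mouse should pull back only the mouse order, not the whole profile.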
  • Set up the Streamlit sidebar for user interaction:
    • Provides input for customer ID
    • Offers buttons for data generation and viewing

    st.sidebar.title("Customer ID")
    previous_customer_id = st.session_state.get("previous_customer_id", None)
    customer_id = st.sidebar.text_input("Enter your Customer ID")

    if customer_id != previous_customer_id:
        st.session_state.messages = []
        st.session_state.previous_customer_id = customer_id
        st.session_state.customer_data = None

    # Add button to generate synthetic data
    if st.sidebar.button("Generate Synthetic Data"):
        if customer_id:
            with st.spinner("Generating customer data..."):
                st.session_state.customer_data = support_agent.generate_synthetic_data(customer_id)
            st.sidebar.success("Synthetic data generated successfully!")
        else:
            st.sidebar.error("Please enter a customer ID first.")

    if st.sidebar.button("View Customer Profile"):
        if st.session_state.customer_data:
            st.sidebar.json(st.session_state.customer_data)
        else:
            st.sidebar.info("No customer data generated yet. Click 'Generate Synthetic Data' first.")

    if st.sidebar.button("View Memory Info"):
        if customer_id:
            memories = support_agent.get_memories(user_id=customer_id)
            if memories:
                st.sidebar.write(f"Memory for customer **{customer_id}**:")
                for mem in memories:
                    st.sidebar.write(f"- {mem['text']}")
            else:
                st.sidebar.info("No memory found for this customer ID.")
        else:
            st.sidebar.error("Please enter a customer ID to view memory info.")
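The customer-ID check near the top of the sidebar code implements a simple reset rule: whenever the ID changes, wipe the chat history and cached profile so one customer's session never leaks into another's. The logic can be exercised with a plain dict standing in for st.session_state:

```python
def reset_if_new_customer(session_state, customer_id):
    # Mirrors the sidebar logic: a changed ID clears chat and profile.
    if customer_id != session_state.get("previous_customer_id"):
        session_state["messages"] = []
        session_state["previous_customer_id"] = customer_id
        session_state["customer_data"] = None

# Simulated session for customer "alice" with existing history
state = {
    "messages": [{"role": "user", "content": "hi"}],
    "previous_customer_id": "alice",
    "customer_data": {"name": "Alice"},
}

reset_if_new_customer(state, "bob")  # new ID: history is cleared
print(state["messages"])
```

Note that long-term memory in Qdrant is untouched by this reset; only the current Streamlit session is cleared, which is exactly the behavior you want when switching customers mid-session.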
  • Initialize and manage chat history:
    • Stores single session history in Streamlit's session state
    • Displays previous messages in a chat-like interface

    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display the chat history
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])
  • Handle user input and generate responses:
    • Captures user queries
    • Generates and displays AI responses
    • Updates chat history in the Streamlit session state

    query = st.chat_input("How can I assist you today?")

    if query and customer_id:
        # Add user message to chat history
        st.session_state.messages.append({"role": "user", "content": query})
        with st.chat_message("user"):
            st.markdown(query)

        # Generate and display response
        answer = support_agent.handle_query(query, user_id=customer_id)

        # Add assistant response to chat history
        st.session_state.messages.append({"role": "assistant", "content": answer})
        with st.chat_message("assistant"):
            st.markdown(answer)

    elif not customer_id:
        st.error("Please enter a customer ID to start the chat.")

else:
    st.warning("Please enter your OpenAI API key to use the customer support agent.")

Step 3: Running the App

With our code in place, it's time to launch the app.

  • Start the Streamlit App: In your terminal, navigate to the project folder and run the following command:

streamlit run customer_support_agent.py
  • Access Your AI Assistant: Streamlit will provide a local URL (typically http://localhost:8501). Open it in your web browser, enter your OpenAI API key and a customer ID, and start chatting!

Working Application Demo

Conclusion

And your fully functional AI customer support agent with memory is ready! You've implemented a powerful assistant using OpenAI’s GPT-4o, managed customer interaction history with Mem0, and utilized Qdrant for efficient memory storage.

This assistant can now handle customer queries intelligently and personalize responses based on past interactions.

For the next steps, consider extending the assistant by adding more features, such as integrating additional APIs for order tracking or enabling voice interactions. You could also enhance the memory capabilities by exploring more sophisticated vector search techniques.

Keep experimenting and refining to build even smarter AI solutions!

We share hands-on tutorials like this 2-3 times a week to help you stay ahead in the world of AI. If you're serious about levelling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Bonus worth $50 💵💰

Share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to get an AI resource pack worth $50 for FREE. Valid for a limited time only!
