Elevating our customer success with GenAI
Over 90% of C-suite executives believe automating manual workflows for customer support, using AI assistants for day-to-day tasks, and providing self-service support experiences can deliver tangible business impact. To better serve our current customers and provide seamless experiences to future customers, Elastic, as customer zero, led by example and built a generative AI application for customer support using the Search AI Platform.
Our Support Assistant empowers support engineers to efficiently respond to customer support issues and enables customers to self-serve via a generative AI chatbot experience. It delivers business impact by freeing up engineers' time to focus on more complex projects and helping customers quickly answer their queries.
6x increase in hard case deflections
23% improvement in mean time to first response to customers
7% reduction in assisted support case volume
4 month payback period
“Using generative AI for support has been top-of-mind since the launch of ChatGPT. In the past few months, this idea has become a reality. Within the first month of the launch of Support Assistant, my team has already seen capacity gains because it is easy to refine our responses and find information to support customer queries.”
— Julie Rudd, GVP, Customer Support, Elastic
The challenge
Increasing customer expectations and reactive customer service
We faced multiple challenges when it came to our customer support experience:
- Reactive service: Support was a reactive process. A customer opened a case and we reacted to resolve the query.
- Wait time: The back-and-forth communication between the support engineer and the customer resulted in long wait times and a time-consuming resolution process.
- Customer expectations: There is a growing customer demand for self-service and a need to quickly and efficiently solve their problems.
- Unique deployments: Support engineers needed to meet each customer’s distinct needs while assisting on multiple cases simultaneously, using different product versions and deployment models.
- Information overload: Support engineers and customers spent too much time looking for relevant answers from sources across the knowledge base (including 300K+ product documents, internal knowledge base articles, blogs, previous customer cases, community posts, and more).
- Case fatigue: Support engineers spent significant time processing case summaries for escalations or transitioning cases from one engineer to another.
“If you’re trying to figure out what’s wrong with your phone, the first place you go is Google or, now, ChatGPT. The technical industry is no different. All customers, including ours, want instant self-service. They want to search, find, fix, and move on.”
— Julie Rudd, GVP, Customer Support, Elastic
The solution
From support engineers to support heroes
The journey to build the Support Assistant started three years ago with a simple keyword-based search experience on the support portal. Customers and support engineers could search the support portal for links to relevant documentation and resources based on their queries.
This solution had drawbacks: keyword-based search cannot derive semantic meaning. So, we added more advanced vector search capabilities, enabling Elasticsearch to interpret the semantic intent of a user's query and deliver more relevant results. This is especially important in customer support, where customers may not always know the keywords relevant to their issue.
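A hybrid query along these lines can be sketched with the Elasticsearch Python client. The index and field names below are illustrative assumptions, not Elastic's actual support-portal schema; the helper builds a request body combining a keyword match with a kNN vector clause, which would then be passed to `Elasticsearch.search`:

```python
# Sketch of a hybrid keyword + vector search request body (Elasticsearch 8.x).
# Index and field names ("support-knowledge", "content", "content_embedding")
# are hypothetical, not Elastic's actual schema.

def build_hybrid_query(query_text, query_vector, k=5):
    """Combine lexical matching with kNN vector retrieval in one request."""
    return {
        # Lexical clause: matches documents containing the query terms.
        "query": {"match": {"content": query_text}},
        # Semantic clause: retrieves nearest neighbors of the query embedding.
        "knn": {
            "field": "content_embedding",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 50,
        },
        "size": k,
    }

# In practice the body would be sent with the official client, e.g.:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("http://localhost:9200")
#   es.search(index="support-knowledge", **build_hybrid_query(text, vec))
body = build_hybrid_query("shard allocation failed", [0.1, 0.2, 0.3])
```

Running both clauses in one request lets lexical and semantic scores complement each other, which helps when a customer's wording does not match the documentation's terminology.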
For the most recent evolution, we built a generative AI experience using a retrieval augmented generation (RAG) architecture and OpenAI’s GPT-4o large language model (LLM). We landed on a RAG-based approach for several reasons:
- It’s easy to use with unstructured data and immediately incorporates real-time information into the knowledge base via the knowledge-centered service (KCS) methodology.
- It restricts access to information with role-based access control (RBAC) and document-level security.
- It requires less maintenance and effort than fine-tuning our own LLM.
We expanded our existing knowledge base to include additional context from disparate sources (CRM data, issue resolutions, defects, cases, white papers, educational content, etc.) to supply our LLM with accurate information and reduce hallucinations. After launching Support Assistant internally, we focused on relevancy tuning and testing to ensure accuracy before we released it to customers. In August 2024, the Support Assistant was launched within the customer support portal, providing customers with the same GenAI conversational experience.
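The RAG flow described above retrieves relevant knowledge passages and grounds the LLM's answer in them. A minimal sketch of the prompt-assembly step, with stand-in data (the production system retrieves from Elasticsearch, enforces RBAC before this step, and calls OpenAI's GPT-4o):

```python
# Minimal RAG prompt-assembly sketch. The passages and wording are
# stand-ins; the production system retrieves from Elasticsearch and
# sends the prompt to GPT-4o. Only passages the user is authorized to
# see (RBAC / document-level security) should reach this step.

def assemble_rag_prompt(question, passages):
    """Ground the model's answer in retrieved support-knowledge passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the support question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    "To resize a deployment, open Elastic Cloud and edit the deployment.",
    "Autoscaling adjusts capacity automatically based on usage policies.",
]
prompt = assemble_rag_prompt("How do I resize my deployment?", passages)
# `prompt` would then be sent to the LLM as a chat-completion request.
```

Constraining the model to the retrieved context, and instructing it to admit when the context is insufficient, is one of the ways a RAG design reduces hallucinations compared with relying on the model's parametric knowledge alone.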
We integrated Support Assistant into our AI-first support case user interface, Engineer View, to bring these capabilities into a single workflow. This streamlined access to Support Assistant and integrated additional case context for enhanced conversational intelligence.
Engineer View includes access to an AI knowledge drafter, which extracts information from customer queries, case conversations, and existing knowledge artifacts to draft new knowledge documentation to feed our system in real time.
To monitor the performance and availability of the Support Assistant, we used Elastic Observability for logging and application performance monitoring (APM) to ensure performance and accuracy and gain aggregate insights into the self-service topics.
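Instrumentation of this kind can be sketched with structured JSON logging that a shipper such as Elastic Agent or Filebeat could forward to Elasticsearch for aggregation; the field names below are illustrative, and the actual deployment uses Elastic APM agents for tracing:

```python
import json
import logging
import time

# Structured JSON logging sketch for monitoring assistant requests.
# Field names ("event.duration_ms", "assistant.topic") are illustrative;
# the real setup uses Elastic APM agents and Elastic Observability.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "@timestamp": self.formatTime(record),
            "log.level": record.levelname,
            "message": record.getMessage(),
        }
        # Merge any structured fields attached via `extra=`.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

logger = logging.getLogger("support_assistant")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

start = time.monotonic()
# ... handle an assistant request here ...
logger.info(
    "assistant request served",
    extra={"fields": {
        "event.duration_ms": round((time.monotonic() - start) * 1000, 1),
        "assistant.topic": "deployment-health",  # illustrative topic tag
    }},
)
```

Emitting one JSON document per request makes it straightforward to aggregate latency and topic distributions in Elasticsearch, which is the kind of aggregate self-service insight the section describes.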
“When we started with keyword search, generative AI wasn’t even an option. The advantage now is that using Elastic, you can start with semantic search, combine it with a generative AI model, and build a working solution in a few weeks. Elastic’s search capabilities, combined with AI advancements, allowed the team to see results and iterate faster on our self-service initiative.”
— Chris Blaisure, Senior Director, Field Technology, Elastic
Use cases
Providing users with real-time answers
The initial use case for Support Assistant was to deliver self-service experiences to engineers and customers. Moving a majority of customers to a self-service experience has allowed our support engineers to focus on high-priority, complex cases and strategic projects. However, this generative AI application has several other use cases and benefits.
Internal support experience
Support engineers use Support Assistant to find relevant information, sources, and answers for ongoing support cases, such as:
- Drafting initial replies: Streamlines the process of drafting initial responses to customers.
- Augmenting case summaries: Quickly builds case summaries for escalations or case transitions.
- AI knowledge documentation drafter: Efficiently creates and updates documentation to expand the support knowledge base via real-time updates.
- Onboarding and enablement: Provides quick and easy access to product and feature information for support and employee onboarding.
External customer experience
Customers use Support Assistant via the Elastic Support Hub for quick, self-service answers to product queries using natural language.
- Troubleshooting configurations: Provides personalized guidance for troubleshooting based on deployment or Elastic version configurations.
- Performance tuning and upgrades: Offers step-by-step guidance for optimizing performance and deploying upgrades based on specific deployment needs.
- Security and compliance: Provides immediate suggestions for securing an Elastic deployment.
- Custom use cases and integrations: Helps build custom applications by providing code snippets, integration examples, and relevant guidance for specific needs.
“The original intent was a self-service generative AI customer experience that would deflect the volume of easy support cases and allow my team to focus on more complex cases that required human intervention. But then we benefited from many secondary use cases, allowing our support team to scale as our customer base grows.”
— Julie Rudd, GVP, Customer Support, Elastic
The results
ROI within 4 months of launching
In Elastic’s recently completed 2025 fiscal year, demand for support services increased by nearly 9%. Simultaneously, assisted support demand decreased by over 7%, while Digital Support, including Support Assistant usage, rose significantly by 49%. This evolution has enabled our team to scale to meet the needs of our growing customer base by creating capacity within support for assisted interactions. Other benefits that are trending positively month over month (MoM) include:
- Increased case deflections and reduced case volume: Six months after the launch, 5% of customers who navigate to our Support Hub have started using the Support Assistant for support queries. This has resulted in a six times increase in hard case deflections (users who explicitly abandon creating a support ticket after finding answers via Support Assistant) and a 20 times increase in soft case deflections (users who implicitly abandon creating a support ticket or browsing knowledge articles after finding answers). This deflection has driven a 7% reduction in assisted support case volume and corresponding capacity gains within the support function.
- Rapid responses to customers: Customer experience is critical to maintaining satisfaction and retention. Since launching Support Assistant, we’ve seen a 23% improvement in mean time to first response (MTFR) to customers by reducing the time elapsed between support case submission and support engineer response.
- Fast payback period: The support team realized a return on investment of the Support Assistant within four months. This calculation considers the cost of running the Support Assistant, including hosting the LLM, and the labor costs associated with building it, and offsets these costs by the capacity created through case deflections.
- Increased capacity to take on complex cases: With the deflection of easy support cases, the team is spending more time on complex cases. In the first month since launch, we’ve seen an 11% reduction in first-contact close rate. Though this leads to an increased mean time to close, the support organization is investing its human capital in the most complex cases that matter.
- Improved customer adoption and satisfaction: The 7% reduction in case volume is directly attributed to customers adopting self-service support via Support Assistant. Support Assistant has seen significant customer engagement since its launch six months ago, with 30% of customers returning to the tool. This repeat usage demonstrates strong adoption and is growing steadily at 11% MoM. As more customers self-serve, support engineers will have more time to spend on complex cases, leading to increased quality of support engagements and improved customer satisfaction from the overall support experience.
- Enhanced knowledge-centered service (KCS): Since the launch of AI-knowledge drafter, we’ve seen a 130% increase in knowledge articles created and a 56% improvement in external knowledge engagement. As more complex cases arise, support engineers continue to build and add relevant resources and content, expanding our knowledge base. This allows support engineers and customers to access the latest information in real-time, speeding up time-to-resolution for both self-service and incoming cases.
- Support adoption and accelerated onboarding: We’ve achieved 100% support engineer penetration, with 1 in 2 engineers using it daily, reducing our reliance on senior engineers for onboarding and training, and minimizing our ramp time.
What’s next
Building on the success of Support Assistant, we’re excited to continue launching AI-driven capabilities, such as:
- Enriching support cases: To save time and create consistency, the team is currently working on enriching support cases with context. With additional metadata from the case content and other support source systems — including account insights, product, component, deployment type, root cause, solution codes, and more — we’re working to make it easier for support engineers to take on incoming cases with personalization.
- Intelligent routing: We’re currently building towards intelligent routing, so when support cases come in, additional metadata can be attached, such as component codes, to route the case to an expert engineer.
- Improving the intelligence of the Support Assistant: With more relevant context being fed to the LLM, customers can ask questions about their current and previous cases, helping the Support Assistant deliver more personalized responses.
- Integrating AutoOps: With AutoOps, Elastic Cloud customers can ask questions related to the health of their deployment. This will enable faster self-service resolution.
- Automating knowledge creation: Support engineers can automatically generate knowledge articles in a specific output template from relevant case context, use case notes, and Slack threads to make them instantly available for search.
- Providing multi-language support: Today, support engineers and customers can ask the Support Assistant to translate existing knowledge and answers into multiple languages. We are working to provide a more personalized, multilingual experience for our customers via the customer-facing Support Assistant.
“I’m already working with Chris [Senior Director, Field Technology] to augment further processes within the Support Assistant to improve the productivity of the support team. We want to let generative AI populate relevant information from source systems to save time and create better data consistency around the organization. This will not only allow my team to become more efficient but also create business insights for my day-to-day decision-making.”
— Julie Rudd, GVP, Customer Support, Elastic
This use case is based on our use of our own products and services. As such, certain typical costs, such as licensing fees, were not incurred. The results, savings, and fees presented are illustrative and for information only, and may not necessarily reflect the outcomes achievable by users under our standard commercial terms and applicable fees. While similar results may be possible, individual outcomes may vary significantly depending on numerous factors. No guarantees are made or implied.