Resolve Cases Quickly with Interactive Einstein Search Answers


Background

We live in a digital world where many of us depend on customer service to be at its best. Whether it's calling about an order issue or needing help to activate an account, we are always reaching out to customer service to save the day. We are no longer waiting endless hours on the phone for customer service agents, as digital customer service is becoming a new standard.

Together, Einstein Search and Salesforce AI Research created Einstein Search Answers. We are transforming the way users search in Salesforce by delivering actionable search results. Agents can take immediate action because Einstein Search Answers extracts the most relevant information from a knowledge article.

Some agents are required to record how a case was resolved and what information was provided to the customer; for them, the ability to copy answers improves internal efficiency. Other agents need to copy an answer and email it to the customer. Einstein Search Answers makes both easy.

What's New

Previously, agents opened the record page and then copied the URL and section of an article. Now, the returned answers are just a few lines so that agents can copy the answer and its internal link to their clipboard and then share it without leaving the page.

In search, enter a question such as “How do I add an email signature?” or a phrase such as “login issue with Salesforce iOS.” Once the Answer card is displayed, you can copy the answer or source link and share it, or go to the knowledge article the answer was extracted from for more context. This capability surfaces answers directly for knowledge article queries: the AI extracts specific answers to service agents' queries from knowledge articles.

So, how is this any different from previous search methods? Previously, our search methods were keyword based. You could type a word or phrase into the search bar, press Enter, and see that word or phrase highlighted in the search results. This often left agents scrolling through lengthy articles to find the right answer while the customer waited.

Our customer research indicates that

  • Users want to ask their questions naturally instead of typing keywords
  • Users don’t know the domain terms used in knowledge articles
  • Users should get a direct, relevant answer instead of clicking into long knowledge articles
  • The answer should be actionable for the user
  • Knowledge managers need to understand the value of and opportunities for their content

Our new search method provides users with question-answering capability. On top of keyword search, you now receive specific answers to your phrase or question, enhancing your search process. The AI not only understands the user’s intent but also identifies the most relevant answer, extracts it, and displays it to the user.

Figure 1: Einstein Search Answers

Let’s take a booking site as an example. The Einstein Search Answers system takes all the knowledge content from the org, generates all the potential candidate answers, and compares a query such as “add one more person to my reservation” to passages deep within your knowledge content. It goes beyond the title of the content, which may be as simple as “Reservation Policies,” and compares semantically related sections such as “canceling my reservation,” “incorrect credit card used for reservation,” “adding a child seat for a reservation,” and “modifying my booking” to surface the passage that directly addresses the user’s query as an easily digestible snippet.

Deep Dive

How does this work? Let’s break down the AI. While traditional methods rely on lexical search, which cares about exact term/keyword match, semantic search requires a deeper understanding of both queries and documents to better address the user’s intent.
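To make the contrast concrete, here is a toy illustration (not the production system) of why pure lexical matching falls short: a passage that shares no exact terms with the query scores zero, even when it is the semantically correct answer, while an irrelevant passage that happens to share a keyword scores higher. All names and example strings below are invented for illustration.

```python
# Toy lexical scorer: fraction of query terms appearing verbatim in a passage.
# This is what exact term/keyword matching rewards -- and where it fails.

def lexical_score(query: str, passage: str) -> float:
    """Return the fraction of query terms found verbatim in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms)

query = "login issue"
passages = [
    "Troubleshooting a sign-in problem on mobile",  # relevant, zero term overlap
    "Known issue with report export",               # irrelevant, shares "issue"
]
scores = [lexical_score(query, p) for p in passages]
# The irrelevant passage outscores the relevant one on exact matches,
# which is exactly the gap semantic search is meant to close.
```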

Figure 2: Demonstration of the dense embedding process, where the dense embedding model consumes a raw text as input, tokenizes it, and embeds it into high-dimensional vector space. We use a transformer-based 6-layer derivative of the BERT model tailored for better sentence representations for semantic similarity.
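The pipeline shape in Figure 2 (raw text → tokens → fixed-size dense vector) can be sketched as follows. Note the embedder here is a stand-in hashing function chosen only so the example runs anywhere; the production system uses the 6-layer BERT derivative described above, and a hashing embedder does not capture semantics.

```python
import hashlib
import math

# Schematic of the embedding pipeline shape only: tokenize, map to a
# fixed-size dense vector, L2-normalize. The hashing "embedder" is a
# stand-in for the transformer model and carries no semantic meaning.

DIM = 768  # BERT-base embedding width, as referenced in Figure 3

def embed(text: str, dim: int = DIM) -> list[float]:
    vec = [0.0] * dim
    for token in text.lower().split():              # naive whitespace tokenizer
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0                         # hash each token to a bucket
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]                  # unit-length vector

v = embed("How do I add an email signature?")
```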

Figure 3: 3D visualization of embedding vectors after applying Principal Component Analysis (PCA) for dimensionality reduction from the original BERT embedding dimension of 768 to 3. We denote the example query with a red triangle, the desired knowledge article with a green circle, some distractor articles with blue circles, and the rest of the articles (and/or queries) with grey circles. The key takeaway is that the desired knowledge article ends up closest to the query, so the method can gracefully handle distractor articles (blue circles) that might otherwise be ranked highly by lexical search alone.
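The reduction used for a plot like Figure 3 can be sketched with plain NumPy: center the embeddings and project them onto their top three principal components via SVD. The mock random embeddings below stand in for real model outputs.

```python
import numpy as np

# PCA sketch for Figure 3: project 768-dimensional embeddings down to 3
# coordinates for plotting. Mock random vectors stand in for real embeddings.

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 768))          # 50 mock embedding vectors

centered = embeddings - embeddings.mean(axis=0)  # PCA requires centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
points_3d = centered @ vt[:3].T                  # (50, 3) coordinates to plot
```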

Our approach to searching for answers differs from traditional search methods in two major ways:

  1. We search for more compact pieces of text by dividing the long knowledge articles into smaller semantically coherent units (i.e., passages). This way, we aim to produce more precise answers for queries with user intent.
  2. More importantly, we leverage the best of both worlds in how exactly the search is performed over the resulting passages. We enhance the lexical search with deep semantic understanding by leveraging pre-trained language models, which have constructed richer contextual meaning of text by reading lots of documents. In particular, we use BERT-like pre-trained transformer-based architecture to embed queries and passages into the same latent space as demonstrated in Figure 2, where semantically similar pieces of text are closer together.
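Step 1 above can be sketched as follows. The post does not specify the segmentation algorithm, so this uses paragraph boundaries with a word-count cap as a plausible stand-in for producing "semantically coherent" passages; the function name and the cap are invented for illustration.

```python
# Hedged sketch of passage segmentation: split a long article into smaller
# chunks at paragraph boundaries, capping each chunk's word count. The real
# system's segmentation strategy is not disclosed in the post.

def split_into_passages(article: str, max_words: int = 100) -> list[str]:
    passages, current, count = [], [], 0
    for para in article.split("\n\n"):           # paragraph boundaries
        words = len(para.split())
        if current and count + words > max_words:
            passages.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        passages.append("\n\n".join(current))
    return passages

# A mock "Reservation Policies" article with four long sections.
article = "Reservation Policies.\n\n" + "\n\n".join(
    f"Section {i}: " + "word " * 60 for i in range(4)
)
chunks = split_into_passages(article)
```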

In Figure 3 above, we demonstrate an example 3D visualization of the query “How do I add an email signature?” (denoted by a triangle) and some candidate passage titles (circles). Our method embeds the answer passage (green circle) very close to the query (red triangle) in the vector space, showing its ability to capture sentence- and passage-level semantic similarity in the latent space. Leveraging this property, we define a dense passage retriever based on the cosine similarity of the resulting dense vectors of queries and candidate answer passages. Finally, we introduce a fusion reranker to combine the results from the sparse/lexical and dense retrievers and determine the final ranking of the passages to be displayed.
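The two retrieval signals can be combined as in this minimal sketch: rank candidates by cosine similarity of dense vectors, then fuse the dense and lexical scores. The three-dimensional mock vectors, the lexical scores, and the simple weighted-sum fusion (with invented weight `alpha`) are illustrative stand-ins; the post does not disclose the actual reranker.

```python
import math

# Sketch of dense retrieval plus score fusion. Mock low-dimensional vectors
# stand in for real embeddings; the weighted sum is an assumed fusion rule.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query_vec = [0.9, 0.1, 0.0]
passage_vecs = {"p1": [0.8, 0.2, 0.1], "p2": [0.1, 0.9, 0.3]}
lexical_scores = {"p1": 0.2, "p2": 0.5}      # e.g. rescaled keyword scores

def fused_score(pid: str, alpha: float = 0.7) -> float:
    """Blend dense (cosine) and lexical scores with an assumed weight."""
    dense = cosine(query_vec, passage_vecs[pid])
    return alpha * dense + (1 - alpha) * lexical_scores[pid]

ranking = sorted(passage_vecs, key=fused_score, reverse=True)
```

Here the dense signal overrides the lexical one: "p1" wins the fused ranking despite its lower lexical score, mirroring how the fusion reranker lets semantic similarity rescue answers that keyword matching would rank low.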

What does this mean for customer service agents, their customers, and their businesses? With Einstein Search Answers, agents find answers and respond to customers faster, reducing case resolution time, improving customer satisfaction, and making service centers more efficient.

Acknowledgments

  • AI Research:
    • Kazuma Hashimoto
    • Semih Yavuz
    • Soujanya Lanka
    • Vera Serdiukova
    • Wenhao Liu
    • Ye Liu
    • Yingbo Zhou
  • Einstein Search:
    • Abhishek Sharma
    • Darya Brazouskaya
    • Georgios Balikas
    • Ghislain Brun
    • Guillaume Kempf
    • Mario Rodriguez
    • Marjan Hosseinia
    • Matthieu Landos
    • Médéric Carriat
    • Mukund Ramachadran
    • Paulo Gomes
    • Qianqian Shi
    • Saqib Mughal

Explore More

Salesforce AI Research invites you to dive deeper into the concepts discussed in this blog post. Connect with us on social media and our website to get regular updates on this and other research projects.

  • To join our pilot, please reach out to us at tryeinsteinsearch@salesforce.com.
  • Project site: https://www.salesforceairesearch.com/projects/einstein-search-answers
  • Salesforce AI Research Website: www.salesforceairesearch.com
  • Follow us on Twitter: @SalesforceResearch, @Salesforce

About the Author

  • Denise Pérez has been leading marketing for Salesforce AI Research since 2021, supporting the Chief Scientist and extended research team.