LLM Integration Usage

LLM integration can significantly improve your support operations and greatly enhance your end users' service experience. But how does SUVA's LLM integration do that? Let's take you through how the integration works.

Prerequisites

Before you proceed, make sure that you have configured the LLM Integration in SUVA. Refer to the article Setting Up Large Language Model (LLM) Integration in SUVA to learn how.

Understanding the Impact of LLM Integration with SUVA

Once you have integrated a SUVA chatbot agent with an LLM (OpenAI or Hugging Face), the million-dollar question arises: how does the integration help in chatbot conversations?

LLM integration manages all user interactions in chatbot conversations. However, for the time being, we recommend using LLM Integration only with the SearchUnify Adapter; otherwise, the chatbot may not return relevant results. A profanity layer is configured at the backend to maintain decorum in chatbot interactions.
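The exact backend profanity layer is not documented here, but conceptually it works like a word-level filter applied to messages before they are shown. Below is a minimal, hypothetical sketch; the blocklist and masking behavior are assumptions for illustration, not SUVA's actual implementation.

```python
import re

# Hypothetical blocklist; the real backend list is not documented here.
BLOCKLIST = {"badword", "anotherbadword"}

def mask_profanity(text: str) -> str:
    """Replace blocklisted words with asterisks so that chatbot
    interactions stay within decorum."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[A-Za-z']+", mask, text)
```

For example, `mask_profanity("this badword here")` masks only the blocklisted token and leaves the rest of the message untouched.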

Hint. Something really big and fascinating related to the LLM integration is coming in the upcoming releases. Stay tuned!

How conservative or creative the chatbot responses are depends on the Temperature setting. Read the doc to learn more about how the Temperature setting works.
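To build intuition for what temperature does, here is a small self-contained sketch (not SUVA code) of how LLMs typically apply temperature: the model's scores (logits) are divided by the temperature before sampling, so a low value sharpens the distribution toward the top answer (conservative) and a high value flattens it (creative).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by temperature, then normalize into probabilities.
    Low temperature -> the top token dominates (conservative output).
    High temperature -> probabilities even out (creative output)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
conservative = softmax_with_temperature(logits, 0.2)
creative = softmax_with_temperature(logits, 1.5)
```

With the same logits, the probability of the highest-scoring token is much larger in `conservative` than in `creative`, which is exactly the conservative-versus-creative trade-off the Temperature setting controls.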

In rare cases where the LLM service is down, SUVA automatically switches to the traditional way of generating responses, based on the intents and utterances on which it has been trained. If no intent is detected, it uses the SearchUnify adapter to fetch fallback responses. In the rarest cases, where no relevant fallback response is found, it sends the "I am sorry, no results found" message.
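The fallback order described above can be sketched as a simple cascading pipeline. This is a conceptual illustration, not SUVA's actual code; the `llm`, `intent_engine`, and `su_adapter` callables are hypothetical stand-ins for the real services.

```python
def generate_response(query, llm, intent_engine, su_adapter):
    """Cascading response pipeline mirroring the fallback order:
    1. LLM, 2. trained intents, 3. SearchUnify adapter,
    4. static apology message."""
    try:
        return llm(query)
    except Exception:
        pass  # LLM service down: fall back to intent matching
    intent_reply = intent_engine(query)  # returns None if no intent detected
    if intent_reply is not None:
        return intent_reply
    adapter_reply = su_adapter(query)  # returns None if nothing relevant found
    if adapter_reply is not None:
        return adapter_reply
    return "I am sorry, no results found"
```

Each stage only runs when every stage before it has failed, so end users get the best available answer and always get some response.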

Last updated: Monday, October 9, 2023

Or, send us your review at help-feedback@searchunify.com