How LLM Integration Works in SUVA

LLM integration can have a significant impact on your support operations and can greatly improve your end users' service experience. But how do SUVA and an LLM achieve that together? Let's take you through how the integration works.

PREREQUISITES. Before you proceed, make sure that you have configured the LLM Integration in SUVA. You can start here: Integrations: Integrate SUVA Agents with Large Language Models.

Understanding the Impact of LLM Integration with SUVA

Once you have integrated a SUVA chatbot agent with an LLM (OpenAI or Claude), the million-dollar question arises: how does the integration help in chatbot conversations?

LLM integration manages all user interactions in chatbot conversations. For the time being, however, we recommend using LLM Integration only with the SearchUnify Adapter; otherwise, the chatbot may not return relevant results. A profanity layer is configured at the backend to maintain decorum in chatbot interactions.
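
To illustrate the idea, here is a minimal Python sketch of how a profanity layer can screen a message before it reaches the LLM. The word list and function names below are hypothetical placeholders, not SUVA's actual backend implementation.

# Hypothetical deny list; SUVA's actual profanity layer is configured at the backend.
BLOCKED_TERMS = {"examplebadword", "anotherbadword"}

def call_llm(message: str) -> str:
    # Stub standing in for the real LLM request (OpenAI or Claude).
    return f"LLM response to: {message}"

def passes_profanity_filter(message: str) -> bool:
    # Normalize tokens and check them against the deny list.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return words.isdisjoint(BLOCKED_TERMS)

def handle_user_message(message: str) -> str:
    # Screen the message first; only clean messages are sent to the LLM.
    if not passes_profanity_filter(message):
        return "Please keep the conversation respectful."
    return call_llm(message)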

Hint. Something big and fascinating related to the LLM integration is coming in upcoming releases. Stay tuned!

How predictable or creative the chatbot's responses will be depends on the Temperature Setting. Read the document to learn more about how the Temperature Setting works.
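
For context, most LLM APIs expose temperature as a request parameter. The sketch below shows the effect using the OpenAI Python SDK; the model name and prompt are illustrative, and in SUVA you control this value through the Temperature Setting rather than in code.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    temperature=0.2,      # low value: focused, predictable answers
    # a higher value such as 1.0 would make the wording more varied and creative
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)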

In rare cases where the LLM service is down, SUVA automatically switches to the traditional way of generating responses, based on the intents and utterances on which it has been trained. If no intent is detected, it uses the SearchUnify adapter to fetch fallback responses. In the rare case that no relevant fallback response is found either, it sends the "I am sorry, no results found" message. The full chain is sketched below.
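
In pseudocode, that fallback chain might look like the following Python sketch. Every function here is a hypothetical stand-in for a SUVA-internal step; only the order of the fallbacks reflects the behavior described above.

from dataclasses import dataclass
from typing import Optional

class LLMServiceUnavailable(Exception):
    """Raised when the LLM endpoint cannot be reached."""

@dataclass
class Intent:
    name: str
    response: str

def call_llm(message: str) -> str:
    # Stub: in SUVA, this calls the configured LLM (OpenAI or Claude).
    raise LLMServiceUnavailable

def detect_intent(message: str) -> Optional[Intent]:
    # Stub: returns a trained intent if one matches the utterance, else None.
    return None

def search_fallback(message: str) -> list[str]:
    # Stub: queries the SearchUnify adapter for fallback responses.
    return []

def generate_reply(message: str) -> str:
    try:
        return call_llm(message)                # 1. LLM-generated response
    except LLMServiceUnavailable:
        pass
    intent = detect_intent(message)             # 2. Trained intents and utterances
    if intent is not None:
        return intent.response
    results = search_fallback(message)          # 3. SearchUnify adapter fallback
    if results:
        return results[0]
    return "I am sorry, no results found"       # 4. Final fallback message

print(generate_reply("How do I reset my password?"))  # -> "I am sorry, no results found"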
