Build a Streamlit Chatbot with LLM Models: Quick Start

Ever wondered how Siri, Alexa, or your customer service chatbot works? Well, you're about to get a peek behind the curtain. In this article, we take a quick look at the world of chatbots, those handy little AI-powered tools that are revolutionizing digital communication. They're everywhere, from customer service to interactive experiences, and they're changing the way we communicate and access information.

But here's the kicker: we're not just talking about any chatbot. We're focusing on building one with Streamlit, an open-source app framework that's a hit among machine learning and data science enthusiasts. And that's not all: we're also going to explore how to integrate it with LangChain and various large language models (LLMs). So buckle up, because we're about to create a privacy-conscious chatbot that's not just efficient but also respects user privacy.

What is Streamlit?

Streamlit is a fast, easy, and fun open-source tool for building web applications. It's designed to help machine learning engineers and data scientists build interactive web applications around their projects without the need for any web development knowledge. The simplicity and speed of Streamlit make it an excellent choice for building a chatbot UI.

What is a Chatbot?

A chatbot is a software application designed to conduct online conversations via text or text-to-speech, in lieu of direct contact with a live human agent. Designed to convincingly simulate the way a human would behave as a conversational partner, chatbot systems typically require continuous tuning and testing, and many production chatbots remain unable to converse convincingly or pass the Turing test.

Building a Chatbot with Streamlit

Setting Up the Environment

Before diving into the actual process of building our chatbot, we first need to set up our development environment with the necessary libraries and tools. This ensures our code can execute successfully and our chatbot can function as intended. The requirements.txt file contains a list of libraries and tools required for this project. Here's what it includes:

  • streamlit: This library helps us to create interactive web apps for machine learning and data science projects.
  • streamlit_chat: This Streamlit component is used for creating the chatbot user interface.
  • langchain: This is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
  • sentence_transformers: This library allows us to use transformer models like BERT, RoBERTa, etc., for generating semantic representations of text (i.e., embeddings), which we'll use for our document indexing.
  • openai: This is the official OpenAI library that allows us to use their language models, like GPT-3.5-turbo, for generating human-like text.
  • unstructured and unstructured[local-inference]: These are used for document processing and managing unstructured data.
  • pinecone-client: This is the client for Pinecone, a vector database service that enables us to perform similarity search on vector data.
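
Putting the list above together, a minimal requirements.txt for this project might look like the following (you may want to pin specific versions for reproducibility):

```text
streamlit
streamlit_chat
langchain
sentence_transformers
openai
unstructured
unstructured[local-inference]
pinecone-client
```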

To install all these libraries, you can run the following command in your terminal:

pip install -r requirements.txt

This command tells pip (Python's package installer) to install the libraries mentioned in the requirements.txt file.

Document Indexing

The next step in our journey to build the chatbot involves preparing and indexing the documents that our chatbot will utilize to answer queries. For this, we use the indexing.py script.

Loading documents from a directory with LangChain

The first step in the indexing.py script involves loading the documents from a directory. We use the DirectoryLoader class provided by LangChain to achieve this. This class accepts a directory as input and loads all the documents present in it.

from langchain.document_loaders import DirectoryLoader

directory = '/content/data'

def load_docs(directory):
    # Load every document found in the given directory
    loader = DirectoryLoader(directory)
    documents = loader.load()
    return documents

documents = load_docs(directory)
len(documents)

Splitting documents

After loading the documents, the script proceeds to split these documents into smaller chunks. The size of the chunks and the overlap between these chunks can be defined by the user. This is done to ensure that the size of the documents is manageable and that no relevant information is missed out due to the splitting. The RecursiveCharacterTextSplitter class from LangChain is used for this purpose.

from langchain.text_splitter import RecursiveCharacterTextSplitter

def split_docs(documents, chunk_size=500, chunk_overlap=20):
    # Split each document into overlapping chunks of roughly chunk_size characters
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    docs = text_splitter.split_documents(documents)
    return docs

docs = split_docs(documents)
print(len(docs))
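
To build intuition for what chunk_size and chunk_overlap actually do, here is a simplified, stdlib-only splitter. The real RecursiveCharacterTextSplitter is smarter (it prefers to split on paragraph and sentence boundaries first), but the underlying sliding-window idea is the same:

```python
def naive_split(text, chunk_size=500, chunk_overlap=20):
    """Slide a window of chunk_size characters over the text,
    stepping by chunk_size - chunk_overlap so neighbors share text."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = naive_split("a" * 1000, chunk_size=500, chunk_overlap=20)
print(len(chunks))  # 3 chunks: characters 0-500, 480-980, and 960-1000
```

The overlap means the last 20 characters of one chunk reappear at the start of the next, so a sentence cut at a chunk boundary is still seen whole in at least one chunk.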

Creating embeddings

Once the documents are split, we need to convert these chunks of text into a format our AI model can understand. This is done by creating embeddings of the text with the SentenceTransformerEmbeddings class provided by LangChain.

from langchain.embeddings import SentenceTransformerEmbeddings
 
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

Storing embeddings in Pinecone

After the embeddings are created, they need to be stored somewhere they can be easily accessed and searched. Pinecone, a vector database service, is a good fit for this task. Using LangChain's Pinecone vector store wrapper, the code looks as follows:

import pinecone
from langchain.vectorstores import Pinecone

# Connect to your Pinecone project (find these values in the Pinecone console)
pinecone.init(api_key='your-pinecone-api-key', environment='your-environment')

# Embed each chunk and upsert it into the named index
index = Pinecone.from_documents(docs, embeddings, index_name='your-index-name')

This stores the embeddings in Pinecone along with the corresponding text chunks. Now, whenever a user asks a question, the chatbot can search this index for the most similar chunks and use them to answer.
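
Under the hood, "most similar" means nearest in embedding space. Here is a stdlib-only sketch of cosine-similarity search over a few toy vectors; Pinecone does the same thing, but at scale and with an approximate index (the chunks and three-dimensional vectors here are made up purely for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy index: each chunk of text is paired with its (made-up) embedding
index = [
    ("Streamlit builds web apps in pure Python", [0.9, 0.1, 0.0]),
    ("Pinecone stores and searches vectors",     [0.1, 0.9, 0.2]),
    ("Chunk overlap avoids cutting sentences",   [0.0, 0.2, 0.9]),
]

def search(query_vector, index, top_k=1):
    """Return the top_k chunks whose embeddings are closest to the query."""
    scored = sorted(index, key=lambda item: cosine_similarity(query_vector, item[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

print(search([0.2, 0.95, 0.1], index))  # ['Pinecone stores and searches vectors']
```

In the real app the query vector comes from the same all-MiniLM-L6-v2 model used at indexing time, so queries and chunks live in the same embedding space.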

Creating the Chatbot Interface with Streamlit

With our documents indexed and ready to be searched, we can now focus on building the chatbot interface. Streamlit provides a simple and intuitive way to create interactive web applications, and it's perfect for our chatbot UI.

Streamlit Chat Component

The Streamlit chat component, streamlit_chat, provides a chat-app-like interface, so a chatbot deployed on Streamlit gets a polished UI. To use this component, you need to install it separately using pip:

pip install streamlit-chat

After installing, you can import it in your Streamlit app:

import streamlit as st
from streamlit_chat import message

# Keep the conversation history across Streamlit reruns
if 'messages' not in st.session_state:
    st.session_state['messages'] = []

This sets up the chat state in your Streamlit app. Messages are rendered with the message function; is_user=False displays a message from the bot:

message("Hello, how can I help you today?", is_user=False)

Integrating the Chatbot with LangChain

As mentioned earlier, LangChain is a framework for developing applications powered by language models. To integrate our chatbot with it, we need to modify the load_chain function in main.py.

from langchain.chains import ConversationChain
from langchain.llms import OpenAI

def load_chain():
    # ConversationChain keeps the running conversation as context for the LLM
    llm = OpenAI(openai_api_key='your-openai-api-key', temperature=0)
    chain = ConversationChain(llm=llm)
    return chain

chain = load_chain()

This code creates a conversation chain backed by an OpenAI model using your API key. You can call chain.run(user_input) to generate a response to each user query.
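
Putting the pieces together, the request flow of the finished app is: embed the user's question, retrieve the most similar chunks from Pinecone, and pass both to the chain as context. Here is a dependency-free sketch of that orchestration; embed, retrieve, and generate are stand-ins for the real SentenceTransformer, Pinecone, and LangChain calls, and the prompt format is just one reasonable choice:

```python
def answer(question, embed, retrieve, generate, top_k=3):
    """Retrieval-augmented answering: ground the LLM in retrieved chunks."""
    query_vector = embed(question)           # SentenceTransformer in the real app
    chunks = retrieve(query_vector, top_k)   # Pinecone similarity search
    prompt = "Answer using this context:\n" + "\n".join(chunks) + f"\nQuestion: {question}"
    return generate(prompt)                  # LangChain / OpenAI call

# Wire in trivial stand-ins to show the flow end to end
reply = answer(
    "What is Streamlit?",
    embed=lambda text: [0.0],
    retrieve=lambda vec, k: ["Streamlit is an open-source app framework."],
    generate=lambda prompt: prompt.splitlines()[1],
)
print(reply)  # Streamlit is an open-source app framework.
```

Keeping the three stages behind plain function boundaries like this also makes each one easy to swap out or unit-test in isolation.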

Build a No-Code Data Visualization Stream App with PyGWalker

PyGWalker is another excellent no-code option. You can use this open-source Python library to easily build Streamlit data visualization apps.

PyGWalker + Streamlit Online Demo

PyGWalker is a Python library that helps you embed a Tableau-like UI into your own Streamlit app effortlessly.

Check out this video by Sven from Coding is Fun demonstrating the detailed steps for empowering your Streamlit app with this powerful data visualization library!

Special thanks to Sven and his great contribution to the PyGWalker community!

Additionally, you can also check out these resources:

Visualize Data in Streamlit with PyGWalker

Privacy-Conscious Features

In today's digital age, data breaches and privacy concerns are more prevalent than ever. It's crucial to ensure that our chatbot not only provides a seamless user experience but also respects user privacy. So, how do we achieve this?

Firstly, we can ensure that our chatbot doesn't persist any personal data from the user. The Streamlit chat component keeps conversation history only in the per-user session state, so by default nothing is written to disk or stored after the session ends. This means any conversation stays between the user and the bot, with no data being retained or used for other purposes.

But we can take it a step further by being careful about what the language model itself sees and keeps. Hosted models like OpenAI's GPT-3.5-turbo do not update their weights from the prompts you send: each API call is stateless and only sees the context you explicitly pass in, so a user's input is not memorized into the model and recalled in future, unrelated interactions. You should still avoid sending sensitive information in prompts and review your provider's data retention policy, but this statelessness is a meaningful privacy property for a chatbot.

Conclusion

Building a chatbot with Streamlit and LangChain is a straightforward process that involves setting up the environment, indexing documents, creating the chatbot interface, and ensuring privacy-conscious features. With the power of LLM models and open-source tools, you can create a chatbot that is not only efficient and user-friendly but also respects user privacy.

FAQs

  1. How do you make a chatbot on Streamlit? Making a chatbot on Streamlit involves several steps, including setting up your development environment, indexing your documents, creating the chatbot interface, and ensuring privacy-conscious features. This article provides a comprehensive guide on how to build a chatbot with Streamlit.

  2. What is Streamlit chat usage? Streamlit chat is used to create a user-friendly chatbot interface in your Streamlit app. It provides a chat-app-like interface, making your chatbot more interactive and engaging for users.

  3. What is the most advanced AI chatbot? The most advanced AI chatbots leverage sophisticated language models like GPT-3.5-turbo from OpenAI. These chatbots can generate human-like text, understand context, and provide relevant responses. They are also designed with privacy in mind, ensuring they don't retain sensitive user information.