Chatbots powered by large language models (LLMs) have become increasingly popular and sophisticated, thanks to advances in artificial intelligence. You can create a basic chatbot by calling an LLM provider's API directly (for example, OpenAI's or Google's), but frameworks like LangChain, Haystack, and LlamaIndex can simplify the process and add powerful features. In this post, we'll explore how to create a basic chatbot with each approach and highlight the benefits of using these frameworks.
To illustrate the differences and benefits of each approach, I have written code snippets for each case: using the LLM provider's API directly (OpenAI) and using the frameworks LangChain, Haystack, and LlamaIndex. I've kept the code as similar as possible across all examples to provide a clear comparison and to show how each framework simplifies the process of creating a basic chatbot. Each chatbot keeps chat history and can answer follow-up questions.
Using Direct APIs from LLM Providers
Creating a chatbot using the APIs directly from LLM providers like OpenAI involves sending a request to the API with your prompt and processing the response. Here’s a simple example using OpenAI’s API in Python:
import os

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "sk-proj-..."

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant. Answer all questions to the best of your ability.",
    }
]

while True:
    text_input = input("user: ")
    if text_input == "exit":
        break
    user_message = {
        "role": "user",
        "content": text_input,
    }
    messages.append(user_message)
    response = client.chat.completions.create(
        messages=messages,
        model="gpt-3.5-turbo",
    )
    print(f"assistant: {response.choices[0].message.content}")
    messages.append(response.choices[0].message)
Here is the output of running this OpenAI example chatbot:
❯ python example_chatbot_openai.py
user: What is the capital of Canada?
assistant: The capital of Canada is Ottawa.
user: and that of Mexico?
assistant: The capital of Mexico is Mexico City.
user: what about USA?
assistant: The capital of the United States of America is Washington, D.C.
user: exit
This code snippet demonstrates how straightforward it is to interact with OpenAI’s API. However, as your application grows, managing state, integrating with other data sources, and optimizing performance can become complex.
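One concrete example of the state management you end up writing yourself is keeping the conversation from growing past the model's context window. Here is a minimal, hedged sketch of one way to do it (the `trim_history` helper and its parameters are my own, not part of the OpenAI SDK) that keeps the system prompt and only the most recent messages:

```python
def trim_history(messages, max_recent=20):
    """Keep the system prompt plus the most recent messages.

    A naive way to bound prompt size as the conversation grows; a real
    application would count tokens (e.g. with tiktoken) rather than
    counting messages.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_recent:]


# Simulate a long conversation: one system prompt plus 30 user turns.
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(30):
    history.append({"role": "user", "content": f"question {i}"})

trimmed = trim_history(history)
print(len(trimmed))  # 21: the system prompt plus the last 20 messages
```

You would call `trim_history(messages)` just before each `client.chat.completions.create(...)` call; frameworks typically ship memory abstractions that handle this for you.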
Using LangChain
LangChain is a framework designed to simplify the development and deployment of LLM applications. It provides building blocks, components, and third-party integrations to streamline the entire lifecycle of your application.
Example Chatbot with LangChain
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import ChatMessageHistory

os.environ["OPENAI_API_KEY"] = "sk-proj-..."

chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.2)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant. Answer all questions to the best of your ability.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chat_history = ChatMessageHistory()
chain = prompt | chat

while True:
    text_input = input("user: ")
    if text_input == "exit":
        break
    chat_history.add_user_message(text_input)
    response = chain.invoke({"messages": chat_history.messages})
    print(f"assistant: {response.content}")
    chat_history.add_ai_message(response)
Here is the output of running this LangChain example chatbot:
❯ python example_chatbot_langchain.py
user: What is the capital of Canada?
assistant: The capital of Canada is Ottawa.
user: and that of Mexico?
assistant: The capital of Mexico is Mexico City.
user: what about USA?
assistant: The capital of the United States of America is Washington, D.C.
user: exit
LangChain makes it easier to build and maintain complex applications by providing a structured way to develop, monitor, and deploy your chatbots. For example, you can use LangSmith for monitoring and LangGraph Cloud for deployment.
Using Haystack
Haystack is an open-source framework that supports building production-ready LLM applications and retrieval-augmented generative pipelines.
Example Chatbot with Haystack
import os

from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator

os.environ["OPENAI_API_KEY"] = "sk-proj-..."

messages = [
    ChatMessage.from_system(
        "You are a helpful assistant. Answer all questions to the best of your ability."
    )
]
chat_generator = OpenAIChatGenerator(model="gpt-3.5-turbo")

while True:
    text_input = input("user: ")
    if text_input == "exit":
        break
    user_msg = ChatMessage.from_user(text_input)
    messages.append(user_msg)
    response = chat_generator.run(messages=messages)
    print(f"assistant: {response['replies'][0].content}")
    ai_msg = ChatMessage.from_assistant(response["replies"][0].content)
    messages.append(ai_msg)
Here is the output of running this Haystack example chatbot:
❯ python example_chatbot_haystack.py
user: What is the capital of Canada?
assistant: The capital of Canada is Ottawa.
user: and that of Mexico?
assistant: The capital of Mexico is Mexico City.
user: what about USA?
assistant: The capital of the United States of America is Washington, D.C.
user: exit
Haystack’s modular and intuitive design allows you to quickly try out the latest AI models and build robust, scalable applications that can handle large document collections.
Using LlamaIndex
LlamaIndex allows you to build context-augmented LLM applications with ease. It offers data connectors, indexes, and engines to facilitate natural language interactions with your data.
Example Chatbot with LlamaIndex
import os

from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

os.environ["OPENAI_API_KEY"] = "sk-proj-..."

llm = OpenAI(model="gpt-3.5-turbo")
messages = [
    ChatMessage(
        role="system", content="You are a helpful assistant named LlamaIndexBot"
    ),
]

while True:
    text_input = input("user: ")
    if text_input == "exit":
        break
    user_msg = ChatMessage(role="user", content=text_input)
    messages.append(user_msg)
    response = llm.chat(messages=messages)
    print(f"{response}")
    messages.append(response.message)
Here is the output of running this LlamaIndex example chatbot:
❯ python example_chatbot_llamaindex.py
user: What is the capital of Canada?
assistant: The capital of Canada is Ottawa.
user: and that of Mexico?
assistant: The capital of Mexico is Mexico City.
user: what about USA?
assistant: The capital of the United States of America is Washington, D.C.
user: exit
LlamaIndex provides a more flexible and performant way to structure and access your data, allowing you to build conversational interfaces that can handle complex interactions.
Final Thoughts
While you can create a basic chatbot by directly using APIs from LLM providers, frameworks like LangChain, LlamaIndex, and Haystack offer many advantages. They provide tools and integrations that simplify development and make your applications more maintainable. Whether you're building a simple chatbot or a complex conversational interface, these frameworks can help you achieve your goals more efficiently. Some people prefer to use LLM providers' APIs directly and view these frameworks as unnecessary bloat. However, the choice depends on the type of application you are developing and whether you need the flexibility to quickly switch between LLMs and vector databases. Try them yourself to see which one you prefer.
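To make the "switch between LLMs" point concrete, here is a framework-free sketch of the kind of indirection these frameworks provide. The backend functions below are deliberately fake stand-ins (not real SDK calls), just to show that once every provider hides behind the same interface, swapping one for another is a one-line change:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

# Stand-in backends; in a real application each would wrap a provider
# SDK (OpenAI, Anthropic, a local model, ...).
def echo_backend(messages: List[Message]) -> str:
    return "echo: " + messages[-1]["content"]

def shout_backend(messages: List[Message]) -> str:
    return messages[-1]["content"].upper()

BACKENDS: Dict[str, Callable[[List[Message]], str]] = {
    "echo": echo_backend,
    "shout": shout_backend,
}

def ask(backend: str, messages: List[Message]) -> str:
    # Switching providers is a single dictionary lookup; frameworks
    # give you this indirection (plus retries, streaming, tracing)
    # without you having to build it yourself.
    return BACKENDS[backend](messages)

history = [{"role": "user", "content": "hello"}]
print(ask("echo", history))   # echo: hello
print(ask("shout", history))  # HELLO
```

Writing and maintaining this plumbing yourself is exactly the work the frameworks absorb, which is why the "bloat vs. flexibility" trade-off depends on how many providers and components your application needs to juggle.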
Personally, I appreciate Haystack's clean design, but LangChain's rapid development and extensive integrations have made it the most popular agent framework.