ChatQnA Application
Chatbots are the most widely adopted use case for leveraging the powerful chat and reasoning capabilities of large language models (LLMs). The retrieval augmented generation (RAG) architecture is quickly becoming the industry standard for chatbot development. It combines the benefits of a knowledge base (via a vector store) and generative models to reduce hallucinations, maintain up-to-date information, and leverage domain-specific knowledge.
RAG bridges the knowledge gap by dynamically fetching relevant information from external sources, ensuring that generated responses remain factual and current. At the core of this architecture are vector databases, which are instrumental in enabling efficient, semantic retrieval of information. These databases store data as vectors, allowing RAG to swiftly access the most pertinent documents or data points based on semantic similarity.
ChatQnA is implemented on top of GenAIComps. The flow chart below shows the information flow between the microservices of this example:
```mermaid
---
config:
  flowchart:
    nodeSpacing: 100
    rankSpacing: 100
    curve: linear
  theme: base
  themeVariables:
    fontSize: 42px
---
flowchart LR
    %% Colors %%
    classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef orange fill:#FBAA60,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef orchid fill:#C26DBC,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef invisible fill:transparent,stroke:transparent;
    style ChatQnA-MegaService stroke:#000000

    %% Subgraphs %%
    subgraph ChatQnA-MegaService["ChatQnA-MegaService"]
        direction LR
        EM([Embedding <br>]):::blue
        RET([Retrieval <br>]):::blue
        RER([Rerank <br>]):::blue
        LLM([LLM <br>]):::blue
    end
    subgraph User Interface
        direction TB
        a([User Input Query]):::orchid
        Ingest([Ingest data]):::orchid
        UI([UI server<br>]):::orchid
    end
    subgraph ChatQnA GateWay
        direction LR
        invisible1[ ]:::invisible
        GW([ChatQnA GateWay<br>]):::orange
    end
    subgraph .
        X([OPEA Microservice]):::blue
        Y{{Open Source Service}}
        Z([OPEA Gateway]):::orange
        Z1([UI]):::orchid
    end

    TEI_RER{{Reranking service<br>}}
    TEI_EM{{Embedding service <br>}}
    VDB{{Vector DB<br><br>}}
    R_RET{{Retriever service <br>}}
    DP([Data Preparation<br>]):::blue
    LLM_gen{{LLM Service <br>}}

    %% Data Preparation flow
    %% Ingest data flow
    direction LR
    Ingest[Ingest data] -->|a| UI
    UI -->|b| DP
    DP <-.->|c| TEI_EM

    %% Questions interaction
    direction LR
    a[User Input Query] -->|1| UI
    UI -->|2| GW
    GW <==>|3| ChatQnA-MegaService
    EM ==>|4| RET
    RET ==>|5| RER
    RER ==>|6| LLM

    %% Embedding service flow
    direction TB
    EM <-.->|3'| TEI_EM
    RET <-.->|4'| R_RET
    RER <-.->|5'| TEI_RER
    LLM <-.->|6'| LLM_gen

    direction TB
    %% Vector DB interaction
    R_RET <-.->|d|VDB
    DP <-.->|d|VDB
```
This ChatQnA use case performs RAG using LangChain, Redis VectorDB, and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, particularly LLMs. Visit Habana AI products for more details.
The table below describes, for each microservice component in the ChatQnA architecture, the default open source project, hardware, port, and endpoint.

Gaudi default compose.yaml:
| MicroService | Open Source Project | HW | Port | Endpoint |
|---|---|---|---|---|
| Embedding | Langchain | Xeon | 6000 | /v1/embeddings |
| Retriever | Langchain, Redis | Xeon | 7000 | /v1/retrieval |
| Reranking | Langchain, TEI | Gaudi | 8000 | /v1/reranking |
| LLM | Langchain, TGI | Gaudi | 9000 | /v1/chat/completions |
| Dataprep | Redis, Langchain | Xeon | 6007 | /v1/dataprep |
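As a quick sanity check, each microservice can be queried directly at its listed port. The snippet below is a minimal sketch against the embedding microservice, assuming the default port from the table above and a simple `{"text": ...}` request schema; the exact payload may vary between releases.

```bash
# Query the embedding microservice directly (port 6000 per the table above).
# The {"text": ...} payload is an assumption; check your release's API schema.
curl http://${host_ip}:6000/v1/embeddings \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "What is Deep Learning?"}'
```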
Deploy ChatQnA Service
The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
Two types of ChatQnA pipeline are supported now: ChatQnA with and without Rerank. The ChatQnA without Rerank pipeline (comprising Embedding, Retrieval, and LLM) is offered for Xeon customers who cannot run the rerank service on HPU yet still require high performance and accuracy.
Prepare Docker Image
Currently we support two ways of deploying ChatQnA services with Docker Compose:

- Using the Docker image on Docker Hub:

  ```bash
  docker pull opea/chatqna:latest
  ```

  Two types of UI are supported now; choose the one you like and pull the corresponding Docker image.

  If you choose the conversational UI, follow the instructions and modify the compose.yaml:

  ```bash
  docker pull opea/chatqna-ui:latest
  # or
  docker pull opea/chatqna-conversation-ui:latest
  ```

- Using the Docker images built from source: Guide (see also the build sketch after this list).

  Note: The opea/chatqna-without-rerank:latest Docker image has not been published yet, so users need to build it from source.
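For a rough sketch of the build-from-source path, the commands below assume the GenAIExamples repository layout with a Dockerfile under ChatQnA/; treat the paths as assumptions and follow the Guide for the authoritative steps.

```bash
# Clone the examples repository and build the ChatQnA MegaService image locally.
# The Dockerfile location is an assumption -- verify it against your checkout.
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/ChatQnA
docker build -t opea/chatqna:latest -f Dockerfile .
```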
Required Models
By default, the embedding, reranking, and LLM models are set as listed below:
| Service | Model |
|---|---|
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the xxx_MODEL_ID in docker_compose/xxx/set_env.sh to suit your needs.
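For example, here is a minimal sketch of overriding the defaults before deployment; the variable names (EMBEDDING_MODEL_ID, RERANK_MODEL_ID, LLM_MODEL_ID) are assumptions, so confirm them against the set_env.sh for your hardware.

```bash
# Hypothetical overrides -- confirm the variable names in set_env.sh.
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="your-org/your-model-id"  # replace with the LLM you want served
```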
For customers with proxy issues, models from ModelScope are also supported in ChatQnA. Refer to this readme for details.
Setup Environment Variable
To set up environment variables for deploying ChatQnA services, follow these steps:

- Set the required environment variables (see the sketch after this list for deriving host_ip automatically):

  ```bash
  # Example: host_ip="192.168.1.1"
  export host_ip="External_Public_IP"
  # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
  export no_proxy="Your_No_Proxy"
  export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
  ```

- If you are in a proxy environment, also set the proxy-related environment variables:

  ```bash
  export http_proxy="Your_HTTP_Proxy"
  export https_proxy="Your_HTTPs_Proxy"
  ```

- Set up the other environment variables:

  Note that you should run only the one command below that matches your hardware; otherwise the port numbers may be set incorrectly.

  ```bash
  # on Gaudi
  source ./docker_compose/intel/hpu/gaudi/set_env.sh
  # on Xeon
  source ./docker_compose/intel/cpu/xeon/set_env.sh
  # on Nvidia GPU
  source ./docker_compose/nvidia/gpu/set_env.sh
  ```
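If the machine's first reported address is the one you want to expose, a common shortcut (an assumption worth verifying on multi-homed hosts) is:

```bash
# Derive the host IP automatically instead of hard-coding it.
# Assumes the first address from `hostname -I` is the externally reachable one.
export host_ip=$(hostname -I | awk '{print $1}')
export no_proxy="localhost,127.0.0.1,${host_ip}"
```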
Deploy ChatQnA on Gaudi
Find the corresponding compose.yaml.
```bash
cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi/
docker compose up -d
```
Notice: Currently only the Habana Driver 1.16.x is supported for Gaudi.
Refer to the Gaudi Guide to build docker images from source.
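After bringing the stack up on any platform, you can confirm that the containers started cleanly; the container name below comes from the default compose.yaml and may differ in your setup.

```bash
# All ChatQnA services should show as "Up".
docker compose ps
# Tail a specific service's startup logs (name per your compose.yaml):
docker logs tgi-service 2>&1 | tail -n 20
```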
Deploy ChatQnA on Xeon
Find the corresponding compose.yaml.
```bash
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/
docker compose up -d
```
Refer to the Xeon Guide for more instructions on building docker images from source.
Deploy ChatQnA on NVIDIA GPU
```bash
cd GenAIExamples/ChatQnA/docker_compose/nvidia/gpu/
docker compose up -d
```
Refer to the NVIDIA GPU Guide for more instructions on building docker images from source.
Deploy ChatQnA into Kubernetes on Xeon & Gaudi with GMC
Refer to the Kubernetes Guide for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi with GMC.
Deploy ChatQnA into Kubernetes on Xeon & Gaudi without GMC
Refer to the Kubernetes Guide for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.
Deploy ChatQnA into Kubernetes using Helm Chart
Install Helm (version >= 3.15) first. Refer to the Helm Installation Guide for more information.
Refer to the ChatQnA helm chart for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
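As a minimal sketch of a Helm-based install; the chart location and value names below are assumptions, so check the ChatQnA helm chart README for the published chart coordinates and supported values.

```bash
# Hypothetical Helm install -- verify the chart source and values first.
helm install chatqna oci://ghcr.io/opea-project/charts/chatqna \
  --set global.HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```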
Deploy ChatQnA on AI PC
Refer to the AI PC Guide for instructions on deploying ChatQnA on AI PC.
Deploy ChatQnA on Red Hat OpenShift Container Platform (RHOCP)
Refer to the Intel Technology Enabling for OpenShift readme for instructions on deploying a ChatQnA prototype on RHOCP with Red Hat OpenShift AI (RHOAI).
Consume ChatQnA Service
Before consuming the ChatQnA service, make sure the TGI/vLLM service is ready; it can take up to 2 minutes to start.

```bash
# TGI example
docker logs tgi-service | grep Connected
```

Do not consume the ChatQnA service until you see a TGI log line like the one below:

```
2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected
```
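A small convenience sketch that polls the log until the service reports ready, assuming the container is named tgi-service as in the default compose.yaml:

```bash
# Block until TGI logs "Connected", polling every 10 seconds.
until docker logs tgi-service 2>&1 | grep -q Connected; do
  echo "waiting for tgi-service to become ready..."
  sleep 10
done
echo "tgi-service is ready"
```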
There are two ways of consuming the ChatQnA service:

- Use a cURL command in the terminal:

  ```bash
  curl http://${host_ip}:8888/v1/chatqna \
    -H "Content-Type: application/json" \
    -d '{
      "messages": "What is the revenue of Nike in 2023?"
    }'
  ```

- Access via the frontend:

  To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally.

  If you choose the conversational UI, use this URL instead: http://{host_ip}:5174
Troubleshooting
- If you get errors like "Access Denied", validate the microservices first. A simple example:

  ```bash
  http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'
  ```

- (Docker only) If all microservices work well, check port ${host_ip}:8888; the port may already be allocated by another user. You can modify the port mapping in compose.yaml (see the port-check sketch after this list).

- (Docker only) If you get errors like "The container name is in use", change the container name in compose.yaml.
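To check whether a port is already taken before editing compose.yaml, something like the following works on most Linux hosts:

```bash
# Show which process, if any, is listening on port 8888.
sudo ss -tlnp | grep ':8888' || echo "port 8888 is free"
```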
Monitoring OPEA Service with Prometheus and Grafana dashboard
OPEA microservice deployments can easily be monitored through Grafana dashboards in conjunction with Prometheus data collection. Follow the README to set up Prometheus and Grafana servers and import dashboards to monitor the OPEA services.
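For a quick check that metrics are being exposed before wiring up Prometheus, you can curl a service's /metrics endpoint directly; TGI and TEI serve Prometheus metrics on their HTTP port, but the host port below is a placeholder, so substitute the port your compose.yaml maps for the service.

```bash
# Spot-check Prometheus metrics from the TGI service.
# Replace 9009 with the host port mapped to tgi-service in your compose.yaml.
curl -s http://${host_ip}:9009/metrics | head -n 20
```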


