[ChatQnA] Switch to vLLM as default llm backend on Xeon (#1403)

Switch from TGI to vLLM as the default LLM serving backend on Xeon for the ChatQnA example to improve performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
Author: Wang, Kai Lawrence
Date: 2025-01-17 20:48:19 +08:00
Committed by: GitHub
Parent: 00e9da9ced
Commit: 742cb6ddd3
13 changed files with 266 additions and 261 deletions

View File

@@ -1,6 +1,8 @@
# Build Mega Service of ChatQnA on Xeon # Build Mega Service of ChatQnA on Xeon
This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`. We will publish the Docker images to Docker Hub soon, it will simplify the deployment process for this service. This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`.
The default pipeline deploys with vLLM as the LLM serving component and leverages rerank component. It also provides options of not using rerank in the pipeline and using TGI backend for LLM microservice, please refer to [start-all-the-services-docker-containers](#start-all-the-services-docker-containers) section in this page. Besides, refer to [Build with Pinecone VectorDB](./README_pinecone.md) and [Build with Qdrant VectorDB](./README_qdrant.md) for other deployment variants.
Quick Start: Quick Start:
@@ -186,7 +188,7 @@ By default, the embedding, reranking and LLM models are set to a default value a
Change the `xxx_MODEL_ID` below for your needs. Change the `xxx_MODEL_ID` below for your needs.
For users in China who are unable to download models directly from Huggingface, you can use [ModelScope](https://www.modelscope.cn/models) or a Huggingface mirror to download models. TGI can load the models either online or offline as described below: For users in China who are unable to download models directly from Huggingface, you can use [ModelScope](https://www.modelscope.cn/models) or a Huggingface mirror to download models. The vLLM/TGI can load the models either online or offline as described below:
1. Online 1. Online
@@ -194,6 +196,9 @@ For users in China who are unable to download models directly from Huggingface,
export HF_TOKEN=${your_hf_token} export HF_TOKEN=${your_hf_token}
export HF_ENDPOINT="https://hf-mirror.com" export HF_ENDPOINT="https://hf-mirror.com"
model_name="Intel/neural-chat-7b-v3-3" model_name="Intel/neural-chat-7b-v3-3"
# Start vLLM LLM Service
docker run -p 8008:80 -v ./data:/data --name vllm-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 128g opea/vllm:latest --model $model_name --host 0.0.0.0 --port 80
# Start TGI LLM Service
docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id $model_name docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id $model_name
``` ```
@@ -203,12 +208,15 @@ For users in China who are unable to download models directly from Huggingface,
- Click on `Download this model` button, and choose one way to download the model to your local path `/path/to/model`. - Click on `Download this model` button, and choose one way to download the model to your local path `/path/to/model`.
- Run the following command to start TGI service. - Run the following command to start the LLM service.
```bash ```bash
export HF_TOKEN=${your_hf_token} export HF_TOKEN=${your_hf_token}
export model_path="/path/to/model" export model_path="/path/to/model"
docker run -p 8008:80 -v $model_path:/data --name tgi_service --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id /data # Start vLLM LLM Service
docker run -p 8008:80 -v $model_path:/data --name vllm-service --shm-size 128g opea/vllm:latest --model /data --host 0.0.0.0 --port 80
# Start TGI LLM Service
docker run -p 8008:80 -v $model_path:/data --name tgi-service --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id /data
``` ```
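As an alternative to the web `Download this model` button, the model can also be pulled from the mirror on the command line. A minimal sketch, assuming the `huggingface_hub` CLI is installed (the local path is illustrative):
```bash
pip install -U huggingface_hub
export HF_ENDPOINT="https://hf-mirror.com"
# Download the default model to a local directory for offline serving
huggingface-cli download Intel/neural-chat-7b-v3-3 --local-dir /path/to/model
```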
### Setup Environment Variables ### Setup Environment Variables
@@ -246,7 +254,7 @@ For users in China who are unable to download models directly from Huggingface,
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/ cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/
``` ```
If use TGI backend. If use vLLM as the LLM serving backend.
```bash ```bash
# Start ChatQnA with Rerank Pipeline # Start ChatQnA with Rerank Pipeline
@@ -255,10 +263,10 @@ docker compose -f compose.yaml up -d
docker compose -f compose_without_rerank.yaml up -d docker compose -f compose_without_rerank.yaml up -d
``` ```
If use vLLM backend. If use TGI as the LLM serving backend.
```bash ```bash
docker compose -f compose_vllm.yaml up -d docker compose -f compose_tgi.yaml up -d
``` ```
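After either compose command, a quick sanity check that the expected containers came up (a sketch; container names follow the compose files referenced above):
```bash
docker compose -f compose.yaml ps
# or inspect the LLM serving container directly
docker ps --filter "name=vllm-service" --format "table {{.Names}}\t{{.Status}}"
```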
### Validate Microservices ### Validate Microservices
@@ -305,37 +313,34 @@ For details on how to verify the correctness of the response, refer to [how-to-v
4. LLM backend Service 4. LLM backend Service
In first startup, this service will take more time to download the model files. After it's finished, the service will be ready. In the first startup, this service will take more time to download, load and warm up the model. After it's finished, the service will be ready.
Try the command below to check whether the LLM serving is ready. Try the command below to check whether the LLM serving is ready.
```bash ```bash
# vLLM service
docker logs vllm-service 2>&1 | grep complete
# If the service is ready, you will get the response like below.
INFO: Application startup complete.
```
```bash
# TGI service
docker logs tgi-service | grep Connected docker logs tgi-service | grep Connected
``` # If the service is ready, you will get the response like below.
If the service is ready, you will get the response like below.
```
2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected 2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected
``` ```
Then try the `cURL` command below to validate services. Then try the `cURL` command below to validate services.
```bash ```bash
# TGI service # either vLLM or TGI service
curl http://${host_ip}:9009/v1/chat/completions \ curl http://${host_ip}:9009/v1/chat/completions \
-X POST \ -X POST \
-d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \ -d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \
-H 'Content-Type: application/json' -H 'Content-Type: application/json'
``` ```
```bash
# vLLM Service
curl http://${host_ip}:9009/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}]}'
```
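Both vLLM and TGI expose the OpenAI-compatible `/v1/chat/completions` schema here, so the generated text can be extracted from the JSON response the same way for either backend. A sketch, assuming `jq` is installed:
```bash
curl -s http://${host_ip}:9009/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}' \
  | jq -r '.choices[0].message.content'
```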
5. MegaService 5. MegaService
```bash ```bash
@@ -362,7 +367,6 @@ Or run this command to get the file on a terminal.
```bash ```bash
wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf
``` ```
Upload: Upload:

View File

@@ -1,6 +1,8 @@
# Build Mega Service of ChatQnA on Xeon # Build Mega Service of ChatQnA on Xeon
This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`. We will publish the Docker images to Docker Hub soon, it will simplify the deployment process for this service. This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`.
The default pipeline deploys with vLLM as the LLM serving component and leverages rerank component.
Quick Start: Quick Start:
@@ -189,7 +191,7 @@ By default, the embedding, reranking and LLM models are set to a default value a
Change the `xxx_MODEL_ID` below for your needs. Change the `xxx_MODEL_ID` below for your needs.
For users in China who are unable to download models directly from Huggingface, you can use [ModelScope](https://www.modelscope.cn/models) or a Huggingface mirror to download models. TGI can load the models either online or offline as described below: For users in China who are unable to download models directly from Huggingface, you can use [ModelScope](https://www.modelscope.cn/models) or a Huggingface mirror to download models. The vLLM can load the models either online or offline as described below:
1. Online 1. Online
@@ -197,7 +199,7 @@ For users in China who are unable to download models directly from Huggingface,
export HF_TOKEN=${your_hf_token} export HF_TOKEN=${your_hf_token}
export HF_ENDPOINT="https://hf-mirror.com" export HF_ENDPOINT="https://hf-mirror.com"
model_name="Intel/neural-chat-7b-v3-3" model_name="Intel/neural-chat-7b-v3-3"
docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id $model_name docker run -p 8008:80 -v ./data:/data --name vllm-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 128g opea/vllm:latest --model $model_name --host 0.0.0.0 --port 80
``` ```
2. Offline 2. Offline
@@ -206,12 +208,12 @@ For users in China who are unable to download models directly from Huggingface,
- Click on `Download this model` button, and choose one way to download the model to your local path `/path/to/model`. - Click on `Download this model` button, and choose one way to download the model to your local path `/path/to/model`.
- Run the following command to start TGI service. - Run the following command to start the LLM service.
```bash ```bash
export HF_TOKEN=${your_hf_token} export HF_TOKEN=${your_hf_token}
export model_path="/path/to/model" export model_path="/path/to/model"
docker run -p 8008:80 -v $model_path:/data --name tgi_service --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu --model-id /data docker run -p 8008:80 -v $model_path:/data --name vllm-service --shm-size 128g opea/vllm:latest --model /data --host 0.0.0.0 --port 80
``` ```
### Setup Environment Variables ### Setup Environment Variables
@@ -252,7 +254,7 @@ For users in China who are unable to download models directly from Huggingface,
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/ cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon/
``` ```
If use TGI backend. If use vLLM backend.
```bash ```bash
# Start ChatQnA with Rerank Pipeline # Start ChatQnA with Rerank Pipeline
@@ -303,24 +305,23 @@ For details on how to verify the correctness of the response, refer to [how-to-v
4. LLM backend Service 4. LLM backend Service
In first startup, this service will take more time to download the model files. After it's finished, the service will be ready. In the first startup, this service will take more time to download, load and warm up the model. After it's finished, the service will be ready.
Try the command below to check whether the LLM serving is ready. Try the command below to check whether the LLM serving is ready.
```bash ```bash
docker logs tgi-service | grep Connected docker logs vllm-service 2>&1 | grep complete
``` ```
If the service is ready, you will get the response like below. If the service is ready, you will get the response like below.
```text ```text
2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected INFO: Application startup complete.
``` ```
Then try the `cURL` command below to validate services. Then try the `cURL` command below to validate services.
```bash ```bash
# TGI service
curl http://${host_ip}:9009/v1/chat/completions \ curl http://${host_ip}:9009/v1/chat/completions \
-X POST \ -X POST \
-d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \ -d '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \

View File

@@ -1,6 +1,8 @@
# Build Mega Service of ChatQnA (with Qdrant) on Xeon # Build Mega Service of ChatQnA (with Qdrant) on Xeon
This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`. We will publish the Docker images to Docker Hub soon, it will simplify the deployment process for this service. This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `embedding`, `retriever`, `rerank`, and `llm`.
The default pipeline deploys with vLLM as the LLM serving component and leverages rerank component.
## 🚀 Apply Xeon Server on AWS ## 🚀 Apply Xeon Server on AWS
@@ -44,7 +46,7 @@ reranking
========= =========
Port 6046 - Open to 0.0.0.0/0 Port 6046 - Open to 0.0.0.0/0
tgi-service vllm-service
=========== ===========
Port 6042 - Open to 0.0.0.0/0 Port 6042 - Open to 0.0.0.0/0
@@ -170,7 +172,7 @@ export your_hf_api_token="Your_Huggingface_API_Token"
**Append the value of the public IP address to the no_proxy list if you are in a proxy environment** **Append the value of the public IP address to the no_proxy list if you are in a proxy environment**
``` ```
export your_no_proxy=${your_no_proxy},"External_Public_IP",chatqna-xeon-ui-server,chatqna-xeon-backend-server,dataprep-qdrant-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service export your_no_proxy=${your_no_proxy},"External_Public_IP",chatqna-xeon-ui-server,chatqna-xeon-backend-server,dataprep-qdrant-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm-service
``` ```
```bash ```bash
@@ -233,23 +235,23 @@ For details on how to verify the correctness of the response, refer to [how-to-v
-H 'Content-Type: application/json' -H 'Content-Type: application/json'
``` ```
4. TGI Service 4. LLM Backend Service
In first startup, this service will take more time to download the model files. After it's finished, the service will be ready. In the first startup, this service will take more time to download, load and warm up the model. After it's finished, the service will be ready.
Try the command below to check whether the TGI service is ready. Try the command below to check whether the LLM service is ready.
```bash ```bash
docker logs ${CONTAINER_ID} | grep Connected docker logs vllm-service 2>&1 | grep complete
``` ```
If the service is ready, you will get the response like below. If the service is ready, you will get the response like below.
``` ```text
2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected INFO: Application startup complete.
``` ```
Then try the `cURL` command below to validate TGI. Then try the `cURL` command below to validate vLLM service.
```bash ```bash
curl http://${host_ip}:6042/v1/chat/completions \ curl http://${host_ip}:6042/v1/chat/completions \

View File

@@ -74,32 +74,31 @@ services:
HF_HUB_DISABLE_PROGRESS_BARS: 1 HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0 HF_HUB_ENABLE_HF_TRANSFER: 0
command: --model-id ${RERANK_MODEL_ID} --auto-truncate command: --model-id ${RERANK_MODEL_ID} --auto-truncate
tgi-service: vllm-service:
image: ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
container_name: tgi-service container_name: vllm-service
ports: ports:
- "9009:80" - "9009:80"
volumes: volumes:
- "./data:/data" - "./data:/data"
shm_size: 1g shm_size: 128g
environment: environment:
no_proxy: ${no_proxy} no_proxy: ${no_proxy}
http_proxy: ${http_proxy} http_proxy: ${http_proxy}
https_proxy: ${https_proxy} https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN} HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_HUB_DISABLE_PROGRESS_BARS: 1 LLM_MODEL_ID: ${LLM_MODEL_ID}
HF_HUB_ENABLE_HF_TRANSFER: 0 VLLM_TORCH_PROFILER_DIR: "/mnt"
command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0 command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80
chatqna-xeon-backend-server: chatqna-xeon-backend-server:
image: ${REGISTRY:-opea}/chatqna:${TAG:-latest} image: ${REGISTRY:-opea}/chatqna:${TAG:-latest}
container_name: chatqna-xeon-backend-server container_name: chatqna-xeon-backend-server
depends_on: depends_on:
- redis-vector-db - redis-vector-db
- tei-embedding-service - tei-embedding-service
- dataprep-redis-service
- retriever - retriever
- tei-reranking-service - tei-reranking-service
- tgi-service - vllm-service
ports: ports:
- "8888:8888" - "8888:8888"
environment: environment:
@@ -112,7 +111,7 @@ services:
- RETRIEVER_SERVICE_HOST_IP=retriever - RETRIEVER_SERVICE_HOST_IP=retriever
- RERANK_SERVER_HOST_IP=tei-reranking-service - RERANK_SERVER_HOST_IP=tei-reranking-service
- RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80} - RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80}
- LLM_SERVER_HOST_IP=tgi-service - LLM_SERVER_HOST_IP=vllm-service
- LLM_SERVER_PORT=${LLM_SERVER_PORT:-80} - LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}
- LLM_MODEL=${LLM_MODEL_ID} - LLM_MODEL=${LLM_MODEL_ID}
- LOGFLAG=${LOGFLAG} - LOGFLAG=${LOGFLAG}
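For readability, the resulting `vllm-service` definition in this compose file (reconstructed from the right-hand side of the hunk above) reads as follows:
```yaml
vllm-service:
  image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
  container_name: vllm-service
  ports:
    - "9009:80"
  volumes:
    - "./data:/data"
  shm_size: 128g
  environment:
    no_proxy: ${no_proxy}
    http_proxy: ${http_proxy}
    https_proxy: ${https_proxy}
    HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
    LLM_MODEL_ID: ${LLM_MODEL_ID}
    VLLM_TORCH_PROFILER_DIR: "/mnt"
  command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80
```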

View File

@@ -68,22 +68,22 @@ services:
HF_HUB_DISABLE_PROGRESS_BARS: 1 HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0 HF_HUB_ENABLE_HF_TRANSFER: 0
command: --model-id ${RERANK_MODEL_ID} --auto-truncate command: --model-id ${RERANK_MODEL_ID} --auto-truncate
tgi-service: vllm-service:
image: ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
container_name: tgi-service container_name: vllm-service
ports: ports:
- "9009:80" - "9009:80"
volumes: volumes:
- "./data:/data" - "./data:/data"
shm_size: 1g shm_size: 128g
environment: environment:
no_proxy: ${no_proxy} no_proxy: ${no_proxy}
http_proxy: ${http_proxy} http_proxy: ${http_proxy}
https_proxy: ${https_proxy} https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN} HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_HUB_DISABLE_PROGRESS_BARS: 1 LLM_MODEL_ID: ${LLM_MODEL_ID}
HF_HUB_ENABLE_HF_TRANSFER: 0 VLLM_TORCH_PROFILER_DIR: "/mnt"
command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0 command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80
chatqna-xeon-backend-server: chatqna-xeon-backend-server:
image: ${REGISTRY:-opea}/chatqna:${TAG:-latest} image: ${REGISTRY:-opea}/chatqna:${TAG:-latest}
container_name: chatqna-xeon-backend-server container_name: chatqna-xeon-backend-server
@@ -92,7 +92,7 @@ services:
- dataprep-pinecone-service - dataprep-pinecone-service
- retriever - retriever
- tei-reranking-service - tei-reranking-service
- tgi-service - vllm-service
ports: ports:
- "8888:8888" - "8888:8888"
environment: environment:
@@ -105,7 +105,7 @@ services:
- RETRIEVER_SERVICE_HOST_IP=retriever - RETRIEVER_SERVICE_HOST_IP=retriever
- RERANK_SERVER_HOST_IP=tei-reranking-service - RERANK_SERVER_HOST_IP=tei-reranking-service
- RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80} - RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80}
- LLM_SERVER_HOST_IP=tgi-service - LLM_SERVER_HOST_IP=vllm-service
- LLM_SERVER_PORT=${LLM_SERVER_PORT:-80} - LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}
- LOGFLAG=${LOGFLAG} - LOGFLAG=${LOGFLAG}
- LLM_MODEL=${LLM_MODEL_ID} - LLM_MODEL=${LLM_MODEL_ID}

View File

@@ -53,7 +53,7 @@ services:
QDRANT_HOST: qdrant-vector-db QDRANT_HOST: qdrant-vector-db
QDRANT_PORT: 6333 QDRANT_PORT: 6333
INDEX_NAME: ${INDEX_NAME} INDEX_NAME: ${INDEX_NAME}
TEI_EMBEDDING_ENDPOINT: ${TEI_EMBEDDING_ENDPOINT} TEI_EMBEDDING_ENDPOINT: http://tei-embedding-service:80
LOGFLAG: ${LOGFLAG} LOGFLAG: ${LOGFLAG}
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_QDRANT" RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_QDRANT"
restart: unless-stopped restart: unless-stopped
@@ -73,22 +73,22 @@ services:
HF_HUB_DISABLE_PROGRESS_BARS: 1 HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0 HF_HUB_ENABLE_HF_TRANSFER: 0
command: --model-id ${RERANK_MODEL_ID} --auto-truncate command: --model-id ${RERANK_MODEL_ID} --auto-truncate
tgi-service: vllm-service:
image: ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
container_name: tgi-service container_name: vllm-service
ports: ports:
- "6042:80" - "6042:80"
volumes: volumes:
- "./data:/data" - "./data:/data"
shm_size: 1g shm_size: 128g
environment: environment:
no_proxy: ${no_proxy} no_proxy: ${no_proxy}
http_proxy: ${http_proxy} http_proxy: ${http_proxy}
https_proxy: ${https_proxy} https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN} HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_HUB_DISABLE_PROGRESS_BARS: 1 LLM_MODEL_ID: ${LLM_MODEL_ID}
HF_HUB_ENABLE_HF_TRANSFER: 0 VLLM_TORCH_PROFILER_DIR: "/mnt"
command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0 command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80
chatqna-xeon-backend-server: chatqna-xeon-backend-server:
image: ${REGISTRY:-opea}/chatqna:${TAG:-latest} image: ${REGISTRY:-opea}/chatqna:${TAG:-latest}
container_name: chatqna-xeon-backend-server container_name: chatqna-xeon-backend-server
@@ -97,7 +97,7 @@ services:
- tei-embedding-service - tei-embedding-service
- retriever - retriever
- tei-reranking-service - tei-reranking-service
- tgi-service - vllm-service
ports: ports:
- "8912:8888" - "8912:8888"
environment: environment:
@@ -111,7 +111,7 @@ services:
- RETRIEVER_SERVICE_PORT=${RETRIEVER_SERVICE_PORT:-7000} - RETRIEVER_SERVICE_PORT=${RETRIEVER_SERVICE_PORT:-7000}
- RERANK_SERVER_HOST_IP=tei-reranking-service - RERANK_SERVER_HOST_IP=tei-reranking-service
- RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80} - RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80}
- LLM_SERVER_HOST_IP=tgi-service - LLM_SERVER_HOST_IP=vllm-service
- LLM_SERVER_PORT=${LLM_SERVER_PORT:-80} - LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}
- LLM_MODEL=${LLM_MODEL_ID} - LLM_MODEL=${LLM_MODEL_ID}
- LOGFLAG=${LOGFLAG} - LOGFLAG=${LOGFLAG}

View File

@@ -74,31 +74,32 @@ services:
HF_HUB_DISABLE_PROGRESS_BARS: 1 HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0 HF_HUB_ENABLE_HF_TRANSFER: 0
command: --model-id ${RERANK_MODEL_ID} --auto-truncate command: --model-id ${RERANK_MODEL_ID} --auto-truncate
vllm-service: tgi-service:
image: ${REGISTRY:-opea}/vllm:${TAG:-latest} image: ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
container_name: vllm-service container_name: tgi-service
ports: ports:
- "9009:80" - "9009:80"
volumes: volumes:
- "./data:/data" - "./data:/data"
shm_size: 128g shm_size: 1g
environment: environment:
no_proxy: ${no_proxy} no_proxy: ${no_proxy}
http_proxy: ${http_proxy} http_proxy: ${http_proxy}
https_proxy: ${https_proxy} https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN} HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
LLM_MODEL_ID: ${LLM_MODEL_ID} HF_HUB_DISABLE_PROGRESS_BARS: 1
VLLM_TORCH_PROFILER_DIR: "/mnt" HF_HUB_ENABLE_HF_TRANSFER: 0
command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80 command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0
chatqna-xeon-backend-server: chatqna-xeon-backend-server:
image: ${REGISTRY:-opea}/chatqna:${TAG:-latest} image: ${REGISTRY:-opea}/chatqna:${TAG:-latest}
container_name: chatqna-xeon-backend-server container_name: chatqna-xeon-backend-server
depends_on: depends_on:
- redis-vector-db - redis-vector-db
- tei-embedding-service - tei-embedding-service
- dataprep-redis-service
- retriever - retriever
- tei-reranking-service - tei-reranking-service
- vllm-service - tgi-service
ports: ports:
- "8888:8888" - "8888:8888"
environment: environment:
@@ -111,7 +112,7 @@ services:
- RETRIEVER_SERVICE_HOST_IP=retriever - RETRIEVER_SERVICE_HOST_IP=retriever
- RERANK_SERVER_HOST_IP=tei-reranking-service - RERANK_SERVER_HOST_IP=tei-reranking-service
- RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80} - RERANK_SERVER_PORT=${RERANK_SERVER_PORT:-80}
- LLM_SERVER_HOST_IP=vllm-service - LLM_SERVER_HOST_IP=tgi-service
- LLM_SERVER_PORT=${LLM_SERVER_PORT:-80} - LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}
- LLM_MODEL=${LLM_MODEL_ID} - LLM_MODEL=${LLM_MODEL_ID}
- LOGFLAG=${LOGFLAG} - LOGFLAG=${LOGFLAG}

View File

@@ -58,22 +58,22 @@ services:
LOGFLAG: ${LOGFLAG} LOGFLAG: ${LOGFLAG}
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS" RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped restart: unless-stopped
tgi-service: vllm-service:
image: ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
container_name: tgi-service container_name: vllm-service
ports: ports:
- "9009:80" - "9009:80"
volumes: volumes:
- "./data:/data" - "./data:/data"
shm_size: 1g shm_size: 128g
environment: environment:
no_proxy: ${no_proxy} no_proxy: ${no_proxy}
http_proxy: ${http_proxy} http_proxy: ${http_proxy}
https_proxy: ${https_proxy} https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN} HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_HUB_DISABLE_PROGRESS_BARS: 1 LLM_MODEL_ID: ${LLM_MODEL_ID}
HF_HUB_ENABLE_HF_TRANSFER: 0 VLLM_TORCH_PROFILER_DIR: "/mnt"
command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0 command: --model $LLM_MODEL_ID --host 0.0.0.0 --port 80
chatqna-xeon-backend-server: chatqna-xeon-backend-server:
image: ${REGISTRY:-opea}/chatqna-without-rerank:${TAG:-latest} image: ${REGISTRY:-opea}/chatqna-without-rerank:${TAG:-latest}
container_name: chatqna-xeon-backend-server container_name: chatqna-xeon-backend-server
@@ -82,7 +82,7 @@ services:
- tei-embedding-service - tei-embedding-service
- dataprep-redis-service - dataprep-redis-service
- retriever - retriever
- tgi-service - vllm-service
ports: ports:
- "8888:8888" - "8888:8888"
environment: environment:
@@ -93,7 +93,7 @@ services:
- EMBEDDING_SERVER_HOST_IP=tei-embedding-service - EMBEDDING_SERVER_HOST_IP=tei-embedding-service
- EMBEDDING_SERVER_PORT=${EMBEDDING_SERVER_PORT:-80} - EMBEDDING_SERVER_PORT=${EMBEDDING_SERVER_PORT:-80}
- RETRIEVER_SERVICE_HOST_IP=retriever - RETRIEVER_SERVICE_HOST_IP=retriever
- LLM_SERVER_HOST_IP=tgi-service - LLM_SERVER_HOST_IP=vllm-service
- LLM_SERVER_PORT=${LLM_SERVER_PORT:-80} - LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}
- LLM_MODEL=${LLM_MODEL_ID} - LLM_MODEL=${LLM_MODEL_ID}
- LOGFLAG=${LOGFLAG} - LOGFLAG=${LOGFLAG}

View File

@@ -17,12 +17,12 @@ ip_address=$(hostname -I | awk '{print $1}')
function build_docker_images() { function build_docker_images() {
cd $WORKPATH/docker_image_build cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../ git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone https://github.com/vllm-project/vllm.git
echo "Build all the images with --no-cache, check docker_image_build.log for details..." echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna chatqna-ui dataprep-redis retriever nginx" service_list="chatqna chatqna-ui dataprep-redis retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker images && sleep 1s docker images && sleep 1s
@@ -33,21 +33,19 @@ function start_services() {
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5" export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base" export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct" export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export INDEX_NAME="rag-redis" export INDEX_NAME="rag-redis"
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
# Start Docker Containers # Start Docker Containers
sed -i "s|container_name: chatqna-xeon-backend-server|container_name: chatqna-xeon-backend-server\n volumes:\n - \"${WORKPATH}\/docker_image_build\/GenAIComps:\/home\/user\/GenAIComps\"|g" compose.yaml
docker compose -f compose.yaml up -d > ${LOG_PATH}/start_services_with_compose.log docker compose -f compose.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
n=0 n=0
until [[ "$n" -ge 500 ]]; do until [[ "$n" -ge 100 ]]; do
docker logs tgi-service > ${LOG_PATH}/tgi_service_start.log docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log 2>&1
if grep -q Connected ${LOG_PATH}/tgi_service_start.log; then if grep -q complete ${LOG_PATH}/vllm_service_start.log; then
break break
fi fi
sleep 1s sleep 5s
n=$((n+1)) n=$((n+1))
done done
} }
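The updated readiness loop on the new side of this hunk, shown here as plain bash for readability, polls the vLLM container log for the startup-complete message instead of TGI's `Connected` line:
```bash
n=0
until [[ "$n" -ge 100 ]]; do
    # vLLM logs "Application startup complete." to stderr, hence 2>&1
    docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log 2>&1
    if grep -q complete ${LOG_PATH}/vllm_service_start.log; then
        break
    fi
    sleep 5s
    n=$((n+1))
done
```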
@@ -59,38 +57,24 @@ function validate_service() {
local DOCKER_NAME="$4" local DOCKER_NAME="$4"
local INPUT_DATA="$5" local INPUT_DATA="$5"
if [[ $SERVICE_NAME == *"dataprep_upload_file"* ]]; then local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
cd $LOG_PATH if [ "$HTTP_STATUS" -eq 200 ]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F 'files=@./dataprep_file.txt' -H 'Content-Type: multipart/form-data' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_upload_link"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F 'link_list=["https://www.ces.tech/"]' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_get"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_del"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d '{"file_path": "all"}' -H 'Content-Type: application/json' "$URL")
else
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
fi
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
RESPONSE_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
# check response status
if [ "$HTTP_STATUS" -ne "200" ]; then
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
exit 1
else
echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..." echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
fi
# check response body
if [[ "$RESPONSE_BODY" != *"$EXPECTED_RESULT"* ]]; then
echo "[ $SERVICE_NAME ] Content does not match the expected result: $RESPONSE_BODY"
exit 1
else
echo "[ $SERVICE_NAME ] Content is as expected."
fi
local CONTENT=$(curl -s -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log)
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected."
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
exit 1
fi
else
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
exit 1
fi
sleep 1s sleep 1s
} }
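The simplified `validate_service` helper on the new side of this hunk, reconstructed here as plain bash for readability (the dataprep-specific branches move to the TGI variant test script further below):
```bash
function validate_service() {
    local URL="$1"
    local EXPECTED_RESULT="$2"
    local SERVICE_NAME="$3"
    local DOCKER_NAME="$4"
    local INPUT_DATA="$5"
    # First request: capture only the HTTP status code.
    local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
    if [ "$HTTP_STATUS" -eq 200 ]; then
        echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
        # Second request: capture the body and check it for the expected string.
        local CONTENT=$(curl -s -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log)
        if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
            echo "[ $SERVICE_NAME ] Content is as expected."
        else
            echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
            docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
            exit 1
        fi
    else
        echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
        docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
        exit 1
    fi
    sleep 1s
}
```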
@@ -100,48 +84,19 @@ function validate_microservices() {
# tei for embedding service # tei for embedding service
validate_service \ validate_service \
"${ip_address}:6006/embed" \ "${ip_address}:6006/embed" \
"[[" \ "\[\[" \
"tei-embedding" \ "tei-embedding" \
"tei-embedding-server" \ "tei-embedding-server" \
'{"inputs":"What is Deep Learning?"}' '{"inputs":"What is Deep Learning?"}'
sleep 1m # retrieval can't curl as expected, try to wait for more time sleep 1m # retrieval can't curl as expected, try to wait for more time
# test /v1/dataprep upload file
echo "Deep learning is a subset of machine learning that utilizes neural networks with multiple layers to analyze various levels of abstract data representations. It enables computers to identify patterns and make decisions with minimal human intervention by learning from large amounts of data." > $LOG_PATH/dataprep_file.txt
validate_service \
"http://${ip_address}:6007/v1/dataprep" \
"Data preparation succeeded" \
"dataprep_upload_file" \
"dataprep-redis-server"
# test /v1/dataprep upload link
validate_service \
"http://${ip_address}:6007/v1/dataprep" \
"Data preparation succeeded" \
"dataprep_upload_link" \
"dataprep-redis-server"
# test /v1/dataprep/get_file
validate_service \
"http://${ip_address}:6007/v1/dataprep/get_file" \
'{"name":' \
"dataprep_get" \
"dataprep-redis-server"
# test /v1/dataprep/delete_file
validate_service \
"http://${ip_address}:6007/v1/dataprep/delete_file" \
'{"status":true}' \
"dataprep_del" \
"dataprep-redis-server"
# retrieval microservice # retrieval microservice
test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)") test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
validate_service \ validate_service \
"${ip_address}:7000/v1/retrieval" \ "${ip_address}:7000/v1/retrieval" \
"retrieved_docs" \ " " \
"retrieval-microservice" \ "retrieval" \
"retriever-redis-server" \ "retriever-redis-server" \
"{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}" "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}"
@@ -153,29 +108,27 @@ function validate_microservices() {
"tei-reranking-server" \ "tei-reranking-server" \
'{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}'
# tgi for llm service # vllm for llm service
validate_service \ validate_service \
"${ip_address}:9009/generate" \ "${ip_address}:9009/v1/chat/completions" \
"generated_text" \ "content" \
"tgi-llm" \ "vllm-llm" \
"tgi-service" \ "vllm-service" \
'{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
} }
function validate_megaservice() { function validate_megaservice() {
# Curl the Mega Service # Curl the Mega Service
validate_service \ validate_service \
"${ip_address}:8888/v1/chatqna" \ "${ip_address}:8888/v1/chatqna" \
"data: " \ "data" \
"chatqna-megaservice" \ "mega-chatqna" \
"chatqna-xeon-backend-server" \ "chatqna-xeon-backend-server" \
'{"messages": "What is the revenue of Nike in 2023?"}' '{"messages": "What is the revenue of Nike in 2023?"}'
} }
function validate_frontend() { function validate_frontend() {
echo "[ TEST INFO ]: --------- frontend test started ---------"
cd $WORKPATH/ui/svelte cd $WORKPATH/ui/svelte
local conda_env_name="OPEA_e2e" local conda_env_name="OPEA_e2e"
export PATH=${HOME}/miniforge3/bin/:$PATH export PATH=${HOME}/miniforge3/bin/:$PATH
@@ -184,8 +137,8 @@ function validate_frontend() {
else else
conda create -n ${conda_env_name} python=3.12 -y conda create -n ${conda_env_name} python=3.12 -y
fi fi
source activate ${conda_env_name} source activate ${conda_env_name}
echo "[ TEST INFO ]: --------- conda env activated ---------"
sed -i "s/localhost/$ip_address/g" playwright.config.ts sed -i "s/localhost/$ip_address/g" playwright.config.ts
@@ -206,7 +159,7 @@ function validate_frontend() {
function stop_docker() { function stop_docker() {
cd $WORKPATH/docker_compose/intel/cpu/xeon cd $WORKPATH/docker_compose/intel/cpu/xeon
docker compose stop && docker compose rm -f docker compose -f compose.yaml down
} }
function main() { function main() {
@@ -223,11 +176,8 @@ function main() {
python3 $WORKPATH/tests/chatqna_benchmark.py python3 $WORKPATH/tests/chatqna_benchmark.py
elif [ "${mode}" == "" ]; then elif [ "${mode}" == "" ]; then
validate_microservices validate_microservices
echo "==== microservices validated ===="
validate_megaservice validate_megaservice
echo "==== megaservice validated ====" # validate_frontend
validate_frontend
echo "==== frontend validated ===="
fi fi
stop_docker stop_docker

View File

@@ -17,12 +17,12 @@ ip_address=$(hostname -I | awk '{print $1}')
function build_docker_images() { function build_docker_images() {
cd $WORKPATH/docker_image_build cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../ git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone https://github.com/vllm-project/vllm.git
echo "Build all the images with --no-cache, check docker_image_build.log for details..." echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna chatqna-ui dataprep-pinecone retriever nginx" service_list="chatqna chatqna-ui dataprep-pinecone retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker images && sleep 1s docker images && sleep 1s
@@ -33,7 +33,7 @@ function start_services() {
export no_proxy=${no_proxy},${ip_address} export no_proxy=${no_proxy},${ip_address}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5" export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base" export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct" export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export PINECONE_API_KEY=${PINECONE_KEY_LANGCHAIN_TEST} export PINECONE_API_KEY=${PINECONE_KEY_LANGCHAIN_TEST}
export PINECONE_INDEX_NAME="langchain-test" export PINECONE_INDEX_NAME="langchain-test"
export INDEX_NAME="langchain-test" export INDEX_NAME="langchain-test"
@@ -44,12 +44,12 @@ function start_services() {
docker compose -f compose_pinecone.yaml up -d > ${LOG_PATH}/start_services_with_compose.log docker compose -f compose_pinecone.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
n=0 n=0
until [[ "$n" -ge 500 ]]; do until [[ "$n" -ge 100 ]]; do
docker logs tgi-service > ${LOG_PATH}/tgi_service_start.log docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log 2>&1
if grep -q Connected ${LOG_PATH}/tgi_service_start.log; then if grep -q complete ${LOG_PATH}/vllm_service_start.log; then
break break
fi fi
sleep 1s sleep 5s
n=$((n+1)) n=$((n+1))
done done
} }
@@ -146,15 +146,14 @@ function validate_microservices() {
'{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}'
# tgi for llm service # vllm for llm service
echo "Validating llm service" echo "Validating llm service"
validate_service \ validate_service \
"${ip_address}:9009/generate" \ "${ip_address}:9009/v1/chat/completions" \
"generated_text" \ "content" \
"tgi-llm" \ "vllm-llm" \
"tgi-service" \ "vllm-service" \
'{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
} }
function validate_megaservice() { function validate_megaservice() {

View File

@@ -17,9 +17,10 @@ ip_address=$(hostname -I | awk '{print $1}')
function build_docker_images() { function build_docker_images() {
cd $WORKPATH/docker_image_build cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../ git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone https://github.com/vllm-project/vllm.git
echo "Build all the images with --no-cache, check docker_image_build.log for details..." echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna chatqna-ui dataprep-qdrant retriever nginx" service_list="chatqna chatqna-ui dataprep-qdrant retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker images && sleep 1s docker images && sleep 1s
@@ -40,8 +41,8 @@ function start_services() {
docker compose -f compose_qdrant.yaml up -d > ${LOG_PATH}/start_services_with_compose.log docker compose -f compose_qdrant.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
n=0 n=0
until [[ "$n" -ge 100 ]]; do until [[ "$n" -ge 100 ]]; do
docker logs tgi-service > tgi_service_start.log docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log 2>&1
if grep -q Connected tgi_service_start.log; then if grep -q complete ${LOG_PATH}/vllm_service_start.log; then
break break
fi fi
sleep 5s sleep 5s
@@ -49,7 +50,7 @@ function start_services() {
done done
} }
function validate_services() { function validate_service() {
local URL="$1" local URL="$1"
local EXPECTED_RESULT="$2" local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3" local SERVICE_NAME="$3"
@@ -91,7 +92,7 @@ function validate_microservices() {
# Check if the microservices are running correctly. # Check if the microservices are running correctly.
# tei for embedding service # tei for embedding service
validate_services \ validate_service \
"${ip_address}:6040/embed" \ "${ip_address}:6040/embed" \
"[[" \ "[[" \
"tei-embedding" \ "tei-embedding" \
@@ -100,14 +101,14 @@ function validate_microservices() {
# test /v1/dataprep upload file # test /v1/dataprep upload file
echo "Deep learning is a subset of machine learning that utilizes neural networks with multiple layers to analyze various levels of abstract data representations. It enables computers to identify patterns and make decisions with minimal human intervention by learning from large amounts of data." > $LOG_PATH/dataprep_file.txt echo "Deep learning is a subset of machine learning that utilizes neural networks with multiple layers to analyze various levels of abstract data representations. It enables computers to identify patterns and make decisions with minimal human intervention by learning from large amounts of data." > $LOG_PATH/dataprep_file.txt
validate_services \ validate_service \
"${ip_address}:6043/v1/dataprep" \ "${ip_address}:6043/v1/dataprep" \
"Data preparation succeeded" \ "Data preparation succeeded" \
"dataprep_upload_file" \ "dataprep_upload_file" \
"dataprep-qdrant-server" "dataprep-qdrant-server"
# test upload link # test upload link
validate_services \ validate_service \
"${ip_address}:6043/v1/dataprep" \ "${ip_address}:6043/v1/dataprep" \
"Data preparation succeeded" \ "Data preparation succeeded" \
"dataprep_upload_link" \ "dataprep_upload_link" \
@@ -115,7 +116,7 @@ function validate_microservices() {
# retrieval microservice # retrieval microservice
test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)") test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
validate_services \ validate_service \
"${ip_address}:6045/v1/retrieval" \ "${ip_address}:6045/v1/retrieval" \
"retrieved_docs" \ "retrieved_docs" \
"retrieval" \ "retrieval" \
@@ -123,25 +124,25 @@ function validate_microservices() {
"{\"text\":\"What is Deep Learning?\",\"embedding\":${test_embedding}}" "{\"text\":\"What is Deep Learning?\",\"embedding\":${test_embedding}}"
# tei for rerank microservice # tei for rerank microservice
validate_services \ validate_service \
"${ip_address}:6041/rerank" \ "${ip_address}:6041/rerank" \
'{"index":1,"score":' \ '{"index":1,"score":' \
"tei-rerank" \ "tei-rerank" \
"tei-reranking-server" \ "tei-reranking-server" \
'{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}'
# tgi for llm service # vllm for llm service
validate_services \ validate_service \
"${ip_address}:6042/generate" \ "${ip_address}:6042/v1/chat/completions" \
"generated_text" \ "content" \
"tgi-llm" \ "vllm-llm" \
"tgi-service" \ "vllm-service" \
'{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
} }
function validate_megaservice() { function validate_megaservice() {
# Curl the Mega Service # Curl the Mega Service
validate_services \ validate_service \
"${ip_address}:8912/v1/chatqna" \ "${ip_address}:8912/v1/chatqna" \
"data: " \ "data: " \
"mega-chatqna" \ "mega-chatqna" \
@@ -174,7 +175,7 @@ function validate_frontend() {
function stop_docker() { function stop_docker() {
cd $WORKPATH/docker_compose/intel/cpu/xeon cd $WORKPATH/docker_compose/intel/cpu/xeon
docker compose -f compose_qdrant.yaml stop && docker compose -f compose_qdrant.yaml rm -f docker compose -f compose_qdrant.yaml down
} }
function main() { function main() {

View File

@@ -17,13 +17,12 @@ ip_address=$(hostname -I | awk '{print $1}')
function build_docker_images() { function build_docker_images() {
cd $WORKPATH/docker_image_build cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../ git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone https://github.com/vllm-project/vllm.git
echo "Build all the images with --no-cache, check docker_image_build.log for details..." echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna chatqna-ui dataprep-redis retriever vllm nginx" service_list="chatqna chatqna-ui dataprep-redis retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6 docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker images && sleep 1s docker images && sleep 1s
@@ -39,11 +38,13 @@ function start_services() {
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
# Start Docker Containers # Start Docker Containers
docker compose -f compose_vllm.yaml up -d > ${LOG_PATH}/start_services_with_compose.log sed -i "s|container_name: chatqna-xeon-backend-server|container_name: chatqna-xeon-backend-server\n volumes:\n - \"${WORKPATH}\/docker_image_build\/GenAIComps:\/home\/user\/GenAIComps\"|g" compose.yaml
docker compose -f compose_tgi.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
n=0 n=0
until [[ "$n" -ge 100 ]]; do until [[ "$n" -ge 100 ]]; do
docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log docker logs tgi-service > ${LOG_PATH}/tgi_service_start.log
if grep -q Connected ${LOG_PATH}/vllm_service_start.log; then if grep -q Connected ${LOG_PATH}/tgi_service_start.log; then
break break
fi fi
sleep 5s sleep 5s
@@ -51,31 +52,45 @@ function start_services() {
done done
} }
function validate_services() { function validate_service() {
local URL="$1" local URL="$1"
local EXPECTED_RESULT="$2" local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3" local SERVICE_NAME="$3"
local DOCKER_NAME="$4" local DOCKER_NAME="$4"
local INPUT_DATA="$5" local INPUT_DATA="$5"
local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL") if [[ $SERVICE_NAME == *"dataprep_upload_file"* ]]; then
if [ "$HTTP_STATUS" -eq 200 ]; then cd $LOG_PATH
echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..." HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F 'files=@./dataprep_file.txt' -H 'Content-Type: multipart/form-data' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_upload_link"* ]]; then
local CONTENT=$(curl -s -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log) HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F 'link_list=["https://www.ces.tech/"]' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_get"* ]]; then
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
echo "[ $SERVICE_NAME ] Content is as expected." elif [[ $SERVICE_NAME == *"dataprep_del"* ]]; then
else HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d '{"file_path": "all"}' -H 'Content-Type: application/json' "$URL")
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
exit 1
fi
else else
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS" HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
exit 1
fi fi
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
RESPONSE_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
# check response status
if [ "$HTTP_STATUS" -ne "200" ]; then
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
exit 1
else
echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
fi
# check response body
if [[ "$RESPONSE_BODY" != *"$EXPECTED_RESULT"* ]]; then
echo "[ $SERVICE_NAME ] Content does not match the expected result: $RESPONSE_BODY"
exit 1
else
echo "[ $SERVICE_NAME ] Content is as expected."
fi
sleep 1s sleep 1s
} }
@@ -83,53 +98,83 @@ function validate_microservices() {
# Check if the microservices are running correctly. # Check if the microservices are running correctly.
# tei for embedding service # tei for embedding service
validate_services \ validate_service \
"${ip_address}:6006/embed" \ "${ip_address}:6006/embed" \
"\[\[" \ "[[" \
"tei-embedding" \ "tei-embedding" \
"tei-embedding-server" \ "tei-embedding-server" \
'{"inputs":"What is Deep Learning?"}' '{"inputs":"What is Deep Learning?"}'
sleep 1m # retrieval can't curl as expected, try to wait for more time sleep 1m # retrieval can't curl as expected, try to wait for more time
# test /v1/dataprep upload file
echo "Deep learning is a subset of machine learning that utilizes neural networks with multiple layers to analyze various levels of abstract data representations. It enables computers to identify patterns and make decisions with minimal human intervention by learning from large amounts of data." > $LOG_PATH/dataprep_file.txt
validate_service \
"http://${ip_address}:6007/v1/dataprep" \
"Data preparation succeeded" \
"dataprep_upload_file" \
"dataprep-redis-server"
# test /v1/dataprep upload link
validate_service \
"http://${ip_address}:6007/v1/dataprep" \
"Data preparation succeeded" \
"dataprep_upload_link" \
"dataprep-redis-server"
# test /v1/dataprep/get_file
validate_service \
"http://${ip_address}:6007/v1/dataprep/get_file" \
'{"name":' \
"dataprep_get" \
"dataprep-redis-server"
# test /v1/dataprep/delete_file
validate_service \
"http://${ip_address}:6007/v1/dataprep/delete_file" \
'{"status":true}' \
"dataprep_del" \
"dataprep-redis-server"
# retrieval microservice # retrieval microservice
test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)") test_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
validate_services \ validate_service \
"${ip_address}:7000/v1/retrieval" \ "${ip_address}:7000/v1/retrieval" \
" " \ "retrieved_docs" \
"retrieval" \ "retrieval-microservice" \
"retriever-redis-server" \ "retriever-redis-server" \
"{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}" "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}"
# tei for rerank microservice # tei for rerank microservice
validate_services \ validate_service \
"${ip_address}:8808/rerank" \ "${ip_address}:8808/rerank" \
'{"index":1,"score":' \ '{"index":1,"score":' \
"tei-rerank" \ "tei-rerank" \
"tei-reranking-server" \ "tei-reranking-server" \
'{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}'
# vllm for llm service # tgi for llm service
validate_services \ validate_service \
"${ip_address}:9009/v1/completions" \ "${ip_address}:9009/v1/chat/completions" \
"text" \ "content" \
"vllm-llm" \ "tgi-llm" \
"vllm-service" \ "tgi-service" \
'{"model": "Intel/neural-chat-7b-v3-3", "prompt": "What is Deep Learning?", "max_tokens": 32, "temperature": 0}' '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
} }
function validate_megaservice() { function validate_megaservice() {
# Curl the Mega Service # Curl the Mega Service
validate_services \ validate_service \
"${ip_address}:8888/v1/chatqna" \ "${ip_address}:8888/v1/chatqna" \
"data" \ "data: " \
"mega-chatqna" \ "chatqna-megaservice" \
"chatqna-xeon-backend-server" \ "chatqna-xeon-backend-server" \
'{"messages": "What is the revenue of Nike in 2023?"}' '{"messages": "What is the revenue of Nike in 2023?"}'
} }
function validate_frontend() { function validate_frontend() {
echo "[ TEST INFO ]: --------- frontend test started ---------"
cd $WORKPATH/ui/svelte cd $WORKPATH/ui/svelte
local conda_env_name="OPEA_e2e" local conda_env_name="OPEA_e2e"
export PATH=${HOME}/miniforge3/bin/:$PATH export PATH=${HOME}/miniforge3/bin/:$PATH
@@ -138,8 +183,8 @@ function validate_frontend() {
else else
conda create -n ${conda_env_name} python=3.12 -y conda create -n ${conda_env_name} python=3.12 -y
fi fi
source activate ${conda_env_name} source activate ${conda_env_name}
echo "[ TEST INFO ]: --------- conda env activated ---------"
sed -i "s/localhost/$ip_address/g" playwright.config.ts sed -i "s/localhost/$ip_address/g" playwright.config.ts
@@ -160,7 +205,7 @@ function validate_frontend() {
function stop_docker() { function stop_docker() {
cd $WORKPATH/docker_compose/intel/cpu/xeon cd $WORKPATH/docker_compose/intel/cpu/xeon
docker compose -f compose_vllm.yaml down docker compose -f compose_tgi.yaml down
} }
function main() { function main() {
@@ -177,8 +222,11 @@ function main() {
python3 $WORKPATH/tests/chatqna_benchmark.py python3 $WORKPATH/tests/chatqna_benchmark.py
elif [ "${mode}" == "" ]; then elif [ "${mode}" == "" ]; then
validate_microservices validate_microservices
echo "==== microservices validated ===="
validate_megaservice validate_megaservice
# validate_frontend echo "==== megaservice validated ===="
validate_frontend
echo "==== frontend validated ===="
fi fi
stop_docker stop_docker

View File

@@ -17,12 +17,12 @@ ip_address=$(hostname -I | awk '{print $1}')
function build_docker_images() { function build_docker_images() {
cd $WORKPATH/docker_image_build cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../ git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone https://github.com/vllm-project/vllm.git
echo "Build all the images with --no-cache, check docker_image_build.log for details..." echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna-without-rerank chatqna-ui dataprep-redis retriever nginx" service_list="chatqna-without-rerank chatqna-ui dataprep-redis retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker images && sleep 1s docker images && sleep 1s
@@ -40,8 +40,8 @@ function start_services() {
docker compose -f compose_without_rerank.yaml up -d > ${LOG_PATH}/start_services_with_compose.log docker compose -f compose_without_rerank.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
n=0 n=0
until [[ "$n" -ge 100 ]]; do until [[ "$n" -ge 100 ]]; do
docker logs tgi-service > ${LOG_PATH}/tgi_service_start.log docker logs vllm-service > ${LOG_PATH}/vllm_service_start.log 2>&1
if grep -q Connected ${LOG_PATH}/tgi_service_start.log; then if grep -q complete ${LOG_PATH}/vllm_service_start.log; then
break break
fi fi
sleep 5s sleep 5s
@@ -142,13 +142,13 @@ function validate_microservices() {
"retriever-redis-server" \ "retriever-redis-server" \
"{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}" "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${test_embedding}}"
# tgi for llm service # vllm for llm service
validate_service \ validate_service \
"${ip_address}:9009/generate" \ "${ip_address}:9009/v1/chat/completions" \
"generated_text" \ "content" \
"tgi-llm" \ "vllm-llm" \
"tgi-service" \ "vllm-service" \
'{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
} }
function validate_megaservice() { function validate_megaservice() {
@@ -194,7 +194,7 @@ function validate_frontend() {
function stop_docker() { function stop_docker() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/ cd $WORKPATH/docker_compose/intel/cpu/xeon/
docker compose -f compose_without_rerank.yaml stop && docker compose -f compose_without_rerank.yaml rm -f docker compose -f compose_without_rerank.yaml down
} }
function main() { function main() {