# Build MegaService of VisualQnA on Gaudi
This document outlines the deployment process for a VisualQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as the LVM. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.
## 🚀 Build Docker Images
First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.
### 1. Build LVM and NGINX Docker Images
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build --no-cache -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/Dockerfile .
docker build --no-cache -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/nginx/src/Dockerfile .
```
### 2. Build vLLM/Pull TGI Gaudi Image
```bash
# vLLM
# Currently you have to build opea/vllm-gaudi locally from the habana_main branch
# (optionally pinned to a specific commit; see the note after this code block);
# we will update this to a stable release tag in the future.
git clone https://github.com/HabanaAI/vllm-fork.git
cd ./vllm-fork/
docker build -f Dockerfile.hpu -t opea/vllm-gaudi:latest --shm-size=128g . --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy
cd ..
rm -rf vllm-fork
```
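If you need a reproducible build, you can pin the checkout to a specific commit before running `docker build`; the hash below is a placeholder, since this guide does not name one:

```bash
# Run inside ./vllm-fork/ before the docker build step.
# <commit-hash> is a placeholder; substitute a commit you have validated.
git checkout <commit-hash>
```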
```bash
# TGI (Optional)
docker pull ghcr.io/huggingface/tgi-gaudi:2.3.1
```
### 3. Build MegaService Docker Image
To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `visualqna.py` Python script. Build the MegaService Docker image using the command below:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/VisualQnA
docker build --no-cache -t opea/visualqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../..
```
### 4. Build UI Docker Image
Build the frontend Docker image with the command below:
```bash
cd GenAIExamples/VisualQnA/ui
docker build --no-cache -t opea/visualqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
```
Then run the command `docker images`; you should see the following Docker images (a filtered check is shown after the list):
1. `opea/vllm-gaudi:latest`
2. `ghcr.io/huggingface/tgi-gaudi:2.3.1` (Optional)
3. `opea/lvm:latest`
4. `opea/visualqna:latest`
5. `opea/visualqna-ui:latest`
6. `opea/nginx:latest`
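As referenced above, a filtered listing can confirm all the tags at once (a convenience sketch; the patterns simply mirror the image names built or pulled in the previous steps):

```bash
# List only the repository:tag pairs relevant to this guide
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -E 'opea/(vllm-gaudi|lvm|visualqna|visualqna-ui|nginx)|tgi-gaudi'
```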
## 🚀 Start MicroServices and MegaService
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance as shown below.
```bash
source set_env.sh
```
Note: Please replace `host_ip` with your external IP address; do not use localhost.
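If `set_env.sh` does not detect your external IP automatically (inspect the script to confirm; this is an assumption), you can export `host_ip` yourself before sourcing it:

```bash
# Replace with your host's external IP address; do not use 127.0.0.1 or localhost
export host_ip=$(hostname -I | awk '{print $1}')
source set_env.sh
```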
### Start all the services Docker Containers
```bash
cd GenAIExamples/VisualQnA/docker_compose/intel/hpu/gaudi/
```
```bash
# start all services with vLLM as the LLM serving backend (default)
docker compose -f compose.yaml up -d

# or, to use TGI as the LLM serving backend, run this instead of the command above
docker compose -f compose_tgi.yaml up -d
```
> **_NOTE:_** Users need at least one Gaudi card to run VisualQnA successfully.
### Validate MicroServices and MegaService
Follow the instructions below to validate the microservices and the MegaService.
> Note: If you see an "Internal Server Error" from the `curl` command, wait a few minutes for the microserver to be ready and then try again.
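Before sending requests, it can help to confirm that every container is up and not restarting (a quick check; the service name in the logs example comes from the compose snippet later in this guide):

```bash
# List all services and their states
docker compose -f compose.yaml ps

# Tail the logs of one service if it is unhealthy or restarting
docker compose -f compose.yaml logs -f visualqna-gaudi-ui-server
```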
1. LVM Microservice
```bash
http_proxy="" curl http://${host_ip}:9399/v1/lvm -XPOST -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}' -H 'Content-Type: application/json'
```
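The `image` field above is a base64-encoded PNG sample. To test with your own picture, encode it first; a minimal sketch, assuming a file named `my_image.png` (the filename is a placeholder) and GNU coreutils `base64`:

```bash
# -w 0 disables line wrapping (GNU coreutils); on macOS use: base64 -i my_image.png
IMG_B64=$(base64 -w 0 my_image.png)

http_proxy="" curl http://${host_ip}:9399/v1/lvm -X POST \
  -H 'Content-Type: application/json' \
  -d '{"image": "'"${IMG_B64}"'", "prompt": "What is this?"}'
```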
2. MegaService
```bash
curl http://${host_ip}:8888/v1/visualqna -H "Content-Type: application/json" -d '{
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What'\''s in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "https://www.ilankelman.org/stopsigns/australia.jpg"
}
}
]
}
],
"max_tokens": 300
}'
```
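The request above uses a remote `image_url`. If the gateway follows the OpenAI convention for inline images (an assumption, not confirmed by this guide), a local file can be sent as a base64 data URL:

```bash
# local.jpg is a placeholder; substitute your own image file
IMG_B64=$(base64 -w 0 local.jpg)

curl http://${host_ip}:8888/v1/visualqna -H "Content-Type: application/json" -d '{
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What'\''s in this image?"},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,'"${IMG_B64}"'"}}
      ]
    }
  ],
  "max_tokens": 300
}'
```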
## 🚀 Launch the UI
To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI listens on port 5173 inside the container. If you prefer to access the frontend through a different host port, modify the port mapping in the `compose.yaml` file as shown below:
```yaml
visualqna-gaudi-ui-server:
image: opea/visualqna-ui:latest
...
ports:
- "80:5173"
```