Fix README issues (#817)

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
lvliang-intel
2024-09-18 09:50:17 +08:00
committed by GitHub
parent 375ea7a90c
commit bceacdc804
24 changed files with 106 additions and 243 deletions

@@ -36,16 +36,12 @@ Port 5173 - Open to 0.0.0.0/0
First of all, you need to build the Docker images locally and install the GenAIComps Python package.
-```bash
-git clone https://github.com/opea-project/GenAIComps.git
-cd GenAIComps
-```
### 1. Build LVM and NGINX Docker Images
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/tgi-llava/Dockerfile .
docker build --no-cache -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
```
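If both builds succeed, the new tags appear in `docker images`. Below is a minimal sketch of a presence check; `have_image` is an illustrative helper (not part of the repo), and the `images` variable is stubbed here — in practice capture it with `docker images --format '{{.Repository}}:{{.Tag}}'`:

```shell
# Sketch: check expected tags against a captured image list.
# `images` is stubbed for illustration; in practice use:
#   images=$(docker images --format '{{.Repository}}:{{.Tag}}')
have_image() {
  # $1 = newline-separated image list, $2 = image tag to look for
  printf '%s\n' "$1" | grep -qx "$2"
}

images="opea/lvm-tgi:latest
opea/nginx:latest"

have_image "$images" "opea/lvm-tgi:latest" && echo "found opea/lvm-tgi:latest"
have_image "$images" "opea/nginx:latest" && echo "found opea/nginx:latest"
```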
@@ -57,7 +53,6 @@ To construct the Mega Service, we utilize the [GenAIComps](https://github.com/op
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/VisualQnA
docker build --no-cache -t opea/visualqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../..
```
### 3. Build UI Docker Image
@@ -67,7 +62,6 @@ Build frontend Docker image via below command:
```bash
cd GenAIExamples/VisualQnA/ui
docker build --no-cache -t opea/visualqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f docker/Dockerfile .
cd ../../..
```
### 4. Pull TGI Xeon Image

@@ -6,28 +6,22 @@ This document outlines the deployment process for a VisualQnA application utiliz
First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.
-### 1. Source Code install GenAIComps
+### 1. Build LVM and NGINX Docker Images
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```
-### 2. Build LVM and NGINX Docker Images
```bash
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/tgi-llava/Dockerfile .
docker build --no-cache -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
```
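The two build commands above differ only in image tag and Dockerfile path. As a sketch, that repetition can be factored out; `build_cmd` is an illustrative helper (not used by the repo), and the proxy `--build-arg` flags are omitted for brevity:

```shell
# Sketch: derive each docker build command from a tag and a
# Dockerfile path. Prints the command instead of executing it.
build_cmd() {
  # $1 = image tag, $2 = Dockerfile path
  printf 'docker build --no-cache -t %s -f %s .\n' "$1" "$2"
}

build_cmd opea/lvm-tgi:latest comps/lvms/tgi-llava/Dockerfile
build_cmd opea/nginx:latest comps/nginx/Dockerfile
```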
-### 3. Pull TGI Gaudi Image
+### 2. Pull TGI Gaudi Image
```bash
-docker pull ghcr.io/huggingface/tgi-gaudi:2.0.4
+docker pull ghcr.io/huggingface/tgi-gaudi:2.0.5
```
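Version bumps like the 2.0.4 to 2.0.5 change above are easier to track when the tag lives in one place. A small sketch, assuming a shell workflow; `TGI_GAUDI_TAG` is an illustrative variable name, not something the repo defines:

```shell
# Sketch: keep the TGI Gaudi version in a single variable so a bump
# (e.g. 2.0.4 -> 2.0.5) is a one-line edit. The pull command is
# echoed here instead of executed.
TGI_GAUDI_TAG=2.0.5
TGI_GAUDI_IMAGE="ghcr.io/huggingface/tgi-gaudi:${TGI_GAUDI_TAG}"
echo "docker pull ${TGI_GAUDI_IMAGE}"
```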
-### 4. Build MegaService Docker Image
+### 3. Build MegaService Docker Image
To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `visualqna.py` Python script. Build the MegaService Docker image using the command below:
@@ -38,19 +32,18 @@ docker build --no-cache -t opea/visualqna:latest --build-arg https_proxy=$https_
cd ../..
```
-### 5. Build UI Docker Image
+### 4. Build UI Docker Image
Build the frontend Docker image with the command below:
```bash
cd GenAIExamples/VisualQnA/ui
docker build --no-cache -t opea/visualqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../..
```
Then run the command `docker images`; you should see the following 5 Docker images:
-1. `ghcr.io/huggingface/tgi-gaudi:2.0.4`
+1. `ghcr.io/huggingface/tgi-gaudi:2.0.5`
2. `opea/lvm-tgi:latest`
3. `opea/visualqna:latest`
4. `opea/visualqna-ui:latest`
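The list above can be verified in one pass. A sketch, with the `docker images` output stubbed as a variable for illustration (in practice capture it with `docker images --format '{{.Repository}}:{{.Tag}}'`):

```shell
# Sketch: confirm each image named in the list above appears in a
# captured `docker images` listing (stubbed here for illustration).
listing="ghcr.io/huggingface/tgi-gaudi:2.0.5
opea/lvm-tgi:latest
opea/visualqna:latest
opea/visualqna-ui:latest"

missing=0
for img in ghcr.io/huggingface/tgi-gaudi:2.0.5 opea/lvm-tgi:latest \
           opea/visualqna:latest opea/visualqna-ui:latest; do
  printf '%s\n' "$listing" | grep -qx "$img" || { echo "missing: $img"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all images present"
```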