Update README for some minor issues (#1000)

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Author: lvliang-intel
Date: 2024-10-22 10:30:18 +08:00
Committed by: GitHub
Parent: 1929dfd3a0
Commit: 9438d392b4
9 changed files with 11 additions and 13 deletions


@@ -206,8 +206,6 @@ cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi/
docker compose up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
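Since only the 1.16.x driver series works on Gaudi, it can be worth checking the installed driver before running `docker compose up -d`. The helper below is a hypothetical sketch, not part of the repo; on a real Gaudi node you would pass it the version reported by Habana's `hl-smi` tool:

```shell
# Hypothetical pre-flight check: accept only the Habana Driver 1.16.x series.
check_driver() {
  case "$1" in
    1.16.*) echo "supported" ;;
    *)      echo "unsupported" ;;
  esac
}

check_driver "1.16.2"   # prints "supported"
check_driver "1.17.0"   # prints "unsupported"
```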
### Deploy ChatQnA on Xeon


@@ -97,6 +97,11 @@ After launching your instance, you can connect to it using SSH (for Linux instan
First, you need to build the Docker images locally and install the Python package.
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```
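The clone is followed by per-component `docker build` steps such as the one headed below. As a dry-run illustration of that pattern, the helper here only prints the command it would run; the image tag and Dockerfile path are placeholders, not taken from the repo:

```shell
# Dry-run sketch of the image-build pattern used for GenAIComps components.
# The tag and Dockerfile path are illustrative placeholders.
build_image() {
  echo "docker build --build-arg https_proxy=${https_proxy} --build-arg http_proxy=${http_proxy} -t $1 -f $2 ."
}

build_image "opea/retriever:latest" "path/to/Dockerfile"
```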
### 1. Build Retriever Image
```bash


@@ -111,7 +111,7 @@ Build frontend Docker image that enables Conversational experience with ChatQnA
**Export the value of the public IP address of your Xeon server to the `host_ip` environment variable**
```bash
-cd GenAIExamples/ChatQnA//ui
+cd GenAIExamples/ChatQnA/ui
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8912/v1/chatqna"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6043/v1/dataprep"
docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg BACKEND_SERVICE_ENDPOINT=$BACKEND_SERVICE_ENDPOINT --build-arg DATAPREP_SERVICE_ENDPOINT=$DATAPREP_SERVICE_ENDPOINT -f ./docker/Dockerfile.react .
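# Optional guard (an illustrative sketch, not part of the original README):
# warn before the long build if an endpoint variable is empty or malformed.
check_endpoint() {
  case "$1" in
    http://*:*/v1/*) return 0 ;;   # looks like http://<ip>:<port>/v1/<path>
    *) return 1 ;;
  esac
}
check_endpoint "$BACKEND_SERVICE_ENDPOINT" || echo "check BACKEND_SERVICE_ENDPOINT" >&2
check_endpoint "$DATAPREP_SERVICE_ENDPOINT" || echo "check DATAPREP_SERVICE_ENDPOINT" >&2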


@@ -70,6 +70,11 @@ curl http://${host_ip}:8888/v1/chatqna \
First, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```
### 1. Build Retriever Image
```bash


@@ -132,8 +132,6 @@ cd GenAIExamples/CodeGen/docker_compose/intel/hpu/gaudi
docker compose up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
#### Deploy CodeGen on Xeon


@@ -72,8 +72,6 @@ cd GenAIExamples/CodeTrans/docker_compose/intel/hpu/gaudi
docker compose up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
#### Deploy Code Translation on Xeon


@@ -71,8 +71,6 @@ cd GenAIExamples/DocSum/docker_compose/intel/hpu/gaudi/
docker compose -f compose.yaml up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
#### Deploy on Xeon


@@ -137,8 +137,6 @@ cd GenAIExamples/SearchQnA/docker_compose/intel/hpu/gaudi/
docker compose up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
### Deploy SearchQnA on Xeon


@@ -76,8 +76,6 @@ cd GenAIExamples/VisualQnA/docker_compose/intel/hpu/gaudi/
docker compose up -d
```
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
### Deploy VisualQnA on Xeon
Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.