doc: fix CodeGen/README.md markdown (#475)

* fix multiple H1 headings
* edit use of please
* remove use of unnecessary HTML tags

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Co-authored-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
Author: David Kinder
Date: 2024-07-30 18:24:12 -07:00
Committed by: GitHub
Parent: 076bca3bbf
Commit: 33f83293d6


@@ -18,7 +18,7 @@ The workflow falls into the following architecture:
![architecture](./assets/img/codegen_architecture.png)
-# Deploy CodeGen Service
+## Deploy CodeGen Service
The CodeGen service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.
@@ -26,57 +26,57 @@ Currently we support two ways of deploying ChatQnA services with docker compose:
1. Start services using the docker image on `docker hub`:
```bash
docker pull opea/codegen:latest
```
2. Start services using the docker images `built from source`: [Guide](./docker)
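For the second option, a minimal sketch of a local build; the Dockerfile location and image tag below are assumptions, not confirmed by this diff, so follow the linked [Guide](./docker) for the real steps:

```bash
# Illustrative only: build the CodeGen image locally instead of pulling it from Docker Hub.
# The Dockerfile path and the opea/codegen:latest tag are assumptions.
cd GenAIExamples/CodeGen/docker
docker build -t opea/codegen:latest -f Dockerfile .
```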
-## Setup Environment Variable
+### Setup Environment Variable
To set up environment variables for deploying ChatQnA services, follow these steps:
1. Set the required environment variables:
```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```
2. If you are in a proxy environment, also set the proxy-related environment variables:
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
3. Set up other environment variables:
```bash
source ./docker/set_env.sh
```
-## Deploy CodeGen using Docker
+### Deploy CodeGen using Docker
-### Deploy CodeGen on Gaudi
+#### Deploy CodeGen on Gaudi
-Please find corresponding [compose.yaml](./docker/gaudi/compose.yaml).
+Find the corresponding [compose.yaml](./docker/gaudi/compose.yaml).
```bash
cd GenAIExamples/CodeGen/docker/gaudi
docker compose up -d
```
-> Notice: Currently only the <b>Habana Driver 1.16.x</b> is supported for Gaudi.
+> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-Please refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.
+Refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.
-### Deploy CodeGen on Xeon
+#### Deploy CodeGen on Xeon
-Please find corresponding [compose.yaml](./docker/xeon/compose.yaml).
+Find the corresponding [compose.yaml](./docker/xeon/compose.yaml).
```bash
cd GenAIExamples/CodeGen/docker/xeon
@@ -85,46 +85,46 @@ docker compose up -d
Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.
-## Deploy CodeGen using Kubernetes
+### Deploy CodeGen using Kubernetes
Refer to the [Kubernetes Guide](./kubernetes/manifests/README.md) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.
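As a rough sketch of that flow (the manifest path below is an assumption; the Kubernetes Guide linked above has the authoritative file names):

```bash
# Illustrative only: apply the CodeGen manifests to the current kubectl context.
# The path is an assumption; adjust for the Xeon or Gaudi variant you are deploying.
kubectl apply -f ./kubernetes/manifests/xeon/codegen.yaml
# Wait for the pods to reach Running before sending requests.
kubectl get pods -w
```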
-## Deploy CodeGen into Kubernetes using Helm Chart
+### Deploy CodeGen into Kubernetes using Helm Chart
-Install Helm (version >= 3.15) first. Please refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.
+Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.
Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.
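A minimal sketch of a Helm install, assuming a local clone of GenAIInfra; the chart path, release name, namespace, and value key are illustrative and should be checked against the chart's README:

```bash
# Illustrative only: install the CodeGen chart from a local GenAIInfra checkout.
# Chart path, release name, and the HF token value key are assumptions.
helm install codegen ./helm-charts/codegen \
  --namespace codegen --create-namespace \
  --set global.HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
```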
-# Consume CodeGen Service
+## Consume CodeGen Service
Two ways of consuming CodeGen Service:
1. Use cURL command on terminal
```bash
curl http://${host_ip}:7778/v1/codegen \
-H "Content-Type: application/json" \
-d '{"messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```
2. Access via frontend
To access the frontend, open the following URL in your browser: http://{host_ip}:5173.
By default, the UI runs on port 5173 internally.
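A quick reachability check before opening the browser; this only confirms the UI port answers, it does not exercise the UI itself:

```bash
# Confirm the frontend is listening on port 5173 before opening it in a browser.
curl -I http://${host_ip}:5173
```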
-# Troubleshooting
+## Troubleshooting
-1. If you get errors like "Access Denied", please [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker/xeon#validate-microservices) first. A simple example:
+1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker/xeon#validate-microservices) first. A simple example:
```bash
http_proxy=""
curl http://${host_ip}:8028/generate \
-X POST \
-d '{"inputs":"Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception.","parameters":{"max_new_tokens":256, "do_sample": true}}' \
-H 'Content-Type: application/json'
```
-2. (Docker only) If all microservices work well, please check the port ${host_ip}:7778, the port may be allocated by other users, you can modify the `compose.yaml`.
+2. (Docker only) If all microservices work well, check the port ${host_ip}:7778, the port may be allocated by other users, you can modify the `compose.yaml`.
-3. (Docker only) If you get errors like "The container name is in use", please change container name in `compose.yaml`.
+3. (Docker only) If you get errors like "The container name is in use", change container name in `compose.yaml`.