Compare commits

20 Commits

| SHA1 |
| --- |
| 99b3338649 |
| b02db2ad40 |
| dd232736e5 |
| a82caef698 |
| 2dc2ba1d5c |
| f90a6d2a8e |
| 1fdab591d9 |
| 13ea13862a |
| 1787d1ee98 |
| 15c62bfb7a |
| aebb69cd75 |
| db4bf1a4c3 |
| f7002fcb70 |
| c39c875211 |
| 6287f7945a |
| d1b5113ce0 |
| c2e9a259fe |
| 48eaf9c1c9 |
| a39824f142 |
| e10e6dd002 |
**.github/workflows/_example-workflow.yml** (vendored, 1 change)
@@ -76,6 +76,7 @@ jobs:
       example: ${{ inputs.example }}
       hardware: ${{ inputs.node }}
       use_model_cache: ${{ inputs.use_model_cache }}
+      opea_branch: ${{ inputs.opea_branch }}
     secrets: inherit
**.github/workflows/_run-docker-compose.yml** (vendored, 5 changes)
@@ -32,6 +32,10 @@ on:
        required: false
        type: boolean
        default: false
+     opea_branch:
+       default: "main"
+       required: false
+       type: string
 jobs:
   get-test-case:
     runs-on: ubuntu-latest

@@ -169,6 +173,7 @@ jobs:
       FINANCIAL_DATASETS_API_KEY: ${{ secrets.FINANCIAL_DATASETS_API_KEY }}
       IMAGE_REPO: ${{ inputs.registry }}
       IMAGE_TAG: ${{ inputs.tag }}
+      opea_branch: ${{ inputs.opea_branch }}
       example: ${{ inputs.example }}
       hardware: ${{ inputs.hardware }}
       test_case: ${{ matrix.test_case }}
@@ -3,7 +3,6 @@
 ARG IMAGE_REPO=opea
 ARG BASE_TAG=latest
-FROM opea/comps-base:$BASE_TAG
+FROM $IMAGE_REPO/comps-base:$BASE_TAG

 COPY ./audioqna.py $HOME/audioqna.py
@@ -3,7 +3,6 @@
 ARG IMAGE_REPO=opea
 ARG BASE_TAG=latest
-FROM opea/comps-base:$BASE_TAG
+FROM $IMAGE_REPO/comps-base:$BASE_TAG

 COPY ./audioqna_multilang.py $HOME/audioqna_multilang.py
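The two Dockerfile hunks above parameterize the registry of the base image. As a rough illustration of how the new build argument could be overridden at build time (the registry name below is a placeholder, not an official OPEA endpoint):

```bash
# Hypothetical override of the IMAGE_REPO / BASE_TAG arguments introduced above.
# "my-registry.example.com" is only an illustration.
docker build \
  --build-arg IMAGE_REPO=my-registry.example.com/opea \
  --build-arg BASE_TAG=latest \
  -t opea/audioqna:latest .
```

Leaving both arguments at their defaults reproduces the previous behavior (`opea/comps-base:latest`).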
@@ -1,120 +1,59 @@

# Build Mega Service of AudioQnA on AMD ROCm GPU
# Deploying AudioQnA on AMD ROCm GPU

This document outlines the deployment process for an AudioQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on a server with an AMD ROCm GPU platform.
This document outlines the single-node deployment process for an AudioQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on a server with AMD ROCm processing accelerators. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using the `llm` microservice.

## Build Docker Images

Note: The default LLM is `Intel/neural-chat-7b-v3-3`. Before deploying the application, please make sure you have either requested and been granted access to it on [Huggingface](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).

### 1. Build Docker Image

## Table of Contents

- #### Create application install directory and go to it:

1. [AudioQnA Quick Start Deployment](#audioqna-quick-start-deployment)
2. [AudioQnA Docker Compose Files](#audioqna-docker-compose-files)
3. [Validate Microservices](#validate-microservices)
4. [Conclusion](#conclusion)
```bash
mkdir ~/audioqna-install && cd audioqna-install
```

## AudioQnA Quick Start Deployment

- #### Clone the repository GenAIExamples (the default repository branch "main" is used here):

This section describes how to quickly deploy and test the AudioQnA service manually on an AMD ROCm platform. The basic steps are:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
```

1. [Access the Code](#access-the-code)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)
3. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
4. [Check the Deployment Status](#check-the-deployment-status)
5. [Validate the Pipeline](#validate-the-pipeline)
6. [Cleanup the Deployment](#cleanup-the-deployment)

If you need to use a specific branch/tag of the GenAIExamples repository, then (replace v1.3 with the desired value):

### Access the Code

```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```

We remind you that when using a specific version of the code, you need to use the README from that version:

- #### Go to the build directory:

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_image_build
```

- Clean up the GenAIComps repository if it was previously cloned in this directory.
  This is necessary if the build was performed earlier and the GenAIComps folder exists and is not empty:

```bash
echo Y | rm -R GenAIComps
```

- #### Clone the repository GenAIComps (the default repository branch "main" is used here):

Clone the GenAIExamples repository and access the AudioQnA AMD ROCm platform Docker Compose files and supporting scripts:

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/AudioQnA
```

We remind you that when using a specific version of the code, you need to use the README from that version.
Then check out a released version, such as v1.3:

- #### Setting the list of images for the build (from the build.yaml file)

```bash
git checkout v1.3
```
If you want to deploy a vLLM-based or TGI-based application, then the set of services is installed as follows:

### Configure the Deployment Environment

#### vLLM-based application

#### Docker Compose GPU Configuration

```bash
service_list="vllm-rocm whisper speecht5 audioqna audioqna-ui"
```

Consult the section on [AudioQnA Service configuration](#audioqna-configuration) for information on how service-specific configuration parameters affect deployments.

#### TGI-based application

```bash
service_list="whisper speecht5 audioqna audioqna-ui"
```

- #### Optional. Pull TGI Docker Image (do this if you want to use TGI)

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```

- #### Build Docker Images

```bash
docker compose -f build.yaml build ${service_list} --no-cache
```

After the build, check the list of images with the command:

```bash
docker image ls
```

The list of images should include:

##### vLLM-based application:

- opea/vllm-rocm:latest
- opea/whisper:latest
- opea/speecht5:latest
- opea/audioqna:latest

##### TGI-based application:

- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
- opea/whisper:latest
- opea/speecht5:latest
- opea/audioqna:latest
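For a quick sanity check of the build result, the expected images from the lists above can be filtered out of the local image store. A minimal sketch using the image names listed in this section:

```bash
# Show only the AudioQnA-related images that were built or pulled above
docker image ls | grep -E 'opea/(vllm-rocm|whisper|speecht5|audioqna|audioqna-ui)|text-generation-inference'
```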
---

## Deploy the AudioQnA Application

### Docker Compose Configuration for AMD GPUs

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:

- compose_vllm.yaml - for the vLLM-based application
- compose.yaml - for the TGI-based application

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose files (`compose.yaml`, `compose_vllm.yaml`) for the LLM serving container:

```yaml
# Example for vLLM service in compose_vllm.yaml
# Note: Modern docker compose might use deploy.resources syntax instead.
# Check your docker version and compose file.
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/:/dev/dri/
  # - /dev/dri/render128:/dev/dri/render128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

@@ -123,131 +62,161 @@ security_opt:
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs. For example:

#### Environment Variables (`set_env*.sh`)

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/render128:/dev/dri/render128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

These scripts (`set_env_vllm.sh` for vLLM, `set_env.sh` for TGI) configure crucial parameters passed to the containers.

**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
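As a rough illustration of that step (command availability depends on your ROCm installation, so treat this as a sketch rather than the one required method), the device nodes and the GPUs backing them can usually be inspected like this:

```bash
# Enumerate DRM device nodes; cardN/renderDN pairs map to individual GPUs
ls -l /dev/dri/
# Show GPU details via the ROCm driver utility, if rocm-smi is installed
rocm-smi --showproductname
```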
To set up environment variables for deploying AudioQnA services, set the parameters specific to the deployment environment and source the `set_env.sh` script in this directory:

### Set deploy environment variables

#### Setting variables in the operating system environment:

##### Set variable HUGGINGFACEHUB_API_TOKEN:

For TGI inference usage:

```bash
### Replace the string 'your_huggingfacehub_token' with your Hugging Face Hub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip,whisper-service,speecht5-service,vllm-service,tgi-service,audioqna-xeon-backend-server,audioqna-xeon-ui-server # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env.sh
```
#### Set variable values in the set_env*.sh file:

Go to the Docker Compose directory:

For vLLM inference usage:

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip,whisper-service,speecht5-service,vllm-service,tgi-service,audioqna-xeon-backend-server,audioqna-xeon-ui-server # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env_vllm.sh
```
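After sourcing either script, a quick spot check helps confirm the values actually landed in the environment before continuing. A minimal sketch; the variable names come from the export lines above:

```bash
# Verify the key deployment variables are set to the expected values
env | grep -E '^(host_ip|HUGGINGFACEHUB_API_TOKEN|no_proxy|NGINX_PORT)='
```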
The example uses the Nano text editor. You can use any convenient text editor:

### Deploy the Services Using Docker Compose

#### If you use vLLM

```bash
nano set_env_vllm.sh
```

#### If you use TGI

```bash
nano set_env.sh
```

If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
Set the values of the variables:

- **HOST_IP, HOST_IP_EXTERNAL** - These variables are used to configure the name/address of the service in the operating system environment so that the application services can interact with each other and with the outside world.

  If your server uses only an internal address and is not accessible from the Internet, then the values of these two variables will be the same and equal to the server's internal name/address.

  If your server uses only an external, Internet-accessible address, then the values of these two variables will be the same and equal to the server's external name/address.

  If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then the HOST_IP variable will have a value equal to the internal name/address of the server, and the EXTERNAL_HOST_IP variable will have a value equal to the external name/address of the proxy/firewall/load balancer behind which the server is located.

  We set these values in the set_env*.sh file.

- **Variables with names like `*_PORT`** - These variables set the IP port numbers for establishing network connections to the application services.
  The values shown in set_env.sh or set_env_vllm.sh are the values used for development and testing of the application, configured for the environment in which that development was performed. They must be adjusted in accordance with the network access rules of your environment's server, and must not overlap with the IP ports of other applications that are already in use.
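For instance (a sketch only; the addresses and host name below are placeholders), the third scenario described above, an internal server published through a load balancer, would be configured along these lines inside `set_env*.sh`:

```bash
# Internal address of the server running the containers (placeholder value)
export HOST_IP=192.168.1.100
# External name/address of the proxy or load balancer in front of it (placeholder value)
export EXTERNAL_HOST_IP=audioqna.example.com
```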
#### Set variables with script set_env*.sh

#### If you use vLLM

```bash
. set_env_vllm.sh
```

#### If you use TGI

```bash
. set_env.sh
```

### Start the services:

#### If you use vLLM

```bash
docker compose -f compose_vllm.yaml up -d
```

#### If you use TGI
To deploy the AudioQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

For TGI inference deployment:

```bash
cd docker_compose/amd/gpu/rocm
docker compose -f compose.yaml up -d
```

All containers should be running and should not restart:

For vLLM inference deployment:

##### If you use vLLM:

```bash
cd docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml up -d
```

- audioqna-vllm-service
- whisper-service
- speecht5-service
- audioqna-backend-server
- audioqna-ui-server

> **Note**: developers should build the docker image from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may be different from the published docker image).
> - Unable to download the docker image.
> - Using a specific version of the Docker image.
##### If you use TGI:
Please refer to the table below to build different microservices from source:

- audioqna-tgi-service
- whisper-service
- speecht5-service
- audioqna-backend-server
- audioqna-ui-server

| Microservice | Deployment Guide |
| --- | --- |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| LLM | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms) |
| WHISPER | [Whisper build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/src#211-whisper-server-image) |
| SPEECHT5 | [SpeechT5 build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/tts/src#211-speecht5-server-image) |
| GPT-SOVITS | [GPT-SOVITS build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/gpt-sovits/src#build-the-image) |
| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |

---
### Check the Deployment Status

## Validate the Services
After running docker compose, check if all the containers launched via docker compose have started:

### 1. Validate the vLLM/TGI Service
#### For TGI inference deployment

```bash
docker ps -a
```

For the default deployment, the following 5 containers should have started:

```
CONTAINER ID   IMAGE                                                      COMMAND                  CREATED          STATUS          PORTS                                         NAMES
d8007690868d   opea/audioqna:latest                                       "python audioqna.py"     21 seconds ago   Up 19 seconds   0.0.0.0:3008->8888/tcp, [::]:3008->8888/tcp   audioqna-rocm-backend-server
87ba9a1d56ae   ghcr.io/huggingface/text-generation-inference:2.4.1-rocm   "/tgi-entrypoint.sh …"   21 seconds ago   Up 20 seconds   0.0.0.0:3006->80/tcp, [::]:3006->80/tcp       tgi-service
59e869acd742   opea/speecht5:latest                                       "python speecht5_ser…"   21 seconds ago   Up 20 seconds   0.0.0.0:7055->7055/tcp, :::7055->7055/tcp     speecht5-service
0143267a4327   opea/whisper:latest                                        "python whisper_serv…"   21 seconds ago   Up 20 seconds   0.0.0.0:7066->7066/tcp, :::7066->7066/tcp     whisper-service
```

#### For vLLM inference deployment

```bash
docker ps -a
```

For the default deployment, the following 5 containers should have started:

```
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS          PORTS                                           NAMES
f3e6893a69fa   opea/audioqna-ui:latest   "docker-entrypoint.s…"   37 seconds ago   Up 35 seconds   0.0.0.0:18039->5173/tcp, [::]:18039->5173/tcp   audioqna-ui-server
f943e5cd21e9   opea/audioqna:latest      "python audioqna.py"     37 seconds ago   Up 35 seconds   0.0.0.0:18038->8888/tcp, [::]:18038->8888/tcp   audioqna-backend-server
074e8c418f52   opea/speecht5:latest      "python speecht5_ser…"   37 seconds ago   Up 36 seconds   0.0.0.0:7055->7055/tcp, :::7055->7055/tcp       speecht5-service
77abe498e427   opea/vllm-rocm:latest     "python3 /workspace/…"   37 seconds ago   Up 36 seconds   0.0.0.0:8081->8011/tcp, [::]:8081->8011/tcp     audioqna-vllm-service
9074a95bb7a6   opea/whisper:latest       "python whisper_serv…"   37 seconds ago   Up 36 seconds   0.0.0.0:7066->7066/tcp, :::7066->7066/tcp       whisper-service
```

If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
### Validate the Pipeline

Once the AudioQnA services are running, test the pipeline using the following command:

```bash
# Test the AudioQnA megaservice by recording a .wav file, encoding the file into the base64 format, and then sending the base64 string to the megaservice endpoint.
# The megaservice will return a spoken response as a base64 string. To listen to the response, decode the base64 string and save it as a .wav file.
wget https://github.com/intel/intel-extension-for-transformers/raw/refs/heads/main/intel_extension_for_transformers/neural_chat/assets/audio/sample_2.wav
base64_audio=$(base64 -w 0 sample_2.wav)

# if you are using speecht5 as the tts service, voice can be "default" or "male"
# if you are using gpt-sovits for the tts service, you can set the reference audio following https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md

curl http://${host_ip}:3008/v1/audioqna \
  -X POST \
  -H "Content-Type: application/json" \
  -d "{\"audio\": \"${base64_audio}\", \"max_tokens\": 64, \"voice\": \"default\"}" \
  | sed 's/^"//;s/"$//' | base64 -d > output.wav
```
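A quick way to confirm the decoded response is actually audio (a sketch; the player command depends on what is installed on the host) is to inspect and play back the generated file:

```bash
# Confirm the response decoded into a RIFF/WAVE file and check its size
file output.wav
ls -lh output.wav
# Play it back with any locally available player
ffplay -autoexit output.wav   # or: aplay output.wav
```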
**Note**: Access the AudioQnA UI by web browser through this URL: `http://${host_ip}:5173`. Please confirm the `5173` port is opened in the firewall. To validate each microservice used in the pipeline, refer to the [Validate Microservices](#validate-microservices) section.
### Cleanup the Deployment

To stop the containers associated with the deployment, execute the following command:

#### If you use vLLM

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```

#### If you use TGI

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```

## AudioQnA Docker Compose Files

When deploying an AudioQnA pipeline on the AMD ROCm platform, we can pick and choose different large language model serving frameworks, or a single-English-TTS/multi-language-TTS component. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).

| File | Description |
| --- | --- |
| [compose_vllm.yaml](./compose_vllm.yaml) | Default compose file using vllm as serving framework and redis as vector database |
| [compose.yaml](./compose.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |

### Validate the vLLM/TGI Service

#### If you use vLLM:
@@ -313,7 +282,7 @@ Checking the response from the service. The response should be similar to JSON:

If the service response has a meaningful response in the value of the "generated_text" key, then we consider the TGI service to be successfully launched.

### 2. Validate MegaServices
### Validate MegaServices

Test the AudioQnA megaservice by recording a .wav file, encoding the file into the base64 format, and then sending the base64 string to the megaservice endpoint. The megaservice will return a spoken response as a base64 string. To listen

@@ -327,7 +296,7 @@ curl http://${host_ip}:3008/v1/audioqna \

```bash
  -H 'Content-Type: application/json' | sed 's/^"//;s/"$//' | base64 -d > output.wav
```

### 3. Validate MicroServices
### Validate MicroServices

```bash
# whisper service
```

@@ -343,18 +312,6 @@ curl http://${host_ip}:7055/v1/tts \

```bash
  -H 'Content-Type: application/json'
```
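The hunks above only show fragments of the microservice checks, so for orientation here is a hedged sketch of what direct requests to the two speech services could look like. The `/v1/asr` and `/v1/tts` paths and payload shapes are assumptions based on the service names and ports (7066 for whisper, 7055 for speecht5) appearing elsewhere in this compare view; consult the GenAIComps ASR/TTS READMEs for the authoritative request format:

```bash
# whisper ASR service (assumed endpoint and payload shape)
curl http://${host_ip}:7066/v1/asr \
  -X POST \
  -H 'Content-Type: application/json' \
  -d "{\"audio\": \"$(base64 -w 0 sample_2.wav)\"}"

# speecht5 TTS service (assumed payload shape)
curl http://${host_ip}:7055/v1/tts \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Who are you?"}'
```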
### 4. Stop application
## Conclusion

#### If you use vLLM

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```

#### If you use TGI

```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```

This guide should enable developers to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
@@ -42,7 +42,7 @@ services:
     environment:
       TTS_ENDPOINT: ${TTS_ENDPOINT}
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
+    image: ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
     container_name: tgi-service
     ports:
       - "${TGI_SERVICE_PORT:-3006}:80"
@@ -66,24 +66,6 @@ services:
       - seccomp:unconfined
     ipc: host
     command: --model-id ${LLM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
-  llm:
-    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
-    container_name: llm-tgi-server
-    depends_on:
-      - tgi-service
-    ports:
-      - "3007:9000"
-    ipc: host
-    environment:
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      TGI_LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
-      LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
-      OPENAI_API_KEY: ${OPENAI_API_KEY}
-    restart: unless-stopped
   wav2lip-service:
     image: ${REGISTRY:-opea}/wav2lip:${TAG:-latest}
     container_name: wav2lip-service
@@ -125,7 +107,7 @@ services:
     container_name: avatarchatbot-backend-server
     depends_on:
       - asr
-      - llm
+      - tgi-service
       - tts
       - animation
     ports:
@@ -30,7 +30,7 @@ export ANIMATION_SERVICE_HOST_IP=${host_ip}
 export MEGA_SERVICE_PORT=8888
 export ASR_SERVICE_PORT=3001
 export TTS_SERVICE_PORT=3002
-export LLM_SERVICE_PORT=3007
+export LLM_SERVICE_PORT=3006
 export ANIMATION_SERVICE_PORT=3008

 export DEVICE="cpu"
@@ -27,7 +27,7 @@ function build_docker_images() {
     git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../

     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="avatarchatbot whisper asr llm-textgen speecht5 tts wav2lip animation"
+    service_list="avatarchatbot whisper asr speecht5 tts wav2lip animation"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

     docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
@@ -65,7 +65,7 @@ function start_services() {
     export MEGA_SERVICE_PORT=8888
     export ASR_SERVICE_PORT=3001
     export TTS_SERVICE_PORT=3002
-    export LLM_SERVICE_PORT=3007
+    export LLM_SERVICE_PORT=3006
     export ANIMATION_SERVICE_PORT=3008

     export DEVICE="cpu"
@@ -41,7 +41,6 @@ function build_docker_images() {
 }

 function start_services() {
     cd $WORKPATH/docker_compose/intel/cpu/xeon/
     export no_proxy=${no_proxy},${ip_address}
     export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
     export RERANK_MODEL_ID="BAAI/bge-reranker-base"
     export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
@@ -2,78 +2,69 @@

This README provides instructions for deploying the CodeGen application using Docker Compose on a system equipped with AMD GPUs supporting ROCm, detailing the steps to configure, run, and validate the services. This guide defaults to using the **vLLM** backend for LLM serving.

If the service response has a meaningful response in the value of the "choices.text" key, then we consider the vLLM service to be successfully launched.

## Table of Contents

- [Steps to Run with Docker Compose (Default vLLM)](#steps-to-run-with-docker-compose-default-vllm)
- [Service Overview](#service-overview)
- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Quick Start](#quick-start)
- [Available Deployment Options](#available-deployment-options)
- [compose_vllm.yaml (vLLM - Default)](#compose_vllyaml-vllm---default)
- [compose.yaml (TGI)](#composeyaml-tgi)
- [Configuration Parameters and Usage](#configuration-parameters-and-usage)
- [Docker Compose GPU Configuration](#docker-compose-gpu-configuration)
- [Environment Variables (`set_env*.sh`)](#environment-variables-set_envsh)
- [Building Docker Images Locally (Optional)](#building-docker-images-locally-optional)
- [1. Setup Build Environment](#1-setup-build-environment)
- [2. Clone Repositories](#2-clone-repositories)
- [3. Select Services and Build](#3-select-services-and-build)
- [Validate Service Health](#validate-service-health)
- [1. Validate the vLLM/TGI Service](#1-validate-the-vllmtgi-service)
- [2. Validate the LLM Service](#2-validate-the-llm-service)
- [3. Validate the MegaService (Backend)](#3-validate-the-megaservice-backend)
- [4. Validate the Frontend (UI)](#4-validate-the-frontend-ui)
- [How to Open the UI](#how-to-open-the-ui)
- [Default: vLLM-based Deployment (`--profile codegen-xeon-vllm`)](#default-vllm-based-deployment---profile-codegen-xeon-vllm)
- [TGI-based Deployment (`--profile codegen-xeon-tgi`)](#tgi-based-deployment---profile-codegen-xeon-tgi)
- [Configuration Parameters](#configuration-parameters)
- [Environment Variables](#environment-variables)
- [Compose Profiles](#compose-profiles)
- [Building Custom Images (Optional)](#building-custom-images-optional)
- [Validate Services](#validate-services)
- [Check Container Status](#check-container-status)
- [Run Validation Script/Commands](#run-validation-scriptcommands)
- [Accessing the User Interface (UI)](#accessing-the-user-interface-ui)
- [Gradio UI (Default)](#gradio-ui-default)
- [Svelte UI (Optional)](#svelte-ui-optional)
- [React UI (Optional)](#react-ui-optional)
- [VS Code Extension (Optional)](#vs-code-extension-optional)
- [Troubleshooting](#troubleshooting)
- [Stopping the Application](#stopping-the-application)
- [Next Steps](#next-steps)
## Steps to Run with Docker Compose (Default vLLM)
## Overview

_This section assumes you are using pre-built images and targets the default vLLM deployment._
This guide focuses on running the pre-configured CodeGen service using Docker Compose on an AMD ROCm accelerator platform. It leverages pre-built containers for the CodeGen gateway, LLM serving (vLLM or TGI), and UI.

1. **Set Deploy Environment Variables:**

## CodeGen Quick Start Deployment

- Go to the Docker Compose directory:

```bash
# Adjust path if your GenAIExamples clone is located elsewhere
cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
```

- Setting variables in the operating system environment:
- Set variable `HUGGINGFACEHUB_API_TOKEN`:

```bash
### Replace the string 'your_huggingfacehub_token' with your Hugging Face Hub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
```

- Edit the environment script for the **vLLM** deployment (`set_env_vllm.sh`):

```bash
nano set_env_vllm.sh
```

- Configure `HOST_IP`, `EXTERNAL_HOST_IP`, `*_PORT` variables, and proxies (`http_proxy`, `https_proxy`, `no_proxy`) as described in the Configuration section below.
- Source the environment variables:

```bash
. set_env_vllm.sh
```
This section describes how to quickly deploy and test the CodeGen service manually on an AMD GPU (ROCm) platform. The basic steps are:

2. **Start the Services (vLLM):**

1. [Prerequisites](#prerequisites)
2. [Generate a HuggingFace Access Token](#generate-a-huggingface-access-token)
3. [Configure the Deployment Environment](#configure-the-deployment-environment)
4. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
5. [Check the Deployment Status](#check-the-deployment-status)
6. [Test the Pipeline](#test-the-pipeline)
7. [Cleanup the Deployment](#cleanup-the-deployment)

```bash
docker compose -f compose_vllm.yaml up -d
```

## Prerequisites

3. **Verify:** Proceed to the [Validate Service Health](#validate-service-health) section after allowing time for services to start.

- Docker and Docker Compose installed.
- x86 Intel or AMD CPU.
- 4x AMD Instinct MI300X Accelerators.
- Git installed (for cloning the repository).
- Hugging Face Hub API Token (for downloading models).
- Access to the internet (or a private model cache).
- Clone the `GenAIExamples` repository:

## Service Overview

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm/
```
When using the default `compose_vllm.yaml` (vLLM-based), the following services are deployed:
Checkout a released version, such as v1.3:

| Service Name | Default Port (Host) | Internal Port | Purpose |
| :--- | :--- | :--- | :--- |
| codegen-vllm-service | `${CODEGEN_VLLM_SERVICE_PORT}` (e.g., 8028) | 8000 | LLM Serving (vLLM on ROCm) |
| codegen-llm-server | `${CODEGEN_LLM_SERVICE_PORT}` (e.g., 9000) | 80 | LLM Microservice Wrapper |
| codegen-backend-server | `${CODEGEN_BACKEND_SERVICE_PORT}` (e.g., 7778) | 80 | CodeGen MegaService/Gateway |
| codegen-ui-server | `${CODEGEN_UI_SERVICE_PORT}` (e.g., 5173) | 80 | Frontend User Interface |

_(Note: Ports are configurable via `set_env_vllm.sh`. Check the script for actual defaults used.)_
_(Note: The TGI deployment (`compose.yaml`) uses `codegen-tgi-service` instead of `codegen-vllm-service`.)_

```bash
git checkout v1.3
```

## Available Deployment Options
@@ -91,6 +82,69 @@ This directory provides different Docker Compose files:

## Configuration Parameters and Usage

### Environment Variables (`set_env*.sh`)

These scripts (`set_env_vllm.sh` for vLLM, `set_env.sh` for TGI) configure crucial parameters passed to the containers.

This example covers the single-node on-premises deployment of the CodeGen example using OPEA components. There are various ways to enable CodeGen, but this example will focus on four options available for deploying the CodeGen pipeline to AMD ROCm AI Accelerators. This example begins with a Quick Start section and then documents how to modify deployments, leverage new models, and configure the number of allocated devices.

This example includes the following sections:

- [CodeGen Quick Start Deployment](#CodeGen-quick-start-deployment): Demonstrates how to quickly deploy a CodeGen application/pipeline on an AMD GPU (ROCm) platform.
- [CodeGen Docker Compose Files](#CodeGen-docker-compose-files): Describes some example deployments and their docker compose files.
- [CodeGen Service Configuration](#CodeGen-service-configuration): Describes the services and possible configuration changes.

**Note** This example requires access to a properly installed AMD ROCm platform with a functional Docker service configured.

## Generate a HuggingFace Access Token

Some HuggingFace resources, such as some models, are only accessible if you have an access token. If you do not already have a HuggingFace access token, you can create one by first creating an account by following the steps provided at [HuggingFace](https://huggingface.co/) and then generating a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).

## Configure the Deployment Environment

### Environment Variables

Key parameters are configured via environment variables set before running `docker compose up`.
| Environment Variable | Description | Default (Set Externally) |
| :--- | :--- | :--- |
| `HOST_IP` | External IP address of the host machine. **Required.** | `your_external_ip_address` |
| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
| `LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
| `EMBEDDING_MODEL_ID` | Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. | `BAAI/bge-base-en-v1.5` |
| `LLM_ENDPOINT` | Internal URL for the LLM serving endpoint (used by `codegen-llm-server`). Configured in `compose.yaml`. | `http://codegen-tgi-server:80/generate` or `http://codegen-vllm-server:8000/v1/chat/completions` |
| `TEI_EMBEDDING_ENDPOINT` | Internal URL for the Embedding service. Configured in `compose.yaml`. | `http://codegen-tei-embedding-server:80/embed` |
| `DATAPREP_ENDPOINT` | Internal URL for the Data Preparation service. Configured in `compose.yaml`. | `http://codegen-dataprep-server:80/dataprep` |
| `BACKEND_SERVICE_ENDPOINT` | External URL for the CodeGen Gateway (MegaService). Derived from `HOST_IP` and port `7778`. | `http://${HOST_IP}:7778/v1/codegen` |
| `*_PORT` (Internal) | Internal container ports (e.g., `80`, `6379`). Defined in `compose.yaml`. | N/A |
| `http_proxy` / `https_proxy` / `no_proxy` | Network proxy settings (if required). | `""` |
To set up environment variables for deploying CodeGen services, source the `set_env.sh` script in this directory:

For TGI:

```bash
export host_ip="External_Public_IP" #ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
export http_proxy="Your_HTTP_Proxy" #http proxy if any
export https_proxy="Your_HTTPs_Proxy" #https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed
export no_proxy=$no_proxy
source ./set_env.sh
```

For vLLM:

```bash
export host_ip="External_Public_IP" #ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
export http_proxy="Your_HTTP_Proxy" #http proxy if any
export https_proxy="Your_HTTPs_Proxy" #https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed
export no_proxy=$no_proxy
source ./set_env_vllm.sh
```

### Docker Compose GPU Configuration

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose files (`compose.yaml`, `compose_vllm.yaml`) for the LLM serving container:
@@ -103,7 +157,6 @@ shm_size: 1g

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/:/dev/dri/
  # - /dev/dri/render128:/dev/dri/render128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

@@ -112,302 +165,329 @@ security_opt:

This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs (e.g., `/dev/dri/card0:/dev/dri/card0`, `/dev/dri/render128:/dev/dri/render128`). Use AMD GPU driver utilities to identify device IDs.
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs (e.g., `/dev/dri/card0:/dev/dri/card0`, `/dev/dri/render128:/dev/dri/render128`). For example:

### Environment Variables (`set_env*.sh`)

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/render128:/dev/dri/render128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

These scripts (`set_env_vllm.sh` for vLLM, `set_env.sh` for TGI) configure crucial parameters passed to the containers.

**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
| Environment Variable | Description | Example Value (Edit in Script) |
| :--- | :--- | :--- |
| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingfacehub_token` |
| `HOST_IP` | Internal/Primary IP address of the host machine. Used for inter-service communication. **Required.** | `192.168.1.100` |
| `EXTERNAL_HOST_IP` | External IP/hostname used to access the UI from outside. Same as `HOST_IP` if no proxy/LB. **Required.** | `192.168.1.100` |
| `CODEGEN_LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
| `CODEGEN_VLLM_SERVICE_PORT` | Host port mapping for the vLLM serving endpoint (in `set_env_vllm.sh`). | `8028` |
| `CODEGEN_TGI_SERVICE_PORT` | Host port mapping for the TGI serving endpoint (in `set_env.sh`). | `8028` |
| `CODEGEN_LLM_SERVICE_PORT` | Host port mapping for the LLM Microservice wrapper. | `9000` |
| `CODEGEN_BACKEND_SERVICE_PORT` | Host port mapping for the CodeGen MegaService/Gateway. | `7778` |
| `CODEGEN_UI_SERVICE_PORT` | Host port mapping for the UI service. | `5173` |
| `http_proxy` | Network HTTP Proxy URL (if required). | `Your_HTTP_Proxy` |
| `https_proxy` | Network HTTPS Proxy URL (if required). | `Your_HTTPs_Proxy` |
| `no_proxy` | Comma-separated list of hosts to bypass proxy. Should include `localhost,127.0.0.1,$HOST_IP`. | `localhost,127.0.0.1` |

### Deploy the Services Using Docker Compose
**How to Use:** Edit the relevant `set_env*.sh` file (`set_env_vllm.sh` for the default) with your values, then source it (`. ./set_env*.sh`) before running `docker compose`.
Please refer to the table below to build different microservices from source:

## Building Docker Images Locally (Optional)
When using the default `compose_vllm.yaml` (vLLM-based), the following services are deployed:

Follow these steps if you need to build the Docker images from source instead of using pre-built ones.

| Service Name | Default Port (Host) | Internal Port | Purpose |
| :--- | :--- | :--- | :--- |
| codegen-vllm-service | `${CODEGEN_VLLM_SERVICE_PORT}` (e.g., 8028) | 8000 | LLM Serving (vLLM on ROCm) |
| codegen-llm-server | `${CODEGEN_LLM_SERVICE_PORT}` (e.g., 9000) | 80 | LLM Microservice Wrapper |
| codegen-backend-server | `${CODEGEN_BACKEND_SERVICE_PORT}` (e.g., 7778) | 80 | CodeGen MegaService/Gateway |
| codegen-ui-server | `${CODEGEN_UI_SERVICE_PORT}` (e.g., 5173) | 80 | Frontend User Interface |

### 1. Setup Build Environment
To deploy the CodeGen services, execute the `docker compose up` command with the appropriate arguments. For a vLLM deployment, execute:

- #### Create application install directory and go to it:

```bash
docker compose -f compose_vllm.yaml up -d
```

```bash
mkdir ~/codegen-install && cd codegen-install
```

The CodeGen docker images should automatically be downloaded from the `OPEA registry` and deployed on the AMD GPU (ROCm) Platform:
### 2. Clone Repositories

```
[+] Running 5/5
 ✔ Network rocm_default              Created    0.3s
 ✔ Container codegen-vllm-service    Healthy    100.9s
 ✔ Container codegen-llm-server      Started    101.2s
 ✔ Container codegen-backend-server  Started    101.5s
 ✔ Container codegen-ui-server       Started    101.9s
```

- #### Clone the repository GenAIExamples (the default repository branch "main" is used here):

To deploy the CodeGen services, execute the `docker compose up` command with the appropriate arguments. For a TGI deployment, execute:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
```

```bash
docker compose up -d
```

If you need to use a specific branch/tag of the GenAIExamples repository, then (replace v1.3 with the desired value):

The CodeGen docker images should automatically be downloaded from the `OPEA registry` and deployed on the AMD GPU (ROCm) Platform:

```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```

```
[+] Running 5/5
 ✔ Network rocm_default              Created    0.4s
 ✔ Container codegen-tgi-service     Healthy    102.6s
 ✔ Container codegen-llm-server      Started    100.2s
 ✔ Container codegen-backend-server  Started    103.7s
 ✔ Container codegen-ui-server       Started    102.9s
```
We remind you that when using a specific version of the code, you need to use the README from that version.

## Building Custom Images (Optional)

- #### Go to the build directory:

If you need to modify the microservices:

```bash
cd ~/codegen-install/GenAIExamples/CodeGen/docker_image_build
```

1. Clone the [OPEA GenAIComps](https://github.com/opea-project/GenAIComps) repository.
2. Follow build instructions in the respective component directories (e.g., `comps/llms/text-generation`, `comps/codegen`, `comps/ui/gradio`, etc.). Use the provided Dockerfiles (e.g., `CodeGen/Dockerfile`, `CodeGen/ui/docker/Dockerfile.gradio`).
3. Tag your custom images appropriately (e.g., `my-custom-codegen:latest`); a minimal sketch follows at the end of this subsection.
4. Update the `image:` fields in the `compose.yaml` file to use your custom image tags.

- Clean up the GenAIComps repository if it was previously cloned in this directory.
  This is necessary if the build was performed earlier and the GenAIComps folder exists and is not empty:

_Refer to the main [CodeGen README](../../../../README.md) for links to relevant GenAIComps components._

```bash
echo Y | rm -R GenAIComps
```
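As referenced in step 3 above, here is a minimal sketch of building and tagging a custom gateway image and pointing the compose file at it. The tag `my-custom-codegen:latest` is the illustrative name from the list, not a published image, and the build context is an assumption based on the Dockerfile path mentioned above:

```bash
# Build the CodeGen gateway image from a GenAIExamples checkout and tag it
cd ~/codegen-install/GenAIExamples
docker build -t my-custom-codegen:latest -f CodeGen/Dockerfile CodeGen
# Then edit compose.yaml (or compose_vllm.yaml) and point the gateway's
# image: entry at my-custom-codegen:latest before running docker compose up.
```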
## Validate Services

- #### Clone the repository GenAIComps (the default repository branch "main" is used here):

### Check the Deployment Status for TGI-based deployment

```bash
git clone https://github.com/opea-project/GenAIComps.git
```

After running docker compose, check if all the containers launched via docker compose have started:

If you use a specific tag of the GenAIExamples repository, then you should also use the corresponding tag for GenAIComps (replace v1.3 with the desired value):

```bash
docker ps -a
```

```bash
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout v1.3
```

For the default deployment, the following containers should have started:

We remind you that when using a specific version of the code, you need to use the README from that version.

```bash
CONTAINER ID   IMAGE                                                      COMMAND                  CREATED         STATUS                   PORTS                                           NAMES
1d08caeae2ed   opea/codegen-ui:latest                                     "docker-entrypoint.s…"   2 minutes ago   Up About a minute        0.0.0.0:18151->5173/tcp, [::]:18151->5173/tcp   codegen-ui-server
f52adc66c116   opea/codegen:latest                                        "python codegen.py"      2 minutes ago   Up About a minute        0.0.0.0:18150->7778/tcp, [::]:18150->7778/tcp   codegen-backend-server
4b1cb8f5d4ff   opea/llm-textgen:latest                                    "bash entrypoint.sh"     2 minutes ago   Up About a minute        0.0.0.0:9000->9000/tcp, :::9000->9000/tcp       codegen-llm-server
3bb4ee0abf15   ghcr.io/huggingface/text-generation-inference:2.4.1-rocm   "/tgi-entrypoint.sh …"   2 minutes ago   Up 2 minutes (healthy)   0.0.0.0:8028->80/tcp, [::]:8028->80/tcp         codegen-tgi-service
```
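If one of the containers above shows `Restarting` or never turns healthy, its logs usually point at the cause. A generic sketch using container names from the listings in this section:

```bash
# Tail the serving container's logs; substitute codegen-vllm-service for the vLLM deployment
docker logs --tail 100 codegen-tgi-service
```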
### 3. Select Services and Build
### Check the Deployment Status for vLLM-based deployment

- #### Setting the list of images for the build (from the build.yaml file)

After running docker compose, check if all the containers launched via docker compose have started:

Select the services corresponding to your desired deployment (vLLM is the default):

```bash
docker ps -a
```

##### vLLM-based application (Default)

For the default deployment, the following containers should have started:

```bash
service_list="vllm-rocm llm-textgen codegen codegen-ui"
```

```bash
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS                    PORTS                                           NAMES
f100cc326343   opea/codegen-ui:latest    "docker-entrypoint.s…"   16 minutes ago   Up 14 minutes             0.0.0.0:18151->5173/tcp, [::]:18151->5173/tcp   codegen-ui-server
c59de0b2da5b   opea/codegen:latest       "python codegen.py"      16 minutes ago   Up 14 minutes             0.0.0.0:18150->7778/tcp, [::]:18150->7778/tcp   codegen-backend-server
dcd83e0e4c0f   opea/llm-textgen:latest   "bash entrypoint.sh"     16 minutes ago   Up 14 minutes             0.0.0.0:9000->9000/tcp, :::9000->9000/tcp       codegen-llm-server
d091d8f2fab6   opea/vllm-rocm:latest     "python3 /workspace/…"   16 minutes ago   Up 16 minutes (healthy)   0.0.0.0:8028->8011/tcp, [::]:8028->8011/tcp     codegen-vllm-service
```

##### TGI-based application
### Test the Pipeline

```bash
service_list="llm-textgen codegen codegen-ui"
```
### If you use vLLM:

- #### Optional. Pull TGI Docker Image (do this if you plan to build/use the TGI variant)

```bash
DATA='{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", '\
'"messages": [{"role": "user", "content": "Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception."}], "max_tokens": 256}'
```

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```

```bash
curl http://${HOST_IP}:${CODEGEN_VLLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

- #### Build Docker Images

Checking the response from the service. The response should be similar to the following JSON:

_Ensure you are in the `~/codegen-install/GenAIExamples/CodeGen/docker_image_build` directory._
````json
{
  "id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
  "object": "chat.completion",
  "created": 1742270316,
  "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
"content": "```python\nfrom typing import Optional, List, Dict, Union\nfrom pydantic import BaseModel, validator\n\nclass OperationRequest(BaseModel):\n # Assuming OperationRequest is already defined as per the given text\n pass\n\nclass UpdateOperation(OperationRequest):\n new_items: List[str]\n\n def apply_and_maybe_raise(self, updatable_item: \"Updatable todo list\") -> None:\n # Assuming updatable_item is an instance of Updatable todo list\n self.validate()\n updatable_item.add_items(self.new_items)\n\nclass Updatable:\n # Abstract class for items that can be updated\n pass\n\nclass TodoList(Updatable):\n # Class that represents a todo list\n items: List[str]\n\n def add_items(self, new_items: List[str]) -> None:\n self.items.extend(new_items)\n\ndef handle_request(operation_request: OperationRequest) -> None:\n # Function to handle an operation request\n if isinstance(operation_request, UpdateOperation):\n operation_request.apply_and_maybe_raise(get_todo_list_for_update())\n else:\n raise ValueError(\"Invalid operation request\")\n\ndef get_todo_list_for_update() -> TodoList:\n # Function to get the todo list for update\n # Assuming this function returns the",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "length",
      "stop_reason": null
    }
  ],
  "usage": { "prompt_tokens": 66, "total_tokens": 322, "completion_tokens": 256, "prompt_tokens_details": null },
  "prompt_logprobs": null
}
````

```bash
docker compose -f build.yaml build ${service_list} --no-cache
```

If the service response has a meaningful response in the value of the "choices.message.content" key, then we consider the vLLM service to be successfully launched.

After the build, check the list of images with the command:

### If you use TGI:

```bash
docker image ls
```
```bash
DATA='{"inputs":"Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception.",'\
'"parameters":{"max_new_tokens":256,"do_sample": true}}'
```

The list of images should include (depending on `service_list`):

```bash
curl http://${HOST_IP}:${CODEGEN_TGI_SERVICE_PORT}/generate \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

###### vLLM-based application:

Checking the response from the service. The response should be similar to the following JSON:

- opea/vllm-rocm:latest
- opea/llm-textgen:latest
- opea/codegen:latest
- opea/codegen-ui:latest
````json
{
"generated_text": " The supported operations are \"add_task\", \"complete_task\", and \"remove_task\". Each operation can be defined with a corresponding function in the API.\n\nAdd your API in the following format:\n\n```\nTODO App API\n\nsupported operations:\n\noperation name description\n----------------------- ------------------------------------------------\n<operation_name> <operation description>\n```\n\nUse type hints for function parameters and return values. Specify a text description of the API's supported operations.\n\nUse the following code snippet as a starting point for your high-level API function:\n\n```\nclass TodoAPI:\n def __init__(self, tasks: List[str]):\n self.tasks = tasks # List of tasks to manage\n\n def add_task(self, task: str) -> None:\n self.tasks.append(task)\n\n def complete_task(self, task: str) -> None:\n self.tasks = [t for t in self.tasks if t != task]\n\n def remove_task(self, task: str) -> None:\n self.tasks = [t for t in self.tasks if t != task]\n\n def handle_request(self, request: Dict[str, str]) -> None:\n operation = request.get('operation')\n if operation == 'add_task':\n self.add_task(request.get('task'))\n elif"
}
````

###### TGI-based application:

- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm (if pulled)
- opea/llm-textgen:latest
- opea/codegen:latest
- opea/codegen-ui:latest

_After building, ensure the `image:` tags in the main `compose_vllm.yaml` or `compose.yaml` (in the `amd/gpu/rocm` directory) match these built images (e.g., `opea/vllm-rocm:latest`)._
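The checks above exercise the serving backends directly. To exercise the full pipeline through the CodeGen gateway, a request can also be sent to the `BACKEND_SERVICE_ENDPOINT` listed in the configuration table (`http://${HOST_IP}:7778/v1/codegen`). The payload shape below is a hedged sketch based on other OPEA CodeGen examples; confirm the exact schema against the main CodeGen README:

```bash
# Assumed request shape for the CodeGen MegaService/Gateway endpoint
curl http://${HOST_IP}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"messages": "Implement a high-level API for a TODO list application."}'
```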
## Validate Service Health

Run these checks after starting the services to ensure they are operational. Focus on the vLLM checks first, as it is the default.

### 1. Validate the vLLM/TGI Service

#### If you use vLLM (Default - using `compose_vllm.yaml` and `set_env_vllm.sh`)

- **How Tested:** Send a POST request with a sample prompt to the vLLM endpoint.
- **CURL Command:**

```bash
DATA='{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", '\
'"messages": [{"role": "user", "content": "Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception."}], "max_tokens": 256}'

curl http://${HOST_IP}:${CODEGEN_VLLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

- **Sample Output:**

```json
{
  "id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
  "object": "chat.completion"
  // ... (rest of output) ...
}
```

- **Expected Result:** A JSON response with a `choices[0].message.content` field containing meaningful generated code.

#### If you use TGI (using `compose.yaml` and `set_env.sh`)

- **How Tested:** Send a POST request with a sample prompt to the TGI endpoint.
- **CURL Command:**

```bash
DATA='{"inputs":"Implement a high-level API for a TODO list application. '\
# ... (data payload as before) ...
'"parameters":{"max_new_tokens":256,"do_sample": true}}'

curl http://${HOST_IP}:${CODEGEN_TGI_SERVICE_PORT}/generate \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

- **Sample Output:**

```json
{
  "generated_text": " The supported operations are \"add_task\", \"complete_task\", and \"remove_task\". # ... (generated code) ..."
}
```

- **Expected Result:** A JSON response with a `generated_text` field containing meaningful generated code. If the response has a meaningful value in the `generated_text` key, the TGI service has been launched successfully.
### 2. Validate the LLM Service

- **Service Name:** `codegen-llm-server`
- **How Tested:** Send a POST request to the LLM microservice wrapper endpoint.
- **CURL Command:**

```bash
DATA='{"query":"Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception.",'\
'"max_tokens":256,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,'\
'"repetition_penalty":1.03,"stream":false}'

curl http://${HOST_IP}:${CODEGEN_LLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

- **Sample Output:** (Structure may vary slightly depending on whether vLLM or TGI is the backend)

````json
{
  "id": "cmpl-4e89a590b1af46bfb37ce8f12b2996f8",
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "logprobs": null,
"text": " The API should support the following operations:\n\n1. Add a new task to the TODO list.\n2. Remove a task from the TODO list.\n3. Mark a task as completed.\n4. Retrieve the list of all tasks.\n\nThe API should also support the following features:\n\n1. The ability to filter tasks based on their completion status.\n2. The ability to sort tasks based on their priority.\n3. The ability to search for tasks based on their description.\n\nHere is an example of how the API can be used:\n\n```python\ntodo_list = []\napi = TodoListAPI(todo_list)\n\n# Add tasks\napi.add_task(\"Buy groceries\")\napi.add_task(\"Finish homework\")\n\n# Mark a task as completed\napi.mark_task_completed(\"Buy groceries\")\n\n# Retrieve the list of all tasks\nprint(api.get_all_tasks())\n\n# Filter tasks based on completion status\nprint(api.filter_tasks(completed=True))\n\n# Sort tasks based on priority\napi.sort_tasks(priority=\"high\")\n\n# Search for tasks based on description\nprint(api.search_tasks(description=\"homework\"))\n```\n\nIn this example, the `TodoListAPI` class is used to manage the TODO list. The `add_task` method adds a new task to the list, the `mark_task_completed` method",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "created": 1742270567,
  "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "object": "text_completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 256,
    "prompt_tokens": 37,
    "total_tokens": 293,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}
````

- **Expected Result:** A JSON response containing meaningful generated code within the `choices` array.

### 3. Validate the MegaService (Backend)

- **Service Name:** `codegen-backend-server`
- **How Tested:** Send a POST request to the main CodeGen gateway endpoint.
- **CURL Command:**

```bash
DATA='{"messages": "Implement a high-level API for a TODO list application. '\
# ... (data payload as before) ...
'If the request is invalid, raise an exception."}'

curl http://${HOST_IP}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen \
  -H "Content-Type: application/json" \
  -d "$DATA"
```

- **Sample Output:**

```textmate
data: {"id":"cmpl-...", ...}
# ... more data chunks ...
data: [DONE]
```

- **Expected Result:** A stream of server-sent events (SSE) containing JSON data with generated code tokens, ending with `data: [DONE]`.
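
To watch the stream as it arrives instead of waiting for buffered output, curl's `--no-buffer` flag can be used (a sketch reusing the same `DATA` payload and gateway port as the CURL command above):

```bash
curl --no-buffer -X POST "http://${HOST_IP}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen" \
  -H "Content-Type: application/json" \
  -d "$DATA"
```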

### 4. Validate the Frontend (UI)

- **Service Name:** `codegen-ui-server`
- **How Tested:** Access the UI URL in a web browser and perform a test query.
- **Steps:** See [How to Open the UI](#how-to-open-the-ui).
- **Expected Result:** The UI loads correctly, and submitting a prompt results in generated code displayed on the page.

## How to Open the UI

1. Determine the UI access URL using the `EXTERNAL_HOST_IP` and `CODEGEN_UI_SERVICE_PORT` variables defined in your sourced `set_env*.sh` file (use `set_env_vllm.sh` for the default vLLM deployment). The default URL format is:
   `http://${EXTERNAL_HOST_IP}:${CODEGEN_UI_SERVICE_PORT}`
   (e.g., `http://192.168.1.100:5173`)

2. Open this URL in your web browser.

3. You should see the CodeGen starting page:

   

4. Enter a prompt in the input field (e.g., "Write a Python code that returns the current time and date") and press Enter or click the submit button.

5. Verify that the generated code appears correctly:

   

## Accessing the User Interface (UI)

Multiple UI options can be configured via the `compose.yaml`.

### Svelte UI (Optional)

1. Modify `compose.yaml`: Comment out the `codegen-gradio-ui-server` service and uncomment/add the `codegen-xeon-ui-server` (Svelte) service definition, ensuring the port mapping is correct (e.g., `"- 5173:5173"`).
2. Restart Docker Compose: `docker compose --profile <profile_name> up -d`
3. Access: `http://{HOST_IP}:5173` (or the host port you mapped).



### VS Code Extension (Optional)

Users can interact with the backend service using the `Neural Copilot` VS Code extension.

1. **Install:** Find and install `Neural Copilot` from the VS Code Marketplace.

   

2. **Configure:** Set the "Service URL" in the extension settings to your CodeGen backend endpoint: `http://${HOST_IP}:7778/v1/codegen` (use the correct port if changed).

   

3. **Usage:**

   - **Inline Suggestion:** Type a comment describing the code you want (e.g., `# Python function to read a file`) and wait for suggestions.

     

   - **Chat:** Use the Neural Copilot panel to chat with the AI assistant about code.

     

## Troubleshooting

- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
- **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
- **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
- **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
- Check container logs (`docker compose -f <file> logs <service_name>`), especially for `codegen-vllm-service` or `codegen-tgi-service`; see the command sketch after this list.
- Ensure `HUGGINGFACEHUB_API_TOKEN` is correct.
- Verify ROCm drivers and Docker setup for GPU access.
- Confirm network connectivity and proxy settings.
- Ensure `HOST_IP` and `EXTERNAL_HOST_IP` are correctly set and accessible.
- If building locally, ensure the build steps completed without error and the image tags match the compose file.
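
The commands below are a minimal triage sketch for the issues above; the compose file, service, and container names assume the default ROCm deployment described in this guide:

```bash
# Tail the LLM serving logs (use compose.yaml / codegen-tgi-service for the TGI variant)
docker compose -f compose_vllm.yaml logs -f codegen-vllm-service

# Confirm the containers are running and inspect their port mappings
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Confirm the GPU device nodes passed into the containers exist on the host
ls -l /dev/kfd /dev/dri
rocm-smi
```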

## Stopping the Application

### If you use vLLM (Default)

To stop the containers associated with the deployment, execute the following command:

```bash
# Ensure you are in the correct directory
# cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```

### If you use TGI

```bash
# Ensure you are in the correct directory
# cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```

The output should look similar to the following:

```bash
[+] Running 5/5
 ✔ Container codegen-ui-server       Removed  10.5s
 ✔ Container codegen-backend-server  Removed  10.4s
 ✔ Container codegen-llm-server      Removed  10.4s
 ✔ Container codegen-tgi-service     Removed   8.0s
 ✔ Network rocm_default              Removed   0.6s
```

### compose.yaml - TGI Deployment

The TGI (Text Generation Inference) deployment and the default deployment differ primarily in their service configurations and specific focus on handling large language models (LLMs). The TGI deployment includes a unique `codegen-tgi-service`, which utilizes the `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm` image and is specifically configured to run on AMD hardware.

| Service Name           | Image Name                                               | AMD Use |
| ---------------------- | -------------------------------------------------------- | ------- |
| codegen-backend-server | opea/codegen:latest                                      | no      |
| codegen-llm-server     | opea/codegen:latest                                      | no      |
| codegen-tgi-service    | ghcr.io/huggingface/text-generation-inference:2.4.1-rocm | yes     |
| codegen-ui-server      | opea/codegen-ui:latest                                   | no      |

### compose_vllm.yaml - vLLM Deployment

The vLLM deployment utilizes AMD devices primarily for the `vllm-service`, which handles large language model (LLM) tasks. This service is configured to maximize the use of AMD's capabilities, potentially allocating multiple devices to enhance parallel processing and throughput.

| Service Name           | Image Name             | AMD Use |
| ---------------------- | ---------------------- | ------- |
| codegen-backend-server | opea/codegen:latest    | no      |
| codegen-llm-server     | opea/codegen:latest    | no      |
| codegen-vllm-service   | opea/vllm-rocm:latest  | yes     |
| codegen-ui-server      | opea/codegen-ui:latest | no      |
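
For reference, ROCm device access for the `codegen-vllm-service` (the only row above marked `yes`) is granted through a device/capability block in the compose file similar to the sketch below; treat it as illustrative and consult the actual `compose_vllm.yaml` for the authoritative settings:

```yaml
codegen-vllm-service:
  image: opea/vllm-rocm:latest
  shm_size: 1g
  devices:
    - /dev/kfd:/dev/kfd
    - /dev/dri:/dev/dri
  cap_add:
    - SYS_PTRACE
  group_add:
    - video
  security_opt:
    - seccomp:unconfined
```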

## CodeGen Service Configuration

The table provides a comprehensive overview of the CodeGen services utilized across various deployments as illustrated in the example Docker Compose files. Each row in the table represents a distinct service, detailing the possible images used to enable it and a concise description of its function within the deployment architecture. These services collectively enable functionalities such as data storage and management, text embedding, retrieval, reranking, and large language model processing.

Example (from ChatQnA):

| Service Name    | Possible Image Names       | Optional | Description                                          |
| --------------- | -------------------------- | -------- | ---------------------------------------------------- |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No       | Acts as a Redis database for storing and managing... |

## Conclusion

In the configuration of the `vllm-service` and the `tgi-service`, two variables play a primary role in determining the service's performance and functionality. The `LLM_MODEL_ID` parameter specifies the particular large language model (LLM) that the service will utilize, effectively determining the capabilities and characteristics of the language processing tasks it can perform. This model identifier ensures that the service is aligned with the specific requirements of the application, whether it involves text generation, comprehension, or other language-related tasks.
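
As an illustration (a sketch that assumes the `CODEGEN_LLM_MODEL_ID` variable name used by the ROCm `set_env*.sh` scripts), the served model can be switched before bringing the stack up:

```bash
export CODEGEN_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"   # or another validated model
source set_env_vllm.sh
docker compose -f compose_vllm.yaml up -d
```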

However, developers need to be aware of the models that have been tested with the respective service images supporting the `vllm-service` and `tgi-service`. For example, the documentation for the OPEA GenAIComps v1.0 release specifies the list of [validated LLM models](https://github.com/opea-project/GenAIComps/blob/v1.0/comps/llms/text-generation/README.md#validated-llm-models) for each AMD ROCm enabled service image. Specific models may have stringent requirements on the number of AMD ROCm devices required to support them.

This guide should enable developers to deploy the default configuration or any of the other compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.

## Next Steps

- Explore the alternative TGI deployment option if needed.
- Refer to the main [CodeGen README](../../../../README.md) for architecture details and links to other deployment methods (Kubernetes, Xeon), as well as benchmarking options.
- Consult the [OPEA GenAIComps](https://github.com/opea-project/GenAIComps) repository for details on individual microservices.

@@ -5,8 +5,8 @@

# SPDX-License-Identifier: Apache-2.0

### The IP address or domain name of the server on which the application is running
export HOST_IP=''
export EXTERNAL_HOST_IP=''
export HOST_IP=${ip_address}
export EXTERNAL_HOST_IP=${ip_address}

### The port of the TGI service. On this port, the TGI service will accept connections
export CODEGEN_TGI_SERVICE_PORT=8028

@@ -36,4 +36,4 @@ export CODEGEN_BACKEND_SERVICE_URL="http://${EXTERNAL_HOST_IP}:${CODEGEN_BACKEND
export CODEGEN_LLM_SERVICE_HOST_IP=${HOST_IP}

### The CodeGen service UI port
export CODEGEN_UI_SERVICE_PORT=18151
export CODEGEN_UI_SERVICE_PORT=5173

@@ -5,8 +5,8 @@

# SPDX-License-Identifier: Apache-2.0

### The IP address or domain name of the server on which the application is running
export HOST_IP=''
export EXTERNAL_HOST_IP=''
export HOST_IP=${ip_address}
export EXTERNAL_HOST_IP=${ip_address}

### The port of the vLLM service. On this port, the TGI service will accept connections
export CODEGEN_VLLM_SERVICE_PORT=8028

@@ -25,7 +25,7 @@ export CODEGEN_LLM_SERVICE_PORT=9000
export CODEGEN_MEGA_SERVICE_HOST_IP=${HOST_IP}

### The port for CodeGen backend service
export CODEGEN_BACKEND_SERVICE_PORT=18150
export CODEGEN_BACKEND_SERVICE_PORT=7778

### The URL of CodeGen backend service, used by the frontend service
export CODEGEN_BACKEND_SERVICE_URL="http://${EXTERNAL_HOST_IP}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen"

@@ -34,4 +34,4 @@ export CODEGEN_BACKEND_SERVICE_URL="http://${EXTERNAL_HOST_IP}:${CODEGEN_BACKEND
export CODEGEN_LLM_SERVICE_HOST_IP=${HOST_IP}

### The CodeGen service UI port
export CODEGEN_UI_SERVICE_PORT=18151
export CODEGEN_UI_SERVICE_PORT=5173

@@ -41,6 +41,7 @@ services:
https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
host_ip: ${host_ip}
VLLM_CPU_KVCACHE_SPACE: 40
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
interval: 10s

@@ -52,6 +52,7 @@ services:
VLLM_SKIP_WARMUP: ${VLLM_SKIP_WARMUP:-false}
NUM_CARDS: ${NUM_CARDS:-1}
VLLM_TORCH_PROFILER_DIR: "/mnt"
VLLM_CPU_KVCACHE_SPACE: 40
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
interval: 10s
33
CodeGen/tests/README.md
Normal file
33
CodeGen/tests/README.md
Normal file
@@ -0,0 +1,33 @@

# CodeGen E2E test scripts

## Set the required environment variable

```bash
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```

## Run test

On Intel Xeon with TGI:

```bash
bash test_compose_on_xeon.sh
```

On Intel Gaudi with TGI:

```bash
bash test_compose_on_gaudi.sh
```

On AMD ROCm with TGI:

```bash
bash test_compose_on_rocm.sh
```

On AMD ROCm with vLLM:

```bash
bash test_compose_vllm_on_rocm.sh
```
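
The test scripts read the image registry and tag from the environment (a sketch; `IMAGE_REPO` and `IMAGE_TAG` are the variable names the scripts echo and map to `REGISTRY`/`TAG` at start-up), so locally built images can be exercised like this:

```bash
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
export IMAGE_REPO=opea     # registry/namespace of the images under test
export IMAGE_TAG=latest    # tag of the images under test
bash test_compose_on_rocm.sh
```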

@@ -10,21 +10,11 @@ echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}
export REDIS_DB_PORT=6379
export REDIS_INSIGHTS_PORT=8001
export REDIS_RETRIEVER_PORT=7000
export EMBEDDER_PORT=6000
export TEI_EMBEDDER_PORT=8090
export DATAPREP_REDIS_PORT=6007

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')

export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export no_proxy=${no_proxy},${ip_address}

function build_docker_images() {
opea_branch=${opea_branch:-"main"}
# If the opea_branch isn't main, replace the git clone branch in Dockerfile.

@@ -58,29 +48,12 @@ function start_services() {
local compose_profile="$1"
local llm_container_name="$2"

cd $WORKPATH/docker_compose/intel/hpu/gaudi

export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
export LLM_ENDPOINT="http://${ip_address}:8028"
cd $WORKPATH/docker_compose
export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-32B-Instruct"
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export MEGA_SERVICE_PORT=7778
export MEGA_SERVICE_HOST_IP=${ip_address}
export LLM_SERVICE_HOST_IP=${ip_address}
export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:${MEGA_SERVICE_PORT}/v1/codegen"
export NUM_CARDS=1
export host_ip=${ip_address}

export REDIS_URL="redis://${host_ip}:${REDIS_DB_PORT}"
export RETRIEVAL_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_COMPONENT_NAME="OPEA_RETRIEVER_REDIS"
export INDEX_NAME="CodeGen"

export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export TEI_EMBEDDING_HOST_IP=${host_ip}
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:${TEI_EMBEDDER_PORT}"
export DATAPREP_ENDPOINT="http://${host_ip}:${DATAPREP_REDIS_PORT}/v1/dataprep"

export INDEX_NAME="CodeGen"
source set_env.sh
cd intel/hpu/gaudi

# Start Docker Containers
docker compose --profile ${compose_profile} up -d | tee ${LOG_PATH}/start_services_with_compose.log

@@ -144,7 +117,7 @@ function validate_microservices() {
"completion_tokens" \
"llm-service" \
"${llm_container_name}" \
'{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", "messages": [{"role": "user", "content": "def print_hello_world():"}], "max_tokens": 256}'
'{"model": "Qwen/Qwen2.5-Coder-32B-Instruct", "messages": [{"role": "user", "content": "def print_hello_world():"}], "max_tokens": 256}'

# llm microservice
validate_services \

@@ -176,7 +149,7 @@ function validate_megaservice() {
# Curl the Mega Service with index_name and agents_flag
validate_services \
"${ip_address}:7778/v1/codegen" \
"" \
"fingerprint" \
"mega-codegen" \
"codegen-gaudi-backend-server" \
'{ "index_name": "test_redis", "agents_flag": "True", "messages": "def print_hello_world():", "max_tokens": 256}'

@@ -225,8 +198,9 @@ function validate_gradio() {

function stop_docker() {
local docker_profile="$1"

cd $WORKPATH/docker_compose/intel/hpu/gaudi
cd $WORKPATH/docker_compose
source set_env.sh
cd intel/hpu/gaudi
docker compose --profile ${docker_profile} down
}

@@ -41,18 +41,7 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/

export CODEGEN_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
export CODEGEN_TGI_SERVICE_PORT=8028
export CODEGEN_TGI_LLM_ENDPOINT="http://${ip_address}:${CODEGEN_TGI_SERVICE_PORT}"
export CODEGEN_LLM_SERVICE_PORT=9000
export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export CODEGEN_MEGA_SERVICE_HOST_IP=${ip_address}
export CODEGEN_LLM_SERVICE_HOST_IP=${ip_address}
export CODEGEN_BACKEND_SERVICE_PORT=7778
export CODEGEN_BACKEND_SERVICE_URL="http://${ip_address}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen"
export CODEGEN_UI_SERVICE_PORT=5173
export HOST_IP=${ip_address}
source set_env.sh

sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

@@ -10,21 +10,11 @@ echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}
export REDIS_DB_PORT=6379
export REDIS_INSIGHTS_PORT=8001
export REDIS_RETRIEVER_PORT=7000
export EMBEDDER_PORT=6000
export TEI_EMBEDDER_PORT=8090
export DATAPREP_REDIS_PORT=6007

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')

export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export no_proxy=${no_proxy},${ip_address}

function build_docker_images() {
opea_branch=${opea_branch:-"main"}
# If the opea_branch isn't main, replace the git clone branch in Dockerfile.

@@ -60,26 +50,11 @@ function start_services() {
local compose_profile="$1"
local llm_container_name="$2"

cd $WORKPATH/docker_compose/intel/cpu/xeon/

export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
export LLM_ENDPOINT="http://${ip_address}:8028"
cd $WORKPATH/docker_compose
export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-32B-Instruct"
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export MEGA_SERVICE_PORT=7778
export MEGA_SERVICE_HOST_IP=${ip_address}
export LLM_SERVICE_HOST_IP=${ip_address}
export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:${MEGA_SERVICE_PORT}/v1/codegen"
export host_ip=${ip_address}

export REDIS_URL="redis://${host_ip}:${REDIS_DB_PORT}"
export RETRIEVAL_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_COMPONENT_NAME="OPEA_RETRIEVER_REDIS"
export INDEX_NAME="CodeGen"

export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export TEI_EMBEDDING_HOST_IP=${host_ip}
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:${TEI_EMBEDDER_PORT}"
export DATAPREP_ENDPOINT="http://${host_ip}:${DATAPREP_REDIS_PORT}/v1/dataprep"
source set_env.sh
cd intel/cpu/xeon/

# Start Docker Containers
docker compose --profile ${compose_profile} up -d > ${LOG_PATH}/start_services_with_compose.log

@@ -143,7 +118,7 @@ function validate_microservices() {
"completion_tokens" \
"llm-service" \
"${llm_container_name}" \
'{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 256}'
'{"model": "Qwen/Qwen2.5-Coder-32B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 256}'

# llm microservice
validate_services \

@@ -175,7 +150,7 @@ function validate_megaservice() {
# Curl the Mega Service with index_name and agents_flag
validate_services \
"${ip_address}:7778/v1/codegen" \
"" \
"fingerprint" \
"mega-codegen" \
"codegen-xeon-backend-server" \
'{ "index_name": "test_redis", "agents_flag": "True", "messages": "def print_hello_world():", "max_tokens": 256}'

@@ -225,7 +200,9 @@ function validate_gradio() {

function stop_docker() {
local docker_profile="$1"

cd $WORKPATH/docker_compose/intel/cpu/xeon/
cd $WORKPATH/docker_compose
source set_env.sh
cd intel/cpu/xeon/
docker compose --profile ${docker_profile} down
}

@@ -40,18 +40,7 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/

export CODEGEN_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
export CODEGEN_VLLM_SERVICE_PORT=8028
export CODEGEN_VLLM_ENDPOINT="http://${ip_address}:${CODEGEN_VLLM_SERVICE_PORT}"
export CODEGEN_LLM_SERVICE_PORT=9000
export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export CODEGEN_MEGA_SERVICE_HOST_IP=${ip_address}
export CODEGEN_LLM_SERVICE_HOST_IP=${ip_address}
export CODEGEN_BACKEND_SERVICE_PORT=7778
export CODEGEN_BACKEND_SERVICE_URL="http://${ip_address}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen"
export CODEGEN_UI_SERVICE_PORT=5173
export HOST_IP=${ip_address}
source set_env.sh

sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

@@ -104,7 +93,7 @@ function validate_microservices() {
"content" \
"codegen-vllm-service" \
"codegen-vllm-service" \
'{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
'{"model": "Qwen/Qwen2.5-Coder-32B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'
sleep 10
# llm microservice
validate_services \

@@ -1,8 +1,10 @@

# Deploy CodeTrans on AMD GPU (ROCm)

# Deploying CodeTrans on AMD ROCm GPU

This document outlines the single node deployment process for a CodeTrans application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on AMD GPU (ROCm) server. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using microservices `llm`.

This document outlines the single node deployment process for a CodeTrans application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on an Intel Xeon server with AMD GPUs. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using microservices `llm`.

# Table of Contents

Note: The default LLM is `Qwen/Qwen2.5-Coder-7B-Instruct`. Before deploying the application, please make sure either you've requested and been granted the access to it on [Huggingface](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) or you've downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).

## Table of Contents

1. [CodeTrans Quick Start Deployment](#codetrans-quick-start-deployment)
2. [CodeTrans Docker Compose Files](#codetrans-docker-compose-files)

@@ -11,7 +13,7 @@ This document outlines the single node deployment process for a CodeTrans applic

## CodeTrans Quick Start Deployment

This section describes how to quickly deploy and test the CodeTrans service manually on an AMD GPU (ROCm) processor. The basic steps are:

This section describes how to quickly deploy and test the CodeTrans service manually on an AMD ROCm GPU. The basic steps are:

1. [Access the Code](#access-the-code)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)

@@ -22,7 +24,7 @@ This section describes how to quickly deploy and test the CodeTrans service manu

### Access the Code

Clone the GenAIExample repository and access the CodeTrans AMD GPU (ROCm) platform Docker Compose files and supporting scripts:

Clone the GenAIExample repository and access the CodeTrans AMD ROCm GPU platform Docker Compose files and supporting scripts:

```bash
git clone https://github.com/opea-project/GenAIExamples.git

@@ -37,29 +39,84 @@ git checkout v1.2

### Configure the Deployment Environment

To set up environment variables for deploying CodeTrans services, set up some parameters specific to the deployment environment and source the `set_env.sh` script in this directory:

To set up environment variables for deploying CodeTrans services, set up some parameters specific to the deployment environment and source the appropriate `set_env_*.sh` script in this directory:

- if you use vLLM - `set_env_vllm.sh`
- if you use TGI - `set_env.sh`

Set the values of the variables:

- **HOST_IP, HOST_IP_EXTERNAL** - These variables are used to configure the name/address of the service in the operating system environment for the application services to interact with each other and with the outside world.

  If your server uses only an internal address and is not accessible from the Internet, then the values for these two variables will be the same and the value will be equal to the server's internal name/address.

  If your server uses only an external, Internet-accessible address, then the values for these two variables will be the same and the value will be equal to the server's external name/address.

  If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then the HOST_IP variable will have a value equal to the internal name/address of the server, and the EXTERNAL_HOST_IP variable will have a value equal to the external name/address of the proxy/firewall/load balancer behind which the server is located.

  We set these values in the `set_env_*.sh` file.

- **Variables with names like `*_PORT`** - These variables set the IP port numbers for establishing network connections to the application services.
  The values shown in `set_env.sh` or `set_env_vllm.sh` are the values used for the development and testing of the application, as well as configured for the environment in which the development is performed. These values must be configured in accordance with the rules of network access to your environment's server, and must not overlap with the IP ports of other applications that are already in use.

Setting variables in the operating system environment:

```bash
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env.sh
source ./set_env_*.sh # replace the script name with the appropriate one
```

Consult the section on [CodeTrans Service configuration](#codetrans-configuration) for information on how service specific configuration parameters affect deployments.

### Deploy the Services Using Docker Compose

To deploy the CodeTrans services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

To deploy the CodeTrans services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below. It uses the 'compose.yaml' file.

```bash
cd docker_compose/amd/gpu/rocm
# if used TGI
docker compose -f compose.yaml up -d
# if used vLLM
# docker compose -f compose_vllm.yaml up -d
```

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:

- compose_vllm.yaml - for vLLM-based application
- compose.yaml - for TGI-based

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri:/dev/dri
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderDN` device IDs. For example:

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderDN` IDs for your GPU.
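
For example (a sketch; the exact numbering of the card and render nodes varies from host to host), the DRM nodes and the GPUs visible to the ROCm stack can be listed with standard tooling:

```bash
# List the card/render device nodes present on the host
ls -l /dev/dri

# Map render nodes to physical GPUs via their PCI bus IDs
ls -l /dev/dri/by-path

# Show the GPUs detected by the ROCm driver stack
rocm-smi
```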

> **Note**: developers should build docker image from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may be different from the published docker image).

@@ -71,9 +128,11 @@ Please refer to the table below to build different microservices from source:

| Microservice | Deployment Guide                                                                                                |
| ------------ | --------------------------------------------------------------------------------------------------------------- |
| vLLM         | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker)   |
| TGI          | [TGI project](https://github.com/huggingface/text-generation-inference.git)                                      |
| LLM          | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms)                               |
| MegaService  | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image)                    |
| UI           | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image)                                |
| MegaService  | [MegaService guide](../../../../README.md)                                                                        |
| UI           | [UI guide](../../../../ui/svelte/README.md)                                                                       |
| Nginx        | [Nginx guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/nginx)                    |

### Check the Deployment Status

@@ -83,15 +142,26 @@ After running docker compose, check if all the containers launched via docker co

```bash
docker ps -a
```

For the default deployment, the following 5 containers should have started:

For the default deployment with TGI, the following 9 containers should have started:

```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b3e1388fa2ca opea/nginx:${RELEASE_VERSION} "/usr/local/bin/star…" 32 hours ago Up 2 hours 0.0.0.0:80->80/tcp, :::80->80/tcp codetrans-nginx-server
3b5fa9a722da opea/codetrans-ui:${RELEASE_VERSION} "docker-entrypoint.s…" 32 hours ago Up 2 hours 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp codetrans-ui-server
d3b37f3d1faa opea/codetrans:${RELEASE_VERSION} "python codetrans.py" 32 hours ago Up 2 hours 0.0.0.0:7777->7777/tcp, :::7777->7777/tcp codetrans-backend-server
24cae0db1a70 opea/llm-textgen:${RELEASE_VERSION} "bash entrypoint.sh" 32 hours ago Up 2 hours 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp codetrans-llm-server
b98fa07a4f5c opea/vllm:${RELEASE_VERSION} "python3 -m vllm.ent…" 32 hours ago Up 2 hours 0.0.0.0:9009->80/tcp, :::9009->80/tcp codetrans-tgi-service
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chaqna-nginx-server
2fce48a4c0f4 opea/codetrans-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp codetrans-ui-server
613c384979f4 opea/codetrans:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp codetrans-backend-server
e0ef1ea67640 opea/llm-textgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp codetrans-llm-server
342f01bfdbb2 ghcr.io/huggingface/text-generation-inference:2.3.1-rocm"python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp codetrans-tgi-service
```

If you use vLLM:

```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chaqna-nginx-server
2fce48a4c0f4 opea/codetrans-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp codetrans-ui-server
613c384979f4 opea/codetrans:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp codetrans-backend-server
e0ef1ea67640 opea/llm-textgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp codetrans-llm-server
342f01bfdbb2 opea/vllm-rocm:latest "python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp codetrans-vllm-service
```

If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.

@@ -109,65 +179,68 @@ curl http://${HOST_IP}:${CODETRANS_BACKEND_SERVICE_PORT}/v1/codetrans \
  -d "$DATA"
```

**Note**: Access the CodeTrans UI by web browser through this URL: `http://${HOST_IP_EXTERNAL}:${CODETRANS_NGINX_PORT}`. Please confirm that the port is open in the firewall. To validate each microservice used in the pipeline, refer to the [Validate Microservices](#validate-microservices) section.

### Cleanup the Deployment

To stop the containers associated with the deployment, execute the following command:

```bash
# if used TGI
docker compose -f compose.yaml down
# if used vLLM
# docker compose -f compose_vllm.yaml down
```

## CodeTrans Docker Compose Files

In the context of deploying a CodeTrans pipeline on an AMD GPU (ROCm) platform, we can pick and choose different large language model serving frameworks. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).

| File                                     | Description                                                                                      |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------ |
| [compose.yaml](./compose.yaml)           | Default compose file using TGI as the serving framework                                           |
| [compose_vllm.yaml](./compose_vllm.yaml) | Uses vLLM as the LLM serving framework. All other configurations remain the same as the default   |

## Validate Microservices

1. LLM backend Service

   In the first startup, this service will take more time to download, load and warm up the model. After it's finished, the service will be ready.

   Try the command below to check whether the LLM serving is ready.

   ```bash
   # vLLM service
   docker logs codetrans-vllm-service 2>&1 | grep complete
   # If the service is ready, you will get the response like below.
   INFO: Application startup complete.
   ```

   ```bash
   # TGI service
   docker logs codetrans-tgi-service | grep Connected
   # If the service is ready, you will get the response like below.
   2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected
   ```

   Then try the `cURL` command below to validate services.

   ```bash
   # either vLLM or TGI service
   # for vllm service
   export port=${CODETRANS_VLLM_SERVICE_PORT}
   # for tgi service
   export port=${CODETRANS_TGI_SERVICE_PORT}
   curl http://${HOST_IP}:${port}/v1/chat/completions \
     -X POST \
     -d '{"inputs":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:","parameters":{"max_new_tokens":17, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```

2. LLM Microservice

   ```bash
   curl http://${HOST_IP}:${CODETRANS_LLM_SERVICE_PORT}/v1/chat/completions \
     -X POST \
     -d '{"query":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:"}' \
     -H 'Content-Type: application/json'
   ```

@@ -43,9 +43,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi

export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"

@@ -42,8 +42,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export CODETRANS_TGI_SERVICE_PORT=8008
export CODETRANS_LLM_SERVICE_PORT=9000
export CODETRANS_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"

@@ -45,8 +45,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"

@@ -41,8 +41,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"

@@ -41,8 +41,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"

@@ -40,8 +40,6 @@ function build_docker_images() {

function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export HOST_IP=${ip_address}
export CODETRANS_VLLM_SERVICE_PORT=8008
export CODETRANS_LLM_SERVICE_PORT=9000

@@ -50,7 +50,7 @@ flowchart LR

### 💬 SQL Query Generation

The key feature of DBQnA app is that it converts a user's natural language query into an SQL query and automatically executes the generated SQL query on the database to return the relevant results. BAsically ask questions to database, receive corresponding SQL query and real-time query execution output, all without needing any SQL knowledge.

The key feature of DBQnA app is that it converts a user's natural language query into an SQL query and automatically executes the generated SQL query on the database to return the relevant results. Basically ask questions to database, receive corresponding SQL query and real-time query execution output, all without needing any SQL knowledge.

---

@@ -1,8 +1,9 @@

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

ARG IMAGE_REPO=opea
ARG BASE_TAG=latest
FROM opea/comps-base:$BASE_TAG
FROM $IMAGE_REPO/comps-base:$BASE_TAG

COPY ./chatqna.py $HOME/chatqna.py

@@ -63,7 +63,7 @@ services:

- ecrag
vllm-openvino-server:
container_name: vllm-openvino-server
image: opea/vllm-arc:latest
image: ${REGISTRY:-opea}/vllm-arc:${TAG:-latest}
ports:
- ${VLLM_SERVICE_PORT:-8008}:80
environment:

@@ -2,35 +2,33 @@

# SPDX-License-Identifier: Apache-2.0

services:
edgecraftrag-server:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./Dockerfile.server
image: ${REGISTRY:-opea}/edgecraftrag-server:${TAG:-latest}
edgecraftrag-ui:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./ui/docker/Dockerfile.ui
image: ${REGISTRY:-opea}/edgecraftrag-ui:${TAG:-latest}
edgecraftrag-ui-gradio:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./ui/docker/Dockerfile.gradio
image: ${REGISTRY:-opea}/edgecraftrag-ui-gradio:${TAG:-latest}
edgecraftrag:
build:
context: ../
args:
IMAGE_REPO: ${REGISTRY}
BASE_TAG: ${TAG}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./Dockerfile
image: ${REGISTRY:-opea}/edgecraftrag:${TAG:-latest}
edgecraftrag-server:
build:
dockerfile: ./Dockerfile.server
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-server:${TAG:-latest}
edgecraftrag-ui:
build:
dockerfile: ./ui/docker/Dockerfile.ui
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-ui:${TAG:-latest}
edgecraftrag-ui-gradio:
build:
dockerfile: ./ui/docker/Dockerfile.gradio
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-ui-gradio:${TAG:-latest}
vllm-arc:
build:
context: GenAIComps
dockerfile: comps/third_parties/vllm/src/Dockerfile.intel_gpu
image: ${REGISTRY:-opea}/vllm-arc:${TAG:-latest}

@@ -30,8 +30,16 @@ HF_ENDPOINT=https://hf-mirror.com

function build_docker_images() {
opea_branch=${opea_branch:-"main"}
cd $WORKPATH/docker_image_build
git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
pushd GenAIComps
echo "GenAIComps test commit is $(git rev-parse HEAD)"
docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
popd && sleep 1s

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="edgecraftrag edgecraftrag-server edgecraftrag-ui"
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

docker images && sleep 1s

@@ -102,16 +110,30 @@ function stop_docker() {

function main() {
mkdir -p $LOG_PATH

echo "::group::stop_docker"
stop_docker
echo "::endgroup::"

echo "::group::build_docker_images"
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
echo "::endgroup::"

echo "::group::start_services"
start_services
echo "EC_RAG service started" && sleep 1s
echo "::endgroup::"

echo "::group::validate_rag"
validate_rag
validate_megaservice
echo "::endgroup::"

echo "::group::validate_megaservice"
validate_megaservice
echo "::endgroup::"

echo "::group::stop_docker"
stop_docker
echo y | docker system prune
echo "::endgroup::"

}

@@ -33,7 +33,14 @@ vLLM_ENDPOINT="http://${HOST_IP}:${VLLM_SERVICE_PORT}"

function build_docker_images() {
opea_branch=${opea_branch:-"main"}
cd $WORKPATH/docker_image_build
git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
pushd GenAIComps
echo "GenAIComps test commit is $(git rev-parse HEAD)"
docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
popd && sleep 1s

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

@@ -152,19 +159,30 @@ function stop_docker() {

function main() {
mkdir -p "$LOG_PATH"

echo "::group::stop_docker"
stop_docker
echo "::endgroup::"

echo "::group::build_docker_images"
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
start_time=$(date +%s)
echo "::endgroup::"

echo "::group::start_services"
start_services
end_time=$(date +%s)
duration=$((end_time-start_time))
echo "EC_RAG service start duration is $duration s" && sleep 1s
echo "::endgroup::"

echo "::group::validate_rag"
validate_rag
validate_megaservice
echo "::endgroup::"

echo "::group::validate_megaservice"
validate_megaservice
echo "::endgroup::"

echo "::group::stop_docker"
stop_docker
echo y | docker system prune
echo "::endgroup::"

}

@@ -241,7 +241,7 @@ docker compose -f compose.yaml up -d

export MILVUS_HOST=${host_ip}
export MILVUS_PORT=19530
export MILVUS_RETRIEVER_PORT=7000
export COLLECTION_NAME=mm_rag_milvus
export COLLECTION_NAME=LangChainCollection
cd GenAIExamples/MultimodalQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose_milvus.yaml up -d
```

@@ -385,6 +385,8 @@ curl --silent --write-out "HTTPSTATUS:%{http_code}" \

Now, test the microservice with posting a custom caption along with an image and a PDF containing images and text. The image caption can be provided as a text (`.txt`) or as spoken audio (`.wav` or `.mp3`).

> Note: Audio captions for images are currently only supported when using the Redis data prep backend.

```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_INGEST_SERVICE_ENDPOINT} \

@@ -226,6 +226,8 @@ services:

- DATAPREP_INGEST_SERVICE_ENDPOINT=${DATAPREP_INGEST_SERVICE_ENDPOINT}
- DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT=${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}
- DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT=${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}
- DATAPREP_GET_FILE_ENDPOINT=${DATAPREP_GET_FILE_ENDPOINT}
- DATAPREP_DELETE_FILE_ENDPOINT=${DATAPREP_DELETE_FILE_ENDPOINT}
- MEGA_SERVICE_PORT:=${MEGA_SERVICE_PORT}
- UI_PORT=${UI_PORT}
- DATAPREP_MMR_PORT=${DATAPREP_MMR_PORT}
@@ -16,7 +16,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/chatqna-conversation-ui](https://hub.docker.com/r/opea/chatqna-conversation-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile.react) | Chatqna React UI. Facilitates interaction with users, enabling chat-based Q&A with conversation history stored in the browser's local storage. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/react/README.md) |
| [opea/chatqna-ui](https://hub.docker.com/r/opea/chatqna-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile) | Chatqna UI entry. Facilitates interaction with users to answer questions | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/svelte/README.md) |
| [opea/codegen](https://hub.docker.com/r/opea/codegen) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/Dockerfile) | Codegen gateway. Provides automatic creation of source code from high-level representations | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/README.md) |
| [opea/codegen-gradio-ui]() | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.gradio) | Codegen Gradio UI entry. Interact with users to generate source code by providing high-level descriptions or inputs. | |
| [opea/codegen-gradio-ui](https://hub.docker.com/r/opea/codegen-gradio-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.gradio) | Codegen Gradio UI entry. Interact with users to generate source code by providing high-level descriptions or inputs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/gradio/README.md) |
| [opea/codegen-react-ui](https://hub.docker.com/r/opea/codegen-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.react) | Codegen React UI. Interact with users to generate appropriate code based on current user input. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/react/README.md) |
| [opea/codegen-ui](https://hub.docker.com/r/opea/codegen-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile) | Codegen UI entry. Facilitates interaction with users, automatically generate code based on user's descriptions | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/svelte/README.md) |
| [opea/codetrans](https://hub.docker.com/r/opea/codetrans) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeTrans/Dockerfile) | Codetrans gateway. Provide services to convert source code written in one programming language to an equivalent version in another programming language. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeTrans/README.md) |
@@ -29,7 +29,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/edgecraftrag](https://hub.docker.com/r/opea/edgecraftrag) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/Dockerfile) | Edge Craft RAG (EC-RAG) gateway. Provides a customizable, production-ready retrieval-enhanced generation system that is optimized for edge solutions. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/README.md) |
| [opea/edgecraftrag-server](https://hub.docker.com/r/opea/edgecraftrag-server) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/Dockerfile.server) | Edge Craft RAG (EC-RAG) server, Provides a customizable, production-ready retrieval-enhanced generation system that is optimized for edge solutions. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/README.md) |
| [opea/edgecraftrag-ui](https://hub.docker.com/r/opea/edgecraftrag-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.ui) | Edge Craft RAG (EC-RAG) UI entry. Ensuring high-quality, performant interactions tailored for edge environments. | |
| [opea/edgecraftrag-ui-gradio]() | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.gradio) | Edge Craft RAG (EC-RAG) Gradio UI entry. Interact with users to provide a customizable, production-ready retrieval-enhanced generation system optimized for edge solutions. | |
| [opea/edgecraftrag-ui-gradio](https://hub.docker.com/r/opea/edgecraftrag-ui-gradio) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.gradio) | Edge Craft RAG (EC-RAG) Gradio UI entry. Interact with users to provide a customizable, production-ready retrieval-enhanced generation system optimized for edge solutions. | |
| [opea/graphrag](https://hub.docker.com/r/opea/graphrag) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/Dockerfile) | GraphRAG gateway, Local and global queries are processed using knowledge graphs extracted from source documents. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/README.md) |
| [opea/graphrag-react-ui](https://hub.docker.com/r/opea/graphrag-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/docker/Dockerfile.react) | Graphrag React UI entry. Facilitates interaction with users, enabling queries and providing relevant answers using knowledge graphs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/react/README.md) |
| [opea/graphrag-ui](https://hub.docker.com/r/opea/graphrag-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/docker/Dockerfile) | Graphrag UI entry. Interact with users to facilitate queries and provide relevant answers using knowledge graphs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/svelte/README.md) |
@@ -54,7 +54,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/animation](https://hub.docker.com/r/opea/animation) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/animation/src/Dockerfile) | OPEA Avatar Animation microservice for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/animation/src/README.md) |
| [opea/asr](https://hub.docker.com/r/opea/asr) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/src/Dockerfile) | OPEA Audio-Speech-Recognition microservice for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/src/README.md) |
| [opea/chathistory-mongo](https://hub.docker.com/r/opea/chathistory-mongo) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/chathistory/src/Dockerfile) | OPEA Chat History microservice is based on a MongoDB database and is designed to allow users to store, retrieve and manage chat conversations. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/chathistory/src/README.md) |
| [opea/comps-base]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/Dockerfile) | OPEA Microservice base image. | [Link](https://github.com/opea-project/GenAIComps/blob/main/README.md) |
| [opea/comps-base](https://hub.docker.com/r/opea/comps-base) | [Link](https://github.com/opea-project/GenAIComps/blob/main/Dockerfile) | OPEA Microservice base image. | [Link](https://github.com/opea-project/GenAIComps/blob/main/README.md) |
| [opea/dataprep](https://hub.docker.com/r/opea/dataprep) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/dataprep/src/Dockerfile) | OPEA data preparation microservices for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/dataprep/README.md) |
| [opea/embedding](https://hub.docker.com/r/opea/embedding) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/embeddings/src/Dockerfile) | OPEA mosec embedding microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/embeddings/src/README.md) |
| [opea/embedding-multimodal-bridgetower](https://hub.docker.com/r/opea/embedding-multimodal-bridgetower) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/bridgetower/src/Dockerfile) | OPEA multimodal embedded microservices based on bridgetower for use by GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/bridgetower/src/README.md) |
@@ -63,7 +63,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/feedbackmanagement-mongo](https://hub.docker.com/r/opea/feedbackmanagement-mongo) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/feedback_management/src/Dockerfile) | OPEA feedback management microservice uses MongoDB database for GenAI applications. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/feedback_management/src/README.md) |
| [opea/finetuning](https://hub.docker.com/r/opea/finetuning) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile) | OPEA Fine-tuning microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/finetuning-gaudi](https://hub.docker.com/r/opea/finetuning-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile.intel_hpu) | OPEA Fine-tuning microservice for GenAI application use on the Gaudi | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/finetuning-xtune]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile.xtune) | OPEA Fine-tuning microservice base on Xtune for GenAI application use on the Arc A770 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/finetuning-xtune](https://hub.docker.com/r/opea/finetuning-xtune) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile.xtune) | OPEA Fine-tuning microservice base on Xtune for GenAI application use on the Arc A770 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/gpt-sovits](https://hub.docker.com/r/opea/gpt-sovits) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/Dockerfile) | OPEA GPT-SoVITS service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md) |
| [opea/guardrails](https://hub.docker.com/r/opea/guardrails) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/guardrails/Dockerfile) | OPEA guardrail microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/guardrails/README.md) |
| [opea/guardrails-bias-detection](https://hub.docker.com/r/opea/guardrails-bias-detection) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/bias_detection/Dockerfile) | OPEA guardrail microservice to provide bias detection for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/bias_detection/README.md) |
@@ -76,19 +76,19 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/image2image-gaudi](https://hub.docker.com/r/opea/image2image-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2image/src/Dockerfile.intel_hpu) | OPEA Image-to-Image microservice for GenAI application use on the Gaudi. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2image/src/README.md) |
| [opea/image2video](https://hub.docker.com/r/opea/image2video) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/Dockerfile) | OPEA image-to-video microservice for GenAI application. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/README.md) |
| [opea/image2video-gaudi](https://hub.docker.com/r/opea/image2video-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/Dockerfile.intel_hpu) | OPEA image-to-video microservice for GenAI application use on the Gaudi. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/README.md) |
| [opea/ipex-llm]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/Dockerfile) | OPEA is a Large Language Model (LLM) service based on intel-extension-for-pytorch. It provides specialized optimizations, including technical points like paged attention, ROPE fusion, etc. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/README.md) |
| [opea/ipex-llm](https://hub.docker.com/r/opea/ipex-llm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/Dockerfile) | OPEA is a Large Language Model (LLM) service based on intel-extension-for-pytorch. It provides specialized optimizations, including technical points like paged attention, ROPE fusion, etc. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/README.md) |
| [opea/llm-docsum](https://hub.docker.com/r/opea/llm-docsum) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/doc-summarization/Dockerfile) | OPEA LLM microservice upon docsum docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/doc-summarization/README.md) |
| [opea/llm-eval](https://hub.docker.com/r/opea/llm-eval) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/utils/lm-eval/Dockerfile) | OPEA LLM microservice upon eval docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/utils/lm-eval/README.md) |
| [opea/llm-faqgen](https://hub.docker.com/r/opea/llm-faqgen) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/faq-generation/Dockerfile) | OPEA FAQ Generation Microservice is designed to generate frequently asked questions from document input using the HuggingFace Text Generation Inference (TGI) framework. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/faq-generation/README.md) |
| [opea/llm-textgen](https://hub.docker.com/r/opea/llm-textgen) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile) | OPEA LLM microservice upon textgen docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README.md) |
| [opea/llm-textgen-gaudi](https://hub.docker.com/r/opea/llm-textgen-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile.intel_hpu) | OPEA LLM microservice upon textgen docker image for GenAI application use on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README.md) |
| [opea/llm-textgen-phi4-gaudi]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile.intel_hpu_phi4) | OPEA LLM microservice upon textgen docker image for GenAI application use on the Gaudi2 with Phi4 optimization. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README_native.md) |
| [opea/llm-textgen-phi4-gaudi](https://hub.docker.com/r/opea/llm-textgen-phi4-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile.intel_hpu_phi4) | OPEA LLM microservice upon textgen docker image for GenAI application use on the Gaudi2 with Phi4 optimization. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README_native.md) |
| [opea/lvm](https://hub.docker.com/r/opea/lvm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/Dockerfile) | OPEA large visual model (LVM) microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/README.md) |
| [opea/lvm-llama-vision](https://hub.docker.com/r/opea/lvm-llama-vision) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile) | OPEA microservice running Llama Vision as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llama-vision-guard](https://hub.docker.com/r/opea/lvm-llama-vision-guard) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile.guard) | OPEA microservice running Llama Vision Guard as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llama-vision-tp](https://hub.docker.com/r/opea/lvm-llama-vision-tp) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile.tp) | OPEA microservice running Llama Vision with DeepSpeed as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llava](https://hub.docker.com/r/opea/lvm-llava) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/Dockerfile) | OPEA microservice running LLaVA as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/README.md) |
| [opea/lvm-llava-gaudi]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/Dockerfile.intel_hpu) | OPEA microservice running LLaVA as a large visualization model (LVM) server for GenAI applications on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/README.md) |
| [opea/lvm-llava-gaudi](https://hub.docker.com/r/opea/lvm-llava-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/Dockerfile.intel_hpu) | OPEA microservice running LLaVA as a large visualization model (LVM) server for GenAI applications on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/README.md) |
| [opea/lvm-predictionguard](https://hub.docker.com/r/opea/lvm-predictionguard) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/predictionguard/src/Dockerfile) | OPEA microservice running PredictionGuard as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/predictionguard/src/README.md) |
| [opea/lvm-video-llama](https://hub.docker.com/r/opea/lvm-video-llama) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/video-llama/src/Dockerfile) | OPEA microservice running Video-Llama as a large visualization model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/video-llama/src/README.md) |
| [opea/nginx](https://hub.docker.com/r/opea/nginx) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/src/Dockerfile) | OPEA nginx microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/deployment/kubernetes/README.md) |
@@ -98,20 +98,20 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/retriever](https://hub.docker.com/r/opea/retriever) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/src/Dockerfile) | OPEA retrieval microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/README.md) |
| [opea/speecht5](https://hub.docker.com/r/opea/speecht5) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/speecht5/src/Dockerfile) | OPEA SpeechT5 service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/speecht5-gaudi](https://hub.docker.com/r/opea/speecht5-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/speecht5/src/Dockerfile.intel_hpu) | OPEA SpeechT5 service on the Gaudi2 for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/struct2graph]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/Dockerfile) | OPEA struct-to-graph service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/README.md) |
| [opea/struct2graph](https://hub.docker.com/r/opea/struct2graph) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/Dockerfile) | OPEA struct-to-graph service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/README.md) |
| [opea/text2cypher-gaudi]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2cypher/src/Dockerfile.intel_hpu) | OPEA Text-to-Cypher microservice for GenAI application use on the Gaudi2. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2cypher/src/README.md) |
| [opea/text2graph]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/Dockerfile) | OPEA Text-to-Graph microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/README.md) |
| [opea/text2graph](https://hub.docker.com/r/opea/text2graph) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/Dockerfile) | OPEA Text-to-Graph microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/README.md) |
| [opea/text2image](https://hub.docker.com/r/opea/text2image) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/Dockerfile) | OPEA text-to-image microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/README.md) |
| [opea/text2image-gaudi](https://hub.docker.com/r/opea/text2image-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/Dockerfile.intel_hpu) | OPEA text-to-image microservice for GenAI application use on the Gaudi | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/README.md) |
| [opea/text2image-ui](https://hub.docker.com/r/opea/text2image-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/Text2Image/ui/docker/Dockerfile) | OPEA text-to-image microservice UI entry for GenAI application | [Link](https://github.com/opea-project/GenAIExamples/blob/main/Text2Image/README.md) |
| [opea/text2sql](https://hub.docker.com/r/opea/text2sql) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2sql/src/Dockerfile) | OPEA text to Structured Query Language microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2sql/src/README.md) |
| [opea/text2sql-react-ui](https://hub.docker.com/r/opea/text2sql-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/DBQnA/ui/docker/Dockerfile.react) | OPEA text to Structured Query Language microservice react UI entry for GenAI application | [Link](https://github.com/opea-project/GenAIExamples/blob/main/DBQnA/README.md) |
| [opea/tts](https://hub.docker.com/r/opea/tts) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/Dockerfile) | OPEA Text-To-Speech microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/vllm](https://hub.docker.com/r/opea/vllm) | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.cpu) | Deploying and servicing VLLM models based on VLLM projects | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/README.md) |
| [opea/vllm](https://hub.docker.com/r/opea/vllm) | [Link](https://github.com/vllm-project/vllm/blob/v0.8.3/docker/Dockerfile.cpu) | Deploying and servicing VLLM models based on VLLM projects | [Link](https://github.com/vllm-project/vllm/blob/v0.8.3/README.md) |
| [opea/vllm-arc](https://hub.docker.com/r/opea/vllm-arc) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/Dockerfile.intel_gpu) | Deploying and servicing VLLM models on Arc based on VLLM projects | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/README.md) |
| [opea/vllm-gaudi](https://hub.docker.com/r/opea/vllm-gaudi) | [Link](https://github.com/HabanaAI/vllm-fork/blob/v0.6.6.post1%2BGaudi-1.20.0/Dockerfile.hpu) | Deploying and servicing VLLM models on Gaudi2 based on VLLM project | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/README.md) |
| [opea/vllm-openvino](https://hub.docker.com/r/opea/vllm-openvino) | [Link](https://github.com/vllm-project/vllm/blob/v0.6.1/Dockerfile.openvino) | VLLM Model for Deploying and Serving Openvino Framework Based on VLLM Project | [Link](https://github.com/vllm-project/vllm/blob/main/README.md) |
| [opea/vllm-rocm]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/Dockerfile.amd_gpu) | Deploying and servicing VLLM models on AMD Rocm based on VLLM project | |
| [opea/vllm-rocm](https://hub.docker.com/r/opea/vllm-rocm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/Dockerfile.amd_gpu) | Deploying and servicing VLLM models on AMD Rocm based on VLLM project | |
| [opea/wav2lip](https://hub.docker.com/r/opea/wav2lip) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/src/Dockerfile) | OPEA Generate lip movements from audio files microservice with Pathway for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/deployment/kubernetes/README.md) |
| [opea/wav2lip-gaudi](https://hub.docker.com/r/opea/wav2lip-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/src/Dockerfile.intel_hpu) | OPEA Generate lip movements from audio files microservice with Pathway for GenAI application use on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/deployment/kubernetes/README.md) |
| [opea/web-retriever](https://hub.docker.com/r/opea/web-retriever)<br> | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/web_retrievers/src/Dockerfile) | OPEA retrieval microservice based on chroma vectordb for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/web_retrievers/src/README.md) |