Compare commits

...

21 Commits

Author SHA1 Message Date
lkk
e380c18d56 Fix ui dockerfile. (#1909)
Signed-off-by: lkk <33276950+lkk12014402@users.noreply.github.com>
(cherry picked from commit ff66600ab4)
2025-05-06 22:55:20 +08:00
lkk
ba0add7690 build cpu agent ui docker image. (#1894)
(cherry picked from commit d334f5c8fd)
2025-05-06 10:27:30 +08:00
Zhu Yongbo
631f30dff8 Install cpu version for components (#1888)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
(cherry picked from commit 555c4100b3)
2025-04-29 13:49:56 +08:00
CICD-at-OPEA
0b55835259 Freeze OPEA images tag
Signed-off-by: CICD-at-OPEA <CICD@opea.dev>
2025-04-25 15:16:03 +00:00
chen, suyue
17355b6719 downgrade tei version from 1.6 to 1.5, fix the chatqna perf regression (#1886)
Signed-off-by: chensuyue <suyue.chen@intel.com>
(cherry picked from commit c546d96e98)
2025-04-25 23:11:56 +08:00
chen, suyue
63277feabb Update benchmark scripts (#1883)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit be5933ad85)
2025-04-25 23:11:50 +08:00
rbrugaro
c956de0f51 README fixes Finance Example (#1882)
Signed-off-by: Rita Brugarolas <rita.brugarolas.brufau@intel.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
(cherry picked from commit 18b4f39f27)
2025-04-25 23:11:44 +08:00
chyundunovDatamonsters
82419b1818 DocSum - refactoring README.md for deploy application on ROCm (#1881)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
(cherry picked from commit ef9290f245)
2025-04-25 23:11:40 +08:00
Artem Astafev
ead526514d Refine README.MD for SearchQnA on AMD ROCm platform (#1876)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
(cherry picked from commit ccc145ea1a)
2025-04-25 23:11:29 +08:00
chyundunovDatamonsters
d1a8f0d07d ChatQnA - refactoring README.md for deploy application on ROCm (#1857)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Chingis Yundunov <c.yundunov@datamonsters.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit bb7a675665)
2025-04-25 23:11:27 +08:00
chen, suyue
48f577571b [CICD enhance] EdgeCraftRAG run CI with latest base image, group logs in GHA outputs. (#1877)
Signed-off-by: chensuyue <suyue.chen@intel.com>
(cherry picked from commit f90a6d2a8e)
2025-04-25 23:11:23 +08:00
chyundunovDatamonsters
55815a3316 CodeTrans - refactoring README.md for deploy application on ROCm with Docker Compose (#1875)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
(cherry picked from commit 1fdab591d9)
2025-04-25 23:11:21 +08:00
chen, suyue
e0fc8d5405 Remove proxy in CodeTrans test (#1874)
Signed-off-by: chensuyue <suyue.chen@intel.com>
(cherry picked from commit 13ea13862a)
2025-04-25 23:11:17 +08:00
ZePan110
c35e86cd08 Update image links. (#1866)
Signed-off-by: ZePan110 <ze.pan@intel.com>
(cherry picked from commit 1787d1ee98)
2025-04-25 23:11:15 +08:00
Artem Astafev
c8f9c58148 Refine README.MD for AMD ROCm docker compose deployment (#1856)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
(cherry picked from commit db4bf1a4c3)
2025-04-25 23:11:12 +08:00
chen, suyue
4eb75e7b25 Set opea_branch for CD test (#1870)
Signed-off-by: chensuyue <suyue.chen@intel.com>
(cherry picked from commit f7002fcb70)
2025-04-25 23:11:11 +08:00
Artem Astafev
4804efc852 Fix compose file and functional tests for Avatarchatbot on AMD ROCm platform (#1872)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
(cherry picked from commit c39c875211)
2025-04-25 23:11:08 +08:00
Artem Astafev
f8d60337e2 Refine AuidoQnA README.MD for AMD ROCm docker compose deployment (#1862)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
(cherry picked from commit c2e9a259fe)
2025-04-25 23:11:06 +08:00
Omar Khleif
c8260d1ef4 Added CodeGen Gradio README link to Docker Images List (#1864)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
(cherry picked from commit 48eaf9c1c9)
2025-04-25 23:11:03 +08:00
Ervin Castelino
599d301ee8 Update README.md of DBQnA (#1855)
Co-authored-by: Ying Hu <ying.hu@intel.com>
(cherry picked from commit a39824f142)
2025-04-25 23:11:02 +08:00
Dina Suehiro Jones
d3a84108af Fixes for MultimodalQnA with the Milvus vector db (#1859)
Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>
(cherry picked from commit e10e6dd002)
2025-04-25 23:11:01 +08:00
79 changed files with 1541 additions and 2151 deletions

View File

@@ -76,6 +76,7 @@ jobs:
example: ${{ inputs.example }}
hardware: ${{ inputs.node }}
use_model_cache: ${{ inputs.use_model_cache }}
opea_branch: ${{ inputs.opea_branch }}
secrets: inherit

View File

@@ -32,6 +32,10 @@ on:
required: false
type: boolean
default: false
opea_branch:
default: "main"
required: false
type: string
jobs:
get-test-case:
runs-on: ubuntu-latest
@@ -169,6 +173,7 @@ jobs:
FINANCIAL_DATASETS_API_KEY: ${{ secrets.FINANCIAL_DATASETS_API_KEY }}
IMAGE_REPO: ${{ inputs.registry }}
IMAGE_TAG: ${{ inputs.tag }}
opea_branch: ${{ inputs.opea_branch }}
example: ${{ inputs.example }}
hardware: ${{ inputs.hardware }}
test_case: ${{ matrix.test_case }}

View File

@@ -7,7 +7,7 @@ source /GenAIExamples/.github/workflows/scripts/change_color
log_dir=/GenAIExamples/.github/workflows/scripts/codeScan
ERROR_WARN=false
find . -type f \( -name "Dockerfile*" \) -print -exec hadolint --ignore DL3006 --ignore DL3007 --ignore DL3008 --ignore DL3013 {} \; > ${log_dir}/hadolint.log
find . -type f \( -name "Dockerfile*" \) -print -exec hadolint --ignore DL3006 --ignore DL3007 --ignore DL3008 --ignore DL3013 --ignore DL3018 --ignore DL3016 {} \; > ${log_dir}/hadolint.log
if [[ $(grep -c "error" ${log_dir}/hadolint.log) != 0 ]]; then
$BOLD_RED && echo "Error!! Please Click on the artifact button to download and check error details." && $RESET

View File

@@ -1,49 +1,203 @@
# Copyright (C) 2024 Intel Corporation
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#FROM python:3.11-slim
FROM node:22.9.0
# syntax=docker/dockerfile:1
# Initialize device type args
# use build args in the docker build command with --build-arg="BUILDARG=true"
ARG USE_CUDA=false
ARG USE_OLLAMA=false
# Tested with cu117 for CUDA 11 and cu121 for CUDA 12 (default)
ARG USE_CUDA_VER=cu121
# any sentence transformer model; models to use can be found at https://huggingface.co/models?library=sentence-transformers
# Leaderboard: https://huggingface.co/spaces/mteb/leaderboard
# for better performance and multilanguage support use "intfloat/multilingual-e5-large" (~2.5GB) or "intfloat/multilingual-e5-base" (~1.5GB)
# IMPORTANT: If you change the embedding model (default: sentence-transformers/all-MiniLM-L6-v2), you won't be able to use RAG Chat with documents previously loaded in the WebUI! You need to re-embed them.
ARG USE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
ARG USE_RERANKING_MODEL=""
ENV LANG=C.UTF-8
ARG ARCH=cpu
# Tiktoken encoding name; models to use can be found at https://huggingface.co/models?library=tiktoken
ARG USE_TIKTOKEN_ENCODING_NAME="cl100k_base"
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
build-essential \
libgl1-mesa-glx \
libjemalloc-dev \
git \
python3-venv
ARG BUILD_HASH=dev-build
# Override at your own risk - non-root configurations are untested
ARG UID=0
ARG GID=0
######## WebUI frontend ########
FROM --platform=$BUILDPLATFORM node:22-alpine3.20 AS build
ARG BUILD_HASH
WORKDIR /app
COPY open_webui_patches /app/patches
ARG WEBUI_VERSION=v0.5.20
RUN apk add --no-cache git
# Clone code and use patch
RUN git config --global user.name "opea" && \
git config --global user.email "" && \
git clone https://github.com/open-webui/open-webui.git
WORKDIR /app/open-webui
RUN git checkout ${WEBUI_VERSION} && git am /app/patches/*.patch
WORKDIR /app
RUN mv open-webui/* . && rm -fr open-webui && ls -lrth /app/backend/
RUN npm install onnxruntime-node --onnxruntime-node-install-cuda=skip
RUN apk update && \
apk add --no-cache wget && \
wget https://github.com/microsoft/onnxruntime/releases/download/v1.20.1/onnxruntime-linux-x64-gpu-1.20.1.tgz
ENV APP_BUILD_HASH=${BUILD_HASH}
RUN npm run build
######## WebUI backend ########
FROM python:3.11-slim-bookworm AS base
# Use args
ARG USE_CUDA
ARG USE_OLLAMA
ARG USE_CUDA_VER
ARG USE_EMBEDDING_MODEL
ARG USE_RERANKING_MODEL
ARG UID
ARG GID
## Basis ##
ENV ENV=prod \
PORT=8080 \
# pass build args to the build
USE_OLLAMA_DOCKER=${USE_OLLAMA} \
USE_CUDA_DOCKER=${USE_CUDA} \
USE_CUDA_DOCKER_VER=${USE_CUDA_VER} \
USE_EMBEDDING_MODEL_DOCKER=${USE_EMBEDDING_MODEL} \
USE_RERANKING_MODEL_DOCKER=${USE_RERANKING_MODEL}
## Basis URL Config ##
ENV OLLAMA_BASE_URL="/ollama" \
OPENAI_API_BASE_URL=""
## API Key and Security Config ##
ENV OPENAI_API_KEY="" \
WEBUI_SECRET_KEY="" \
SCARF_NO_ANALYTICS=true \
DO_NOT_TRACK=true \
ANONYMIZED_TELEMETRY=false
#### Other models #########################################################
## whisper TTS model settings ##
ENV WHISPER_MODEL="base" \
WHISPER_MODEL_DIR="/app/backend/data/cache/whisper/models"
## RAG Embedding model settings ##
ENV RAG_EMBEDDING_MODEL="$USE_EMBEDDING_MODEL_DOCKER" \
RAG_RERANKING_MODEL="$USE_RERANKING_MODEL_DOCKER" \
SENTENCE_TRANSFORMERS_HOME="/app/backend/data/cache/embedding/models"
## Tiktoken model settings ##
ENV TIKTOKEN_ENCODING_NAME="cl100k_base" \
TIKTOKEN_CACHE_DIR="/app/backend/data/cache/tiktoken"
## Hugging Face download cache ##
ENV HF_HOME="/app/backend/data/cache/embedding/models"
## Torch Extensions ##
# ENV TORCH_EXTENSIONS_DIR="/.cache/torch_extensions"
#### Other models ##########################################################
COPY --from=build /app/backend /app/backend
WORKDIR /app/backend
WORKDIR /root/
ENV HOME=/root
ENV VIRTUAL_ENV=$HOME/.env/open-webui
# Create user and group if not root
RUN if [ $UID -ne 0 ]; then \
if [ $GID -ne 0 ]; then \
addgroup --gid $GID app; \
fi; \
adduser --uid $UID --gid $GID --home $HOME --disabled-password --no-create-home app; \
fi
COPY open_webui_patches /root/patches
RUN mkdir -p $HOME/.cache/chroma
RUN printf 00000000-0000-0000-0000-000000000000 > $HOME/.cache/chroma/telemetry_user_id
RUN git clone https://github.com/open-webui/open-webui.git && \
git config --global user.name "opea" && git config --global user.email "" && \
mkdir -p $HOME/.env && python3 -m venv $VIRTUAL_ENV && \
$VIRTUAL_ENV/bin/python -m pip install --no-cache-dir --upgrade pip && \
$VIRTUAL_ENV/bin/python -m pip install --no-cache-dir build
# Make sure the user has access to the app and root directory
RUN chown -R $UID:$GID /app $HOME
WORKDIR /root/open-webui
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN if [ "$USE_OLLAMA" = "true" ]; then \
apt-get update && \
# Install pandoc and netcat
apt-get install -y --no-install-recommends git build-essential pandoc netcat-openbsd curl && \
apt-get install -y --no-install-recommends gcc python3-dev && \
# for RAG OCR
apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
# install helper tools
apt-get install -y --no-install-recommends curl jq && \
# install ollama
curl -fsSL https://ollama.com/install.sh | sh && \
# cleanup
rm -rf /var/lib/apt/lists/*; \
else \
apt-get update && \
# Install pandoc, netcat and gcc
apt-get install -y --no-install-recommends git build-essential pandoc gcc netcat-openbsd curl jq && \
apt-get install -y --no-install-recommends gcc python3-dev && \
# for RAG OCR
apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
# cleanup
rm -rf /var/lib/apt/lists/*; \
fi
RUN git checkout v0.5.20 && \
git am ../patches/*.patch && \
python -m build && \
pip install --no-cache-dir dist/open_webui-0.5.20-py3-none-any.whl
# install python dependencies
# COPY --chown=$UID:$GID ./backend/requirements.txt ./requirements.txt
# RUN cp /app/backend/requirements.txt ./requirements.txt
ENV LANG=en_US.UTF-8
WORKDIR /root/
RUN rm -fr /root/open-webui && rm -fr /root/patches
# CMD ["/bin/bash"]
ENTRYPOINT ["open-webui", "serve"]
RUN pip3 install --no-cache-dir uv && \
if [ "$USE_CUDA" = "true" ]; then \
# If you use CUDA the whisper and embedding model will be downloaded on first use
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/$USE_CUDA_DOCKER_VER --no-cache-dir && \
uv pip install --system -r requirements.txt --no-cache-dir && \
python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; \
else \
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --no-cache-dir && \
uv pip install --system -r requirements.txt --no-cache-dir && \
python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; \
fi; \
chown -R $UID:$GID /app/backend/data/
# copy embedding weight from build
# RUN mkdir -p /root/.cache/chroma/onnx_models/all-MiniLM-L6-v2
# COPY --from=build /app/onnx /root/.cache/chroma/onnx_models/all-MiniLM-L6-v2/onnx
# copy built frontend files
COPY --chown=$UID:$GID --from=build /app/build /app/build
COPY --chown=$UID:$GID --from=build /app/CHANGELOG.md /app/CHANGELOG.md
COPY --chown=$UID:$GID --from=build /app/package.json /app/package.json
# copy backend files
# COPY --chown=$UID:$GID ./backend .
EXPOSE 8080
HEALTHCHECK CMD curl --silent --fail http://localhost:${PORT:-8080}/health | jq -ne 'input.status == true' || exit 1
USER $UID:$GID
ARG BUILD_HASH
ENV WEBUI_BUILD_VERSION=${BUILD_HASH}
ENV DOCKER=true
CMD [ "bash", "start.sh"]
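The `HEALTHCHECK` above pipes the service's `/health` JSON into `jq` and requires `status == true`. As a minimal sketch, the same predicate can be exercised against a literal payload on any host with `jq`; the payload shape here is assumed from the check itself, not confirmed from the service:

```shell
# Sketch: evaluate the HEALTHCHECK's jq predicate against a sample payload.
# The payload shape is an assumption based on the check in the Dockerfile.
payload='{"status": true}'
if command -v jq >/dev/null 2>&1; then
  # -n: no implicit input; `input` reads the piped JSON; -e sets the exit code
  printf '%s' "$payload" | jq -ne 'input.status == true' && echo "healthy"
else
  echo "jq not installed; skipping check"
fi
```

With this sample payload the predicate prints `true` and exits 0, which is the condition that keeps the container marked healthy.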

View File

@@ -3,7 +3,6 @@
ARG IMAGE_REPO=opea
ARG BASE_TAG=latest
FROM opea/comps-base:$BASE_TAG
FROM $IMAGE_REPO/comps-base:$BASE_TAG
COPY ./audioqna.py $HOME/audioqna.py

View File

@@ -3,7 +3,6 @@
ARG IMAGE_REPO=opea
ARG BASE_TAG=latest
FROM opea/comps-base:$BASE_TAG
FROM $IMAGE_REPO/comps-base:$BASE_TAG
COPY ./audioqna_multilang.py $HOME/audioqna_multilang.py

View File

@@ -1,120 +1,59 @@
# Build Mega Service of AudioQnA on AMD ROCm GPU
# Deploying AudioQnA on AMD ROCm GPU
This document outlines the deployment process for an AudioQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice
pipeline on a server with an AMD ROCm GPU platform.
This document outlines the single node deployment process for an AudioQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on a server with AMD ROCm processing accelerators. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using the `llm` microservice.
## Build Docker Images
Note: The default LLM is `Intel/neural-chat-7b-v3-3`. Before deploying the application, please make sure you have either requested and been granted access to it on [Huggingface](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).
### 1. Build Docker Image
## Table of Contents
- #### Create application install directory and go to it:
1. [AudioQnA Quick Start Deployment](#audioqna-quick-start-deployment)
2. [AudioQnA Docker Compose Files](#audioqna-docker-compose-files)
3. [Validate Microservices](#validate-microservices)
4. [Conclusion](#conclusion)
```bash
mkdir ~/audioqna-install && cd ~/audioqna-install
```
## AudioQnA Quick Start Deployment
- #### Clone the GenAIExamples repository (the default branch "main" is used here):
This section describes how to quickly deploy and test the AudioQnA service manually on an AMD ROCm platform. The basic steps are:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
```
1. [Access the Code](#access-the-code)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)
3. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
4. [Check the Deployment Status](#check-the-deployment-status)
5. [Validate the Pipeline](#validate-the-pipeline)
6. [Cleanup the Deployment](#cleanup-the-deployment)
If you need a specific branch/tag of the GenAIExamples repository, check it out instead (replace v1.3 with the desired version):
### Access the Code
```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```
Note that when using a specific version of the code, you should follow the README from that version:
- #### Go to the build directory:
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_image_build
```
- Clean up the GenAIComps repository if it was previously cloned in this directory. This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:
```bash
rm -rf GenAIComps
```
- #### Clone the GenAIComps repository (the default branch "main" is used here):
Clone the GenAIExample repository and access the AudioQnA AMD ROCm platform Docker Compose files and supporting scripts:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/AudioQnA
```
Note that when using a specific version of the code, you should follow the README from that version.
Then check out a released version, such as v1.3:
- #### Set the list of images to build (from build.yaml)
```bash
git checkout v1.3
```
Depending on whether you deploy a vLLM-based or TGI-based application, set the service list as follows:
### Configure the Deployment Environment
#### vLLM-based application
#### Docker Compose GPU Configuration
```bash
service_list="vllm-rocm whisper speecht5 audioqna audioqna-ui"
```
Consult the section on [AudioQnA Service configuration](#audioqna-configuration) for information on how service specific configuration parameters affect deployments.
#### TGI-based application
```bash
service_list="whisper speecht5 audioqna audioqna-ui"
```
- #### Optional: Pull the TGI Docker Image (only needed if you use TGI)
```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```
- #### Build Docker Images
```bash
docker compose -f build.yaml build ${service_list} --no-cache
```
After the build, check the list of images with:
```bash
docker image ls
```
The list of images should include:
##### vLLM-based application:
- opea/vllm-rocm:latest
- opea/whisper:latest
- opea/speecht5:latest
- opea/audioqna:latest
##### TGI-based application:
- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
- opea/whisper:latest
- opea/speecht5:latest
- opea/audioqna:latest
---
## Deploy the AudioQnA Application
### Docker Compose Configuration for AMD GPUs
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
- compose_vllm.yaml - for vLLM-based application
- compose.yaml - for TGI-based
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose files (`compose.yaml`, `compose_vllm.yaml`) for the LLM serving container:
```yaml
# Example for vLLM service in compose_vllm.yaml
# Note: Modern docker compose might use deploy.resources syntax instead.
# Check your docker version and compose file.
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
# - /dev/dri/render128:/dev/dri/render128
cap_add:
- SYS_PTRACE
group_add:
@@ -123,131 +62,161 @@ security_opt:
- seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs. For example:
#### Environment Variables (`set_env*.sh`)
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/render128:/dev/dri/render128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
These scripts (`set_env_vllm.sh` for vLLM, `set_env.sh` for TGI) configure crucial parameters passed to the containers.
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
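As a quick sketch, the device nodes can be enumerated directly, assuming the amdgpu driver is loaded and, optionally, the ROCm utilities are installed; on machines without a GPU the commands simply report that nothing was found:

```shell
# List DRI device nodes (cardN / renderDN) if the amdgpu driver is loaded.
if [ -d /dev/dri ]; then
  ls -l /dev/dri
else
  echo "no /dev/dri found (GPU driver not loaded?)"
fi

# rocm-smi, if installed, maps each GPU to its bus and device IDs.
if command -v rocm-smi >/dev/null 2>&1; then
  rocm-smi
else
  echo "rocm-smi not installed"
fi
```

The `cardN`/`renderDN` names listed under `/dev/dri` are the IDs to reference in the compose file's `devices:` section.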
To set up environment variables for deploying AudioQnA services, set up some parameters specific to the deployment environment and source the `set_env.sh` script in this directory:
### Set deploy environment variables
#### Setting variables in the operating system environment:
##### Set variable HUGGINGFACEHUB_API_TOKEN:
For TGI inference usage:
```bash
### Replace the string 'your_huggingfacehub_token' with your HuggingFacehub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip,whisper-service,speecht5-service,vllm-service,tgi-service,audioqna-xeon-backend-server,audioqna-xeon-ui-server # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env.sh
```
#### Set variable values in the `set_env*.sh` file:
Go to Docker Compose directory:
For vLLM inference usage
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip,whisper-service,speecht5-service,vllm-service,tgi-service,audioqna-xeon-backend-server,audioqna-xeon-ui-server # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env_vllm.sh
```
The example uses the Nano text editor. You can use any convenient text editor:
### Deploy the Services Using Docker Compose
#### If you use vLLM
```bash
nano set_env_vllm.sh
```
#### If you use TGI
```bash
nano set_env.sh
```
If you are in a proxy environment, also set the proxy-related environment variables:
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
Set the values of the variables:
- **HOST_IP, HOST_IP_EXTERNAL** - These variables configure the names/addresses the application services use to reach each other and the outside world.
  If your server has only an internal address, or only an external Internet-accessible address, set both variables to that same name/address.
  If your server has an internal address but is reachable from the Internet through a proxy/firewall/load balancer, set HOST_IP to the server's internal name/address and HOST_IP_EXTERNAL to the external name/address of the proxy/firewall/load balancer in front of it.
  Set these values in the `set_env*.sh` file.
- **Variables with names like `*_PORT`** - These variables set the IP port numbers for network connections to the application services.
  The values shipped in `set_env.sh` and `set_env_vllm.sh` were used for development and testing of the application. Adjust them to your environment's network-access rules, and make sure they do not collide with ports already used by other applications.
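Before assigning a `*_PORT` value it can help to confirm the port is not already taken. A small sketch using bash's `/dev/tcp` redirection (the port numbers below are only illustrative):

```shell
# Return success (0) if nothing is listening on the given local port.
# Uses bash's /dev/tcp pseudo-device; a refused connection means the port is free.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 3006 3008 5173; do
  if port_free "$p"; then
    echo "port $p is free"
  else
    echo "port $p is already in use"
  fi
done
```

`ss -ltn` or `netstat -ltn` give the same information when those tools are available.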
#### Set variables with the `set_env*.sh` script
#### If you use vLLM
```bash
. set_env_vllm.sh
```
#### If you use TGI
```bash
. set_env.sh
```
### Start the services:
#### If you use vLLM
```bash
docker compose -f compose_vllm.yaml up -d
```
#### If you use TGI
To deploy the AudioQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.
For TGI inference deployment:
```bash
cd docker_compose/amd/gpu/rocm
docker compose -f compose.yaml up -d
```
All containers should be running and should not restart:
For vLLM inference deployment:
##### If you use vLLM:
```bash
cd docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml up -d
```
- audioqna-vllm-service
- whisper-service
- speecht5-service
- audioqna-backend-server
- audioqna-ui-server
> **Note**: developers should build docker images from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may differ from the published docker image).
> - Unable to download the docker image.
> - Using a specific version of the Docker image.
##### If you use TGI:
Please refer to the table below to build different microservices from source:
- audioqna-tgi-service
- whisper-service
- speecht5-service
- audioqna-backend-server
- audioqna-ui-server
| Microservice | Deployment Guide |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------- |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| LLM | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms) |
| WHISPER | [Whisper build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/src#211-whisper-server-image) |
| SPEECHT5 | [SpeechT5 build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/tts/src#211-speecht5-server-image) |
| GPT-SOVITS | [GPT-SOVITS build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/gpt-sovits/src#build-the-image) |
| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
---
### Check the Deployment Status
## Validate the Services
After running docker compose, check that all the launched containers have started:
### 1. Validate the vLLM/TGI Service
#### For TGI inference deployment
```bash
docker ps -a
```
For the default deployment, the following containers should have started:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8007690868d opea/audioqna:latest "python audioqna.py" 21 seconds ago Up 19 seconds 0.0.0.0:3008->8888/tcp, [::]:3008->8888/tcp audioqna-rocm-backend-server
87ba9a1d56ae ghcr.io/huggingface/text-generation-inference:2.4.1-rocm "/tgi-entrypoint.sh …" 21 seconds ago Up 20 seconds 0.0.0.0:3006->80/tcp, [::]:3006->80/tcp tgi-service
59e869acd742 opea/speecht5:latest "python speecht5_ser…" 21 seconds ago Up 20 seconds 0.0.0.0:7055->7055/tcp, :::7055->7055/tcp speecht5-service
0143267a4327 opea/whisper:latest "python whisper_serv…" 21 seconds ago Up 20 seconds 0.0.0.0:7066->7066/tcp, :::7066->7066/tcp whisper-service
```
### For vLLM inference deployment
```bash
docker ps -a
```
For the default deployment, the following 5 containers should have started:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e6893a69fa opea/audioqna-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 35 seconds 0.0.0.0:18039->5173/tcp, [::]:18039->5173/tcp audioqna-ui-server
f943e5cd21e9 opea/audioqna:latest "python audioqna.py" 37 seconds ago Up 35 seconds 0.0.0.0:18038->8888/tcp, [::]:18038->8888/tcp audioqna-backend-server
074e8c418f52 opea/speecht5:latest "python speecht5_ser…" 37 seconds ago Up 36 seconds 0.0.0.0:7055->7055/tcp, :::7055->7055/tcp speecht5-service
77abe498e427 opea/vllm-rocm:latest "python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:8081->8011/tcp, [::]:8081->8011/tcp audioqna-vllm-service
9074a95bb7a6 opea/whisper:latest "python whisper_serv…" 37 seconds ago Up 36 seconds 0.0.0.0:7066->7066/tcp, :::7066->7066/tcp whisper-service
```
If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
### Validate the Pipeline
Once the AudioQnA services are running, test the pipeline using the following command:
```bash
# Test the AudioQnA megaservice by recording a .wav file, encoding the file into the base64 format, and then sending the base64 string to the megaservice endpoint.
# The megaservice will return a spoken response as a base64 string. To listen to the response, decode the base64 string and save it as a .wav file.
wget https://github.com/intel/intel-extension-for-transformers/raw/refs/heads/main/intel_extension_for_transformers/neural_chat/assets/audio/sample_2.wav
base64_audio=$(base64 -w 0 sample_2.wav)
# if you are using speecht5 as the tts service, voice can be "default" or "male"
# if you are using gpt-sovits for the tts service, you can set the reference audio following https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md
curl http://${host_ip}:3008/v1/audioqna \
-X POST \
-H "Content-Type: application/json" \
-d "{\"audio\": \"${base64_audio}\", \"max_tokens\": 64, \"voice\": \"default\"}" \
| sed 's/^"//;s/"$//' | base64 -d > output.wav
```
**Note**: Access the AudioQnA UI by web browser through this URL: `http://${host_ip}:5173`. Please confirm the `5173` port is opened in the firewall. To validate each microservice used in the pipeline refer to the [Validate Microservices](#validate-microservices) section.
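The request above depends on a lossless base64 round-trip of the audio bytes. A minimal sketch of that encode/decode step with a stand-in file (`-w 0` disables line wrapping and is GNU coreutils syntax; other platforms may need a different flag):

```shell
# Create a small stand-in for a .wav payload (file name is illustrative).
printf 'RIFF....WAVEfmt ' > sample.wav

# Encode without line wrapping, as done for the megaservice request.
base64_audio=$(base64 -w 0 sample.wav)

# Decode, reversing the encoding exactly as the response handling does.
printf '%s' "$base64_audio" | base64 -d > roundtrip.wav

cmp sample.wav roundtrip.wav && echo "base64 round-trip OK"
```

If `cmp` reports a difference, check for stray quotes or newlines in the string being decoded; the `sed 's/^"//;s/"$//'` step in the pipeline strips the surrounding JSON quotes for the same reason.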
### Cleanup the Deployment
To stop the containers associated with the deployment, execute the following command:
#### If you use vLLM
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```
#### If you use TGI
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```
## AudioQnA Docker Compose Files
In the context of deploying an AudioQnA pipeline on an AMD ROCm platform, we can pick and choose different large language model serving frameworks, or a single-language (English) or multi-language TTS component. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).
| File | Description |
| ---------------------------------------- | ----------------------------------------------------------------------------------------- |
| [compose_vllm.yaml](./compose_vllm.yaml) | Default compose file using vLLM as the serving framework                                  |
| [compose.yaml](./compose.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |
### Validate the vLLM/TGI Service
#### If you use vLLM:
@@ -313,7 +282,7 @@ Checking the response from the service. The response should be similar to JSON:
If the service response contains meaningful text in the value of the "generated_text" key,
the TGI service has launched successfully.
### 2. Validate MegaServices
### Validate MegaServices
Test the AudioQnA megaservice by recording a .wav file, encoding the file into the base64 format, and then sending the
base64 string to the megaservice endpoint. The megaservice will return a spoken response as a base64 string. To listen
@@ -327,7 +296,7 @@ curl http://${host_ip}:3008/v1/audioqna \
-H 'Content-Type: application/json' | sed 's/^"//;s/"$//' | base64 -d > output.wav
```
### 3. Validate MicroServices
### Validate MicroServices
```bash
# whisper service
@@ -343,18 +312,6 @@ curl http://${host_ip}:7055/v1/tts \
-H 'Content-Type: application/json'
```
### 4. Stop application
## Conclusion
#### If you use vLLM
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```
#### If you use TGI
```bash
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```
This guide should enable developers to deploy the default configuration or any of the other compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.

View File

@@ -42,7 +42,7 @@ services:
environment:
TTS_ENDPOINT: ${TTS_ENDPOINT}
tgi-service:
image: ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
image: ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
container_name: tgi-service
ports:
- "${TGI_SERVICE_PORT:-3006}:80"
@@ -66,24 +66,6 @@ services:
- seccomp:unconfined
ipc: host
command: --model-id ${LLM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
llm:
image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
container_name: llm-tgi-server
depends_on:
- tgi-service
ports:
- "3007:9000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
TGI_LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
OPENAI_API_KEY: ${OPENAI_API_KEY}
restart: unless-stopped
wav2lip-service:
image: ${REGISTRY:-opea}/wav2lip:${TAG:-latest}
container_name: wav2lip-service
@@ -125,7 +107,7 @@ services:
container_name: avatarchatbot-backend-server
depends_on:
- asr
- llm
- tgi-service
- tts
- animation
ports:

View File

@@ -30,7 +30,7 @@ export ANIMATION_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_PORT=8888
export ASR_SERVICE_PORT=3001
export TTS_SERVICE_PORT=3002
export LLM_SERVICE_PORT=3007
export LLM_SERVICE_PORT=3006
export ANIMATION_SERVICE_PORT=3008
export DEVICE="cpu"

View File

@@ -27,7 +27,7 @@ function build_docker_images() {
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="avatarchatbot whisper asr llm-textgen speecht5 tts wav2lip animation"
service_list="avatarchatbot whisper asr speecht5 tts wav2lip animation"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
@@ -65,7 +65,7 @@ function start_services() {
export MEGA_SERVICE_PORT=8888
export ASR_SERVICE_PORT=3001
export TTS_SERVICE_PORT=3002
export LLM_SERVICE_PORT=3007
export LLM_SERVICE_PORT=3006
export ANIMATION_SERVICE_PORT=3008
export DEVICE="cpu"

View File

@@ -1,192 +0,0 @@
# ChatQnA Benchmarking
This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive [benchmarking tool](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md) that enables throughput analysis to assess inference performance.
By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.
## Purpose
We aim to run these benchmarks and share them with the OPEA community for three primary reasons:
- To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, etc.
## Metrics
The benchmark reports the following metrics:
- Number of Concurrent Requests
- End-to-End Latency: P50, P90, P99 (in milliseconds)
- End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
- Average Next Token Latency (in milliseconds)
- Average Token Latency (in milliseconds)
- Requests Per Second (RPS)
- Output Tokens Per Second
- Input Tokens Per Second
Results are displayed in the terminal and saved as a CSV file named `1_stats.csv` for easy export to spreadsheets.
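As an illustration of how the latency percentiles above are typically derived (a minimal nearest-rank sketch, not the GenAIEval implementation):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile over a list of latency samples (milliseconds)."""
    ordered = sorted(samples)
    # Nearest-rank definition: the ceil(p/100 * N)-th value, 1-indexed.
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical end-to-end latency samples in milliseconds.
latencies_ms = [120, 95, 210, 130, 400, 115, 180, 98, 250, 105]
for p in (50, 90, 99):
    print(f"P{p}: {percentile(latencies_ms, p)} ms")
```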
## Table of Contents
- [Deployment](#deployment)
- [Prerequisites](#prerequisites)
- [Deployment Scenarios](#deployment-scenarios)
- [Case 1: Baseline Deployment with Rerank](#case-1-baseline-deployment-with-rerank)
- [Case 2: Baseline Deployment without Rerank](#case-2-baseline-deployment-without-rerank)
- [Case 3: Tuned Deployment with Rerank](#case-3-tuned-deployment-with-rerank)
- [Benchmark](#benchmark)
- [Test Configurations](#test-configurations)
- [Test Steps](#test-steps)
- [Upload Retrieval File](#upload-retrieval-file)
- [Run Benchmark Test](#run-benchmark-test)
- [Data collection](#data-collection)
- [Teardown](#teardown)
## Deployment
### Prerequisites
- Kubernetes installation: Use [kubespray](https://github.com/opea-project/docs/blob/main/guide/installation/k8s_install/k8s_install_kubespray.md) or other official Kubernetes installation guides:
- (Optional) [Kubernetes set up guide on Intel Gaudi product](https://github.com/opea-project/GenAIInfra/blob/main/README.md#setup-kubernetes-cluster)
- Helm installation: Follow the [Helm documentation](https://helm.sh/docs/intro/install/#helm) to install Helm.
- Setup Hugging Face Token
To access models and APIs from Hugging Face, set your token as an environment variable.
```bash
export HF_TOKEN="insert-your-huggingface-token-here"
```
- Prepare Shared Models (Optional but Strongly Recommended)
Downloading models simultaneously to multiple nodes in your cluster can overload resources such as network bandwidth, memory and storage. To prevent resource exhaustion, it's recommended to preload the models in advance.
```bash
pip install -U "huggingface_hub[cli]"
sudo mkdir -p /mnt/models
sudo chmod 777 /mnt/models
huggingface-cli download --cache-dir /mnt/models Intel/neural-chat-7b-v3-3
export MODEL_DIR=/mnt/models
```
Once the models are downloaded, you can consider the following methods for sharing them across nodes:
- Persistent Volume Claim (PVC): This is the recommended approach for production setups. For more details on using PVC, refer to [PVC](https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/README.md#using-persistent-volume).
- Local Host Path: For simpler testing, ensure that each node involved in the deployment follows the steps above to locally prepare the models. After preparing the models, use `--set global.modelUseHostPath=${MODEL_DIR}` in the deployment command.
- Label Nodes
```bash
python deploy.py --add-label --num-nodes 2
```
### Deployment Scenarios
The examples below are based on a two-node setup. You can adjust the number of nodes with the `--num-nodes` option.
By default, these commands use the `default` namespace. To specify a different namespace, use the `--namespace` flag with the deploy, uninstall, and kubectl commands. Additionally, update the `namespace` field in `benchmark.yaml` before running the benchmark test.
For additional configuration options, run `python deploy.py --help`.
#### Case 1: Baseline Deployment with Rerank
Deploy Command (with node number, Hugging Face token, model directory specified):
```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank
```
Uninstall Command:
```bash
python deploy.py --uninstall
```
#### Case 2: Baseline Deployment without Rerank
```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2
```
#### Case 3: Tuned Deployment with Rerank
```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank --tuned
```
## Benchmark
### Test Configurations
| Key | Value |
| -------- | ------- |
| Workload | ChatQnA |
| Tag | V1.1 |
Models configuration
| Key | Value |
| ---------- | ------------------ |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| Inference | Intel/neural-chat-7b-v3-3 |
Benchmark parameters
| Key | Value |
| ---------- | ------------------ |
| LLM input tokens | 1024 |
| LLM output tokens | 128 |
Number of test requests for different scheduled node counts:
| Node count | Concurrency | Query number |
| ----- | -------- | -------- |
| 1 | 128 | 640 |
| 2 | 256 | 1280 |
| 4 | 512 | 2560 |
More detailed configuration can be found in configuration file [benchmark.yaml](./benchmark.yaml).
### Test Steps
Use `kubectl get pods` to confirm that all pods are `READY` before starting the test.
#### Upload Retrieval File
Before testing, upload the specified file to ensure the LLM input has a token length of about 1K.
Get the file:
```bash
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file.txt
```
Retrieve the `ClusterIP` of the `chatqna-data-prep` service.
```bash
kubectl get svc
```
Expected output:
```log
chatqna-data-prep ClusterIP xx.xx.xx.xx <none> 6007/TCP 51m
```
Use the following `cURL` command to upload the file:
```bash
cd GenAIEval/evals/benchmark/data
curl -X POST "http://${cluster_ip}:6007/v1/dataprep/ingest" \
-H "Content-Type: multipart/form-data" \
-F "chunk_size=3800" \
-F "files=@./upload_file.txt"
```
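The `-F` flags in the curl command above assemble a `multipart/form-data` request. For clarity, a stdlib-only sketch of what that body looks like (field names match the curl example; this is an illustration, not the data-prep client):

```python
import uuid

def build_ingest_request(file_name: str, file_bytes: bytes, chunk_size: int = 3800):
    """Assemble a multipart/form-data body equivalent to the curl -F flags above."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="chunk_size"\r\n\r\n'
        f"{chunk_size}\r\n"
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="files"; filename="{file_name}"\r\n'
        f"Content-Type: text/plain\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return head + file_bytes + tail, content_type

body, ctype = build_ingest_request("upload_file.txt", b"sample text")
print(ctype.split(";")[0])  # multipart/form-data
```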
#### Run Benchmark Test
Run the benchmark test using:
```bash
bash benchmark.sh -n 2
```
The `-n` argument specifies the number of test nodes. Required dependencies will be automatically installed when running the benchmark for the first time.
#### Data collection
All test results are saved to the folder `GenAIEval/evals/benchmark/benchmark_output`.
## Teardown
After completing the benchmark, use the following command to clean up the environment:
Remove Node Labels:
```bash
python deploy.py --delete-label
```

View File

@@ -1,102 +0,0 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
deployment_type="k8s"
node_number=1
service_port=8888
query_per_node=640
benchmark_tool_path="$(pwd)/GenAIEval"
usage() {
echo "Usage: $0 [-d deployment_type] [-n node_number] [-i service_ip] [-p service_port]"
echo " -d deployment_type ChatQnA deployment type, select between k8s and docker (default: k8s)"
echo " -n node_number Test node number, required only for k8s deployment_type, (default: 1)"
echo " -i service_ip chatqna service ip, required only for docker deployment_type"
echo " -p service_port chatqna service port, required only for docker deployment_type, (default: 8888)"
exit 1
}
while getopts ":d:n:i:p:" opt; do
case ${opt} in
d )
deployment_type=$OPTARG
;;
n )
node_number=$OPTARG
;;
i )
service_ip=$OPTARG
;;
p )
service_port=$OPTARG
;;
\? )
echo "Invalid option: -$OPTARG" 1>&2
usage
;;
: )
echo "Invalid option: -$OPTARG requires an argument" 1>&2
usage
;;
esac
done
if [[ "$deployment_type" == "docker" && -z "$service_ip" ]]; then
echo "Error: service_ip is required for docker deployment_type" 1>&2
usage
fi
if [[ "$deployment_type" == "k8s" && ( -n "$service_ip" || -n "$service_port" ) ]]; then
echo "Warning: service_ip and service_port are ignored for k8s deployment_type" 1>&2
fi
function main() {
if [[ ! -d ${benchmark_tool_path} ]]; then
echo "Benchmark tool not found, setting up..."
setup_env
fi
run_benchmark
}
function setup_env() {
git clone https://github.com/opea-project/GenAIEval.git
pushd ${benchmark_tool_path}
python3 -m venv stress_venv
source stress_venv/bin/activate
pip install -r requirements.txt
popd
}
function run_benchmark() {
source ${benchmark_tool_path}/stress_venv/bin/activate
export DEPLOYMENT_TYPE=${deployment_type}
export SERVICE_IP=${service_ip:-"None"}
export SERVICE_PORT=${service_port:-"None"}
export LOAD_SHAPE=${load_shape:-"constant"}
export CONCURRENT_LEVEL=${concurrent_level:-5}
export ARRIVAL_RATE=${arrival_rate:-1.0}
if [[ -z $USER_QUERIES ]]; then
user_query=$((query_per_node*node_number))
export USER_QUERIES="[${user_query}, ${user_query}, ${user_query}, ${user_query}]"
echo "USER_QUERIES not configured, setting to: ${USER_QUERIES}."
fi
export WARMUP=$(echo $USER_QUERIES | sed -e 's/[][]//g' -e 's/,.*//')
if [[ -z $WARMUP ]]; then export WARMUP=0; fi
if [[ -z $TEST_OUTPUT_DIR ]]; then
if [[ $DEPLOYMENT_TYPE == "k8s" ]]; then
export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/node_${node_number}"
else
export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/docker"
fi
echo "TEST_OUTPUT_DIR not configured, setting to: ${TEST_OUTPUT_DIR}."
fi
envsubst < ./benchmark.yaml > ${benchmark_tool_path}/evals/benchmark/benchmark.yaml
cd ${benchmark_tool_path}/evals/benchmark
python benchmark.py
}
main
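The `USER_QUERIES`/`WARMUP` derivation in `run_benchmark` can be restated in Python for clarity (same defaults as the script above: four equal query batches, warm-up equal to the first batch size):

```python
def derive_benchmark_params(node_number: int, query_per_node: int = 640, user_queries: str = None):
    """Mirror the shell logic: build USER_QUERIES and extract WARMUP from its first element."""
    if user_queries is None:
        per_run = query_per_node * node_number
        user_queries = "[" + ", ".join([str(per_run)] * 4) + "]"
    # WARMUP corresponds to the sed expression: strip brackets, keep the first element.
    warmup = int(user_queries.strip("[]").split(",")[0])
    return user_queries, warmup

queries, warmup = derive_benchmark_params(node_number=2)
print(queries, warmup)  # [1280, 1280, 1280, 1280] 1280
```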

View File

@@ -1,68 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
test_suite_config: # Overall configuration settings for the test suite
examples: ["chatqna"] # The specific test cases being tested, e.g., chatqna, codegen, codetrans, faqgen, audioqna, visualqna
deployment_type: ${DEPLOYMENT_TYPE} # Default is "k8s", can also be "docker"
service_ip: ${SERVICE_IP} # Leave as None for k8s, specify for Docker
service_port: ${SERVICE_PORT} # Leave as None for k8s, specify for Docker
warm_ups: ${WARMUP} # Number of test requests for warm-up
run_time: 60m # The max total run time for the test suite
seed: # The seed for all RNGs
user_queries: ${USER_QUERIES} # Number of test requests at each concurrency level
query_timeout: 120 # Number of seconds to wait for a simulated user to complete any executing task before exiting. 120 sec by default.
random_prompt: false # Use random prompts if true, fixed prompts if false
collect_service_metric: false # Collect service metrics if true, do not collect service metrics if false
data_visualization: false # Generate data visualization if true, do not generate data visualization if false
llm_model: "Intel/neural-chat-7b-v3-3" # The LLM model used for the test
test_output_dir: "${TEST_OUTPUT_DIR}" # The directory to store the test output
load_shape: # Tenant concurrency pattern
name: ${LOAD_SHAPE} # poisson or constant (locust default load shape)
params: # Loadshape-specific parameters
constant: # Constant load shape specific parameters, activate only if load_shape.name is constant
concurrent_level: ${CONCURRENT_LEVEL} # If user_queries is specified, concurrent_level is target number of requests per user. If not, it is the number of simulated users
poisson: # Poisson load shape specific parameters, activate only if load_shape.name is poisson
arrival_rate: ${ARRIVAL_RATE} # Request arrival rate
test_cases:
chatqna:
embedding:
run_test: false
service_name: "chatqna-embedding-usvc" # Replace with your service name
embedserve:
run_test: false
service_name: "chatqna-tei" # Replace with your service name
retriever:
run_test: false
service_name: "chatqna-retriever-usvc" # Replace with your service name
parameters:
search_type: "similarity"
k: 1
fetch_k: 20
lambda_mult: 0.5
score_threshold: 0.2
reranking:
run_test: false
service_name: "chatqna-reranking-usvc" # Replace with your service name
parameters:
top_n: 1
rerankserve:
run_test: false
service_name: "chatqna-teirerank" # Replace with your service name
llm:
run_test: false
service_name: "chatqna-llm-uservice" # Replace with your service name
parameters:
max_tokens: 128
temperature: 0.01
top_k: 10
top_p: 0.95
repetition_penalty: 1.03
stream: true
llmserve:
run_test: false
service_name: "chatqna-tgi" # Replace with your service name
e2e:
run_test: true
service_name: "chatqna" # Replace with your service name
k: 1

View File

@@ -1,278 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import glob
import json
import os
import shutil
import subprocess
import sys
from generate_helm_values import generate_helm_values
def run_kubectl_command(command):
"""Run a kubectl command and return the output."""
try:
result = subprocess.run(command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
return result.stdout
except subprocess.CalledProcessError as e:
print(f"Error running command: {command}\n{e.stderr}")
exit(1)
def get_all_nodes():
"""Get the list of all nodes in the Kubernetes cluster."""
command = ["kubectl", "get", "nodes", "-o", "json"]
output = run_kubectl_command(command)
nodes = json.loads(output)
return [node["metadata"]["name"] for node in nodes["items"]]
def add_label_to_node(node_name, label):
"""Add a label to the specified node."""
command = ["kubectl", "label", "node", node_name, label, "--overwrite"]
print(f"Labeling node {node_name} with {label}...")
run_kubectl_command(command)
print(f"Label {label} added to node {node_name} successfully.")
def add_labels_to_nodes(node_count=None, label=None, node_names=None):
"""Add a label to the specified number of nodes or to specified nodes."""
if node_names:
# Add label to the specified nodes
for node_name in node_names:
add_label_to_node(node_name, label)
else:
# Fetch the node list and label the specified number of nodes
all_nodes = get_all_nodes()
if node_count is None or node_count > len(all_nodes):
print(f"Error: Node count exceeds the number of available nodes ({len(all_nodes)} available).")
sys.exit(1)
selected_nodes = all_nodes[:node_count]
for node_name in selected_nodes:
add_label_to_node(node_name, label)
def clear_labels_from_nodes(label, node_names=None):
"""Clear the specified label from specific nodes if provided, otherwise from all nodes."""
label_key = label.split("=")[0] # Extract key from 'key=value' format
# If specific nodes are provided, use them; otherwise, get all nodes
nodes_to_clear = node_names if node_names else get_all_nodes()
for node_name in nodes_to_clear:
# Check if the node has the label by inspecting its metadata
command = ["kubectl", "get", "node", node_name, "-o", "json"]
node_info = run_kubectl_command(command)
node_metadata = json.loads(node_info)
# Check if the label exists on this node
labels = node_metadata["metadata"].get("labels", {})
if label_key in labels:
# Remove the label from the node
command = ["kubectl", "label", "node", node_name, f"{label_key}-"]
print(f"Removing label {label_key} from node {node_name}...")
run_kubectl_command(command)
print(f"Label {label_key} removed from node {node_name} successfully.")
else:
print(f"Label {label_key} not found on node {node_name}, skipping.")
def install_helm_release(release_name, chart_name, namespace, values_file, device_type):
"""Deploy a Helm release with a specified name and chart.
Parameters:
- release_name: The name of the Helm release.
- chart_name: The Helm chart name or path, e.g., "opea/chatqna".
- namespace: The Kubernetes namespace for deployment.
- values_file: The user values file for deployment.
- device_type: The device type (e.g., "gaudi") for specific configurations (optional).
"""
# Check if the namespace exists; if not, create it
try:
# Check if the namespace exists
command = ["kubectl", "get", "namespace", namespace]
subprocess.run(command, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
except subprocess.CalledProcessError:
# Namespace does not exist, create it
print(f"Namespace '{namespace}' does not exist. Creating it...")
command = ["kubectl", "create", "namespace", namespace]
subprocess.run(command, check=True)
print(f"Namespace '{namespace}' created successfully.")
# Handle gaudi-specific values file if device_type is "gaudi"
hw_values_file = None
untar_dir = None
if device_type == "gaudi":
print("Device type is gaudi. Pulling Helm chart to get gaudi-values.yaml...")
# Combine chart_name with fixed prefix
chart_pull_url = f"oci://ghcr.io/opea-project/charts/{chart_name}"
# Pull and untar the chart
subprocess.run(["helm", "pull", chart_pull_url, "--untar"], check=True)
# Find the untarred directory
untar_dirs = glob.glob(f"{chart_name}*")
if untar_dirs:
untar_dir = untar_dirs[0]
hw_values_file = os.path.join(untar_dir, "gaudi-values.yaml")
print("gaudi-values.yaml pulled and ready for use.")
else:
print(f"Error: Could not find untarred directory for {chart_name}")
return
# Prepare the Helm install command
command = ["helm", "install", release_name, chart_name, "--namespace", namespace]
# Append additional values file for gaudi if it exists
if hw_values_file:
command.extend(["-f", hw_values_file])
# Append the main values file
command.extend(["-f", values_file])
# Execute the Helm install command
try:
print(f"Running command: {' '.join(command)}") # Print full command for debugging
subprocess.run(command, check=True)
print("Deployment initiated successfully.")
except subprocess.CalledProcessError as e:
print(f"Error occurred while deploying Helm release: {e}")
# Cleanup: Remove the untarred directory
if untar_dir and os.path.isdir(untar_dir):
print(f"Removing temporary directory: {untar_dir}")
shutil.rmtree(untar_dir)
print("Temporary directory removed successfully.")
def uninstall_helm_release(release_name, namespace=None):
"""Uninstall a Helm release and clean up resources, optionally delete the namespace if not 'default'."""
# Default to 'default' namespace if none is specified
if not namespace:
namespace = "default"
try:
# Uninstall the Helm release
command = ["helm", "uninstall", release_name, "--namespace", namespace]
print(f"Uninstalling Helm release {release_name} in namespace {namespace}...")
run_kubectl_command(command)
print(f"Helm release {release_name} uninstalled successfully.")
# If the namespace is specified and not 'default', delete it
if namespace != "default":
print(f"Deleting namespace {namespace}...")
delete_namespace_command = ["kubectl", "delete", "namespace", namespace]
run_kubectl_command(delete_namespace_command)
print(f"Namespace {namespace} deleted successfully.")
else:
print("Namespace is 'default', skipping deletion.")
except subprocess.CalledProcessError as e:
print(f"Error occurred while uninstalling Helm release or deleting namespace: {e}")
def main():
parser = argparse.ArgumentParser(description="Manage Helm Deployment.")
parser.add_argument(
"--release-name",
type=str,
default="chatqna",
help="The Helm release name created during deployment (default: chatqna).",
)
parser.add_argument(
"--chart-name",
type=str,
default="chatqna",
help="The chart name to deploy, composed of repo name and chart name (default: chatqna).",
)
parser.add_argument("--namespace", default="default", help="Kubernetes namespace (default: default).")
parser.add_argument("--hf-token", help="Hugging Face API token.")
parser.add_argument(
"--model-dir", help="Model directory, mounted as volumes for service access to pre-downloaded models"
)
parser.add_argument("--user-values", help="Path to a user-specified values.yaml file.")
parser.add_argument(
"--create-values-only", action="store_true", help="Only create the values.yaml file without deploying."
)
parser.add_argument("--uninstall", action="store_true", help="Uninstall the Helm release.")
parser.add_argument("--num-nodes", type=int, default=1, help="Number of nodes to use (default: 1).")
parser.add_argument("--node-names", nargs="*", help="Optional specific node names to label.")
parser.add_argument("--add-label", action="store_true", help="Add label to specified nodes if this flag is set.")
parser.add_argument(
"--delete-label", action="store_true", help="Delete label from specified nodes if this flag is set."
)
parser.add_argument(
"--label", default="node-type=opea-benchmark", help="Label to add/delete (default: node-type=opea-benchmark)."
)
parser.add_argument("--with-rerank", action="store_true", help="Include rerank service in the deployment.")
parser.add_argument(
"--tuned",
action="store_true",
help="Modify resources for services and change extraCmdArgs when creating values.yaml.",
)
parser.add_argument(
"--device-type",
type=str,
choices=["cpu", "gaudi"],
default="gaudi",
help="Specify the device type for deployment (choices: 'cpu', 'gaudi'; default: gaudi).",
)
args = parser.parse_args()
# Adjust num-nodes based on node-names if specified
if args.node_names:
num_node_names = len(args.node_names)
if args.num_nodes != 1 and args.num_nodes != num_node_names:
parser.error("--num-nodes must match the number of --node-names if both are specified.")
else:
args.num_nodes = num_node_names
# Node labeling management
if args.add_label:
add_labels_to_nodes(args.num_nodes, args.label, args.node_names)
return
elif args.delete_label:
clear_labels_from_nodes(args.label, args.node_names)
return
# Uninstall Helm release if specified
if args.uninstall:
uninstall_helm_release(args.release_name, args.namespace)
return
# Prepare values.yaml if not uninstalling
if args.user_values:
values_file_path = args.user_values
else:
if not args.hf_token:
parser.error("--hf-token is required")
node_selector = {args.label.split("=")[0]: args.label.split("=")[1]}
values_file_path = generate_helm_values(
with_rerank=args.with_rerank,
num_nodes=args.num_nodes,
hf_token=args.hf_token,
model_dir=args.model_dir,
node_selector=node_selector,
tune=args.tuned,
)
# Read back the generated YAML file for verification
with open(values_file_path, "r") as file:
print("Generated YAML contents:")
print(file.read())
# Deploy unless --create-values-only is specified
if not args.create_values_only:
install_helm_release(args.release_name, args.chart_name, args.namespace, values_file_path, args.device_type)
if __name__ == "__main__":
main()

View File

@@ -1,164 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
import yaml
def generate_helm_values(with_rerank, num_nodes, hf_token, model_dir, node_selector=None, tune=False):
"""Create a values.yaml file based on the provided configuration."""
# Log the received parameters
print("Received parameters:")
print(f"with_rerank: {with_rerank}")
print(f"num_nodes: {num_nodes}")
print(f"node_selector: {node_selector}") # Log the node_selector
print(f"tune: {tune}")
if node_selector is None:
node_selector = {}
# Construct the base values dictionary
values = {
"tei": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"tgi": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"data-prep": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"redis-vector-db": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"retriever-usvc": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"chatqna-ui": {"nodeSelector": {key: value for key, value in node_selector.items()}},
"global": {
"HUGGINGFACEHUB_API_TOKEN": hf_token, # Use passed token
"modelUseHostPath": model_dir, # Use passed model directory
},
"nodeSelector": {key: value for key, value in node_selector.items()},
}
if with_rerank:
values["teirerank"] = {"nodeSelector": {key: value for key, value in node_selector.items()}}
else:
values["image"] = {"repository": "opea/chatqna-without-rerank"}
values["teirerank"] = {"enabled": False}
default_replicas = [
{"name": "chatqna", "replicaCount": 2},
{"name": "tei", "replicaCount": 1},
{"name": "teirerank", "replicaCount": 1} if with_rerank else None,
{"name": "tgi", "replicaCount": 7 if with_rerank else 8},
{"name": "data-prep", "replicaCount": 1},
{"name": "redis-vector-db", "replicaCount": 1},
{"name": "retriever-usvc", "replicaCount": 2},
]
if num_nodes > 1:
# Scale replicas based on number of nodes
replicas = [
{"name": "chatqna", "replicaCount": 1 * num_nodes},
{"name": "tei", "replicaCount": 1 * num_nodes},
{"name": "teirerank", "replicaCount": 1} if with_rerank else None,
{"name": "tgi", "replicaCount": (8 * num_nodes - 1) if with_rerank else 8 * num_nodes},
{"name": "data-prep", "replicaCount": 1},
{"name": "redis-vector-db", "replicaCount": 1},
{"name": "retriever-usvc", "replicaCount": 1 * num_nodes},
]
else:
replicas = default_replicas
# Remove None values for rerank disabled
replicas = [r for r in replicas if r]
# Update values.yaml with replicas
for replica in replicas:
service_name = replica["name"]
if service_name == "chatqna":
values["replicaCount"] = replica["replicaCount"]
print(replica["replicaCount"])
elif service_name in values:
values[service_name]["replicaCount"] = replica["replicaCount"]
# Prepare resource configurations based on tuning
resources = []
if tune:
resources = [
{
"name": "chatqna",
"resources": {
"limits": {"cpu": "16", "memory": "8000Mi"},
"requests": {"cpu": "16", "memory": "8000Mi"},
},
},
{
"name": "tei",
"resources": {
"limits": {"cpu": "80", "memory": "20000Mi"},
"requests": {"cpu": "80", "memory": "20000Mi"},
},
},
{"name": "teirerank", "resources": {"limits": {"habana.ai/gaudi": 1}}} if with_rerank else None,
{"name": "tgi", "resources": {"limits": {"habana.ai/gaudi": 1}}},
{"name": "retriever-usvc", "resources": {"requests": {"cpu": "8", "memory": "8000Mi"}}},
]
# Filter out any None values directly as part of initialization
resources = [r for r in resources if r is not None]
# Add resources for each service if tuning
for resource in resources:
service_name = resource["name"]
if service_name == "chatqna":
values["resources"] = resource["resources"]
elif service_name in values:
values[service_name]["resources"] = resource["resources"]
# Add extraCmdArgs for tgi service with default values
if "tgi" in values:
values["tgi"]["extraCmdArgs"] = [
"--max-input-length",
"1280",
"--max-total-tokens",
"2048",
"--max-batch-total-tokens",
"65536",
"--max-batch-prefill-tokens",
"4096",
]
yaml_string = yaml.dump(values, default_flow_style=False)
# Determine the mode based on the 'tune' parameter
mode = "tuned" if tune else "oob"
# Determine the filename based on 'with_rerank' and 'num_nodes'
if with_rerank:
filename = f"{mode}-{num_nodes}-gaudi-with-rerank-values.yaml"
else:
filename = f"{mode}-{num_nodes}-gaudi-without-rerank-values.yaml"
# Write the YAML data to the file
with open(filename, "w") as file:
file.write(yaml_string)
# Get the current working directory and construct the file path
current_dir = os.getcwd()
filepath = os.path.join(current_dir, filename)
print(f"YAML file {filepath} has been generated.")
return filepath # Optionally return the file path
# Main execution for standalone use of create_values_yaml
if __name__ == "__main__":
# Example values for standalone execution
with_rerank = True
num_nodes = 2
hftoken = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
modeldir = "/mnt/model"
node_selector = {"node-type": "opea-benchmark"}
tune = True
filename = generate_helm_values(with_rerank, num_nodes, hftoken, modeldir, node_selector, tune)
# Read back the generated YAML file for verification
with open(filename, "r") as file:
print("Generated YAML contents:")
print(file.read())
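For reference, the replica-scaling rule inside `generate_helm_values` works out as follows (a standalone restatement of the logic in the listing above, not a new API):

```python
def scaled_replicas(num_nodes: int, with_rerank: bool) -> dict:
    """Replica counts per service, as computed by generate_helm_values."""
    if num_nodes > 1:
        counts = {
            "chatqna": num_nodes,
            "tei": num_nodes,
            "tgi": 8 * num_nodes - 1 if with_rerank else 8 * num_nodes,
            "data-prep": 1,
            "redis-vector-db": 1,
            "retriever-usvc": num_nodes,
        }
    else:
        counts = {
            "chatqna": 2,
            "tei": 1,
            "tgi": 7 if with_rerank else 8,
            "data-prep": 1,
            "redis-vector-db": 1,
            "retriever-usvc": 2,
        }
    if with_rerank:
        counts["teirerank"] = 1
    return counts

# Two nodes with rerank: one Gaudi card goes to teirerank, the rest to TGI.
print(scaled_replicas(2, with_rerank=True))
```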

View File

@@ -3,7 +3,7 @@
deploy:
device: gaudi
version: 1.2.0
version: 1.3.0
modelUseHostPath: /mnt/models
HUGGINGFACEHUB_API_TOKEN: "" # mandatory
node: [1, 2, 4, 8]

View File

@@ -1,163 +1,90 @@
# Build and Deploy ChatQnA Application on AMD GPU (ROCm)
# Deploying ChatQnA on AMD ROCm GPU
## Build Docker Images
This document outlines the single-node deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on an Intel Xeon server with an AMD GPU. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using the `llm` microservice.
### 1. Build Docker Image
Note: The default LLM is `meta-llama/Meta-Llama-3-8B-Instruct`. Before deploying the application, make sure you have either requested and been granted access to it on [Huggingface](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).
- #### Create application install directory and go to it:
## Table of Contents
```bash
mkdir ~/chatqna-install && cd ~/chatqna-install
```
1. [ChatQnA Quick Start Deployment](#chatqna-quick-start-deployment)
2. [ChatQnA Docker Compose Files](#chatqna-docker-compose-files)
3. [Validate Microservices](#validate-microservices)
4. [Conclusion](#conclusion)
- #### Clone the GenAIExamples repository (the default branch "main" is used here):
## ChatQnA Quick Start Deployment
```bash
git clone https://github.com/opea-project/GenAIExamples.git
```
This section describes how to quickly deploy and test the ChatQnA service manually on an AMD ROCm GPU. The basic steps are:
If you need a specific branch/tag of the GenAIExamples repository, check it out instead (replace v1.3 with the desired version):
1. [Access the Code](#access-the-code)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)
3. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
4. [Check the Deployment Status](#check-the-deployment-status)
5. [Validate the Pipeline](#validate-the-pipeline)
6. [Cleanup the Deployment](#cleanup-the-deployment)
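The steps above can be sketched end to end. In this dry-run sketch, `run` only echoes each command so the sequence can be reviewed before executing it for real (the default TGI flavor is assumed):

```shell
# Dry-run sketch of the quick-start flow; replace `run` with direct
# execution when deploying for real.
run() { echo "+ $*"; }

run git clone https://github.com/opea-project/GenAIExamples.git
run cd GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
run source ./set_env.sh                     # or set_env_vllm.sh, etc.
run docker compose -f compose.yaml up -d
run docker ps -a                            # check deployment status
run docker compose -f compose.yaml down     # cleanup
```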
```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```
### Access the Code
Note that when using a specific version of the code, you should follow the README from that version:
Clone the GenAIExample repository and access the ChatQnA AMD ROCm GPU platform Docker Compose files and supporting scripts:
- #### Go to the build directory:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/ChatQnA
```
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_image_build
```
Then checkout a released version, such as v1.3:
- Clean up the GenAIComps repository if it was previously cloned into this directory.
  This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:
```bash
git checkout v1.3
```
```bash
rm -rf GenAIComps
```
### Configure the Deployment Environment
- #### Clone the GenAIComps repository (the default branch "main" is used here):
To set up environment variables for deploying ChatQnA services, set the parameters specific to your deployment environment and source the appropriate `set_env_*.sh` script in this directory:
```bash
git clone https://github.com/opea-project/GenAIComps.git
```
- for vLLM: `set_env_vllm.sh`
- for vLLM with FaqGen: `set_env_faqgen_vllm.sh`
- for TGI: `set_env.sh`
- for TGI with FaqGen: `set_env_faqgen.sh`
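For reference, this script selection can be expressed as a small helper (a hypothetical sketch; the script names come from the list above):

```shell
# Hypothetical helper: pick the env script for a given deployment flavor.
select_env_script() {
  case "$1" in
    vllm)        echo "set_env_vllm.sh" ;;
    vllm-faqgen) echo "set_env_faqgen_vllm.sh" ;;
    tgi)         echo "set_env.sh" ;;
    tgi-faqgen)  echo "set_env_faqgen.sh" ;;
    *)           echo "unknown flavor: $1" >&2; return 1 ;;
  esac
}
# Example: source "./$(select_env_script vllm)"
select_env_script vllm   # prints set_env_vllm.sh
```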
If you use a specific tag of the GenAIExamples repository,
you should also use the corresponding tag of GenAIComps (replace v1.3 with the desired version):
Set the values of the variables:
```bash
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout v1.3
```
- **HOST_IP, HOST_IP_EXTERNAL** - These variables set the name/address of the service in the operating system environment so the application services can interact with each other and with the outside world.
Note that when using a specific version of the code, you should follow the README from that version.
If your server uses only an internal address and is not accessible from the Internet, then the values for these two variables will be the same and the value will be equal to the server's internal name/address.
- #### Setting the list of images for the build (from the build file.yaml)
If your server uses only an external, Internet-accessible address, then the values for these two variables will be the same and the value will be equal to the server's external name/address.
If you want to deploy a vLLM-based or TGI-based application, then the set of services is installed as follows:
If your server is on an internal network with an internal address but is accessible from the Internet via a proxy/firewall/load balancer, then the HOST_IP variable takes the internal name/address of the server, and the HOST_IP_EXTERNAL variable takes the external name/address of the proxy/firewall/load balancer in front of the server.
#### vLLM-based application
Set these values in the appropriate `set_env_*.sh` file.
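On a typical Linux host, deriving these values can be sketched as follows (an assumption-laden sketch: `hostname -I` is Linux-specific, and the external address defaults to the internal one when no proxy is in front):

```shell
# Sketch: derive the internal address and default the external address to it
# when the server is not behind a proxy/firewall/load balancer.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}                    # fallback for illustration
HOST_IP_EXTERNAL=${HOST_IP_EXTERNAL:-$HOST_IP}   # override when proxied
echo "HOST_IP=${HOST_IP} HOST_IP_EXTERNAL=${HOST_IP_EXTERNAL}"
```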
```bash
service_list="dataprep retriever vllm-rocm chatqna chatqna-ui nginx"
```
- **Variables with names like `*_PORT`** - These variables set the IP port numbers for establishing network connections to the application services.
The values shipped in `set_env.sh` or `set_env_vllm.sh` are those used for developing and testing the application in its original environment. Configure them according to your environment's network-access rules, and make sure they do not overlap with IP ports already used by other applications.
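A quick way to verify a candidate port is not already taken is to grep the listening sockets (a sketch using `ss` with a `netstat` fallback; port 18102 is just an example value from this README):

```shell
# Sketch: verify a candidate port is not already in use before assigning it
# to a service.
port_free() {
  ! { ss -tln 2>/dev/null || netstat -tln 2>/dev/null; } | grep -q ":$1 "
}
if port_free 18102; then STATUS=free; else STATUS=busy; fi
echo "port 18102 is $STATUS"
```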
#### vLLM-based application with FaqGen
Setting variables in the operating system environment:
```bash
service_list="dataprep retriever vllm-rocm llm-faqgen chatqna chatqna-ui nginx"
```
```bash
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
source ./set_env_*.sh # replace the script name with the appropriate one
```
#### TGI-based application
Consult the section on [ChatQnA Service configuration](#chatqna-configuration) for information on how service-specific configuration parameters affect deployments.
```bash
service_list="dataprep retriever chatqna chatqna-ui nginx"
```
### Deploy the Services Using Docker Compose
#### TGI-based application with FaqGen
To deploy the ChatQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below. It uses the 'compose.yaml' file.
```bash
service_list="dataprep retriever llm-faqgen chatqna chatqna-ui nginx"
```
- #### Pull Docker Images
```bash
docker pull redis/redis-stack:7.2.0-v9
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
```
- #### Optional: Pull the TGI Docker Image (do this if you want to use TGI)
```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```
- #### Build Docker Images
```bash
docker compose -f build.yaml build ${service_list} --no-cache
```
After the build, check the list of images with:
```bash
docker image ls
```
The list of images should include:
##### vLLM-based application:
- redis/redis-stack:7.2.0-v9
- opea/dataprep:latest
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/retriever:latest
- opea/vllm-rocm:latest
- opea/chatqna:latest
- opea/chatqna-ui:latest
- opea/nginx:latest
##### vLLM-based application with FaqGen:
- redis/redis-stack:7.2.0-v9
- opea/dataprep:latest
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/retriever:latest
- opea/vllm-rocm:latest
- opea/llm-faqgen:latest
- opea/chatqna:latest
- opea/chatqna-ui:latest
- opea/nginx:latest
##### TGI-based application:
- redis/redis-stack:7.2.0-v9
- opea/dataprep:latest
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/retriever:latest
- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
- opea/chatqna:latest
- opea/chatqna-ui:latest
- opea/nginx:latest
##### TGI-based application with FaqGen:
- redis/redis-stack:7.2.0-v9
- opea/dataprep:latest
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/retriever:latest
- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
- opea/llm-faqgen:latest
- opea/chatqna:latest
- opea/chatqna-ui:latest
- opea/nginx:latest
---
## Deploy the ChatQnA Application
### Docker Compose Configuration for AMD GPUs
```bash
cd docker_compose/amd/gpu/rocm
# if used TGI
docker compose -f compose.yaml up -d
# if used TGI with FaqGen
# docker compose -f compose_faqgen.yaml up -d
# if used vLLM
# docker compose -f compose_vllm.yaml up -d
# if used vLLM with FaqGen
# docker compose -f compose_faqgen_vllm.yaml up -d
```
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
@@ -198,332 +125,103 @@ security_opt:
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
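On Linux, the device nodes themselves can be listed directly (a sketch: `/dev/dri` is the standard Linux DRI layout; on a host without a GPU the loop simply finds nothing):

```shell
# Sketch: enumerate DRI device nodes to identify the cardN/renderDN IDs to
# pass through to the container.
found=0
for dev in /dev/dri/card* /dev/dri/renderD*; do
  if [ -e "$dev" ]; then
    echo "found: $dev"
    found=$((found + 1))
  fi
done
echo "DRI device nodes found: $found"
```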
### Set deploy environment variables
> **Note**: developers should build docker image from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may differ from the published Docker image).
> - Unable to download the Docker image.
> - Using a specific version of the Docker image.
#### Setting variables in the operating system environment:
Please refer to the table below to build different microservices from source:
##### Set variable HUGGINGFACEHUB_API_TOKEN:
| Microservice | Deployment Guide |
| --------------- | ------------------------------------------------------------------------------------------------------------------ |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| TGI | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
| LLM | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms) |
| Redis Vector DB | [Redis](https://github.com/redis/redis.git) |
| Dataprep | [Dataprep build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/dataprep/src/README_redis.md) |
| TEI Embedding | [TEI guide](https://github.com/huggingface/text-embeddings-inference.git) |
| Retriever | [Retriever build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/retrievers/src/README_redis.md) |
| TEI Reranking | [TEI guide](https://github.com/huggingface/text-embeddings-inference.git) |
| MegaService | [MegaService guide](../../../../README.md) |
| UI | [UI guide](../../../../ui/react/README.md) |
| Nginx | [Nginx guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/nginx) |
### Check the Deployment Status
After running `docker compose`, check that all the containers launched via Docker Compose have started:
```bash
# Replace 'your_huggingfacehub_token' with your Hugging Face Hub access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
docker ps -a
```
#### Set variable values in the set_env\*.sh file:
For the default deployment with TGI, the following 9 containers should have started:
Go to Docker Compose directory:
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chatqna-nginx-server
2fce48a4c0f4 opea/chatqna-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp chatqna-ui-server
613c384979f4 opea/chatqna:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp chatqna-backend-server
05512bd29fee opea/dataprep:latest "sh -c 'python $( [ …" 37 seconds ago Up 36 seconds (healthy) 0.0.0.0:18103->5000/tcp, [::]:18103->5000/tcp chatqna-dataprep-service
49844d339d1d opea/retriever:latest "python opea_retriev…" 37 seconds ago Up 36 seconds 0.0.0.0:7000->7000/tcp, [::]:7000->7000/tcp chatqna-retriever
75b698fe7de0 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18808->80/tcp, [::]:18808->80/tcp chatqna-tei-reranking-service
342f01bfdbb2 ghcr.io/huggingface/text-generation-inference:2.3.1-rocm"python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp chatqna-tgi-service
6081eb1c119d redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp, 0.0.0.0:8001->8001/tcp, [::]:8001->8001/tcp chatqna-redis-vector-db
eded17420782 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18090->80/tcp, [::]:18090->80/tcp chatqna-tei-embedding-service
```
The example uses the Nano text editor. You can use any convenient text editor:
If TGI with FaqGen is used:
#### If you use vLLM based application
```bash
nano set_env_vllm.sh
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chatqna-nginx-server
2fce48a4c0f4 opea/chatqna-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp chatqna-ui-server
613c384979f4 opea/chatqna:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp chatqna-backend-server
e0ef1ea67640 opea/llm-faqgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp chatqna-llm-faqgen
05512bd29fee opea/dataprep:latest "sh -c 'python $( [ …" 37 seconds ago Up 36 seconds (healthy) 0.0.0.0:18103->5000/tcp, [::]:18103->5000/tcp chatqna-dataprep-service
49844d339d1d opea/retriever:latest "python opea_retriev…" 37 seconds ago Up 36 seconds 0.0.0.0:7000->7000/tcp, [::]:7000->7000/tcp chatqna-retriever
75b698fe7de0 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18808->80/tcp, [::]:18808->80/tcp chatqna-tei-reranking-service
342f01bfdbb2 ghcr.io/huggingface/text-generation-inference:2.3.1-rocm"python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp chatqna-tgi-service
6081eb1c119d redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp, 0.0.0.0:8001->8001/tcp, [::]:8001->8001/tcp chatqna-redis-vector-db
eded17420782 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18090->80/tcp, [::]:18090->80/tcp chatqna-tei-embedding-service
```
#### If you use vLLM based application with FaqGen
If vLLM is used:
```bash
nano set_env_faqgen_vllm.sh
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chatqna-nginx-server
2fce48a4c0f4 opea/chatqna-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp chatqna-ui-server
613c384979f4 opea/chatqna:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp chatqna-backend-server
05512bd29fee opea/dataprep:latest "sh -c 'python $( [ …" 37 seconds ago Up 36 seconds (healthy) 0.0.0.0:18103->5000/tcp, [::]:18103->5000/tcp chatqna-dataprep-service
49844d339d1d opea/retriever:latest "python opea_retriev…" 37 seconds ago Up 36 seconds 0.0.0.0:7000->7000/tcp, [::]:7000->7000/tcp chatqna-retriever
75b698fe7de0 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18808->80/tcp, [::]:18808->80/tcp chatqna-tei-reranking-service
342f01bfdbb2 opea/vllm-rocm:latest "python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp chatqna-vllm-service
6081eb1c119d redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp, 0.0.0.0:8001->8001/tcp, [::]:8001->8001/tcp chatqna-redis-vector-db
eded17420782 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18090->80/tcp, [::]:18090->80/tcp chatqna-tei-embedding-service
```
#### If you use TGI based application
If vLLM with FaqGen is used:
```bash
nano set_env.sh
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8 opea/nginx:latest "/docker-entrypoint.…" 37 seconds ago Up 5 seconds 0.0.0.0:18104->80/tcp, [::]:18104->80/tcp chatqna-nginx-server
2fce48a4c0f4 opea/chatqna-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp chatqna-ui-server
613c384979f4 opea/chatqna:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp chatqna-backend-server
e0ef1ea67640 opea/llm-faqgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp chatqna-llm-faqgen
05512bd29fee opea/dataprep:latest "sh -c 'python $( [ …" 37 seconds ago Up 36 seconds (healthy) 0.0.0.0:18103->5000/tcp, [::]:18103->5000/tcp chatqna-dataprep-service
49844d339d1d opea/retriever:latest "python opea_retriev…" 37 seconds ago Up 36 seconds 0.0.0.0:7000->7000/tcp, [::]:7000->7000/tcp chatqna-retriever
75b698fe7de0 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18808->80/tcp, [::]:18808->80/tcp chatqna-tei-reranking-service
342f01bfdbb2 opea/vllm-rocm:latest "python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp chatqna-vllm-service
6081eb1c119d redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp, 0.0.0.0:8001->8001/tcp, [::]:8001->8001/tcp chatqna-redis-vector-db
eded17420782 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 37 seconds ago Up 36 seconds 0.0.0.0:18090->80/tcp, [::]:18090->80/tcp chatqna-tei-embedding-service
```
#### If you use TGI based application with FaqGen
If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
```bash
nano set_env_faqgen.sh
```
### Validate the Pipeline
If you are in a proxy environment, also set the proxy-related environment variables:
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
#### Set variables with the set_env\*.sh script
#### If you use vLLM based application
```bash
. set_env_vllm.sh
```
#### If you use vLLM based application with FaqGen
```bash
. set_env_faqgen_vllm.sh
```
#### If you use TGI based application
```bash
. set_env.sh
```
#### If you use TGI based application with FaqGen
```bash
. set_env_faqgen.sh
```
### Start the services:
#### If you use vLLM based application
```bash
docker compose -f compose_vllm.yaml up -d
```
#### If you use vLLM based application with FaqGen
```bash
docker compose -f compose_faqgen_vllm.yaml up -d
```
#### If you use TGI based application
```bash
docker compose -f compose.yaml up -d
```
#### If you use TGI based application with FaqGen
```bash
docker compose -f compose_faqgen.yaml up -d
```
All containers should be running and should not restart:
##### If you use vLLM based application:
- chatqna-redis-vector-db
- chatqna-dataprep-service
- chatqna-tei-embedding-service
- chatqna-retriever
- chatqna-tei-reranking-service
- chatqna-vllm-service
- chatqna-backend-server
- chatqna-ui-server
- chatqna-nginx-server
##### If you use vLLM based application with FaqGen:
- chatqna-redis-vector-db
- chatqna-dataprep-service
- chatqna-tei-embedding-service
- chatqna-retriever
- chatqna-tei-reranking-service
- chatqna-vllm-service
- chatqna-llm-faqgen
- chatqna-backend-server
- chatqna-ui-server
- chatqna-nginx-server
##### If you use TGI based application:
- chatqna-redis-vector-db
- chatqna-dataprep-service
- chatqna-tei-embedding-service
- chatqna-retriever
- chatqna-tei-reranking-service
- chatqna-tgi-service
- chatqna-backend-server
- chatqna-ui-server
- chatqna-nginx-server
##### If you use TGI based application with FaqGen:
- chatqna-redis-vector-db
- chatqna-dataprep-service
- chatqna-tei-embedding-service
- chatqna-retriever
- chatqna-tei-reranking-service
- chatqna-tgi-service
- chatqna-llm-faqgen
- chatqna-backend-server
- chatqna-ui-server
- chatqna-nginx-server
---
## Validate the Services
### 1. Validate TEI Embedding Service
```bash
curl http://${HOST_IP}:${CHATQNA_TEI_EMBEDDING_PORT}/embed \
-X POST \
-d '{"inputs":"What is Deep Learning?"}' \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to:
```text
[[0.00037115702,-0.06356819,0.0024758505,..................,0.022725677,0.016026087,-0.02125421,-0.02984927,-0.0049473033]]
```
If the response contains a meaningful embedding vector,
the TEI Embedding Service is considered successfully launched.
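As an extra sanity check, you can count the dimensions of the returned embedding (a sketch over a truncated sample; the retriever example below generates 768 random values to match, so a real response should report 768):

```shell
# Sketch: count the dimensions in a captured TEI response. RESPONSE below is
# a truncated sample; in practice capture it with RESPONSE=$(curl ... /embed ...)
RESPONSE='[[0.00037,-0.0635,0.0024,0.0227]]'
dims=$(printf '%s' "$RESPONSE" | awk -F',' '{print NF}')
echo "dimensions in sample: $dims"   # prints 4 for this truncated sample
```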
### 2. Validate Retriever Microservice
```bash
export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://${HOST_IP}:${CHATQNA_REDIS_RETRIEVER_PORT}/v1/retrieval \
-X POST \
-d "{\"text\":\"test\",\"embedding\":${your_embedding}}" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{ "id": "e191846168aed1f80b2ea12df80844d2", "retrieved_docs": [], "initial_query": "test", "top_n": 1, "metadata": [] }
```
If the response matches the form of the JSON above,
the Retriever Microservice verification is considered successful.
### 3. Validate TEI Reranking Service
```bash
curl http://${HOST_IP}:${CHATQNA_TEI_RERANKING_PORT}/rerank \
-X POST \
-d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
[
{ "index": 1, "score": 0.94238955 },
{ "index": 0, "score": 0.120219156 }
]
```
If the response matches the form of the JSON above,
the TEI Reranking Service verification is considered successful.
### 4. Validate the vLLM/TGI Service
#### If you use vLLM:
```bash
DATA='{"model": "meta-llama/Meta-Llama-3-8B-Instruct", '\
'"messages": [{"role": "user", "content": "What is a Deep Learning?"}], "max_tokens": 64}'
curl http://${HOST_IP}:${CHATQNA_VLLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{
"id": "chatcmpl-91003647d1c7469a89e399958f390f67",
"object": "chat.completion",
"created": 1742877228,
"model": "meta-llama/Meta-Llama-3-8B-Instruct",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Deep Learning ( DL) is a subfield of Machine Learning (ML) that focuses on the design of algorithms and architectures inspired by the structure and function of the human brain. These algorithms are designed to analyze and interpret data that is presented in the form of patterns or signals, and they often mimic the way the human brain",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "length",
"stop_reason": null
}
],
"usage": { "prompt_tokens": 16, "total_tokens": 80, "completion_tokens": 64, "prompt_tokens_details": null },
"prompt_logprobs": null
}
```
If the response contains meaningful text in the "choices[0].message.content" key,
the vLLM service is considered successfully launched.
#### If you use TGI:
```bash
DATA='{"inputs":"What is a Deep Learning?",'\
'"parameters":{"max_new_tokens":64,"do_sample": true}}'
curl http://${HOST_IP}:${CHATQNA_TGI_SERVICE_PORT}/generate \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{
"generated_text": " What is its application in Computer Vision?\nWhat is a Deep Learning?\nDeep learning is a subfield of machine learning that involves the use of artificial neural networks to model high-level abstractions in data. It involves the use of deep neural networks, which are composed of multiple layers, to learn complex patterns in data. The"
}
```
If the response contains meaningful text in the "generated_text" key,
the TGI service is considered successfully launched.
### 5. Validate the LLM Service (if your application uses FaqGen)
```bash
DATA='{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source '\
'text embeddings and sequence classification models. TEI enables high-performance extraction for the most '\
'popular models, including FlagEmbedding, Ember, GTE and E5.","max_tokens": 128}'
curl http://${HOST_IP}:${CHATQNA_LLM_FAQGEN_PORT}/v1/faqgen \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{
"id": "58f0632f5f03af31471b895b0d0d397b",
"text": " Q: What is Text Embeddings Inference (TEI)?\n A: TEI is a toolkit for deploying and serving open source text embeddings and sequence classification models.\n\n Q: What models does TEI support?\n A: TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.\n\n Q: What is the purpose of TEI?\n A: The purpose of TEI is to enable high-performance extraction for text embeddings and sequence classification models.\n\n Q: What are the benefits of using TEI?\n A: The benefits of using TEI include high",
"prompt": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."
}
```
If the response contains meaningful text in the "text" key,
the LLM service is considered successfully launched.
### 6. Validate the MegaService
Once the ChatQnA services are running, test the pipeline using the following command:
```bash
curl http://${HOST_IP}:${CHATQNA_BACKEND_SERVICE_PORT}/v1/chatqna \
@@ -531,91 +229,105 @@ curl http://${HOST_IP}:${CHATQNA_BACKEND_SERVICE_PORT}/v1/chatqna \
-d '{"messages": "What is the revenue of Nike in 2023?"}'
```
Check the response from the service. It should be similar to:
**Note**: Access the ChatQnA UI in a web browser via this URL: `http://${HOST_IP_EXTERNAL}:${CHATQNA_NGINX_PORT}`
```text
data: b' What'
data: b' is'
data: b' the'
data: b' revenue'
data: b' of'
data: b' Nike'
data: b' in'
data: b' '
data: b'202'
data: b'3'
data: b'?\n'
data: b' '
data: b' Answer'
data: b':'
data: b' According'
data: b' to'
data: b' the'
data: b' search'
data: b' results'
data: b','
data: b' the'
data: b' revenue'
data: b' of'
data: b''
### Cleanup the Deployment
data: [DONE]
```
If the "data" lines contain meaningful words (tokens), the service
is considered successfully launched.
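The streamed tokens can be reassembled into the full answer with a small pipeline (a sketch; `stream` holds captured sample lines, and live `curl` output can be piped the same way):

```shell
# Sketch: reassemble the streamed tokens from a /v1/chatqna response.
stream="data: b' What'
data: b' is'
data: b' the'
data: b' revenue'
data: [DONE]"
# Strip the "data: b'...'" framing and join the tokens.
answer=$(printf '%s\n' "$stream" | sed -n "s/^data: b'\(.*\)'$/\1/p" | tr -d '\n')
echo "reassembled:${answer}"   # prints "reassembled: What is the revenue"
```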
### 7. Validate the Frontend (UI)
To access the UI, use the URL - http://${HOST_IP_EXTERNAL}:${CHATQNA_NGINX_PORT}
Opening this address should display the start page:
![UI start page](../../../../assets/img/ui-starting-page.png)
If this page opens, the service is running and responding,
and you can proceed to functional UI testing.
Enter a prompt for the service in the "Enter prompt here" field,
for example "What is Deep Learning?", and press Enter.
After that, a page with the result of the task should open:
#### If the application is used without FaqGen
![UI result page](../../../../assets/img/ui-result-page.png)
#### If the application is used with FaqGen
![UI result page](../../../../assets/img/ui-result-page-faqgen.png)
If the result shown on the page is correct, the UI service verification is considered successful.
### 8. Stop the application
#### If you use vLLM
To stop the containers associated with the deployment, execute the following command:
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```
#### If you use vLLM with FaqGen
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_faqgen_vllm.yaml down
```
#### If you use TGI
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
# if used TGI
docker compose -f compose.yaml down
# if used TGI with FaqGen
# docker compose -f compose_faqgen.yaml down
# if used vLLM
# docker compose -f compose_vllm.yaml down
# if used vLLM with FaqGen
# docker compose -f compose_faqgen_vllm.yaml down
```
#### If you use TGI with FaqGen
## ChatQnA Docker Compose Files
```bash
cd ~/chatqna-install/GenAIExamples/ChatQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_faqgen.yaml down
```
When deploying a ChatQnA pipeline on an AMD ROCm platform, we can pick and choose different large language model serving frameworks. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).
| File | Description |
| ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| [compose.yaml](./compose.yaml) | The LLM serving framework is TGI. Default compose file using TGI as serving framework and redis as vector database |
| [compose_faqgen.yaml](./compose_faqgen.yaml) | The LLM serving framework is TGI with FaqGen. All other configurations remain the same as the default |
| [compose_vllm.yaml](./compose_vllm.yaml) | The LLM serving framework is vLLM. Compose file using vllm as serving framework and redis as vector database |
| [compose_faqgen_vllm.yaml](./compose_faqgen_vllm.yaml) | The LLM serving framework is vLLM with FaqGen. Compose file using vllm as serving framework and redis as vector database |
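The mapping in the table can be expressed as a small helper (a hypothetical sketch mirroring the file names above):

```shell
# Hypothetical helper mapping a deployment flavor to its compose file.
compose_file_for() {
  case "$1" in
    tgi)         echo "compose.yaml" ;;
    tgi-faqgen)  echo "compose_faqgen.yaml" ;;
    vllm)        echo "compose_vllm.yaml" ;;
    vllm-faqgen) echo "compose_faqgen_vllm.yaml" ;;
    *)           echo "unknown flavor: $1" >&2; return 1 ;;
  esac
}
# Example: docker compose -f "$(compose_file_for vllm)" up -d
compose_file_for vllm   # prints compose_vllm.yaml
```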
## Validate MicroServices
1. TEI Embedding Service
```bash
curl http://${HOST_IP}:${CHATQNA_TEI_EMBEDDING_PORT}/embed \
-X POST \
-d '{"inputs":"What is Deep Learning?"}' \
-H 'Content-Type: application/json'
```
2. Retriever Microservice
```bash
export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://${HOST_IP}:${CHATQNA_REDIS_RETRIEVER_PORT}/v1/retrieval \
-X POST \
-d "{\"text\":\"test\",\"embedding\":${your_embedding}}" \
-H 'Content-Type: application/json'
```
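Because the embedding list is spliced into the JSON via shell interpolation, a malformed payload is easy to produce. A quick local sanity check before POSTing, sketched with the same inline Python used above:

```shell
# Generate a random 768-dim embedding, exactly as in the retriever example.
your_embedding=$(python3 -c "import random; print([random.uniform(-1, 1) for _ in range(768)])")
payload="{\"text\":\"test\",\"embedding\":${your_embedding}}"
# Verify the payload parses as JSON and carries a 768-dim vector before sending it.
echo "$payload" | python3 -c "import json, sys; d = json.load(sys.stdin); assert len(d['embedding']) == 768; print('payload OK')"
```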
3. TEI Reranking Service
```bash
curl http://${HOST_IP}:${CHATQNA_TEI_RERANKING_PORT}/rerank \
-X POST \
-d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
-H 'Content-Type: application/json'
```
4. vLLM/TGI Service
If you use vLLM:
```bash
DATA='{"model": "meta-llama/Meta-Llama-3-8B-Instruct", '\
'"messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 64}'
curl http://${HOST_IP}:${CHATQNA_VLLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
If you use TGI:
```bash
DATA='{"inputs":"What is Deep Learning?",'\
'"parameters":{"max_new_tokens":64,"do_sample": true}}'
curl http://${HOST_IP}:${CHATQNA_TGI_SERVICE_PORT}/generate \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
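Both services return JSON. For the vLLM (OpenAI-style) response, the generated text sits under `choices[0].message.content`; a sketch for extracting it from a captured response (the `RESPONSE` value below is a stand-in, not actual server output):

```shell
# Stand-in for the JSON returned by the /v1/chat/completions endpoint.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Deep Learning is a subset of machine learning."}}]}'
# Pull out just the assistant text from the OpenAI-style response shape.
echo "$RESPONSE" | python3 -c "import json, sys; print(json.load(sys.stdin)['choices'][0]['message']['content'])"
```

For the TGI `/generate` endpoint, the field to read is the top-level `generated_text` instead.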
5. LLM Service (if you deployed the application with FaqGen)
```bash
DATA='{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source '\
'text embeddings and sequence classification models. TEI enables high-performance extraction for the most '\
'popular models, including FlagEmbedding, Ember, GTE and E5.","max_tokens": 128}'
curl http://${HOST_IP}:${CHATQNA_LLM_FAQGEN_PORT}/v1/faqgen \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
## Conclusion
This guide should enable developers to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.

View File

@@ -165,7 +165,7 @@ services:
chatqna-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: chaqna-nginx-server
container_name: chatqna-nginx-server
depends_on:
- chatqna-backend-server
- chatqna-ui-server

View File

@@ -187,7 +187,7 @@ services:
chatqna-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: chaqna-nginx-server
container_name: chatqna-nginx-server
depends_on:
- chatqna-backend-server
- chatqna-ui-server

View File

@@ -192,7 +192,7 @@ services:
chatqna-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: chaqna-nginx-server
container_name: chatqna-nginx-server
depends_on:
- chatqna-backend-server
- chatqna-ui-server

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -65,7 +65,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -39,7 +39,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -72,7 +72,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -65,7 +65,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -65,7 +65,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -113,7 +113,7 @@ services:
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -127,7 +127,7 @@ services:
command: --model-id ${EMBEDDING_MODEL_ID} --auto-truncate
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -29,7 +29,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -60,7 +60,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_PINECONE"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -33,7 +33,7 @@ services:
TEI_ENDPOINT: http://tei-embedding-service:80
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -66,7 +66,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"
@@ -65,7 +65,7 @@ services:
RETRIEVER_COMPONENT_NAME: "OPEA_RETRIEVER_REDIS"
restart: unless-stopped
tei-reranking-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-reranking-server
ports:
- "8808:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-server
ports:
- "6006:80"

View File

@@ -95,7 +95,7 @@ d560c232b120 opea/retriever:latest
a1d7ca2d3787 ghcr.io/huggingface/tei-gaudi:1.5.0 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8808->80/tcp, [::]:8808->80/tcp tei-reranking-gaudi-server
9a9f3fd4fd4c opea/vllm-gaudi:latest "python3 -m vllm.ent…" 2 minutes ago Exited (1) 2 minutes ago vllm-gaudi-server
1ab9bbdf5182 redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 2 minutes ago Up 2 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp, 0.0.0.0:8001->8001/tcp, :::8001->8001/tcp redis-vector-db
9ee0789d819e ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8090->80/tcp, [::]:8090->80/tcp tei-embedding-gaudi-server
9ee0789d819e ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8090->80/tcp, [::]:8090->80/tcp tei-embedding-gaudi-server
```
### Test the Pipeline
@@ -148,7 +148,7 @@ The default deployment utilizes Gaudi devices primarily for the `vllm-service`,
| ---------------------------- | ----------------------------------------------------- | ------------ |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No |
| dataprep-redis-service | opea/dataprep:latest | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No |
| retriever | opea/retriever:latest | No |
| tei-reranking-service | ghcr.io/huggingface/tei-gaudi:1.5.0 | 1 card |
| vllm-service | opea/vllm-gaudi:latest | Configurable |
@@ -164,7 +164,7 @@ The TGI (Text Generation Inference) deployment and the default deployment differ
| ---------------------------- | ----------------------------------------------------- | -------------- |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No |
| dataprep-redis-service | opea/dataprep:latest | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No |
| retriever | opea/retriever:latest | No |
| tei-reranking-service | ghcr.io/huggingface/tei-gaudi:1.5.0 | 1 card |
| **tgi-service** | ghcr.io/huggingface/tgi-gaudi:2.3.1 | Configurable |
@@ -184,7 +184,7 @@ The TGI (Text Generation Inference) deployment and the default deployment differ
| ---------------------------- | ----------------------------------------------------- | ------------ |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No |
| dataprep-redis-service | opea/dataprep:latest | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No |
| retriever | opea/retriever:latest | No |
| tei-reranking-service | ghcr.io/huggingface/tei-gaudi:1.5.0 | 1 card |
| vllm-service | opea/vllm-gaudi:latest | Configurable |
@@ -203,7 +203,7 @@ The _compose_without_rerank.yaml_ Docker Compose file is distinct from the defau
| ---------------------------- | ----------------------------------------------------- | -------------- |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No |
| dataprep-redis-service | opea/dataprep:latest | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No |
| retriever | opea/retriever:latest | No |
| vllm-service | opea/vllm-gaudi:latest | Configurable |
| chatqna-gaudi-backend-server | opea/chatqna:latest | No |
@@ -222,7 +222,7 @@ The _compose_guardrails.yaml_ Docker Compose file introduces enhancements over t
| dataprep-redis-service | opea/dataprep:latest | No | No |
| _vllm-guardrails-service_ | opea/vllm-gaudi:latest | 1 card | Yes |
| _guardrails_ | opea/guardrails:latest | No | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No | No |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No | No |
| retriever | opea/retriever:latest | No | No |
| tei-reranking-service | ghcr.io/huggingface/tei-gaudi:1.5.0 | 1 card | No |
| vllm-service | opea/vllm-gaudi:latest | Configurable | Yes |
@@ -258,7 +258,7 @@ The table provides a comprehensive overview of the ChatQnA services utilized acr
| ---------------------------- | ----------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------- |
| redis-vector-db | redis/redis-stack:7.2.0-v9 | No | Acts as a Redis database for storing and managing data. |
| dataprep-redis-service | opea/dataprep:latest | No | Prepares data and interacts with the Redis database. |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 | No | Provides text embedding services, often using Hugging Face models. |
| tei-embedding-service | ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 | No | Provides text embedding services, often using Hugging Face models. |
| retriever | opea/retriever:latest | No | Retrieves data from the Redis database and interacts with embedding services. |
| tei-reranking-service | ghcr.io/huggingface/tei-gaudi:1.5.0 | Yes | Reranks text embeddings, typically using Gaudi hardware for enhanced performance. |
| vllm-service | opea/vllm-gaudi:latest | No | Handles large language model (LLM) tasks, utilizing Gaudi hardware. |
@@ -284,7 +284,7 @@ ChatQnA now supports running the latest DeepSeek models, including [deepseek-ai/
### tei-embedding-service & tei-reranking-service
The `ghcr.io/huggingface/text-embeddings-inference:cpu-1.6` image supporting `tei-embedding-service` and `tei-reranking-service` depends on the `EMBEDDING_MODEL_ID` or `RERANK_MODEL_ID` environment variables respectively to specify the embedding model and reranking model used for converting text into vector representations and rankings. This choice impacts the quality and relevance of the embeddings rerankings for various applications. Unlike the `vllm-service`, the `tei-embedding-service` and `tei-reranking-service` each typically acquires only one Gaudi device and does not use the `NUM_CARDS` parameter; embedding and reranking tasks generally do not require extensive parallel processing and one Gaudi per service is appropriate. The list of [supported embedding and reranking models](https://github.com/huggingface/tei-gaudi?tab=readme-ov-file#supported-models) can be found at the [huggingface/tei-gaudi](https://github.com/huggingface/tei-gaudi?tab=readme-ov-file#supported-models) website.
The `ghcr.io/huggingface/text-embeddings-inference:cpu-1.5` image supporting `tei-embedding-service` and `tei-reranking-service` depends on the `EMBEDDING_MODEL_ID` or `RERANK_MODEL_ID` environment variables respectively to specify the embedding model and reranking model used for converting text into vector representations and rankings. This choice impacts the quality and relevance of the embeddings and rerankings for various applications. Unlike the `vllm-service`, the `tei-embedding-service` and `tei-reranking-service` each typically acquires only one Gaudi device and does not use the `NUM_CARDS` parameter; embedding and reranking tasks generally do not require extensive parallel processing and one Gaudi per service is appropriate. The list of [supported embedding and reranking models](https://github.com/huggingface/tei-gaudi?tab=readme-ov-file#supported-models) can be found at the [huggingface/tei-gaudi](https://github.com/huggingface/tei-gaudi?tab=readme-ov-file#supported-models) website.
### tgi-guardrails-service

View File

@@ -39,7 +39,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -33,7 +33,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -33,7 +33,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -76,7 +76,7 @@ services:
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -32,7 +32,7 @@ services:
retries: 50
restart: unless-stopped
tei-embedding-service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-embedding-gaudi-server
ports:
- "8090:80"

View File

@@ -51,7 +51,7 @@ f810f3b4d329 opea/embedding:latest "python embed
174bd43fa6b5 ghcr.io/huggingface/tei-gaudi:1.5.0 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8090->80/tcp, :::8090->80/tcp tei-embedding-gaudi-server
05c40b636239 ghcr.io/huggingface/tgi-gaudi:2.3.1 "text-generation-lau…" 2 minutes ago Exited (1) About a minute ago tgi-gaudi-server
74084469aa33 redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 2 minutes ago Up 2 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp, 0.0.0.0:8001->8001/tcp, :::8001->8001/tcp redis-vector-db
88399dbc9e43 ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8808->80/tcp, :::8808->80/tcp tei-reranking-gaudi-server
88399dbc9e43 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" 2 minutes ago Up 2 minutes 0.0.0.0:8808->80/tcp, :::8808->80/tcp tei-reranking-gaudi-server
```
In this case, the `ghcr.io/huggingface/tgi-gaudi:2.3.1` container has exited (status `Exited (1)` in the output above).

View File

@@ -31,8 +31,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever llm-faqgen vllm-gaudi nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -69,9 +69,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever llm-faqgen nginx"
docker compose -f build.yaml build ${service_list} --no-cache > "${LOG_PATH}"/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -32,7 +32,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever llm-faqgen vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -28,9 +28,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever llm-faqgen nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -32,8 +32,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever llm-faqgen nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -31,9 +31,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm-gaudi guardrails nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -35,13 +35,10 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}
function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/
export no_proxy=${no_proxy},${ip_address}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"

View File

@@ -31,8 +31,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm-gaudi nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -67,9 +67,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > "${LOG_PATH}"/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -34,8 +34,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -35,8 +35,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -27,10 +27,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/tgi-gaudi:2.3.1
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -27,9 +27,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -31,9 +31,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm-gaudi nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker pull ghcr.io/huggingface/tei-gaudi:1.5.0
docker images && sleep 1s
}

View File

@@ -35,8 +35,6 @@ function build_docker_images() {
service_list="chatqna chatqna-ui dataprep retriever vllm nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.6
docker images && sleep 1s
}

View File

@@ -2,78 +2,69 @@
This README provides instructions for deploying the CodeGen application using Docker Compose on a system equipped with AMD GPUs supporting ROCm, detailing the steps to configure, run, and validate the services. This guide defaults to using the **vLLM** backend for LLM serving.
If the service response contains a meaningful answer in the `choices.text` field, the vLLM service is considered to have launched successfully.
## Table of Contents
- [Steps to Run with Docker Compose (Default vLLM)](#steps-to-run-with-docker-compose-default-vllm)
- [Service Overview](#service-overview)
- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Quick Start](#quick-start)
- [Available Deployment Options](#available-deployment-options)
- [compose_vllm.yaml (vLLM - Default)](#compose_vllmyaml-vllm---default)
- [compose.yaml (TGI)](#composeyaml-tgi)
- [Configuration Parameters and Usage](#configuration-parameters-and-usage)
- [Docker Compose GPU Configuration](#docker-compose-gpu-configuration)
- [Environment Variables (`set_env*.sh`)](#environment-variables-set_envsh)
- [Building Docker Images Locally (Optional)](#building-docker-images-locally-optional)
- [1. Setup Build Environment](#1-setup-build-environment)
- [2. Clone Repositories](#2-clone-repositories)
- [3. Select Services and Build](#3-select-services-and-build)
- [Validate Service Health](#validate-service-health)
- [1. Validate the vLLM/TGI Service](#1-validate-the-vllmtgi-service)
- [2. Validate the LLM Service](#2-validate-the-llm-service)
- [3. Validate the MegaService (Backend)](#3-validate-the-megaservice-backend)
- [4. Validate the Frontend (UI)](#4-validate-the-frontend-ui)
- [How to Open the UI](#how-to-open-the-ui)
- [Default: vLLM-based Deployment (`--profile codegen-xeon-vllm`)](#default-vllm-based-deployment---profile-codegen-xeon-vllm)
- [TGI-based Deployment (`--profile codegen-xeon-tgi`)](#tgi-based-deployment---profile-codegen-xeon-tgi)
- [Configuration Parameters](#configuration-parameters)
- [Environment Variables](#environment-variables)
- [Compose Profiles](#compose-profiles)
- [Building Custom Images (Optional)](#building-custom-images-optional)
- [Validate Services](#validate-services)
- [Check Container Status](#check-container-status)
- [Run Validation Script/Commands](#run-validation-scriptcommands)
- [Accessing the User Interface (UI)](#accessing-the-user-interface-ui)
- [Gradio UI (Default)](#gradio-ui-default)
- [Svelte UI (Optional)](#svelte-ui-optional)
- [React UI (Optional)](#react-ui-optional)
- [VS Code Extension (Optional)](#vs-code-extension-optional)
- [Troubleshooting](#troubleshooting)
- [Stopping the Application](#stopping-the-application)
- [Next Steps](#next-steps)
## Steps to Run with Docker Compose (Default vLLM)
## Overview
_This section assumes you are using pre-built images and targets the default vLLM deployment._
This guide focuses on running the pre-configured CodeGen service using Docker Compose on an AMD ROCm accelerated platform. It leverages containers for the CodeGen gateway, LLM serving (vLLM or TGI), and the UI.
1. **Set Deploy Environment Variables:**
## CodeGen Quick Start Deployment
- Go to the Docker Compose directory:
```bash
# Adjust path if your GenAIExamples clone is located elsewhere
cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
```
- Setting variables in the operating system environment:
- Set variable `HUGGINGFACEHUB_API_TOKEN`:
```bash
### Replace the string 'your_huggingfacehub_token' with your HuggingFacehub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
```
- Edit the environment script for the **vLLM** deployment (`set_env_vllm.sh`):
```bash
nano set_env_vllm.sh
```
- Configure `HOST_IP`, `EXTERNAL_HOST_IP`, `*_PORT` variables, and proxies (`http_proxy`, `https_proxy`, `no_proxy`) as described in the Configuration section below.
- Source the environment variables:
```bash
. set_env_vllm.sh
```
This section describes how to quickly deploy and test the CodeGen service manually on an AMD GPU (ROCm) platform. The basic steps are:
2. **Start the Services (vLLM):**
1. [Prerequisites](#prerequisites)
2. [Generate a HuggingFace Access Token](#generate-a-huggingface-access-token)
3. [Configure the Deployment Environment](#configure-the-deployment-environment)
4. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
5. [Check the Deployment Status](#check-the-deployment-status)
6. [Test the Pipeline](#test-the-pipeline)
7. [Cleanup the Deployment](#cleanup-the-deployment)
```bash
docker compose -f compose_vllm.yaml up -d
```
## Prerequisites
3. **Verify:** Proceed to the [Validate Service Health](#validate-service-health) section after allowing time for services to start.
- Docker and Docker Compose installed.
- x86 Intel or AMD CPU.
- 4x AMD Instinct MI300X Accelerators.
- Git installed (for cloning repository).
- Hugging Face Hub API Token (for downloading models).
- Access to the internet (or a private model cache).
- Clone the `GenAIExamples` repository:
## Service Overview
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm/
```
When using the default `compose_vllm.yaml` (vLLM-based), the following services are deployed:
Check out a released version, such as v1.3:
| Service Name | Default Port (Host) | Internal Port | Purpose |
| :--------------------- | :--------------------------------------------- | :------------ | :-------------------------- |
| codegen-vllm-service | `${CODEGEN_VLLM_SERVICE_PORT}` (e.g., 8028) | 8000 | LLM Serving (vLLM on ROCm) |
| codegen-llm-server | `${CODEGEN_LLM_SERVICE_PORT}` (e.g., 9000) | 80 | LLM Microservice Wrapper |
| codegen-backend-server | `${CODEGEN_BACKEND_SERVICE_PORT}` (e.g., 7778) | 80 | CodeGen MegaService/Gateway |
| codegen-ui-server | `${CODEGEN_UI_SERVICE_PORT}` (e.g., 5173) | 80 | Frontend User Interface |
_(Note: Ports are configurable via `set_env_vllm.sh`. Check the script for actual defaults used.)_
_(Note: The TGI deployment (`compose.yaml`) uses `codegen-tgi-service` instead of `codegen-vllm-service`)_
```bash
git checkout v1.3
```
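The prerequisites above can be checked mechanically before proceeding; this is a sketch (the `check_tools` helper is ours, not part of the repository):

```shell
# Report any prerequisite command that is missing from PATH (hypothetical helper).
check_tools() {
  missing=0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; missing=1; }
  done
  return $missing
}

# Example: git and docker are required for the steps below.
# check_tools git docker || exit 1
```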
## Available Deployment Options
@@ -91,6 +82,69 @@ This directory provides different Docker Compose files:
## Configuration Parameters and Usage
### Environment Variables (`set_env*.sh`)
These scripts (`set_env_vllm.sh` for vLLM, `set_env.sh` for TGI) configure crucial parameters passed to the containers.
This example covers the single-node on-premises deployment of the CodeGen example using OPEA components. There are various ways to enable CodeGen, but this example will focus on four options available for deploying the CodeGen pipeline to AMD ROCm AI Accelerators. This example begins with a Quick Start section and then documents how to modify deployments, leverage new models and configure the number of allocated devices.
This example includes the following sections:
- [CodeGen Quick Start Deployment](#codegen-quick-start-deployment): Demonstrates how to quickly deploy a CodeGen application/pipeline on an AMD GPU (ROCm) platform.
- [CodeGen Docker Compose Files](#codegen-docker-compose-files): Describes some example deployments and their docker compose files.
- [CodeGen Service Configuration](#codegen-service-configuration): Describes the services and possible configuration changes.
**Note:** This example requires access to a properly installed AMD ROCm platform with a functional Docker service configured.
## Generate a HuggingFace Access Token
Some HuggingFace resources, such as some models, are only accessible if you have an access token. If you do not already have a HuggingFace access token, you can create one by first creating an account by following the steps provided at [HuggingFace](https://huggingface.co/) and then generating a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).
## Configure the Deployment Environment
### Environment Variables
Key parameters are configured via environment variables set before running `docker compose up`.
| Environment Variable | Description | Default (Set Externally) |
| :-------------------------------------- | :------------------------------------------------------------------------------------------------------------------ | :----------------------------------------------------------------------------------------------- |
| `HOST_IP` | External IP address of the host machine. **Required.** | `your_external_ip_address` |
| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
| `LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
| `EMBEDDING_MODEL_ID` | Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. | `BAAI/bge-base-en-v1.5` |
| `LLM_ENDPOINT` | Internal URL for the LLM serving endpoint (used by `codegen-llm-server`). Configured in `compose.yaml`. | `http://codegen-tgi-server:80/generate` or `http://codegen-vllm-server:8000/v1/chat/completions` |
| `TEI_EMBEDDING_ENDPOINT` | Internal URL for the Embedding service. Configured in `compose.yaml`. | `http://codegen-tei-embedding-server:80/embed` |
| `DATAPREP_ENDPOINT` | Internal URL for the Data Preparation service. Configured in `compose.yaml`. | `http://codegen-dataprep-server:80/dataprep` |
| `BACKEND_SERVICE_ENDPOINT` | External URL for the CodeGen Gateway (MegaService). Derived from `HOST_IP` and port `7778`. | `http://${HOST_IP}:7778/v1/codegen` |
| `*_PORT` (Internal) | Internal container ports (e.g., `80`, `6379`). Defined in `compose.yaml`. | N/A |
| `http_proxy` / `https_proxy`/`no_proxy` | Network proxy settings (if required). | `""` |
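As a concrete illustration of the last derived entry in the table, the gateway URL is simply composed from `HOST_IP` and the fixed gateway port (the IP below is a placeholder):

```shell
# Compose the external gateway URL from HOST_IP and the MegaService port.
# 192.168.1.100 is a placeholder; substitute your host's external IP.
HOST_IP=192.168.1.100
BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:7778/v1/codegen"
echo "$BACKEND_SERVICE_ENDPOINT"
```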
To set up environment variables for deploying CodeGen services, source the appropriate `set_env*.sh` script in this directory.

For TGI:
```bash
export host_ip="External_Public_IP" #ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
export http_proxy="Your_HTTP_Proxy" #http proxy if any
export https_proxy="Your_HTTPs_Proxy" #https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed
source ./set_env.sh
```
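Before sourcing the script, it can help to sanity-check the token value. This is an optional sketch, not part of the provided scripts; current Hugging Face tokens typically begin with `hf_`, though older tokens may not, so treat a failed check as a warning only:

```shell
# Optional sanity check: warn if the token does not look like a typical
# Hugging Face token (tokens usually start with "hf_"; older ones may differ).
HUGGINGFACEHUB_API_TOKEN="hf_example_token_value"   # placeholder value
case "$HUGGINGFACEHUB_API_TOKEN" in
  hf_*) echo "token format looks plausible" ;;
  *)    echo "warning: token does not start with hf_" >&2 ;;
esac
```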
For vLLM:
```bash
export host_ip="External_Public_IP" #ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
export http_proxy="Your_HTTP_Proxy" #http proxy if any
export https_proxy="Your_HTTPs_Proxy" #https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed
source ./set_env_vllm.sh
```
### Docker Compose GPU Configuration
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose files (`compose.yaml`, `compose_vllm.yaml`) for the LLM serving container:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
# - /dev/dri/render128:/dev/dri/render128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs (e.g., `/dev/dri/card0:/dev/dri/card0`, `/dev/dri/render128:/dev/dri/render128`). For example:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/render128:/dev/dri/render128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
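On a live host, `ls /dev/dri` (and `rocm-smi` from the ROCm stack) shows the available device nodes. The loop below is a small sketch of the usual amdgpu convention of pairing `cardN` with `renderD(128+N)`; the listing is simulated, and the pairing should be confirmed with `rocm-smi` on your system:

```shell
# Simulated `ls /dev/dri` output for a two-GPU host; on a real system run:
#   ls /dev/dri
listing="card0 card1 renderD128 renderD129"
i=0
for card in $(printf '%s\n' $listing | grep '^card'); do
  echo "$card -> renderD$((128 + i))"   # usual amdgpu pairing convention
  i=$((i + 1))
done
```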
| Environment Variable | Description | Example Value (Edit in Script) |
| :----------------------------- | :------------------------------------------------------------------------------------------------------- | :------------------------------- |
| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingfacehub_token` |
| `HOST_IP` | Internal/Primary IP address of the host machine. Used for inter-service communication. **Required.** | `192.168.1.100` |
| `EXTERNAL_HOST_IP` | External IP/hostname used to access the UI from outside. Same as `HOST_IP` if no proxy/LB. **Required.** | `192.168.1.100` |
| `CODEGEN_LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
| `CODEGEN_VLLM_SERVICE_PORT` | Host port mapping for the vLLM serving endpoint (in `set_env_vllm.sh`). | `8028` |
| `CODEGEN_TGI_SERVICE_PORT` | Host port mapping for the TGI serving endpoint (in `set_env.sh`). | `8028` |
| `CODEGEN_LLM_SERVICE_PORT` | Host port mapping for the LLM Microservice wrapper. | `9000` |
| `CODEGEN_BACKEND_SERVICE_PORT` | Host port mapping for the CodeGen MegaService/Gateway. | `7778` |
| `CODEGEN_UI_SERVICE_PORT` | Host port mapping for the UI service. | `5173` |
| `http_proxy` | Network HTTP Proxy URL (if required). | `Your_HTTP_Proxy` |
| `https_proxy` | Network HTTPS Proxy URL (if required). | `Your_HTTPs_Proxy` |
| `no_proxy` | Comma-separated list of hosts to bypass proxy. Should include `localhost,127.0.0.1,$HOST_IP`. | `localhost,127.0.0.1` |
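A missing variable usually surfaces only later as a container failure, so a small pre-flight check can fail fast. This is a sketch, not part of the provided `set_env*.sh` scripts; the variable names come from the table above:

```shell
# Sketch of a pre-flight check: verify required variables are set before
# running docker compose. Not part of the provided set_env*.sh scripts.
check_required() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "ERROR: $name is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}
# Example: check_required HOST_IP EXTERNAL_HOST_IP HUGGINGFACEHUB_API_TOKEN
```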
### Deploy the Services Using Docker Compose

**How to Use:** Edit the relevant `set_env*.sh` file (`set_env_vllm.sh` for the default) with your values, then source it (`. ./set_env*.sh`) before running `docker compose`.

When using the default `compose_vllm.yaml` (vLLM-based), the following services are deployed:

| Service Name | Default Port (Host) | Internal Port | Purpose |
| :--------------------- | :--------------------------------------------- | :------------ | :-------------------------- |
| codegen-vllm-service | `${CODEGEN_VLLM_SERVICE_PORT}` (e.g., 8028) | 8000 | LLM Serving (vLLM on ROCm) |
| codegen-llm-server | `${CODEGEN_LLM_SERVICE_PORT}` (e.g., 9000) | 80 | LLM Microservice Wrapper |
| codegen-backend-server | `${CODEGEN_BACKEND_SERVICE_PORT}` (e.g., 7778) | 80 | CodeGen MegaService/Gateway |
| codegen-ui-server | `${CODEGEN_UI_SERVICE_PORT}` (e.g., 5173) | 80 | Frontend User Interface |

To deploy the CodeGen services, execute the `docker compose up` command with the appropriate arguments. For a vLLM deployment, execute:

```bash
docker compose -f compose_vllm.yaml up -d
```

The CodeGen Docker images should automatically be downloaded from the OPEA registry and deployed on the AMD GPU (ROCm) platform:

```bash
[+] Running 5/5
 ✔ Network rocm_default Created 0.3s
 ✔ Container codegen-vllm-service Healthy 100.9s
 ✔ Container codegen-llm-server Started 101.2s
 ✔ Container codegen-backend-server Started 101.5s
 ✔ Container codegen-ui-server Started 101.9s
```

For a TGI deployment, execute:

```bash
docker compose up -d
```

The CodeGen Docker images should automatically be downloaded from the OPEA registry and deployed on the AMD GPU (ROCm) platform:

```bash
[+] Running 5/5
 ✔ Network rocm_default Created 0.4s
 ✔ Container codegen-tgi-service Healthy 102.6s
 ✔ Container codegen-llm-server Started 100.2s
 ✔ Container codegen-backend-server Started 103.7s
 ✔ Container codegen-ui-server Started 102.9s
```

## Building Docker Images Locally (Optional)

Follow these steps if you need to build the Docker images from source instead of using pre-built ones.

### 1. Setup Build Environment

- #### Create the application install directory and go to it:

```bash
mkdir ~/codegen-install && cd ~/codegen-install
```

### 2. Clone Repositories

- #### Clone the GenAIExamples repository (the default branch "main" is used here):

```bash
git clone https://github.com/opea-project/GenAIExamples.git
```

If you need a specific branch/tag of the GenAIExamples repository, check it out instead (replace v1.3 with the desired value):

```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```

Remember that when using a specific version of the code, you must follow the README from that version.

- #### Go to the build directory:

```bash
cd ~/codegen-install/GenAIExamples/CodeGen/docker_image_build
```

- #### Clean up the GenAIComps repository if it was previously cloned in this directory. This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:

```bash
echo Y | rm -R GenAIComps
```

- #### Clone the GenAIComps repository (the default branch "main" is used here):

```bash
git clone https://github.com/opea-project/GenAIComps.git
```

If you use a specific tag of the GenAIExamples repository, you should also use the corresponding tag for GenAIComps (replace v1.3 with the desired value):

```bash
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout v1.3
```

Remember that when using a specific version of the code, you must follow the README from that version.

### 3. Select Services and Build

- #### Set the list of images for the build (from the build file `build.yaml`)

Select the services corresponding to your desired deployment (vLLM is the default):

##### vLLM-based application (Default)

```bash
service_list="vllm-rocm llm-textgen codegen codegen-ui"
```

##### TGI-based application

```bash
service_list="llm-textgen codegen codegen-ui"
```

- #### Optional. Pull the TGI Docker image (do this if you plan to build/use the TGI variant):

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```

- #### Build Docker Images

_Ensure you are in the `~/codegen-install/GenAIExamples/CodeGen/docker_image_build` directory._

```bash
docker compose -f build.yaml build ${service_list} --no-cache
```

After the build, check the list of images with the command:

```bash
docker image ls
```

The list of images should include (depending on `service_list`):

###### vLLM-based application:

- opea/vllm-rocm:latest
- opea/llm-textgen:latest
- opea/codegen:latest
- opea/codegen-ui:latest

###### TGI-based application:

- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm (if pulled)
- opea/llm-textgen:latest
- opea/codegen:latest
- opea/codegen-ui:latest

_After building, ensure the `image:` tags in the main `compose_vllm.yaml` or `compose.yaml` (in the `amd/gpu/rocm` directory) match these built images (e.g., `opea/vllm-rocm:latest`)._

## Building Custom Images (Optional)

If you need to modify the microservices:

1. Clone the [OPEA GenAIComps](https://github.com/opea-project/GenAIComps) repository.
2. Follow build instructions in the respective component directories (e.g., `comps/llms/text-generation`, `comps/codegen`, `comps/ui/gradio`, etc.). Use the provided Dockerfiles (e.g., `CodeGen/Dockerfile`, `CodeGen/ui/docker/Dockerfile.gradio`).
3. Tag your custom images appropriately (e.g., `my-custom-codegen:latest`).
4. Update the `image:` fields in the `compose.yaml` file to use your custom image tags.

_Refer to the main [CodeGen README](../../../../README.md) for links to relevant GenAIComps components._

## Validate Services

### Check the Deployment Status for the TGI-based deployment

After running docker compose, check if all the containers launched via docker compose have started:

```bash
docker ps -a
```

For the default deployment, the following four containers should have started:

```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d08caeae2ed opea/codegen-ui:latest "docker-entrypoint.s…" 2 minutes ago Up About a minute 0.0.0.0:18151->5173/tcp, [::]:18151->5173/tcp codegen-ui-server
f52adc66c116 opea/codegen:latest "python codegen.py" 2 minutes ago Up About a minute 0.0.0.0:18150->7778/tcp, [::]:18150->7778/tcp codegen-backend-server
4b1cb8f5d4ff opea/llm-textgen:latest "bash entrypoint.sh" 2 minutes ago Up About a minute 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp codegen-llm-server
3bb4ee0abf15 ghcr.io/huggingface/text-generation-inference:2.4.1-rocm "/tgi-entrypoint.sh …" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:8028->80/tcp, [::]:8028->80/tcp codegen-tgi-service
```

### Check the Deployment Status for the vLLM-based deployment

After running docker compose, check if all the containers launched via docker compose have started:

```bash
docker ps -a
```

For the default deployment, the following four containers should have started:

```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f100cc326343 opea/codegen-ui:latest "docker-entrypoint.s…" 16 minutes ago Up 14 minutes 0.0.0.0:18151->5173/tcp, [::]:18151->5173/tcp codegen-ui-server
c59de0b2da5b opea/codegen:latest "python codegen.py" 16 minutes ago Up 14 minutes 0.0.0.0:18150->7778/tcp, [::]:18150->7778/tcp codegen-backend-server
dcd83e0e4c0f opea/llm-textgen:latest "bash entrypoint.sh" 16 minutes ago Up 14 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp codegen-llm-server
d091d8f2fab6 opea/vllm-rocm:latest "python3 /workspace/…" 16 minutes ago Up 16 minutes (healthy) 0.0.0.0:8028->8011/tcp, [::]:8028->8011/tcp codegen-vllm-service
```

### Test the Pipeline

### If you use vLLM:

```bash
DATA='{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", '\
'"messages": [{"role": "user", "content": "Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception."}], "max_tokens": 256}'

curl http://${HOST_IP}:${CODEGEN_VLLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

Checking the response from the service. The response should be similar to JSON:

````json
{
  "id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
  "object": "chat.completion",
  "created": 1742270316,
  "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "```python\nfrom typing import Optional, List, Dict, Union\nfrom pydantic import BaseModel, validator\n\nclass OperationRequest(BaseModel):\n    # Assuming OperationRequest is already defined as per the given text\n    pass\n\nclass UpdateOperation(OperationRequest):\n    new_items: List[str]\n\n    def apply_and_maybe_raise(self, updatable_item: \"Updatable todo list\") -> None:\n        # Assuming updatable_item is an instance of Updatable todo list\n        self.validate()\n        updatable_item.add_items(self.new_items)\n\nclass Updatable:\n    # Abstract class for items that can be updated\n    pass\n\nclass TodoList(Updatable):\n    # Class that represents a todo list\n    items: List[str]\n\n    def add_items(self, new_items: List[str]) -> None:\n        self.items.extend(new_items)\n\ndef handle_request(operation_request: OperationRequest) -> None:\n    # Function to handle an operation request\n    if isinstance(operation_request, UpdateOperation):\n        operation_request.apply_and_maybe_raise(get_todo_list_for_update())\n    else:\n        raise ValueError(\"Invalid operation request\")\n\ndef get_todo_list_for_update() -> TodoList:\n    # Function to get the todo list for update\n    # Assuming this function returns the",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "length",
      "stop_reason": null
    }
  ],
  "usage": { "prompt_tokens": 66, "total_tokens": 322, "completion_tokens": 256, "prompt_tokens_details": null },
  "prompt_logprobs": null
}
````

If the response contains meaningful generated text in the `choices[0].message.content` field, the vLLM service is considered successfully launched.

### If you use TGI:

```bash
DATA='{"inputs":"Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception.",'\
'"parameters":{"max_new_tokens":256,"do_sample": true}}'

curl http://${HOST_IP}:${CODEGEN_TGI_SERVICE_PORT}/generate \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```

Checking the response from the service. The response should be similar to JSON:

````json
{
  "generated_text": " The supported operations are \"add_task\", \"complete_task\", and \"remove_task\". Each operation can be defined with a corresponding function in the API.\n\nAdd your API in the following format:\n\n```\nTODO App API\n\nsupported operations:\n\noperation name description\n----------------------- ------------------------------------------------\n<operation_name> <operation description>\n```\n\nUse type hints for function parameters and return values. Specify a text description of the API's supported operations.\n\nUse the following code snippet as a starting point for your high-level API function:\n\n```\nclass TodoAPI:\n    def __init__(self, tasks: List[str]):\n        self.tasks = tasks # List of tasks to manage\n\n    def add_task(self, task: str) -> None:\n        self.tasks.append(task)\n\n    def complete_task(self, task: str) -> None:\n        self.tasks = [t for t in self.tasks if t != task]\n\n    def remove_task(self, task: str) -> None:\n        self.tasks = [t for t in self.tasks if t != task]\n\n    def handle_request(self, request: Dict[str, str]) -> None:\n        operation = request.get('operation')\n        if operation == 'add_task':\n            self.add_task(request.get('task'))\n        elif"
}
````

If the response contains meaningful generated text in the `generated_text` field, the TGI service is considered successfully launched.
## Validate Service Health
Run these checks after starting the services to ensure they are operational. Focus on the vLLM checks first as it's the default.
### 1. Validate the vLLM/TGI Service
#### If you use vLLM (Default - using `compose_vllm.yaml` and `set_env_vllm.sh`)
- **How Tested:** Send a POST request with a sample prompt to the vLLM endpoint.
- **CURL Command:**
```bash
DATA='{"model": "Qwen/Qwen2.5-Coder-7B-Instruct", '\
'"messages": [{"role": "user", "content": "Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception."}], "max_tokens": 256}'
curl http://${HOST_IP}:${CODEGEN_VLLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
- **Sample Output:**
```json
{
"id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
"object": "chat.completion"
// ... (rest of output) ...
}
```
- **Expected Result:** A JSON response with a `choices[0].message.content` field containing meaningful generated code.
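For scripted checks, it can be convenient to extract only the generated code from the response. A small sketch, assuming `jq` is installed; the response here is a shortened stand-in for the real service output:

```shell
# Extract choices[0].message.content from a chat-completion response with jq.
# RESPONSE is a shortened stand-in for the real service output.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"def add(a, b): return a + b"}}]}'
echo "$RESPONSE" | jq -r '.choices[0].message.content'
```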
#### If you use TGI (using `compose.yaml` and `set_env.sh`)
- **How Tested:** Send a POST request with a sample prompt to the TGI endpoint.
- **CURL Command:**
```bash
DATA='{"inputs":"Implement a high-level API for a TODO list application. '\
# ... (data payload as before) ...
'"parameters":{"max_new_tokens":256,"do_sample": true}}'
curl http://${HOST_IP}:${CODEGEN_TGI_SERVICE_PORT}/generate \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
- **Sample Output:**
```json
{
"generated_text": " The supported operations are \"add_task\", \"complete_task\", and \"remove_task\". # ... (generated code) ..."
}
```
- **Expected Result:** A JSON response with a `generated_text` field containing meaningful generated code.
If the response contains meaningful generated text in the `generated_text` field, the TGI service is considered successfully launched.
### 2. Validate the LLM Service
- **Service Name:** `codegen-llm-server`
- **How Tested:** Send a POST request to the LLM microservice wrapper endpoint.
- **CURL Command:**
```bash
DATA='{"query":"Implement a high-level API for a TODO list application. '\
'The API takes as input an operation request and updates the TODO list in place. '\
'If the request is invalid, raise an exception.",'\
'"max_tokens":256,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,'\
'"repetition_penalty":1.03,"stream":false}'

curl http://${HOST_IP}:${CODEGEN_LLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json'
```
- **Sample Output:** (Structure may vary slightly depending on whether vLLM or TGI is the backend)

````json
{
  "id": "cmpl-4e89a590b1af46bfb37ce8f12b2996f8",
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "logprobs": null,
      "text": " The API should support the following operations:\n\n1. Add a new task to the TODO list.\n2. Remove a task from the TODO list.\n3. Mark a task as completed.\n4. Retrieve the list of all tasks.\n\nThe API should also support the following features:\n\n1. The ability to filter tasks based on their completion status.\n2. The ability to sort tasks based on their priority.\n3. The ability to search for tasks based on their description.\n\nHere is an example of how the API can be used:\n\n```python\ntodo_list = []\napi = TodoListAPI(todo_list)\n\n# Add tasks\napi.add_task(\"Buy groceries\")\napi.add_task(\"Finish homework\")\n\n# Mark a task as completed\napi.mark_task_completed(\"Buy groceries\")\n\n# Retrieve the list of all tasks\nprint(api.get_all_tasks())\n\n# Filter tasks based on completion status\nprint(api.filter_tasks(completed=True))\n\n# Sort tasks based on priority\napi.sort_tasks(priority=\"high\")\n\n# Search for tasks based on description\nprint(api.search_tasks(description=\"homework\"))\n```\n\nIn this example, the `TodoListAPI` class is used to manage the TODO list. The `add_task` method adds a new task to the list, the `mark_task_completed` method",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "created": 1742270567,
  "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "object": "text_completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 256,
    "prompt_tokens": 37,
    "total_tokens": 293,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}
````

- **Expected Result:** A JSON response containing meaningful generated code within the `choices` array.
### 3. Validate the MegaService (Backend)

- **Service Name:** `codegen-backend-server`
- **How Tested:** Send a POST request to the main CodeGen gateway endpoint.
- **CURL Command:**

```bash
DATA='{"messages": "Implement a high-level API for a TODO list application. '\
# ... (data payload as before) ...
'If the request is invalid, raise an exception."}'

curl http://${HOST_IP}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen \
  -H "Content-Type: application/json" \
  -d "$DATA"
```

- **Sample Output:**

```text
data: {"id":"cmpl-...", ...}
# ... more data chunks ...
data: [DONE]
```

- **Expected Result:** A stream of server-sent events (SSE) containing JSON data with generated code tokens, ending with `data: [DONE]`.

### 4. Validate the Frontend (UI)

- **Service Name:** `codegen-ui-server`
- **How Tested:** Access the UI URL in a web browser and perform a test query.
- **Steps:** See [How to Open the UI](#how-to-open-the-ui).
- **Expected Result:** The UI loads correctly, and submitting a prompt results in generated code displayed on the page.

## How to Open the UI

1. Determine the UI access URL using the `EXTERNAL_HOST_IP` and `CODEGEN_UI_SERVICE_PORT` variables defined in your sourced `set_env*.sh` file (use `set_env_vllm.sh` for the default vLLM deployment). The default URL format is:
   `http://${EXTERNAL_HOST_IP}:${CODEGEN_UI_SERVICE_PORT}`
   (e.g., `http://192.168.1.100:5173`)
2. Open this URL in your web browser.
3. You should see the CodeGen starting page:
   ![UI start page](../../../../assets/img/ui-starting-page.png)
4. Enter a prompt in the input field (e.g., "Write a Python code that returns the current time and date") and press Enter or click the submit button.
5. Verify that the generated code appears correctly:
   ![UI result page](../../../../assets/img/ui-result-page.png)

## Accessing the User Interface (UI)

Multiple UI options can be configured via the `compose.yaml`.

### Svelte UI (Optional)

1. Modify `compose.yaml`: Comment out the `codegen-gradio-ui-server` service and uncomment/add the `codegen-xeon-ui-server` (Svelte) service definition, ensuring the port mapping is correct (e.g., `"- 5173:5173"`).
2. Restart Docker Compose: `docker compose --profile <profile_name> up -d`
3. Access: `http://{HOST_IP}:5173` (or the host port you mapped).

![Svelte UI Init](../../../../assets/img/codeGen_ui_init.jpg)

### VS Code Extension (Optional)

Users can interact with the backend service using the `Neural Copilot` VS Code extension.
1. **Install:** Find and install `Neural Copilot` from the VS Code Marketplace.
![Install Copilot](../../../../assets/img/codegen_copilot.png)
2. **Configure:** Set the "Service URL" in the extension settings to your CodeGen backend endpoint: `http://${HOST_IP}:7778/v1/codegen` (use the correct port if changed).
![Configure Endpoint](../../../../assets/img/codegen_endpoint.png)
3. **Usage:**
- **Inline Suggestion:** Type a comment describing the code you want (e.g., `# Python function to read a file`) and wait for suggestions.
![Code Suggestion](../../../../assets/img/codegen_suggestion.png)
- **Chat:** Use the Neural Copilot panel to chat with the AI assistant about code.
![Chat Dialog](../../../../assets/img/codegen_dialog.png)
## Troubleshooting
- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
- **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
- **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
- **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
- Check container logs (`docker compose -f <file> logs <service_name>`), especially for `codegen-vllm-service` or `codegen-tgi-service`.
- Ensure `HUGGINGFACEHUB_API_TOKEN` is correct.
- Verify ROCm drivers and Docker setup for GPU access.
- Confirm network connectivity and proxy settings.
- Ensure `HOST_IP` and `EXTERNAL_HOST_IP` are correctly set and accessible.
- If building locally, ensure build steps completed without error and image tags match compose file.
## Stopping the Application
### If you use vLLM (Default)
To stop the containers associated with the deployment, execute the following command:
```bash
# Ensure you are in the correct directory
# cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```
### If you use TGI
```bash
# Ensure you are in the correct directory
# cd GenAIExamples/CodeGen/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```
```bash
[+] Running 5/5
✔ Container codegen-ui-server Removed 10.5s
✔ Container codegen-backend-server Removed 10.4s
✔ Container codegen-llm-server Removed 10.4s
✔ Container codegen-tgi-service Removed 8.0s
✔ Network rocm_default Removed 0.6s
```
### compose.yaml - TGI Deployment
The TGI (Text Generation Inference) deployment and the default deployment differ primarily in their service configurations and specific focus on handling large language models (LLMs). The TGI deployment includes a unique `codegen-tgi-service`, which utilizes the `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm` image and is specifically configured to run on AMD hardware.
| Service Name | Image Name | AMD Use |
| ---------------------- | -------------------------------------------------------- | ------- |
| codegen-backend-server | opea/codegen:latest | no |
| codegen-llm-server | opea/llm-textgen:latest | no |
| codegen-tgi-service | ghcr.io/huggingface/text-generation-inference:2.4.1-rocm | yes |
| codegen-ui-server | opea/codegen-ui:latest | no |
### compose_vllm.yaml - vLLM Deployment
The vLLM deployment utilizes AMD devices primarily for the `vllm-service`, which handles large language model (LLM) tasks. This service is configured to maximize the use of AMD's capabilities, potentially allocating multiple devices to enhance parallel processing and throughput.
| Service Name | Image Name | AMD Use |
| ---------------------- | ---------------------- | ------- |
| codegen-backend-server | opea/codegen:latest | no |
| codegen-llm-server | opea/llm-textgen:latest | no |
| codegen-vllm-service | opea/vllm-rocm:latest | yes |
| codegen-ui-server | opea/codegen-ui:latest | no |
## CodeGen Service Configuration
The table below provides an overview of the CodeGen services used in the example Docker Compose files. Each row lists a service, the image that enables it, whether it is optional, and a concise description of its function within the deployment architecture.

| Service Name | Possible Image Names | Optional | Description |
| :--------------------- | :------------------------------------------------------- | :------------------------ | :--------------------------------------- |
| codegen-vllm-service | opea/vllm-rocm:latest | Yes (alternative to TGI) | Serves the LLM on AMD ROCm GPUs via vLLM |
| codegen-tgi-service | ghcr.io/huggingface/text-generation-inference:2.4.1-rocm | Yes (alternative to vLLM) | Serves the LLM on AMD ROCm GPUs via TGI |
| codegen-llm-server | opea/llm-textgen:latest | No | LLM microservice wrapper |
| codegen-backend-server | opea/codegen:latest | No | CodeGen MegaService/Gateway |
| codegen-ui-server | opea/codegen-ui:latest | No | Frontend user interface |
## Conclusion
In the configuration of the `vllm-service` and the `tgi-service`, two variables play a primary role in determining the service's performance and functionality. The `LLM_MODEL_ID` parameter specifies the particular large language model (LLM) that the service will utilize, effectively determining the capabilities and characteristics of the language processing tasks it can perform. This model identifier ensures that the service is aligned with the specific requirements of the application, whether it involves text generation, comprehension, or other language-related tasks.
However, developers need to be aware of the models that have been tested with the respective service image supporting the `vllm-service` and `tgi-service`. For example, documentation for the OPEA GenAIComps v1.0 release specifies the list of [validated LLM models](https://github.com/opea-project/GenAIComps/blob/v1.0/comps/llms/text-generation/README.md#validated-llm-models) for each AMD ROCm enabled service image. Specific models may have stringent requirements on the number of AMD ROCm devices required to support them.
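In practice, switching models is a matter of exporting a different `LLM_MODEL_ID` before bringing the stack up. The model name below is the documented default and stands in for whichever validated model you choose; confirm your choice against the validated-models list and your available GPU memory first:

```shell
# Select the model before deployment; verify it appears on the
# validated-models list for the service image you use.
export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
echo "Deploying with model: $LLM_MODEL_ID"
# Then start the stack, e.g.: docker compose -f compose_vllm.yaml up -d
```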
This guide should enable developers to deploy the default configuration or any of the other Compose files for different configurations. It also highlights the configurable parameters that can be set before deployment.
## Next Steps
- Explore the alternative TGI deployment option if needed.
- Refer to the main [CodeGen README](../../../../README.md) for architecture details and links to benchmarking and other deployment methods (Kubernetes, Xeon).
- Consult the [OPEA GenAIComps](https://github.com/opea-project/GenAIComps) repository for details on individual microservices.

# Deploying CodeTrans on AMD ROCm GPU
This document outlines the single node deployment process for a CodeTrans application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on an AMD ROCm GPU server. The steps include pulling Docker images, container deployment via Docker Compose, and service execution using the `llm` microservice.
Note: The default LLM is `Qwen/Qwen2.5-Coder-7B-Instruct`. Before deploying the application, please make sure you have either requested and been granted access to it on [Huggingface](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).
## Table of Contents
1. [CodeTrans Quick Start Deployment](#codetrans-quick-start-deployment)
2. [CodeTrans Docker Compose Files](#codetrans-docker-compose-files)
@@ -11,7 +13,7 @@ This document outlines the single node deployment process for a CodeTrans applic
## CodeTrans Quick Start Deployment
This section describes how to quickly deploy and test the CodeTrans service manually on an AMD ROCm GPU. The basic steps are:
1. [Access the Code](#access-the-code)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)
@@ -22,7 +24,7 @@ This section describes how to quickly deploy and test the CodeTrans service manu
### Access the Code
Clone the GenAIExample repository and access the CodeTrans AMD ROCm GPU platform Docker Compose files and supporting scripts:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
@@ -37,29 +39,84 @@ git checkout v1.2
### Configure the Deployment Environment
To set up environment variables for deploying CodeTrans services, set up some parameters specific to the deployment environment and source the `set_env_*.sh` script in this directory:
- if using vLLM: `set_env_vllm.sh`
- if using TGI: `set_env.sh`
Set the values of the variables:
- **HOST_IP, HOST_IP_EXTERNAL** - These variables are used to configure the name/address of the service in the operating system environment for the application services to interact with each other and with the outside world.
If your server uses only an internal address and is not accessible from the Internet, then the values for these two variables will be the same and the value will be equal to the server's internal name/address.
If your server uses only an external, Internet-accessible address, then the values for these two variables will be the same and the value will be equal to the server's external name/address.
If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then the HOST_IP variable will have a value equal to the internal name/address of the server, and the HOST_IP_EXTERNAL variable will have a value equal to the external name/address of the proxy/firewall/load balancer behind which the server is located.
We set these values in the `set_env_*.sh` file.
- **Variables with names ending in `_PORT`** - These variables set the IP port numbers for establishing network connections to the application services.
The values shown in `set_env.sh` or `set_env_vllm.sh` are the values used for development and testing of the application, configured for the environment in which the development was performed. These values must be configured in accordance with the network access rules of your environment's server, and must not overlap with the IP ports of other applications that are already in use.
Setting variables in the operating system environment:
```bash
export host_ip="External_Public_IP" # ip address of the node
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env_*.sh # replace the script name with the appropriate one
```
Consult the section on [CodeTrans Service configuration](#codetrans-configuration) for information on how service specific configuration parameters affect deployments.
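Before sourcing the script, it can also help to confirm that the required variables actually received values. A minimal sketch, using placeholder variable names (`DEMO_HOST_IP`, `DEMO_HF_TOKEN`) rather than the real ones:

```bash
# Sketch (assumption): a small helper that reports which required variables are still unset.
check_env() {
  missing=""
  for _var in "$@"; do
    eval "_val=\${$_var:-}"                 # indirect lookup of the variable named in _var
    [ -n "$_val" ] || missing="${missing} ${_var}"
  done
  echo "${missing# }"                        # print the names that still need values
}

DEMO_HOST_IP="192.168.1.10"                  # placeholder value for illustration
check_env DEMO_HOST_IP DEMO_HF_TOKEN         # prints: DEMO_HF_TOKEN
```

An empty result means every variable in the list is set; anything printed still needs a value before deployment.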
### Deploy the Services Using Docker Compose
To deploy the CodeTrans services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below. It uses the 'compose.yaml' file.
```bash
cd docker_compose/amd/gpu/rocm
# if used TGI
docker compose -f compose.yaml up -d
# if used vLLM
# docker compose -f compose_vllm.yaml up -d
```
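The TGI/vLLM choice above can be captured once in a variable so the same commands work for `up` and later for `down`. A sketch, assuming a hypothetical `USE_VLLM` toggle that is not part of `set_env.sh`:

```bash
# Sketch: select the compose file once and reuse it (USE_VLLM is an assumed toggle).
USE_VLLM="${USE_VLLM:-false}"
if [ "$USE_VLLM" = "true" ]; then
  COMPOSE_FILE="compose_vllm.yaml"
else
  COMPOSE_FILE="compose.yaml"
fi
echo "Using ${COMPOSE_FILE}"
# docker compose -f "${COMPOSE_FILE}" up -d
```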
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
- compose_vllm.yaml - for the vLLM-based application
- compose.yaml - for the TGI-based application
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs. For example:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
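As a sketch of the usual mapping, to be verified on your host with `ls -l /dev/dri` or `rocm-smi`: GPU index N typically maps to `/dev/dri/cardN` and `/dev/dri/renderD<128+N>`.

```bash
# Assumption to verify on your host: GPU N exposes /dev/dri/cardN and /dev/dri/renderD<128+N>.
GPU_INDEX=0
CARD_DEV="/dev/dri/card${GPU_INDEX}"
RENDER_DEV="/dev/dri/renderD$((128 + GPU_INDEX))"
echo "${CARD_DEV} ${RENDER_DEV}"   # prints: /dev/dri/card0 /dev/dri/renderD128
# Confirm the mapping with: ls -l /dev/dri/ and rocm-smi
```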
> **Note**: developers should build docker image from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may be different from the published docker image).
@@ -71,9 +128,11 @@ Please refer to the table below to build different microservices from source:
| Microservice | Deployment Guide |
| ------------ | -------------------------------------------------------------------------------------------------------------- |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| TGI | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
| LLM | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms) |
| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
| Nginx | [Nginx guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/nginx) |
### Check the Deployment Status
@@ -83,15 +142,26 @@ After running docker compose, check if all the containers launched via docker co
docker ps -a
```
For the default deployment with TGI, the following 5 containers should have started:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8   opea/nginx:latest                                          "/docker-entrypoint.…"   37 seconds ago   Up 5 seconds    0.0.0.0:18104->80/tcp, [::]:18104->80/tcp       codetrans-nginx-server
2fce48a4c0f4 opea/codetrans-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp codetrans-ui-server
613c384979f4 opea/codetrans:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp codetrans-backend-server
e0ef1ea67640 opea/llm-textgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp codetrans-llm-server
342f01bfdbb2   ghcr.io/huggingface/text-generation-inference:2.3.1-rocm   "python3 /workspace/…"   37 seconds ago   Up 36 seconds   0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp   codetrans-tgi-service
```
If using vLLM:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf24161aca8   opea/nginx:latest          "/docker-entrypoint.…"   37 seconds ago   Up 5 seconds    0.0.0.0:18104->80/tcp, [::]:18104->80/tcp       codetrans-nginx-server
2fce48a4c0f4 opea/codetrans-ui:latest "docker-entrypoint.s…" 37 seconds ago Up 5 seconds 0.0.0.0:18101->5173/tcp, [::]:18101->5173/tcp codetrans-ui-server
613c384979f4 opea/codetrans:latest "bash entrypoint.sh" 37 seconds ago Up 5 seconds 0.0.0.0:18102->8888/tcp, [::]:18102->8888/tcp codetrans-backend-server
e0ef1ea67640 opea/llm-textgen:latest "bash entrypoint.sh" 37 seconds ago Up 36 seconds 0.0.0.0:18011->9000/tcp, [::]:18011->9000/tcp codetrans-llm-server
342f01bfdbb2 opea/vllm-rocm:latest "python3 /workspace/…" 37 seconds ago Up 36 seconds 0.0.0.0:18008->8011/tcp, [::]:18008->8011/tcp codetrans-vllm-service
```
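The expected container names can also be checked programmatically. A sketch where `RUNNING` is stubbed with sample output; on a live host it would come from `docker ps --format '{{.Names}}'`:

```bash
# Sketch: verify that every expected CodeTrans service name appears in `docker ps` output.
# RUNNING is stubbed here; on a live host use: RUNNING="$(docker ps --format '{{.Names}}')"
RUNNING="codetrans-nginx-server
codetrans-ui-server
codetrans-backend-server
codetrans-llm-server
codetrans-vllm-service"

missing=""
for svc in codetrans-nginx-server codetrans-ui-server codetrans-backend-server \
           codetrans-llm-server codetrans-vllm-service; do
  printf '%s\n' "$RUNNING" | grep -qx "$svc" || missing="${missing} ${svc}"
done

if [ -z "$missing" ]; then echo "all services up"; else echo "missing:${missing}"; fi
```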
If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
@@ -109,65 +179,68 @@ curl http://${HOST_IP}:${CODETRANS_BACKEND_SERVICE_PORT}/v1/codetrans \
-d "$DATA"
```
**Note**: Access the CodeTrans UI by web browser through this URL: `http://${HOST_IP_EXTERNAL}:${CODETRANS_NGINX_PORT}`. Please confirm that the port is opened in the firewall. To validate each microservice used in the pipeline, refer to the [Validate Microservices](#validate-microservices) section.
### Cleanup the Deployment
To stop the containers associated with the deployment, execute the following command:
```bash
# if used TGI
docker compose -f compose.yaml down
# if used vLLM
# docker compose -f compose_vllm.yaml down
```
## CodeTrans Docker Compose Files
In the context of deploying a CodeTrans pipeline on an AMD GPU (ROCm) platform, we can pick and choose different large language model serving frameworks. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).
| File                                     | Description                                                                                |
| ---------------------------------------- | ------------------------------------------------------------------------------------------ |
| [compose.yaml](./compose.yaml)           | Default compose file using TGI as the serving framework                                    |
| [compose_vllm.yaml](./compose_vllm.yaml) | The LLM serving framework is vLLM. All other configurations remain the same as the default |
## Validate Microservices
1. LLM backend Service

In the first startup, this service will take more time to download, load and warm up the model. After it's finished, the service will be ready.

Try the command below to check whether the LLM serving is ready.
```bash
# vLLM service
docker logs codetrans-vllm-service 2>&1 | grep complete
# If the service is ready, you will get the response like below.
INFO: Application startup complete.
```
```bash
# TGI service
docker logs codetrans-tgi-service | grep Connected
# If the service is ready, you will get the response like below.
2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected
```
Then try the `cURL` command below to validate services.
```bash
# for vllm service
export port=${CODETRANS_VLLM_SERVICE_PORT}
# for tgi service
export port=${CODETRANS_TGI_SERVICE_PORT}
curl http://${HOST_IP}:${port}/v1/chat/completions \
-X POST \
-d '{"inputs":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:","parameters":{"max_new_tokens":17, "do_sample": true}}' \
-H 'Content-Type: application/json'
```
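The inline payload above needs several levels of quote escaping. As an alternative sketch, the body can be built once in a quoted heredoc and passed to `curl` in one piece; the final `curl` line is commented out here so the snippet stands alone (`HOST_IP` and `port` are the variables exported above):

```bash
# Sketch: build the request body in a single-quoted heredoc so the JSON needs no extra shell escaping.
DATA="$(cat <<'EOF'
{"inputs":"### System: Please translate the following Golang codes into Python codes. ### Original codes: package main ... ### Translated codes:","parameters":{"max_new_tokens":17,"do_sample":true}}
EOF
)"
printf '%s\n' "$DATA"
# curl "http://${HOST_IP}:${port}/v1/chat/completions" -X POST -d "$DATA" -H 'Content-Type: application/json'
```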
2. LLM Microservice
```bash
curl http://${HOST_IP}:${CODETRANS_LLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d '{"query":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:"}' \
-H 'Content-Type: application/json'


@@ -43,9 +43,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"


@@ -42,8 +42,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export CODETRANS_TGI_SERVICE_PORT=8008
export CODETRANS_LLM_SERVICE_PORT=9000
export CODETRANS_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"


@@ -45,8 +45,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"


@@ -41,8 +41,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"


@@ -41,8 +41,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export LLM_ENDPOINT="http://${ip_address}:8008"
export LLM_COMPONENT_NAME="OpeaTextGenService"


@@ -40,8 +40,6 @@ function build_docker_images() {
function start_services() {
cd $WORKPATH/docker_compose/amd/gpu/rocm/
export http_proxy=${http_proxy}
export https_proxy=${http_proxy}
export HOST_IP=${ip_address}
export CODETRANS_VLLM_SERVICE_PORT=8008
export CODETRANS_LLM_SERVICE_PORT=9000


@@ -50,7 +50,7 @@ flowchart LR
### 💬 SQL Query Generation
The key feature of the DBQnA app is that it converts a user's natural language query into an SQL query and automatically executes the generated SQL query on the database to return the relevant results. Basically, ask questions to the database and receive the corresponding SQL query and real-time query execution output, all without needing any SQL knowledge.
---


@@ -3,7 +3,7 @@
deploy:
device: gaudi
version: 1.3.0
modelUseHostPath: /mnt/models
HUGGINGFACEHUB_API_TOKEN: "" # mandatory
node: [1]
@@ -20,14 +20,10 @@ deploy:
memory_capacity: "8000Mi"
replicaCount: [1]
llm:
engine: vllm # or tgi
model_id: "meta-llama/Llama-3.2-3B-Instruct" # mandatory
replicaCount: [1]
resources:
enabled: False
cards_per_instance: 1
@@ -78,7 +74,7 @@ benchmark:
# workload, all of the test cases will run for benchmark
bench_target: ["docsumfixed"] # specify the bench_target for benchmark
dataset: "/home/sdp/pubmed_10.txt" # specify the absolute path to the dataset file
summary_type: "stuff"
stream: True


@@ -23,17 +23,17 @@ This section describes how to quickly deploy and test the DocSum service manuall
### Access the Code
Clone the GenAIExample repository and access the DocSum AMD GPU platform Docker Compose files and supporting scripts:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
```
Checkout a released version, such as v1.3:
```
git checkout v1.3
```
### Generate a HuggingFace Access Token
@@ -42,33 +42,96 @@ Some HuggingFace resources, such as some models, are only accessible if you have
### Configure the Deployment Environment
To set up environment variables for deploying DocSum services, set up some parameters specific to the deployment environment and source the `set_env_*.sh` script in this directory:
- if using vLLM: `set_env_vllm.sh`
- if using TGI: `set_env.sh`
Set the values of the variables:
- **HOST_IP, HOST_IP_EXTERNAL** - These variables are used to configure the name/address of the service in the operating system environment for the application services to interact with each other and with the outside world.
If your server uses only an internal address and is not accessible from the Internet, then the values for these two variables will be the same and the value will be equal to the server's internal name/address.
If your server uses only an external, Internet-accessible address, then the values for these two variables will be the same and the value will be equal to the server's external name/address.
If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then the HOST_IP variable will have a value equal to the internal name/address of the server, and the HOST_IP_EXTERNAL variable will have a value equal to the external name/address of the proxy/firewall/load balancer behind which the server is located.
We set these values in the `set_env_*.sh` file.
- **Variables with names ending in `_PORT`** - These variables set the IP port numbers for establishing network connections to the application services.
The values shown in `set_env.sh` or `set_env_vllm.sh` are the values used for development and testing of the application, configured for the environment in which the development was performed. These values must be configured in accordance with the network access rules of your environment's server, and must not overlap with the IP ports of other applications that are already in use.
Setting variables in the operating system environment:
```bash
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
source ./set_env_*.sh # replace the script name with the appropriate one
```
Consult the section on [DocSum Service configuration](#docsum-configuration) for information on how service specific configuration parameters affect deployments.
### Deploy the Services Using Docker Compose
To deploy the DocSum services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below. It uses the 'compose.yaml' file.
```bash
cd docker_compose/amd/gpu/rocm
# if used TGI
docker compose -f compose.yaml up -d
# if used vLLM
# docker compose -f compose_vllm.yaml up -d
```
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
- compose_vllm.yaml - for the vLLM-based application
- compose.yaml - for the TGI-based application
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderN` device IDs. For example:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
> **Note**: developers should build docker image from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may be different from the published docker image).
> - Unable to download the docker image.
> - Use a specific version of Docker image.
Please refer to the table below to build different microservices from source:
| Microservice | Deployment Guide |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
| whisper | [whisper build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/whisper/src) |
| TGI | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| llm-docsum | [LLM-DocSum build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/doc-summarization#12-build-docker-image) |
| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
@@ -84,6 +147,8 @@ docker ps -a
For the default deployment, the following 5 containers should have started.

If using TGI:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
748f577b3c78 opea/whisper:latest "python whisper_s…" 5 minutes ago Up About a minute 0.0.0.0:7066->7066/tcp, :::7066->7066/tcp whisper-service
@@ -93,24 +158,39 @@ fds3dd5b9fd8 opea/docsum:latest "py
78964d0c1hg5 ghcr.io/huggingface/text-generation-inference:2.4.1-rocm "/tgi-entrypoint.sh" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:8008->80/tcp, [::]:8008->80/tcp docsum-tgi-service
```
If using vLLM:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
748f577b3c78 opea/whisper:latest "python whisper_s…" 5 minutes ago Up About a minute 0.0.0.0:7066->7066/tcp, :::7066->7066/tcp whisper-service
4eq8b7034fd9 opea/docsum-gradio-ui:latest "docker-entrypoint.s…" 5 minutes ago Up About a minute 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp docsum-ui-server
fds3dd5b9fd8 opea/docsum:latest "python docsum.py" 5 minutes ago Up About a minute 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp docsum-backend-server
78fsd6fabfs7 opea/llm-docsum:latest "bash entrypoint.sh" 5 minutes ago Up About a minute 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp docsum-llm-server
78964d0c1hg5 opea/vllm-rocm:latest "python3 /workspace/…" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:8008->80/tcp, [::]:8008->80/tcp docsum-vllm-service
```
### Test the Pipeline
Once the DocSum services are running, test the pipeline using the following command:
```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: application/json" \
-d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'
```
**Note** The value of _HOST_IP_ was set using the _set_env.sh_ script and can be found in the _.env_ file.
### Cleanup the Deployment
To stop the containers associated with the deployment, execute the following command:
```bash
# if used TGI
docker compose -f compose.yaml down
# if used vLLM
# docker compose -f compose_vllm.yaml down
```
All the DocSum containers will be stopped and then removed on completion of the "down" command.
@@ -132,7 +212,7 @@ There are also some customized usage.
```bash
# form input. Use English mode (default).
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
@@ -141,7 +221,7 @@ curl http://${host_ip}:8888/v1/docsum \
-F "stream=True"
# Use Chinese mode.
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=2024年9月26日北京——今日英特尔正式发布英特尔® 至强® 6性能核处理器代号Granite Rapids为AI、数据分析、科学计算等计算密集型业务提供卓越性能。" \
@@ -150,7 +230,7 @@ curl http://${host_ip}:8888/v1/docsum \
-F "stream=True"
# Upload file
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -166,11 +246,11 @@ curl http://${host_ip}:8888/v1/docsum \
Audio:
```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: application/json" \
-d '{"type": "audio", "messages": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA"}'
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=audio" \
-F "messages=UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA" \
@@ -182,11 +262,11 @@ curl http://${host_ip}:8888/v1/docsum \
Video:
```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: application/json" \
-d '{"type": "video", "messages": "convert your video to base64 data type"}'
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=video" \
-F "messages=convert your video to base64 data type" \
@@ -208,7 +288,7 @@ If you want to deal with long context, can set following parameters and select s
"summary_type" is set to "auto" by default. In this mode we check the input token length; if it exceeds `MAX_INPUT_TOKENS`, `summary_type` will automatically be set to `refine` mode, otherwise it will be set to `stuff` mode.
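The auto-selection rule can be sketched as a small shell check; the token counts here are assumed sample values, not defaults from the service:

```bash
# Sketch of the "auto" summary_type decision (sample values, not service defaults).
MAX_INPUT_TOKENS=2048
input_tokens=3500   # hypothetical token count of the input document
if [ "$input_tokens" -gt "$MAX_INPUT_TOKENS" ]; then
  summary_type="refine"
else
  summary_type="stuff"
fi
echo "$summary_type"   # prints: refine
```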
```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -223,7 +303,7 @@ curl http://${host_ip}:8888/v1/docsum \
In this mode the LLM generates a summary based on the complete input text. In this case please carefully set `MAX_INPUT_TOKENS` and `MAX_TOTAL_TOKENS` according to your model and device memory; otherwise the request may exceed the LLM context limit and raise an error for long inputs.
```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -238,7 +318,7 @@ curl http://${host_ip}:8888/v1/docsum \
Truncate mode truncates the input text and keeps only the first chunk, whose length is `min(MAX_TOTAL_TOKENS - input.max_tokens - 50, MAX_INPUT_TOKENS)`
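With illustrative numbers, the truncation length above works out as follows (`max_new_tokens` stands for `input.max_tokens`; the values are hypothetical):

```python
def truncate_chunk_len(max_total_tokens: int, max_new_tokens: int, max_input_tokens: int) -> int:
    # First-chunk length kept in truncate mode; max_new_tokens is input.max_tokens.
    return min(max_total_tokens - max_new_tokens - 50, max_input_tokens)

print(truncate_chunk_len(4096, 1024, 2048))  # 2048: capped by MAX_INPUT_TOKENS
print(truncate_chunk_len(2048, 512, 4096))   # 1486: capped by the total-token budget
```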
```bash
curl http://${host_ip}:8888/v1/docsum \
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -255,7 +335,7 @@ Map_reduce mode will split the inputs into multiple chunks, map each document to
In this mode, the default `chunk_size` is set to `min(MAX_TOTAL_TOKENS - input.max_tokens - 50, MAX_INPUT_TOKENS)`
```bash
curl http://${host_ip}:8888/v1/docsum \
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -272,7 +352,7 @@ Refine mode will split the inputs into multiple chunks, generate summary for the
In this mode, the default `chunk_size` is set to `min(MAX_TOTAL_TOKENS - 2 * input.max_tokens - 128, MAX_INPUT_TOKENS)`.
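The map_reduce and refine chunk-size formulas can be compared side by side; a small sketch with hypothetical values (`max_new` stands for `input.max_tokens`):

```python
def map_reduce_chunk_size(max_total: int, max_new: int, max_input: int) -> int:
    return min(max_total - max_new - 50, max_input)

def refine_chunk_size(max_total: int, max_new: int, max_input: int) -> int:
    # Refine reserves room for both the running summary and the new completion.
    return min(max_total - 2 * max_new - 128, max_input)

print(map_reduce_chunk_size(4096, 512, 4096))  # 3534
print(refine_chunk_size(4096, 512, 4096))      # 2944
```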
```bash
curl http://${host_ip}:8888/v1/docsum \
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
-H "Content-Type: multipart/form-data" \
-F "type=text" \
-F "messages=" \
@@ -288,7 +368,7 @@ Several UI options are provided. If you need to work with multimedia documents,
### Gradio UI
To access the UI, use the URL - http://${EXTERNAL_HOST_IP}:${FAGGEN_UI_PORT}
To access the UI, use the URL - http://${HOST_IP}:${DOCSUM_FRONTEND_PORT}
A page should open when you click through to this address:
![UI start page](../../../../assets/img/ui-starting-page.png)


@@ -1,8 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
ARG IMAGE_REPO=opea
ARG BASE_TAG=latest
FROM opea/comps-base:$BASE_TAG
FROM $IMAGE_REPO/comps-base:$BASE_TAG
COPY ./chatqna.py $HOME/chatqna.py


@@ -40,7 +40,7 @@ USER user
WORKDIR /home/user/edgecraftrag
RUN pip install --no-cache-dir --upgrade pip setuptools==70.0.0 && \
pip install --no-cache-dir -r requirements.txt
pip install --no-cache-dir --extra-index-url https://download.pytorch.org/whl/cpu -r requirements.txt
WORKDIR /home/user/
RUN git clone https://github.com/openvinotoolkit/openvino.genai.git genai


@@ -63,7 +63,7 @@ services:
- ecrag
vllm-openvino-server:
container_name: vllm-openvino-server
image: opea/vllm-arc:latest
image: ${REGISTRY:-opea}/vllm-arc:${TAG:-latest}
ports:
- ${VLLM_SERVICE_PORT:-8008}:80
environment:


@@ -2,35 +2,33 @@
# SPDX-License-Identifier: Apache-2.0
services:
edgecraftrag-server:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./Dockerfile.server
image: ${REGISTRY:-opea}/edgecraftrag-server:${TAG:-latest}
edgecraftrag-ui:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./ui/docker/Dockerfile.ui
image: ${REGISTRY:-opea}/edgecraftrag-ui:${TAG:-latest}
edgecraftrag-ui-gradio:
build:
context: ../
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./ui/docker/Dockerfile.gradio
image: ${REGISTRY:-opea}/edgecraftrag-ui-gradio:${TAG:-latest}
edgecraftrag:
build:
context: ../
args:
IMAGE_REPO: ${REGISTRY}
BASE_TAG: ${TAG}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
dockerfile: ./Dockerfile
image: ${REGISTRY:-opea}/edgecraftrag:${TAG:-latest}
edgecraftrag-server:
build:
dockerfile: ./Dockerfile.server
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-server:${TAG:-latest}
edgecraftrag-ui:
build:
dockerfile: ./ui/docker/Dockerfile.ui
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-ui:${TAG:-latest}
edgecraftrag-ui-gradio:
build:
dockerfile: ./ui/docker/Dockerfile.gradio
extends: edgecraftrag
image: ${REGISTRY:-opea}/edgecraftrag-ui-gradio:${TAG:-latest}
vllm-arc:
build:
context: GenAIComps
dockerfile: comps/third_parties/vllm/src/Dockerfile.intel_gpu
image: ${REGISTRY:-opea}/vllm-arc:${TAG:-latest}
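The restructured file relies on Compose's `extends`: each service inherits the build `context` and proxy args from `edgecraftrag` and overrides only `dockerfile` and `image`. Assuming standard Compose merge semantics, the effective configuration for `edgecraftrag-server` is roughly:

```yaml
edgecraftrag-server:
  build:
    context: ../
    args:
      IMAGE_REPO: ${REGISTRY}
      BASE_TAG: ${TAG}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    dockerfile: ./Dockerfile.server
  image: ${REGISTRY:-opea}/edgecraftrag-server:${TAG:-latest}
```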


@@ -30,8 +30,16 @@ HF_ENDPOINT=https://hf-mirror.com
function build_docker_images() {
opea_branch=${opea_branch:-"main"}
cd $WORKPATH/docker_image_build
git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
pushd GenAIComps
echo "GenAIComps test commit is $(git rev-parse HEAD)"
docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
popd && sleep 1s
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="edgecraftrag edgecraftrag-server edgecraftrag-ui"
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
docker images && sleep 1s
@@ -102,16 +110,30 @@ function stop_docker() {
function main() {
mkdir -p $LOG_PATH
echo "::group::stop_docker"
stop_docker
echo "::endgroup::"
echo "::group::build_docker_images"
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
echo "::endgroup::"
echo "::group::start_services"
start_services
echo "EC_RAG service started" && sleep 1s
echo "::endgroup::"
echo "::group::validate_rag"
validate_rag
validate_megaservice
echo "::endgroup::"
echo "::group::validate_megaservice"
validate_megaservice
echo "::endgroup::"
echo "::group::stop_docker"
stop_docker
echo y | docker system prune
echo "::endgroup::"
}


@@ -33,7 +33,14 @@ vLLM_ENDPOINT="http://${HOST_IP}:${VLLM_SERVICE_PORT}"
function build_docker_images() {
opea_branch=${opea_branch:-"main"}
cd $WORKPATH/docker_image_build
git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
pushd GenAIComps
echo "GenAIComps test commit is $(git rev-parse HEAD)"
docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
popd && sleep 1s
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
@@ -152,19 +159,30 @@ function stop_docker() {
function main() {
mkdir -p "$LOG_PATH"
echo "::group::stop_docker"
stop_docker
echo "::endgroup::"
echo "::group::build_docker_images"
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
start_time=$(date +%s)
echo "::endgroup::"
echo "::group::start_services"
start_services
end_time=$(date +%s)
duration=$((end_time-start_time))
echo "EC_RAG service start duration is $duration s" && sleep 1s
echo "::endgroup::"
echo "::group::validate_rag"
validate_rag
validate_megaservice
echo "::endgroup::"
echo "::group::validate_megaservice"
validate_megaservice
echo "::endgroup::"
echo "::group::stop_docker"
stop_docker
echo y | docker system prune
echo "::endgroup::"
}


@@ -44,6 +44,8 @@ git clone https://github.com/opea-project/GenAIExamples.git
### 2.2 Set up env vars
```bash
export ip_address="External_Public_IP"
export no_proxy=${your_no_proxy},${ip_address}
export HF_CACHE_DIR=/path/to/your/model/cache/
export HF_TOKEN=<your-hf-token>
export FINNHUB_API_KEY=<your-finnhub-api-key> # go to https://finnhub.io/ to get your free api key
@@ -100,8 +102,8 @@ bash launch_dataprep.sh
Validate data ingestion and retrieval from the database:
```bash
python $WORKPATH/tests/test_redis_finance.py --port 6007 --test_option ingest
python $WORKPATH/tests/test_redis_finance.py --port 6007 --test_option get
python $WORKDIR/GenAIExamples/FinanceAgent/tests/test_redis_finance.py --port 6007 --test_option ingest
python $WORKDIR/GenAIExamples/FinanceAgent/tests/test_redis_finance.py --port 6007 --test_option get
```
### 3.3 Launch the multi-agent system


@@ -241,7 +241,7 @@ docker compose -f compose.yaml up -d
export MILVUS_HOST=${host_ip}
export MILVUS_PORT=19530
export MILVUS_RETRIEVER_PORT=7000
export COLLECTION_NAME=mm_rag_milvus
export COLLECTION_NAME=LangChainCollection
cd GenAIExamples/MultimodalQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose_milvus.yaml up -d
```
@@ -385,6 +385,8 @@ curl --silent --write-out "HTTPSTATUS:%{http_code}" \
Now, test the microservice by posting a custom caption along with an image and a PDF containing images and text. The image caption can be provided as text (`.txt`) or as spoken audio (`.wav` or `.mp3`).
> Note: Audio captions for images are currently only supported when using the Redis data prep backend.
```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_INGEST_SERVICE_ENDPOINT} \


@@ -226,6 +226,8 @@ services:
- DATAPREP_INGEST_SERVICE_ENDPOINT=${DATAPREP_INGEST_SERVICE_ENDPOINT}
- DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT=${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}
- DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT=${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}
- DATAPREP_GET_FILE_ENDPOINT=${DATAPREP_GET_FILE_ENDPOINT}
- DATAPREP_DELETE_FILE_ENDPOINT=${DATAPREP_DELETE_FILE_ENDPOINT}
- MEGA_SERVICE_PORT=${MEGA_SERVICE_PORT}
- UI_PORT=${UI_PORT}
- DATAPREP_MMR_PORT=${DATAPREP_MMR_PORT}


@@ -1,4 +1,4 @@
# ChatQnA Benchmarking
# Deploy and Benchmark
## Purpose
@@ -8,6 +8,11 @@ We aim to run these benchmarks and share them with the OPEA community for three
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks etc.
### Support Example List
- ChatQnA
- DocSum
## Table of Contents
- [Prerequisites](#prerequisites)
@@ -68,6 +73,7 @@ Before running the benchmarks, ensure you have:
```bash
pip install -r requirements.txt
```
Note: the benchmark requires `opea-eval>=1.3`. If v1.3 has not been released yet, build `opea-eval` from [source](https://github.com/opea-project/GenAIEval).
## Data Preparation


@@ -1,28 +1,29 @@
# Example SearchQnA deployments on AMD GPU (ROCm)
# Deploying SearchQnA on AMD ROCm Platform
This document outlines the deployment process for a SearchQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on AMD GPU (ROCm).
This document outlines the single node deployment process for a SearchQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices on AMD ROCm Platform.
This example includes the following sections:
## Table of Contents
- [SearchQnA Quick Start Deployment](#searchqna-quick-start-deployment): Demonstrates how to quickly deploy a SearchQnA application/pipeline on AMD GPU platform.
- [SearchQnA Docker Compose Files](#searchqna-docker-compose-files): Describes some example deployments and their docker compose files.
- [Launch the UI](#launch-the-ui): Guideline for UI usage
1. [SearchQnA Quick Start Deployment](#searchqna-quick-start-deployment)
2. [SearchQnA Docker Compose Files](#searchqna-docker-compose-files)
3. [Validate Microservices](#validate-microservices)
4. [Launch the UI](#launch-the-ui): Guideline for UI usage
5. [Conclusion](#conclusion)
## SearchQnA Quick Start Deployment
This section describes how to quickly deploy and test the SearchQnA service manually on AMD GPU (ROCm). The basic steps are:
This section describes how to quickly deploy and test the SearchQnA service manually on an AMD ROCm Platform. The basic steps are:
1. [Access the Code](#access-the-code)
2. [Generate a HuggingFace Access Token](#generate-a-huggingface-access-token)
3. [Configure the Deployment Environment](#configure-the-deployment-environment)
4. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
5. [Check the Deployment Status](#check-the-deployment-status)
6. [Test the Pipeline](#test-the-pipeline)
7. [Cleanup the Deployment](#cleanup-the-deployment)
2. [Configure the Deployment Environment](#configure-the-deployment-environment)
3. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
4. [Check the Deployment Status](#check-the-deployment-status)
5. [Validate the Pipeline](#validate-the-pipeline)
6. [Cleanup the Deployment](#cleanup-the-deployment)
### Access the Code
Clone the GenAIExample repository and access the SearchQnA AMD GPU (ROCm) Docker Compose files and supporting scripts:
Clone the GenAIExample repository and access the SearchQnA AMD ROCm Platform Docker Compose files and supporting scripts:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
@@ -41,34 +42,56 @@ Some HuggingFace resources require an access token. Developers can create one by
### Configure the Deployment Environment
To set up environment variables for deploying SearchQnA services, source the _setup_env.sh_ script in this directory:
To set up environment variables for deploying SearchQnA services, set up some parameters specific to the deployment environment and source the `set_env.sh` script in this directory:
```
//with TGI:
source ./set_env.sh
```
#### For vLLM inference type deployment (default)
```
//with VLLM:
```bash
export host_ip="External_Public_IP" # ip address of the node
export GOOGLE_CSE_ID="your cse id"
export GOOGLE_API_KEY="your google api key"
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env_vllm.sh
```
The _setup_env.sh_ script will prompt for required and optional environment variables used to configure the SearchQnA services based on TGI. The _setup_env_vllm.sh_ script will prompt for required and optional environment variables used to configure the SearchQnA services based on VLLM. If a value is not entered, the script will use a default value for the same. It will also generate a _.env_ file defining the desired configuration. Consult the section on [SearchQnA Service configuration](#SearchQnA-service-configuration) for information on how service specific configuration parameters affect deployments.
#### For TGI inference type deployment
```bash
export host_ip="External_Public_IP" # ip address of the node
export GOOGLE_CSE_ID="your cse id"
export GOOGLE_API_KEY="your google api key"
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env.sh
```
Consult the section on [SearchQnA Service configuration](#SearchQnA-configuration) for information on how service specific configuration parameters affect deployments.
### Deploy the Services Using Docker Compose
To deploy the SearchQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute:
```bash
//with TGI:
docker compose -f compose.yaml up -d
```
#### For vLLM inference type deployment (default)
```bash
//with VLLM:
docker compose -f compose_vllm.yaml up -d
```
#### For TGI inference type deployment
```bash
//with TGI:
docker compose -f compose.yaml up -d
```
**Note**: developers should build the Docker images from source when:
- Developing off the git main branch (as the container's ports in the repo may be different from the published docker image).
@@ -97,7 +120,40 @@ docker ps -a
For the default deployment, the following containers should have started
### Test the Pipeline
#### For vLLM inference type deployment (default)
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50e5f4a00fcc opea/searchqna-ui:latest "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:18143->5173/tcp, [::]:18143->5173/tcp search-ui-server
a8f030d17e40 opea/searchqna:latest "python searchqna.py" About a minute ago Up About a minute 0.0.0.0:18142->8888/tcp, [::]:18142->8888/tcp search-backend-server
916c5db048a2 opea/llm-textgen:latest "bash entrypoint.sh" About a minute ago Up About a minute 0.0.0.0:3007->9000/tcp, [::]:3007->9000/tcp search-llm-server
bb46cdaf1794 opea/reranking:latest "python opea_reranki…" About a minute ago Up About a minute 0.0.0.0:3005->8000/tcp, [::]:3005->8000/tcp search-reranking-server
d89ab0ef3f41 opea/embedding:latest "sh -c 'python $( [ …" About a minute ago Up About a minute 0.0.0.0:3002->6000/tcp, [::]:3002->6000/tcp search-embedding-server
b248e55dd20f opea/vllm-rocm:latest "python3 /workspace/…" About a minute ago Up About a minute 0.0.0.0:3080->8011/tcp, [::]:3080->8011/tcp search-vllm-service
c3800753fac5 opea/web-retriever:latest "python opea_web_ret…" About a minute ago Up About a minute 0.0.0.0:3003->7077/tcp, [::]:3003->7077/tcp search-web-retriever-server
0db8af486bd0 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" About a minute ago Up About a minute 0.0.0.0:3001->80/tcp, [::]:3001->80/tcp search-tei-embedding-server
3125915447ef ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 "text-embeddings-rou…" About a minute ago Up About a minute 0.0.0.0:3004->80/tcp, [::]:3004->80/tcp search-tei-reranking-server
```
#### For TGI inference type deployment
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67cc886949a3 opea/searchqna-ui:latest "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:18143->5173/tcp, [::]:18143->5173/tcp search-ui-server
6547aca0d5fd opea/searchqna:latest "python searchqna.py" About a minute ago Up About a minute 0.0.0.0:18142->8888/tcp, [::]:18142->8888/tcp search-backend-server
213b5d4d5fa5 opea/embedding:latest "sh -c 'python $( [ …" About a minute ago Up About a minute 0.0.0.0:3002->6000/tcp, [::]:3002->6000/tcp search-embedding-server
6b90d16100b2 opea/reranking:latest "python opea_reranki…" About a minute ago Up About a minute 0.0.0.0:3005->8000/tcp, [::]:3005->8000/tcp search-reranking-server
3266fd85207e opea/llm-textgen:latest "bash entrypoint.sh" About a minute ago Up About a minute 0.0.0.0:3007->9000/tcp, [::]:3007->9000/tcp search-llm-server
d7322b70c15d ghcr.io/huggingface/text-generation-inference:2.4.1-rocm "/tgi-entrypoint.sh …" About a minute ago Up About a minute 0.0.0.0:3006->80/tcp, [::]:3006->80/tcp search-tgi-service
a703b91b28ed ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 "text-embeddings-rou…" About a minute ago Up About a minute 0.0.0.0:3001->80/tcp, [::]:3001->80/tcp search-tei-embedding-server
22098a5eaf59 ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 "text-embeddings-rou…" About a minute ago Up About a minute 0.0.0.0:3004->80/tcp, [::]:3004->80/tcp search-tei-reranking-server
830fe84c971d opea/web-retriever:latest "python opea_web_ret…" About a minute ago Up About a minute 0.0.0.0:3003->7077/tcp, [::]:3003->7077/tcp search-web-retriever-server
```
If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
### Validate the Pipeline
Once the SearchQnA services are running, test the pipeline using the following command:
@@ -131,31 +187,125 @@ data: [DONE]
A response text similar to the one above indicates that the service verification was successful.
**Note** : Access the SearchQnA UI by web browser through this URL: `http://${host_ip}:80`. Please confirm the `80` port is opened in the firewall. To validate each microservice used in the pipeline refer to the [Validate Microservices](#validate-microservices) section.
### Cleanup the Deployment
To stop the containers associated with the deployment, execute the following command:
```bash
//with TGI:
docker compose -f compose.yaml down
```
#### For vLLM inference type deployment (default)
```bash
//with VLLM:
docker compose -f compose_vllm.yaml down
```
#### For TGI inference type deployment
```bash
//with TGI:
docker compose -f compose.yaml down
```
All the SearchQnA containers will be stopped and then removed on completion of the "down" command.
## SearchQnA Docker Compose Files
When deploying the SearchQnA pipeline on AMD GPUs (ROCm), different large language model serving frameworks can be selected. The table below outlines the available configurations included in the application.
When deploying a SearchQnA pipeline on AMD GPUs (ROCm), different large language model serving frameworks can be selected. The table below outlines the available configurations included in the application. These configurations can serve as templates and be extended to other components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).
| File | Description |
| ---------------------------------------- | ------------------------------------------------------------------------------------------ |
| [compose.yaml](./compose.yaml) | Default compose file using tgi as serving framework |
| [compose_vllm.yaml](./compose_vllm.yaml) | The LLM serving framework is vLLM. All other configurations remain the same as the default |
## Validate Microservices
1. Embedding backend Service
```bash
curl http://${host_ip}:3001/embed \
-X POST \
-d '{"inputs":"What is Deep Learning?"}' \
-H 'Content-Type: application/json'
```
2. Embedding Microservice
```bash
curl http://${host_ip}:3002/v1/embeddings\
-X POST \
-d '{"text":"hello"}' \
-H 'Content-Type: application/json'
```
3. Web Retriever Microservice
```bash
export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://${host_ip}:3003/v1/web_retrieval \
-X POST \
-d "{\"text\":\"What is the 2024 holiday schedule?\",\"embedding\":${your_embedding}}" \
-H 'Content-Type: application/json'
```
4. Reranking backend Service
```bash
# TEI Reranking service
curl http://${host_ip}:3004/rerank \
-X POST \
-d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
-H 'Content-Type: application/json'
```
5. Reranking Microservice
```bash
curl http://${host_ip}:3005/v1/reranking\
-X POST \
-d '{"initial_query":"What is Deep Learning?", "retrieved_docs": [{"text":"Deep Learning is not..."}, {"text":"Deep learning is..."}]}' \
-H 'Content-Type: application/json'
```
6. LLM backend Service
```bash
# TGI service
curl http://${host_ip}:3006/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
-H 'Content-Type: application/json'
```
7. LLM Microservice
```bash
curl http://${host_ip}:3007/v1/chat/completions\
-X POST \
-d '{"query":"What is Deep Learning?","max_tokens":17,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,"repetition_penalty":1.03,"stream":true}' \
-H 'Content-Type: application/json'
```
8. MegaService
```bash
curl http://${host_ip}:3008/v1/searchqna -H "Content-Type: application/json" -d '{
"messages": "What is the latest news? Give me also the source link.",
"stream": "true"
}'
```
9. Nginx Service
```bash
curl http://${host_ip}:${NGINX_PORT}/v1/searchqna \
-H "Content-Type: application/json" \
-d '{
"messages": "What is the latest news? Give me also the source link.",
"stream": "true"
}'
```
## Launch the UI
Access the UI at http://${EXTERNAL_HOST_IP}:${SEARCH_FRONTEND_SERVICE_PORT}. A page should open when navigating to this address.
@@ -167,3 +317,7 @@ Let's enter the task for the service in the "Enter prompt here" field. For examp
![UI start page](../../../../assets/img/searchqna-ui-response-example.png)
A correct result displayed on the page indicates that the UI service has been successfully verified.
## Conclusion
This guide should enable developers to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.


@@ -224,6 +224,7 @@ def generate_helm_values(example_type, deploy_config, chart_dir, action_type, no
"modelUseHostPath": deploy_config.get("modelUseHostPath", ""),
}
}
os.environ["HF_TOKEN"] = deploy_config.get("HUGGINGFACEHUB_API_TOKEN", "")
# Configure components
values = configure_node_selectors(values, node_selector or {}, deploy_config)
@@ -338,17 +339,15 @@ def get_hw_values_file(deploy_config, chart_dir):
version = deploy_config.get("version", "1.1.0")
if os.path.isdir(chart_dir):
# Determine which values file to use based on version
if version in ["1.0.0", "1.1.0"]:
hw_values_file = os.path.join(chart_dir, f"{device_type}-values.yaml")
else:
hw_values_file = os.path.join(chart_dir, f"{device_type}-{llm_engine}-values.yaml")
hw_values_file = os.path.join(chart_dir, f"{device_type}-{llm_engine}-values.yaml")
if not os.path.exists(hw_values_file):
print(f"Warning: {hw_values_file} not found")
hw_values_file = None
else:
print(f"Device-specific values file found: {hw_values_file}")
hw_values_file = os.path.join(chart_dir, f"{device_type}-values.yaml")
if not os.path.exists(hw_values_file):
print(f"Warning: {hw_values_file} not found")
print(f"Error: Can not found a correct values file for {device_type} with {llm_engine}")
sys.exit(1)
print(f"Device-specific values file found: {hw_values_file}")
else:
print(f"Error: Could not find directory for {chart_dir}")
hw_values_file = None
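The revised lookup above prefers an engine-specific values file and falls back to the generic device file. A standalone sketch of that selection logic (file names follow the pattern used above; the directory here is a throwaway example):

```python
import os
import tempfile

def pick_values_file(chart_dir: str, device_type: str, llm_engine: str):
    """Prefer '<device>-<engine>-values.yaml', else '<device>-values.yaml', else None."""
    for name in (f"{device_type}-{llm_engine}-values.yaml",
                 f"{device_type}-values.yaml"):
        candidate = os.path.join(chart_dir, name)
        if os.path.exists(candidate):
            return candidate
    return None

# Throwaway directory containing only the generic device file:
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "gaudi-values.yaml"), "w").close()
    print(pick_values_file(d, "gaudi", "vllm"))  # falls back to .../gaudi-values.yaml
```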


@@ -54,7 +54,7 @@ def construct_deploy_config(deploy_config, target_node, batch_param_value=None,
# First determine which llm replicaCount to use based on teirerank.enabled
services = new_config.get("services", {})
teirerank_enabled = services.get("teirerank", {}).get("enabled", True)
teirerank_enabled = services.get("teirerank", {}).get("enabled", False)
# Process each service's configuration
for service_name, service_config in services.items():


@@ -16,7 +16,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/chatqna-conversation-ui](https://hub.docker.com/r/opea/chatqna-conversation-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile.react) | Chatqna React UI. Facilitates interaction with users, enabling chat-based Q&A with conversation history stored in the browser's local storage. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/react/README.md) |
| [opea/chatqna-ui](https://hub.docker.com/r/opea/chatqna-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile) | Chatqna UI entry. Facilitates interaction with users to answer questions | [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/svelte/README.md) |
| [opea/codegen](https://hub.docker.com/r/opea/codegen) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/Dockerfile) | Codegen gateway. Provides automatic creation of source code from high-level representations | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/README.md) |
| [opea/codegen-gradio-ui]() | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.gradio) | Codegen Gradio UI entry. Interact with users to generate source code by providing high-level descriptions or inputs. | |
| [opea/codegen-gradio-ui](https://hub.docker.com/r/opea/codegen-gradio-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.gradio) | Codegen Gradio UI entry. Interact with users to generate source code by providing high-level descriptions or inputs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/gradio/README.md) |
| [opea/codegen-react-ui](https://hub.docker.com/r/opea/codegen-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile.react) | Codegen React UI. Interact with users to generate appropriate code based on current user input. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/react/README.md) |
| [opea/codegen-ui](https://hub.docker.com/r/opea/codegen-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/docker/Dockerfile) | Codegen UI entry. Facilitates interaction with users, automatically generate code based on user's descriptions | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeGen/ui/svelte/README.md) |
| [opea/codetrans](https://hub.docker.com/r/opea/codetrans) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeTrans/Dockerfile) | Codetrans gateway. Provide services to convert source code written in one programming language to an equivalent version in another programming language. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/CodeTrans/README.md) |
@@ -29,7 +29,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/edgecraftrag](https://hub.docker.com/r/opea/edgecraftrag) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/Dockerfile) | Edge Craft RAG (EC-RAG) gateway. Provides a customizable, production-ready retrieval-enhanced generation system that is optimized for edge solutions. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/README.md) |
| [opea/edgecraftrag-server](https://hub.docker.com/r/opea/edgecraftrag-server) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/Dockerfile.server) | Edge Craft RAG (EC-RAG) server, Provides a customizable, production-ready retrieval-enhanced generation system that is optimized for edge solutions. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/README.md) |
| [opea/edgecraftrag-ui](https://hub.docker.com/r/opea/edgecraftrag-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.ui) | Edge Craft RAG (EC-RAG) UI entry. Ensuring high-quality, performant interactions tailored for edge environments. | |
| [opea/edgecraftrag-ui-gradio]() | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.gradio) | Edge Craft RAG (EC-RAG) Gradio UI entry. Interact with users to provide a customizable, production-ready retrieval-enhanced generation system optimized for edge solutions. | |
| [opea/edgecraftrag-ui-gradio](https://hub.docker.com/r/opea/edgecraftrag-ui-gradio) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/EdgeCraftRAG/ui/docker/Dockerfile.gradio) | Edge Craft RAG (EC-RAG) Gradio UI entry. Interact with users to provide a customizable, production-ready retrieval-enhanced generation system optimized for edge solutions. | |
| [opea/graphrag](https://hub.docker.com/r/opea/graphrag) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/Dockerfile) | GraphRAG gateway, Local and global queries are processed using knowledge graphs extracted from source documents. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/README.md) |
| [opea/graphrag-react-ui](https://hub.docker.com/r/opea/graphrag-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/docker/Dockerfile.react) | Graphrag React UI entry. Facilitates interaction with users, enabling queries and providing relevant answers using knowledge graphs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/react/README.md) |
| [opea/graphrag-ui](https://hub.docker.com/r/opea/graphrag-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/docker/Dockerfile) | Graphrag UI entry. Interact with users to facilitate queries and provide relevant answers using knowledge graphs. | [Link](https://github.com/opea-project/GenAIExamples/blob/main/GraphRAG/ui/svelte/README.md) |
@@ -54,7 +54,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/animation](https://hub.docker.com/r/opea/animation) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/animation/src/Dockerfile) | OPEA Avatar Animation microservice for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/animation/src/README.md) |
| [opea/asr](https://hub.docker.com/r/opea/asr) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/src/Dockerfile) | OPEA Audio-Speech-Recognition microservice for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/src/README.md) |
| [opea/chathistory-mongo](https://hub.docker.com/r/opea/chathistory-mongo) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/chathistory/src/Dockerfile) | OPEA Chat History microservice is based on a MongoDB database and is designed to allow users to store, retrieve and manage chat conversations. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/chathistory/src/README.md) |
| [opea/comps-base](https://hub.docker.com/r/opea/comps-base) | [Link](https://github.com/opea-project/GenAIComps/blob/main/Dockerfile) | OPEA Microservice base image. | [Link](https://github.com/opea-project/GenAIComps/blob/main/README.md) |
| [opea/dataprep](https://hub.docker.com/r/opea/dataprep) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/dataprep/src/Dockerfile) | OPEA data preparation microservices for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/dataprep/README.md) |
| [opea/embedding](https://hub.docker.com/r/opea/embedding) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/embeddings/src/Dockerfile) | OPEA mosec embedding microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/embeddings/src/README.md) |
| [opea/embedding-multimodal-bridgetower](https://hub.docker.com/r/opea/embedding-multimodal-bridgetower) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/bridgetower/src/Dockerfile) | OPEA multimodal embedding microservice based on BridgeTower for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/bridgetower/src/README.md) |
@@ -63,7 +63,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/feedbackmanagement-mongo](https://hub.docker.com/r/opea/feedbackmanagement-mongo) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/feedback_management/src/Dockerfile) | OPEA feedback management microservice uses MongoDB database for GenAI applications. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/feedback_management/src/README.md) |
| [opea/finetuning](https://hub.docker.com/r/opea/finetuning) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile) | OPEA Fine-tuning microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/finetuning-gaudi](https://hub.docker.com/r/opea/finetuning-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile.intel_hpu) | OPEA Fine-tuning microservice for GenAI application use on the Gaudi | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/finetuning-xtune](https://hub.docker.com/r/opea/finetuning-xtune) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/Dockerfile.xtune) | OPEA Fine-tuning microservice based on Xtune for GenAI application use on the Arc A770 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/src/README.md) |
| [opea/gpt-sovits](https://hub.docker.com/r/opea/gpt-sovits) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/Dockerfile) | OPEA GPT-SoVITS service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md) |
| [opea/guardrails](https://hub.docker.com/r/opea/guardrails) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/guardrails/Dockerfile) | OPEA guardrail microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/guardrails/README.md) |
| [opea/guardrails-bias-detection](https://hub.docker.com/r/opea/guardrails-bias-detection) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/bias_detection/Dockerfile) | OPEA guardrail microservice to provide bias detection for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/guardrails/src/bias_detection/README.md) |
@@ -76,19 +76,19 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/image2image-gaudi](https://hub.docker.com/r/opea/image2image-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2image/src/Dockerfile.intel_hpu) | OPEA Image-to-Image microservice for GenAI application use on the Gaudi. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2image/src/README.md) |
| [opea/image2video](https://hub.docker.com/r/opea/image2video) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/Dockerfile) | OPEA image-to-video microservice for GenAI application. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/README.md) |
| [opea/image2video-gaudi](https://hub.docker.com/r/opea/image2video-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/Dockerfile.intel_hpu) | OPEA image-to-video microservice for GenAI application use on the Gaudi. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/image2video/src/README.md) |
| [opea/ipex-llm](https://hub.docker.com/r/opea/ipex-llm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/Dockerfile) | OPEA Large Language Model (LLM) service based on intel-extension-for-pytorch, providing specialized optimizations such as paged attention and ROPE fusion. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/ipex/src/README.md) |
| [opea/llm-docsum](https://hub.docker.com/r/opea/llm-docsum) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/doc-summarization/Dockerfile) | OPEA LLM microservice upon docsum docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/doc-summarization/README.md) |
| [opea/llm-eval](https://hub.docker.com/r/opea/llm-eval) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/utils/lm-eval/Dockerfile) | OPEA LLM microservice upon eval docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/utils/lm-eval/README.md) |
| [opea/llm-faqgen](https://hub.docker.com/r/opea/llm-faqgen) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/faq-generation/Dockerfile) | OPEA FAQ Generation Microservice is designed to generate frequently asked questions from document input using the HuggingFace Text Generation Inference (TGI) framework. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/faq-generation/README.md) |
| [opea/llm-textgen](https://hub.docker.com/r/opea/llm-textgen) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile) | OPEA LLM microservice upon textgen docker image for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README.md) |
| [opea/llm-textgen-gaudi](https://hub.docker.com/r/opea/llm-textgen-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile.intel_hpu) | OPEA LLM microservice upon textgen docker image for GenAI application use on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README.md) |
| [opea/llm-textgen-phi4-gaudi](https://hub.docker.com/r/opea/llm-textgen-phi4-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/Dockerfile.intel_hpu_phi4) | OPEA LLM microservice upon textgen docker image for GenAI application use on the Gaudi2 with Phi4 optimization. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/src/text-generation/README_native.md) |
| [opea/lvm](https://hub.docker.com/r/opea/lvm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/Dockerfile) | OPEA large vision model (LVM) microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/README.md) |
| [opea/lvm-llama-vision](https://hub.docker.com/r/opea/lvm-llama-vision) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile) | OPEA microservice running Llama Vision as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llama-vision-guard](https://hub.docker.com/r/opea/lvm-llama-vision-guard) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile.guard) | OPEA microservice running Llama Vision Guard as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llama-vision-tp](https://hub.docker.com/r/opea/lvm-llama-vision-tp) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/Dockerfile.tp) | OPEA microservice running Llama Vision with DeepSpeed as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llama-vision/src/README.md) |
| [opea/lvm-llava](https://hub.docker.com/r/opea/lvm-llava) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/Dockerfile) | OPEA microservice running LLaVA as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/README.md) |
| [opea/lvm-llava-gaudi](https://hub.docker.com/r/opea/lvm-llava-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/Dockerfile.intel_hpu) | OPEA microservice running LLaVA as a large vision model (LVM) server for GenAI applications on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/llava/src/README.md) |
| [opea/lvm-predictionguard](https://hub.docker.com/r/opea/lvm-predictionguard) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/predictionguard/src/Dockerfile) | OPEA microservice running PredictionGuard as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/predictionguard/src/README.md) |
| [opea/lvm-video-llama](https://hub.docker.com/r/opea/lvm-video-llama) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/video-llama/src/Dockerfile) | OPEA microservice running Video-Llama as a large vision model (LVM) server for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/video-llama/src/README.md) |
| [opea/nginx](https://hub.docker.com/r/opea/nginx) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/src/Dockerfile) | OPEA nginx microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/deployment/kubernetes/README.md) |
@@ -98,20 +98,20 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/retriever](https://hub.docker.com/r/opea/retriever) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/src/Dockerfile) | OPEA retrieval microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/README.md) |
| [opea/speecht5](https://hub.docker.com/r/opea/speecht5) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/speecht5/src/Dockerfile) | OPEA SpeechT5 service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/speecht5-gaudi](https://hub.docker.com/r/opea/speecht5-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/speecht5/src/Dockerfile.intel_hpu) | OPEA SpeechT5 service on the Gaudi2 for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/struct2graph](https://hub.docker.com/r/opea/struct2graph) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/Dockerfile) | OPEA struct-to-graph service for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/struct2graph/src/README.md) |
| [opea/text2cypher-gaudi]() | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2cypher/src/Dockerfile.intel_hpu) | OPEA Text-to-Cypher microservice for GenAI application use on the Gaudi2. | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2cypher/src/README.md) |
| [opea/text2graph](https://hub.docker.com/r/opea/text2graph) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/Dockerfile) | OPEA Text-to-Graph microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2graph/src/README.md) |
| [opea/text2image](https://hub.docker.com/r/opea/text2image) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/Dockerfile) | OPEA text-to-image microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/README.md) |
| [opea/text2image-gaudi](https://hub.docker.com/r/opea/text2image-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/Dockerfile.intel_hpu) | OPEA text-to-image microservice for GenAI application use on the Gaudi | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2image/src/README.md) |
| [opea/text2image-ui](https://hub.docker.com/r/opea/text2image-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/Text2Image/ui/docker/Dockerfile) | OPEA text-to-image microservice UI entry for GenAI application | [Link](https://github.com/opea-project/GenAIExamples/blob/main/Text2Image/README.md) |
| [opea/text2sql](https://hub.docker.com/r/opea/text2sql) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2sql/src/Dockerfile) | OPEA text to Structured Query Language microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/text2sql/src/README.md) |
| [opea/text2sql-react-ui](https://hub.docker.com/r/opea/text2sql-react-ui) | [Link](https://github.com/opea-project/GenAIExamples/blob/main/DBQnA/ui/docker/Dockerfile.react) | OPEA text to Structured Query Language microservice react UI entry for GenAI application | [Link](https://github.com/opea-project/GenAIExamples/blob/main/DBQnA/README.md) |
| [opea/tts](https://hub.docker.com/r/opea/tts) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/Dockerfile) | OPEA Text-To-Speech microservice for GenAI application | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) |
| [opea/vllm](https://hub.docker.com/r/opea/vllm) | [Link](https://github.com/vllm-project/vllm/blob/v0.8.3/docker/Dockerfile.cpu) | Deploying and serving vLLM models, based on the vLLM project | [Link](https://github.com/vllm-project/vllm/blob/v0.8.3/README.md) |
| [opea/vllm-arc](https://hub.docker.com/r/opea/vllm-arc) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/Dockerfile.intel_gpu) | Deploying and serving vLLM models on Arc, based on the vLLM project | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/README.md) |
| [opea/vllm-gaudi](https://hub.docker.com/r/opea/vllm-gaudi) | [Link](https://github.com/HabanaAI/vllm-fork/blob/v0.6.6.post1%2BGaudi-1.20.0/Dockerfile.hpu) | Deploying and serving vLLM models on Gaudi2, based on the vLLM project | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/README.md) |
| [opea/vllm-openvino](https://hub.docker.com/r/opea/vllm-openvino) | [Link](https://github.com/vllm-project/vllm/blob/v0.6.1/Dockerfile.openvino) | Deploying and serving vLLM models with the OpenVINO framework, based on the vLLM project | [Link](https://github.com/vllm-project/vllm/blob/main/README.md) |
| [opea/vllm-rocm](https://hub.docker.com/r/opea/vllm-rocm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/Dockerfile.amd_gpu) | Deploying and serving vLLM models on AMD ROCm, based on the vLLM project | |
| [opea/wav2lip](https://hub.docker.com/r/opea/wav2lip) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/src/Dockerfile) | OPEA microservice that generates lip movements from audio files, with Pathway, for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/deployment/kubernetes/README.md) |
| [opea/wav2lip-gaudi](https://hub.docker.com/r/opea/wav2lip-gaudi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/src/Dockerfile.intel_hpu) | OPEA microservice that generates lip movements from audio files, with Pathway, for GenAI applications on the Gaudi2 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/wav2lip/deployment/kubernetes/README.md) |
| [opea/web-retriever](https://hub.docker.com/r/opea/web-retriever) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/web_retrievers/src/Dockerfile) | OPEA web retrieval microservice based on the Chroma vector database for GenAI applications | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/web_retrievers/src/README.md) |
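Most of the images above are published under the `opea` namespace on Docker Hub. As a minimal sketch (image name, release tag, and port mapping here are illustrative, not prescriptive), pulling and running one of them looks like:

```shell
# Pull a published OPEA image from Docker Hub (pin a release tag rather than "latest" in production)
docker pull opea/nginx:1.3

# Start it in the background, mapping container port 80 to host port 8080
docker run -d --name opea-nginx -p 8080:80 opea/nginx:1.3
```

Images without a Docker Hub link in the table (for example `opea/text2cypher-gaudi`) must be built locally from the linked Dockerfile instead.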


@@ -1,7 +1,7 @@
kubernetes
locust
numpy
opea-eval>=1.3
prometheus_client
pytest
pyyaml

version.txt Normal file

@@ -0,0 +1,3 @@
VERSION_MAJOR 1
VERSION_MINOR 3
VERSION_PATCH 0