Adding files to deploy DocSum application on ROCm vLLM (#1572)

Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>

Commit 319dbdaa6b (parent 1a0c5f03c6), committed via GitHub.
New binary files:

- DocSum/assets/img/ui-result-page.png (117 KiB)
- DocSum/assets/img/ui-starting-page.png (22 KiB)
@@ -1,175 +1,405 @@

# Build and Deploy DocSum Application on AMD GPU (ROCm)

## Build Docker Images

### 1. Build Docker Image

- #### Create the application install directory and go to it:
```bash
mkdir ~/docsum-install && cd ~/docsum-install
```
- #### Clone the GenAIExamples repository (the default branch "main" is used here):

```bash
git clone https://github.com/opea-project/GenAIExamples.git
```

If you need a specific branch/tag of the GenAIExamples repository (replace v1.3 with the required value):

```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```

Note that when using a specific version of the code, you need to use the README from that version.
- #### Go to the build directory:

```bash
cd ~/docsum-install/GenAIExamples/DocSum/docker_image_build
```

- #### Clean up the GenAIComps repository if it was previously cloned in this directory. This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:

```bash
echo Y | rm -R GenAIComps
```
- #### Clone the GenAIComps repository (the default branch "main" is used here):

```bash
git clone https://github.com/opea-project/GenAIComps.git
```

If you use a specific tag of the GenAIExamples repository, you should also use the corresponding tag for GenAIComps (replace v1.3 with the required value):

```bash
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout v1.3
```

Note that when using a specific version of the code, you need to use the README from that version.
- #### Set the list of images for the build (from the build file build.yaml)

Depending on whether you deploy the vLLM-based or the TGI-based application, set the service list as follows:

#### vLLM-based application

```bash
service_list="docsum docsum-gradio-ui whisper llm-docsum vllm-rocm"
```

#### TGI-based application

```bash
service_list="docsum docsum-gradio-ui whisper llm-docsum"
```
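The unquoted `$service_list` expansion is what passes each service name as a separate argument to `docker compose build`. A quick sketch of how the shell splits the vLLM list from above:

```bash
service_list="docsum docsum-gradio-ui whisper llm-docsum vllm-rocm"
# Word splitting turns the single string into five separate arguments:
printf '%s\n' ${service_list}
```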
- #### Optional. Pull the TGI Docker image (do this if you want to use TGI):

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
```
- #### Build the Docker images:

```bash
docker compose -f build.yaml build ${service_list} --no-cache
```

After the build, check the list of images with the command:

```bash
docker image ls
```
The list of images should include:

##### vLLM-based application:

- opea/vllm-rocm:latest
- opea/llm-docsum:latest
- opea/whisper:latest
- opea/docsum:latest
- opea/docsum-gradio-ui:latest

##### TGI-based application:

- ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
- opea/llm-docsum:latest
- opea/whisper:latest
- opea/docsum:latest
- opea/docsum-gradio-ui:latest
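A small sketch of how such a checklist could be automated. The `have` variable holds sample data here; in real use it would come from `docker image ls --format '{{.Repository}}:{{.Tag}}'`:

```bash
# Sample data; in real use: have=$(docker image ls --format '{{.Repository}}:{{.Tag}}')
have="opea/llm-docsum:latest
opea/whisper:latest
opea/docsum:latest"
required="opea/llm-docsum:latest opea/docsum-gradio-ui:latest"
for img in $required; do
  if echo "$have" | grep -qx "$img"; then
    echo "found: $img"
  else
    echo "missing: $img"
  fi
done
# -> found: opea/llm-docsum:latest
# -> missing: opea/docsum-gradio-ui:latest
```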
---

## Deploy the DocSum Application
### Docker Compose Configuration for AMD GPUs

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
The configuration is used in both Compose files:

- compose_vllm.yaml - for the vLLM-based application
- compose.yaml - for the TGI-based application

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/:/dev/dri/
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderDN` device IDs. For example:

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderDN` IDs for your GPU. You can find more information about accessing and restricting AMD GPUs at https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#docker-restrict-gpus
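One utility-free rule of thumb: DRI render nodes are numbered starting from 128, so the render node for card index N is `renderD(128 + N)`. A sketch (the index value is just an example; confirm against `ls /dev/dri` on your host):

```bash
# Example only: card index 1; verify against `ls /dev/dri` on your host.
card_index=1
echo "/dev/dri/card${card_index}"             # -> /dev/dri/card1
echo "/dev/dri/renderD$((128 + card_index))"  # -> /dev/dri/renderD129
```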
### Set deploy environment variables

#### Setting variables in the operating system environment:

##### Set variable HUGGINGFACEHUB_API_TOKEN:

```bash
### Replace the string 'your_huggingfacehub_token' with your HuggingFace Hub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
```
#### Set variable values in the set_env\*\*\*\*.sh file:

Go to the Docker Compose directory:

```bash
cd ~/docsum-install/GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
```

The example uses the Nano text editor; you can use any text editor you like:

#### If you use vLLM

```bash
nano set_env_vllm.sh
```

#### If you use TGI

```bash
nano set_env.sh
```

If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
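When a proxy is configured, the services' own addresses should bypass it, which is what the `no_proxy` variable consumed by the Compose files is for. A sketch with a placeholder address:

```bash
# Placeholder address; use your real HOST_IP.
export HOST_IP=192.168.1.10
export no_proxy="localhost,127.0.0.1,${HOST_IP}"
echo "$no_proxy"   # -> localhost,127.0.0.1,192.168.1.10
```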
Set the values of the variables:

- **HOST_IP, HOST_IP_EXTERNAL** - These variables configure the name/address of the service in the operating system environment so that the application services can interact with each other and with the outside world.

  If your server uses only an internal address and is not accessible from the Internet, the values of these two variables are the same: the server's internal name/address.

  If your server uses only an external, Internet-accessible address, the values of these two variables are also the same: the server's external name/address.

  If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then HOST_IP is the internal name/address of the server, and HOST_IP_EXTERNAL is the external name/address of the proxy/firewall/load balancer behind which the server sits.

  We set these values in the set_env\*\*\*\*.sh file.

- **Variables with names like "\*\*\*\*\*\*\_PORT"** - These variables set the IP port numbers for establishing network connections to the application services.

  The values shipped in set_env.sh and set_env_vllm.sh were used for development and testing of the application and are configured for the environment in which that development was performed. Configure them according to the network access rules of your environment's server, and make sure they do not overlap with IP ports already in use by other applications.
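Before starting the services, it can help to fail fast on unset variables. A minimal sketch; the `check_vars` helper is hypothetical (not part of the repository), and the variable names are taken from the set_env scripts:

```bash
# Hypothetical helper (not part of the repository).
check_vars() {
  for v in "$@"; do
    if [ -z "${!v}" ]; then
      echo "ERROR: $v is not set"
      return 1
    fi
  done
  echo "all variables set"
}

export HOST_IP=192.168.1.10
export DOCSUM_BACKEND_SERVER_PORT=18072
check_vars HOST_IP DOCSUM_BACKEND_SERVER_PORT   # -> all variables set
```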
#### Set the variables with the set_env\*\*\*\*.sh script:

#### If you use vLLM

```bash
. set_env_vllm.sh
```

#### If you use TGI

```bash
. set_env.sh
```
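Note that the leading `.` (source) runs the script in the current shell, so the exported variables persist for the subsequent `docker compose` commands; running it as `./set_env.sh` would set them only in a subshell. A sketch of what sourcing produces, using placeholder values:

```bash
# Placeholder values; set_env.sh exports the real ones.
export HOST_IP=192.168.1.10
export DOCSUM_BACKEND_SERVER_PORT=18072
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
echo "$BACKEND_SERVICE_ENDPOINT"   # -> http://192.168.1.10:18072/v1/docsum
```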
### Start the services:

#### If you use vLLM

```bash
docker compose -f compose_vllm.yaml up -d
```

#### If you use TGI

```bash
docker compose -f compose.yaml up -d
```
All containers should be running and should not restart:

##### If you use vLLM:

- docsum-vllm-service
- docsum-llm-server
- whisper-service
- docsum-backend-server
- docsum-ui-server

##### If you use TGI:

- docsum-tgi-service
- docsum-llm-server
- whisper-service
- docsum-backend-server
- docsum-ui-server
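A sketch of how the "running and not restarting" check could be scripted. The `status` variable holds sample data here; in real use it would come from `docker ps --format '{{.Names}} {{.Status}}'`:

```bash
# Sample data; in real use: status=$(docker ps --format '{{.Names}} {{.Status}}')
status="docsum-vllm-service Up 5 minutes
docsum-llm-server Restarting (1) 3 seconds ago
whisper-service Up 5 minutes"
echo "$status" | while read -r name state _; do
  case "$state" in
    Up) echo "OK: $name" ;;
    *)  echo "PROBLEM: $name ($state)" ;;
  esac
done
```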
---

## Validate the Services

### 1. Validate the vLLM/TGI Service

#### If you use vLLM:

```bash
curl http://${HOST_IP}:${DOCSUM_VLLM_SERVICE_PORT}/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "prompt": "What is a Deep Learning?",
        "max_tokens": 30,
        "temperature": 0
      }'
```
Check the response from the service. The response should be JSON similar to:

```json
{
  "id": "cmpl-0844e21b824c4472b77f2851a177eca2",
  "object": "text_completion",
  "created": 1742385979,
  "model": "meta-llama/Meta-Llama-3-8B-Instruct",
  "choices": [
    {
      "index": 0,
      "text": " Deep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. It is called \"deep\" because it",
      "logprobs": null,
      "finish_reason": "length",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "usage": { "prompt_tokens": 7, "total_tokens": 37, "completion_tokens": 30, "prompt_tokens_details": null }
}
```

If the response contains meaningful text in the "choices[0].text" field, the vLLM service has started successfully.
#### If you use TGI:

```bash
curl http://${HOST_IP}:${DOCSUM_TGI_SERVICE_PORT}/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":64, "do_sample": true}}' \
  -H 'Content-Type: application/json'
```

Check the response from the service. The response should be JSON similar to:

```json
{
  "generated_text": " In-Depth Explanation\nDeep Learning involves the use of artificial neural networks (ANNs) with multiple layers to analyze and interpret complex data. In this article, we will explore what is deep learning, its types, and how it works.\n\n### What is Deep Learning?\n\nDeep Learning is a subset of Machine Learning that involves"
}
```

If the response contains meaningful text in the "generated_text" field, the TGI service has started successfully.
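To eyeball just the generated text, the field can be cut out of the response with `sed`. This is a naive sketch: it breaks on escaped quotes in the text, so prefer `jq -r .generated_text` if jq is available; the sample response below is a placeholder:

```bash
# Placeholder response; in real use pipe the curl output instead.
response='{"generated_text": "Deep Learning is a subset of Machine Learning"}'
echo "$response" | sed -E 's/.*"generated_text": *"([^"]*)".*/\1/'
# -> Deep Learning is a subset of Machine Learning
```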
### 2. Validate the LLM Service

```bash
curl http://${HOST_IP}:${DOCSUM_LLM_SERVER_PORT}/v1/docsum \
  -X POST \
  -d '{"messages":"What is Deep Learning?"}' \
  -H 'Content-Type: application/json'
```

Check the response from the service. The response should be JSON similar to:

```json
{
  "id": "1e47daf13a8bc73495dbfd9836eaa7e4",
  "text": " Q: What is Deep Learning?\n A: Deep Learning is a subset of Machine Learning that involves the use of artificial neural networks to analyze and interpret data. It is called \"deep\" because it involves multiple layers of interconnected nodes or \"neurons\" that process and transform the data.\n\n Q: What is the main difference between Deep Learning and Machine Learning?\n A: The main difference between Deep Learning and Machine Learning is the complexity of the models used. Machine Learning models are typically simpler and more linear, while Deep Learning models are more complex and non-linear, allowing them to learn and represent more abstract and nuanced patterns in data.\n\n Q: What are some common applications of Deep Learning?\n A: Some common applications of Deep Learning include image and speech recognition, natural language processing, recommender systems, and autonomous vehicles.\n\n Q: Is Deep Learning a new field?\n A: Deep Learning is not a new field, but it has gained significant attention and popularity in recent years due to advances in computing power, data storage, and algorithms.\n\n Q: Can Deep Learning be used for any type of data?\n A: Deep Learning can be used for any type of data that can be represented as a numerical array, such as images, audio, text, and time series data.\n\n Q: Is Deep Learning a replacement for traditional Machine Learning?\n A: No, Deep Learning is not a replacement for traditional Machine Learning. Instead, it is a complementary technology that can be used in conjunction with traditional Machine Learning techniques to solve complex problems.\n\n Q: What are some of the challenges associated with Deep Learning?\n A: Some of the challenges associated with Deep Learning include the need for large amounts of data, the risk of overfitting, and the difficulty of interpreting the results of the models.\n\n Q: Can Deep Learning be used for real-time applications?\n A: Yes, Deep Learning can be used for real-time applications, such as image and speech recognition, and autonomous vehicles.\n\n Q: Is Deep Learning a field that requires a lot of mathematical knowledge?\n A: While some mathematical knowledge is helpful, it is not necessary to have a deep understanding of mathematics to work with Deep Learning. Many Deep Learning libraries and frameworks provide pre-built functions and tools that can be used to implement Deep Learning models.",
  "prompt": "What is Deep Learning?"
}
```

If the response contains meaningful text in the "text" field, the LLM service has started successfully.
### 3. Validate the MegaService

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "messages=What is Deep Learning?" \
  -F "max_tokens=100" \
  -F "stream=False"
```
Check the response from the service. The response should be JSON similar to:

```json
{
  "id": "chatcmpl-tjwp8giP2vyvRRxnqzc3FU",
  "object": "chat.completion",
  "created": 1742386156,
  "model": "docsum",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " Q: What is Deep Learning?\n A: Deep Learning is a subset of Machine Learning that involves the use of artificial neural networks to analyze and interpret data. It is called \"deep\" because it involves multiple layers of interconnected nodes or \"neurons\" that process and transform the data.\n\n Q: What is the main difference between Deep Learning and Machine Learning?\n A: The main difference between Deep Learning and Machine Learning is the complexity of the models used. Machine Learning models are typically simpler and"
      },
      "finish_reason": "stop",
      "metadata": null
    }
  ],
  "usage": { "prompt_tokens": 0, "total_tokens": 0, "completion_tokens": 0 }
}
```

If the response contains meaningful text in the "choices[0].message.content" field, the MegaService has started successfully.
### 4. Validate the Frontend (UI)

To access the UI, open the URL http://${HOST_IP_EXTERNAL}:${DOCSUM_FRONTEND_PORT} in a browser. A page like this should open:

![project-screenshots](../../../assets/img/ui-starting-page.png)

If a page of this type has opened, the service is running and responding, and we can proceed to functional UI testing.

For example, let's take a description of water from the Wiki. Copy the first few paragraphs into the text field and click Generate Summary. A page with the result of the task should then open:

![project-screenshots](../../../assets/img/ui-result-page.png)

If the result shown on the page is correct, the verification of the UI service is considered successful.
### 5. Stop the application

#### If you use vLLM

```bash
cd ~/docsum-install/GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
```

#### If you use TGI

```bash
cd ~/docsum-install/GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down
```
Changes to the TGI Compose file (compose.yaml):

```diff
@@ -6,7 +6,7 @@ services:
     image: ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
     container_name: docsum-tgi-service
     ports:
-      - "${DOCSUM_TGI_SERVICE_PORT}:80"
+      - "${DOCSUM_TGI_SERVICE_PORT:-8008}:80"
     environment:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
@@ -16,12 +16,11 @@ services:
       host_ip: ${host_ip}
       DOCSUM_TGI_SERVICE_PORT: ${DOCSUM_TGI_SERVICE_PORT}
     volumes:
-      - "/var/opea/docsum-service/data:/data"
-    shm_size: 1g
+      - "${MODEL_CACHE:-./data}:/data"
+    shm_size: 20g
     devices:
       - /dev/kfd:/dev/kfd
-      - /dev/dri/${DOCSUM_CARD_ID}:/dev/dri/${DOCSUM_CARD_ID}
-      - /dev/dri/${DOCSUM_RENDER_ID}:/dev/dri/${DOCSUM_RENDER_ID}
+      - /dev/dri/:/dev/dri/
     cap_add:
       - SYS_PTRACE
     group_add:
@@ -34,7 +33,7 @@ services:
       interval: 10s
       timeout: 10s
       retries: 100
-    command: --model-id ${DOCSUM_LLM_MODEL_ID} --max-input-length ${MAX_INPUT_TOKENS} --max-total-tokens ${MAX_TOTAL_TOKENS}
+    command: --model-id ${DOCSUM_LLM_MODEL_ID} --max-input-length ${DOCSUM_MAX_INPUT_TOKENS} --max-total-tokens ${DOCSUM_MAX_TOTAL_TOKENS}

   docsum-llm-server:
     image: ${REGISTRY:-opea}/llm-docsum:${TAG:-latest}
@@ -45,26 +44,16 @@ services:
     ports:
       - "${DOCSUM_LLM_SERVER_PORT}:9000"
     ipc: host
-    group_add:
-      - video
-    security_opt:
-      - seccomp:unconfined
-    cap_add:
-      - SYS_PTRACE
-    devices:
-      - /dev/kfd:/dev/kfd
-      - /dev/dri/${DOCSUM_CARD_ID}:/dev/dri/${DOCSUM_CARD_ID}
-      - /dev/dri/${DOCSUM_RENDER_ID}:/dev/dri/${DOCSUM_RENDER_ID}
     environment:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      LLM_ENDPOINT: "http://${HOST_IP}:${DOCSUM_TGI_SERVICE_PORT}"
+      LLM_ENDPOINT: ${DOCSUM_TGI_LLM_ENDPOINT}
       HUGGINGFACEHUB_API_TOKEN: ${DOCSUM_HUGGINGFACEHUB_API_TOKEN}
-      MAX_INPUT_TOKENS: ${MAX_INPUT_TOKENS}
-      MAX_TOTAL_TOKENS: ${MAX_TOTAL_TOKENS}
+      MAX_INPUT_TOKENS: ${DOCSUM_MAX_INPUT_TOKENS}
+      MAX_TOTAL_TOKENS: ${DOCSUM_MAX_TOTAL_TOKENS}
       LLM_MODEL_ID: ${DOCSUM_LLM_MODEL_ID}
-      DocSum_COMPONENT_NAME: ${DocSum_COMPONENT_NAME}
+      DocSum_COMPONENT_NAME: "OpeaDocSumTgi"
       LOGFLAG: ${LOGFLAG:-False}
     restart: unless-stopped
@@ -72,7 +61,7 @@ services:
     image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
     container_name: whisper-service
     ports:
-      - "7066:7066"
+      - "${DOCSUM_WHISPER_PORT:-7066}:7066"
     ipc: host
     environment:
       no_proxy: ${no_proxy}
@@ -89,13 +78,14 @@ services:
     ports:
       - "${DOCSUM_BACKEND_SERVER_PORT}:8888"
     environment:
-      - no_proxy=${no_proxy}
-      - https_proxy=${https_proxy}
-      - http_proxy=${http_proxy}
-      - MEGA_SERVICE_HOST_IP=${HOST_IP}
-      - LLM_SERVICE_HOST_IP=${HOST_IP}
-      - ASR_SERVICE_HOST_IP=${ASR_SERVICE_HOST_IP}
+      no_proxy: ${no_proxy}
+      https_proxy: ${https_proxy}
+      http_proxy: ${http_proxy}
+      MEGA_SERVICE_HOST_IP: ${HOST_IP}
+      LLM_SERVICE_HOST_IP: ${HOST_IP}
+      LLM_SERVICE_PORT: ${DOCSUM_LLM_SERVER_PORT}
+      ASR_SERVICE_HOST_IP: ${ASR_SERVICE_HOST_IP}
+      ASR_SERVICE_PORT: ${DOCSUM_WHISPER_PORT}
     ipc: host
     restart: always
@@ -107,11 +97,11 @@ services:
     ports:
       - "5173:5173"
     environment:
-      - no_proxy=${no_proxy}
-      - https_proxy=${https_proxy}
-      - http_proxy=${http_proxy}
-      - BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT}
-      - DOC_BASE_URL=${BACKEND_SERVICE_ENDPOINT}
+      no_proxy: ${no_proxy}
+      https_proxy: ${https_proxy}
+      http_proxy: ${http_proxy}
+      BACKEND_SERVICE_ENDPOINT: ${BACKEND_SERVICE_ENDPOINT}
+      DOC_BASE_URL: ${BACKEND_SERVICE_ENDPOINT}
     ipc: host
     restart: always
```
New file DocSum/docker_compose/amd/gpu/rocm/compose_vllm.yaml (111 lines):
|
||||
# Copyright (C) 2024 Advanced Micro Devices, Inc.
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
services:
|
||||
docsum-vllm-service:
|
||||
image: ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest}
|
||||
container_name: docsum-vllm-service
|
||||
ports:
|
||||
- "${DOCSUM_VLLM_SERVICE_PORT:-8081}:8011"
|
||||
environment:
|
||||
no_proxy: ${no_proxy}
|
||||
http_proxy: ${http_proxy}
|
||||
https_proxy: ${https_proxy}
|
||||
HUGGINGFACEHUB_API_TOKEN: ${DOCSUM_HUGGINGFACEHUB_API_TOKEN}
|
||||
HF_TOKEN: ${DOCSUM_HUGGINGFACEHUB_API_TOKEN}
|
||||
HF_HUB_DISABLE_PROGRESS_BARS: 1
|
||||
HF_HUB_ENABLE_HF_TRANSFER: 0
|
||||
VLLM_USE_TRITON_FLASH_ATTENTION: 0
|
||||
PYTORCH_JIT: 0
|
||||
healthcheck:
|
||||
test: [ "CMD-SHELL", "curl -f http://${HOST_IP}:${DOCSUM_VLLM_SERVICE_PORT:-8081}/health || exit 1" ]
|
||||
interval: 10s
|
||||
timeout: 10s
|
||||
retries: 100
|
||||
volumes:
|
||||
- "${MODEL_CACHE:-./data}:/data"
|
||||
shm_size: 20G
|
||||
devices:
|
||||
- /dev/kfd:/dev/kfd
|
||||
- /dev/dri/:/dev/dri/
|
||||
cap_add:
|
||||
- SYS_PTRACE
|
||||
group_add:
|
||||
- video
|
||||
security_opt:
|
||||
- seccomp:unconfined
|
||||
- apparmor=unconfined
|
||||
command: "--model ${DOCSUM_LLM_MODEL_ID} --swap-space 16 --disable-log-requests --dtype float16 --tensor-parallel-size 4 --host 0.0.0.0 --port 8011 --num-scheduler-steps 1 --distributed-executor-backend \"mp\""
|
||||
ipc: host
|
||||
|
||||
docsum-llm-server:
|
||||
image: ${REGISTRY:-opea}/llm-docsum:${TAG:-latest}
|
||||
container_name: docsum-llm-server
|
||||
depends_on:
|
```yaml
      docsum-vllm-service:
        condition: service_healthy
    ports:
      - "${DOCSUM_LLM_SERVER_PORT}:9000"
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      LLM_ENDPOINT: ${DOCSUM_LLM_ENDPOINT}
      HUGGINGFACEHUB_API_TOKEN: ${DOCSUM_HUGGINGFACEHUB_API_TOKEN}
      MAX_INPUT_TOKENS: ${DOCSUM_MAX_INPUT_TOKENS}
      MAX_TOTAL_TOKENS: ${DOCSUM_MAX_TOTAL_TOKENS}
      LLM_MODEL_ID: ${DOCSUM_LLM_MODEL_ID}
      DocSum_COMPONENT_NAME: "OpeaDocSumvLLM"
      LOGFLAG: ${LOGFLAG:-False}
    restart: unless-stopped

  whisper:
    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
    container_name: whisper-service
    ports:
      - "${DOCSUM_WHISPER_PORT:-7066}:7066"
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    restart: unless-stopped

  docsum-backend-server:
    image: ${REGISTRY:-opea}/docsum:${TAG:-latest}
    container_name: docsum-backend-server
    depends_on:
      - docsum-vllm-service
      - docsum-llm-server
    ports:
      - "${DOCSUM_BACKEND_SERVER_PORT}:8888"
    environment:
      no_proxy: ${no_proxy}
      https_proxy: ${https_proxy}
      http_proxy: ${http_proxy}
      MEGA_SERVICE_HOST_IP: ${HOST_IP}
      LLM_SERVICE_HOST_IP: ${HOST_IP}
      ASR_SERVICE_HOST_IP: ${ASR_SERVICE_HOST_IP}
    ipc: host
    restart: always

  docsum-gradio-ui:
    image: ${REGISTRY:-opea}/docsum-gradio-ui:${TAG:-latest}
    container_name: docsum-ui-server
    depends_on:
      - docsum-backend-server
    ports:
      - "${DOCSUM_FRONTEND_PORT:-5173}:5173"
    environment:
      no_proxy: ${no_proxy}
      https_proxy: ${https_proxy}
      http_proxy: ${http_proxy}
      BACKEND_SERVICE_ENDPOINT: ${BACKEND_SERVICE_ENDPOINT}
      DOC_BASE_URL: ${BACKEND_SERVICE_ENDPOINT}
    ipc: host
    restart: always

networks:
  default:
    driver: bridge
```
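Each service in the compose file publishes a fixed container port behind a host port taken from the environment (e.g. `"${DOCSUM_LLM_SERVER_PORT}:9000"`). A minimal sketch of how such a mapping string splits into its host and container halves, using the default port value from the env scripts:

```bash
# Resolve a compose-style "HOSTPORT:CONTAINERPORT" mapping, as used above for
# docsum-llm-server. The port value is the default from the env scripts.
DOCSUM_LLM_SERVER_PORT="9000"
mapping="${DOCSUM_LLM_SERVER_PORT}:9000"
host_port=${mapping%%:*}       # text before the first ':'
container_port=${mapping##*:}  # text after the last ':'
echo "host=${host_port} container=${container_port}"
```

When the env variable is changed (e.g. to avoid a port clash), only the host side of the mapping moves; the container side stays fixed at what the service listens on.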
```
@@ -3,15 +3,16 @@
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0

export MAX_INPUT_TOKENS=2048
export MAX_TOTAL_TOKENS=4096
export DOCSUM_TGI_IMAGE="ghcr.io/huggingface/text-generation-inference:2.4.1-rocm"
export HOST_IP=''
export DOCSUM_MAX_INPUT_TOKENS="2048"
export DOCSUM_MAX_TOTAL_TOKENS="4096"
export DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export HOST_IP=${host_ip}
export DOCSUM_TGI_SERVICE_PORT="8008"
export DOCSUM_TGI_LLM_ENDPOINT="http://${HOST_IP}:${DOCSUM_TGI_SERVICE_PORT}"
export DOCSUM_HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export DOCSUM_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export DOCSUM_WHISPER_PORT="7066"
export ASR_SERVICE_HOST_IP="${HOST_IP}"
export DOCSUM_LLM_SERVER_PORT="9000"
export DOCSUM_BACKEND_SERVER_PORT="8888"
export DOCSUM_FRONTEND_PORT="5173"
export DOCSUM_BACKEND_SERVER_PORT="18072"
export DOCSUM_FRONTEND_PORT="18073"
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
```
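Before running `docker compose`, the variables exported above must actually be present in the shell. A small sketch (with placeholder values, not project defaults) that fails fast when any required variable is unset or empty:

```bash
# Check that required DocSum variables are non-empty before starting the stack.
# The values below are illustrative placeholders, not taken from the repository.
HOST_IP="192.168.1.10"
DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
DOCSUM_BACKEND_SERVER_PORT="18072"

missing=0
for v in HOST_IP DOCSUM_LLM_MODEL_ID DOCSUM_BACKEND_SERVER_PORT; do
  if [ -z "${!v}" ]; then   # bash indirect expansion: value of the variable named $v
    echo "missing: $v"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "env ok"
```

Running such a check after sourcing the env script surfaces a forgotten `HOST_IP` or token immediately, instead of as a failing container later.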
**DocSum/docker_compose/amd/gpu/rocm/set_env_vllm.sh** (new file, 18 lines)
@@ -0,0 +1,18 @@

```bash
#!/usr/bin/env bash

# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0

export HOST_IP=''
export DOCSUM_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export DOCSUM_MAX_INPUT_TOKENS=2048
export DOCSUM_MAX_TOTAL_TOKENS=4096
export DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export DOCSUM_VLLM_SERVICE_PORT="8008"
export DOCSUM_LLM_ENDPOINT="http://${HOST_IP}:${DOCSUM_VLLM_SERVICE_PORT}"
export DOCSUM_WHISPER_PORT="7066"
export ASR_SERVICE_HOST_IP="${HOST_IP}"
export DOCSUM_LLM_SERVER_PORT="9000"
export DOCSUM_BACKEND_SERVER_PORT="18072"
export DOCSUM_FRONTEND_PORT="18073"
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
```
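The two endpoint URLs in `set_env_vllm.sh` are composed from `HOST_IP` and the port variables. A sketch of the same derivation with a placeholder address (the script itself leaves `HOST_IP` empty for the operator to fill in):

```bash
# Derive the endpoints the same way set_env_vllm.sh does.
# HOST_IP here is a placeholder; the real script expects you to set it first.
HOST_IP="192.168.1.10"
DOCSUM_VLLM_SERVICE_PORT="8008"
DOCSUM_BACKEND_SERVER_PORT="18072"
DOCSUM_LLM_ENDPOINT="http://${HOST_IP}:${DOCSUM_VLLM_SERVICE_PORT}"
BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
echo "$DOCSUM_LLM_ENDPOINT"
echo "$BACKEND_SERVICE_ENDPOINT"
```

Because the backend endpoint embeds `DOCSUM_BACKEND_SERVER_PORT`, changing the published backend port only requires editing one variable; the URL follows.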
```diff
@@ -49,6 +49,11 @@ services:
       dockerfile: comps/llms/src/doc-summarization/Dockerfile
     extends: docsum
     image: ${REGISTRY:-opea}/llm-docsum:${TAG:-latest}
+  vllm-rocm:
+    build:
+      context: GenAIComps
+      dockerfile: comps/third_parties/vllm/src/Dockerfile.amd_gpu
+    image: ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest}
   vllm:
     build:
       context: vllm
```
```
@@ -7,36 +7,34 @@ IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE="./data"

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')
export MAX_INPUT_TOKENS=1024
export MAX_TOTAL_TOKENS=2048
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export DOCSUM_TGI_IMAGE="ghcr.io/huggingface/text-generation-inference:2.4.1-rocm"
export DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"

export HOST_IP=${ip_address}
export host_ip=${ip_address}
export DOCSUM_MAX_INPUT_TOKENS="2048"
export DOCSUM_MAX_TOTAL_TOKENS="4096"
export DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export DOCSUM_TGI_SERVICE_PORT="8008"
export DOCSUM_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export DOCSUM_TGI_LLM_ENDPOINT="http://${HOST_IP}:${DOCSUM_TGI_SERVICE_PORT}"
export DOCSUM_HUGGINGFACEHUB_API_TOKEN=''
export DOCSUM_WHISPER_PORT="7066"
export ASR_SERVICE_HOST_IP="${HOST_IP}"
export DOCSUM_LLM_SERVER_PORT="9000"
export DOCSUM_BACKEND_SERVER_PORT="8888"
export DOCSUM_FRONTEND_PORT="5552"
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export ASR_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:8888/v1/docsum"
export DOCSUM_CARD_ID="card1"
export DOCSUM_RENDER_ID="renderD136"
export DocSum_COMPONENT_NAME="OpeaDocSumTgi"
export LOGFLAG=True
export DOCSUM_BACKEND_SERVER_PORT="18072"
export DOCSUM_FRONTEND_PORT="18073"
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"

function build_docker_images() {
    opea_branch=${opea_branch:-"main"}
    cd $WORKPATH/docker_image_build
    git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git

    pushd GenAIComps
    docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
    popd && sleep 1s
```
```diff
@@ -45,8 +43,8 @@ function build_docker_images() {
     service_list="docsum docsum-gradio-ui whisper llm-docsum"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

-    docker pull ghcr.io/huggingface/text-generation-inference:2.4.1
-    docker images && sleep 1s
+    docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
+    docker images && sleep 3s
 }

 function start_services() {
```
```diff
@@ -54,7 +52,16 @@ function start_services() {
     sed -i "s/backend_address/$ip_address/g" "$WORKPATH"/ui/svelte/.env
     # Start Docker Containers
     docker compose up -d > "${LOG_PATH}"/start_services_with_compose.log
-    sleep 1m
+    n=0
+    until [[ "$n" -ge 500 ]]; do
+        docker logs docsum-tgi-service >& "${LOG_PATH}"/docsum-tgi-service_start.log
+        if grep -q "Connected" "${LOG_PATH}"/docsum-tgi-service_start.log; then
+            break
+        fi
+        sleep 10s
+        n=$((n+1))
+    done
     sleep 5s
 }

 function validate_services() {
```
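The readiness loop added above (poll `docker logs` until a marker line appears, up to a fixed attempt budget) can be factored into a reusable helper. A sketch that polls any log file, demonstrated on a temporary file standing in for a container log (all names here are illustrative):

```bash
# Poll a log file until a pattern appears or the attempt budget is exhausted.
# Returns 0 once the pattern is found, 1 on timeout.
wait_for_log() {
  local logfile=$1 pattern=$2 max_tries=$3 interval=${4:-1}
  local n=0
  until [[ "$n" -ge "$max_tries" ]]; do
    if grep -q "$pattern" "$logfile" 2>/dev/null; then
      return 0
    fi
    sleep "$interval"
    n=$((n+1))
  done
  return 1
}

# Demo: a background writer stands in for `docker logs`.
tmp=$(mktemp)
( sleep 1; echo "Connected" >> "$tmp" ) &
wait_for_log "$tmp" "Connected" 5 1 && echo "service ready"
```

The same helper could wrap `docker logs <container>` redirected to a file, which is exactly the shape of the loop in the test script.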
```diff
@@ -122,7 +129,7 @@ function validate_microservices() {
     # whisper microservice
     ulimit -s 65536
     validate_services \
-        "${host_ip}:7066/v1/asr" \
+        "${host_ip}:${DOCSUM_WHISPER_PORT}/v1/asr" \
         '{"asr_result":"well"}' \
         "whisper-service" \
         "whisper-service" \
```
```diff
@@ -130,7 +137,7 @@ function validate_microservices() {

     # tgi for llm service
     validate_services \
-        "${host_ip}:8008/generate" \
+        "${host_ip}:${DOCSUM_TGI_SERVICE_PORT}/generate" \
         "generated_text" \
         "docsum-tgi-service" \
         "docsum-tgi-service" \
```
```diff
@@ -138,7 +145,7 @@ function validate_microservices() {

     # llm microservice
     validate_services \
-        "${host_ip}:9000/v1/docsum" \
+        "${host_ip}:${DOCSUM_LLM_SERVER_PORT}/v1/docsum" \
         "text" \
         "docsum-llm-server" \
         "docsum-llm-server" \
```
```diff
@@ -151,7 +158,7 @@ function validate_megaservice() {
     local DOCKER_NAME="docsum-backend-server"
     local EXPECTED_RESULT="[DONE]"
     local INPUT_DATA="messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."
-    local URL="${host_ip}:8888/v1/docsum"
+    local URL="${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
     local DATA_TYPE="type=text"

     local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -F "$DATA_TYPE" -F "$INPUT_DATA" -H 'Content-Type: multipart/form-data' "$URL")
```
```diff
@@ -181,7 +188,7 @@ function validate_megaservice_json() {
     echo ""
     echo ">>> Checking text data with Content-Type: application/json"
     validate_services \
-        "${host_ip}:8888/v1/docsum" \
+        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
         "[DONE]" \
         "docsum-backend-server" \
         "docsum-backend-server" \
```
```diff
@@ -189,7 +196,7 @@ function validate_megaservice_json() {

     echo ">>> Checking audio data"
     validate_services \
-        "${host_ip}:8888/v1/docsum" \
+        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
         "[DONE]" \
         "docsum-backend-server" \
         "docsum-backend-server" \
```
```diff
@@ -197,7 +204,7 @@ function validate_megaservice_json() {

     echo ">>> Checking video data"
     validate_services \
-        "${host_ip}:8888/v1/docsum" \
+        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
         "[DONE]" \
         "docsum-backend-server" \
         "docsum-backend-server" \
```
**DocSum/tests/test_compose_vllm_on_rocm.sh** (new file, 257 lines)
@@ -0,0 +1,257 @@

```bash
#!/bin/bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0

set -xe
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE="./data"

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')

export host_ip=${ip_address}
export HOST_IP=${ip_address}
export EXTERNAL_HOST_IP=${ip_address}
export DOCSUM_HUGGINGFACEHUB_API_TOKEN="${HUGGINGFACEHUB_API_TOKEN}"
export DOCSUM_MAX_INPUT_TOKENS=2048
export DOCSUM_MAX_TOTAL_TOKENS=4096
export DOCSUM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export DOCSUM_VLLM_SERVICE_PORT="8008"
export DOCSUM_LLM_ENDPOINT="http://${HOST_IP}:${DOCSUM_VLLM_SERVICE_PORT}"
export DOCSUM_WHISPER_PORT="7066"
export ASR_SERVICE_HOST_IP="${HOST_IP}"
export DOCSUM_LLM_SERVER_PORT="9000"
export DOCSUM_BACKEND_SERVER_PORT="18072"
export DOCSUM_FRONTEND_PORT="18073"
export BACKEND_SERVICE_ENDPOINT="http://${EXTERNAL_HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"

function build_docker_images() {
    opea_branch=${opea_branch:-"main"}
    cd $WORKPATH/docker_image_build
    git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
    pushd GenAIComps
    docker build --no-cache -t ${REGISTRY}/comps-base:${TAG} --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
    popd && sleep 1s

    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
    service_list="docsum docsum-gradio-ui whisper llm-docsum vllm-rocm"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

    docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
    docker images && sleep 3s
}

function start_services() {
    cd "$WORKPATH"/docker_compose/amd/gpu/rocm
    sed -i "s/backend_address/$ip_address/g" "$WORKPATH"/ui/svelte/.env
    # Start Docker Containers
    docker compose -f compose_vllm.yaml up -d > "${LOG_PATH}"/start_services_with_compose.log
    n=0
    until [[ "$n" -ge 500 ]]; do
        docker logs docsum-vllm-service >& "${LOG_PATH}"/docsum-vllm-service_start.log
        if grep -q "Application startup complete" "${LOG_PATH}"/docsum-vllm-service_start.log; then
            break
        fi
        sleep 10s
        n=$((n+1))
    done
    sleep 5s
}

function validate_services() {
    local URL="$1"
    local EXPECTED_RESULT="$2"
    local SERVICE_NAME="$3"
    local DOCKER_NAME="$4"
    local INPUT_DATA="$5"

    local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")

    echo "==========================================="

    if [ "$HTTP_STATUS" -eq 200 ]; then
        echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."

        local CONTENT=$(curl -s -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log)

        if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
            echo "[ $SERVICE_NAME ] Content is as expected."
        else
            echo "EXPECTED_RESULT==> $EXPECTED_RESULT"
            echo "CONTENT==> $CONTENT"
            echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
            docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
            exit 1
        fi
    else
        echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
        docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
        exit 1
    fi
    sleep 1s
}

get_base64_str() {
    local file_name=$1
    base64 -w 0 "$file_name"
}

# Function to generate input data for testing based on the document type
input_data_for_test() {
    local document_type=$1
    case $document_type in
        ("text")
            echo "THIS IS A TEST >>>> and a number of states are starting to adopt them voluntarily special correspondent john delenco of education week reports it takes just 10 minutes to cross through gillette wyoming this small city sits in the northeast corner of the state surrounded by 100s of miles of prairie but schools here in campbell county are on the edge of something big the next generation science standards you are going to build a strand of dna and you are going to decode it and figure out what that dna actually says for christy mathis at sage valley junior high school the new standards are about learning to think like a scientist there is a lot of really good stuff in them every standard is a performance task it is not you know the child needs to memorize these things it is the student needs to be able to do some pretty intense stuff we are analyzing we are critiquing we are."
            ;;
        ("audio")
            get_base64_str "$WORKPATH/tests/data/test.wav"
            ;;
        ("video")
            get_base64_str "$WORKPATH/tests/data/test.mp4"
            ;;
        (*)
            echo "Invalid document type" >&2
            exit 1
            ;;
    esac
}

function validate_microservices() {
    # Check if the microservices are running correctly.

    # whisper microservice
    ulimit -s 65536
    validate_services \
        "${host_ip}:${DOCSUM_WHISPER_PORT}/v1/asr" \
        '{"asr_result":"well"}' \
        "whisper-service" \
        "whisper-service" \
        "{\"audio\": \"$(input_data_for_test "audio")\"}"

    # vLLM service
    validate_services \
        "${host_ip}:${DOCSUM_VLLM_SERVICE_PORT}/v1/chat/completions" \
        "content" \
        "docsum-vllm-service" \
        "docsum-vllm-service" \
        '{"model": "Intel/neural-chat-7b-v3-3", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 17}'

    # llm microservice
    validate_services \
        "${host_ip}:${DOCSUM_LLM_SERVER_PORT}/v1/docsum" \
        "text" \
        "docsum-llm-server" \
        "docsum-llm-server" \
        '{"messages":"What is a Deep Learning?"}'
}

function validate_megaservice() {
    local SERVICE_NAME="docsum-backend-server"
    local DOCKER_NAME="docsum-backend-server"
    local EXPECTED_RESULT="[DONE]"
    local INPUT_DATA="messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."
    local URL="${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum"
    local DATA_TYPE="type=text"

    local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -F "$DATA_TYPE" -F "$INPUT_DATA" -H 'Content-Type: multipart/form-data' "$URL")

    if [ "$HTTP_STATUS" -eq 200 ]; then
        echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."

        local CONTENT=$(curl -s -X POST -F "$DATA_TYPE" -F "$INPUT_DATA" -H 'Content-Type: multipart/form-data' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log)

        if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
            echo "[ $SERVICE_NAME ] Content is as expected."
        else
            echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
            docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
            exit 1
        fi
    else
        echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
        docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
        exit 1
    fi
    sleep 1s
}

function validate_megaservice_json() {
    # Curl the Mega Service
    echo ""
    echo ">>> Checking text data with Content-Type: application/json"
    validate_services \
        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
        "[DONE]" \
        "docsum-backend-server" \
        "docsum-backend-server" \
        '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'

    echo ">>> Checking audio data"
    validate_services \
        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
        "[DONE]" \
        "docsum-backend-server" \
        "docsum-backend-server" \
        "{\"type\": \"audio\", \"messages\": \"$(input_data_for_test "audio")\"}"

    echo ">>> Checking video data"
    validate_services \
        "${host_ip}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum" \
        "[DONE]" \
        "docsum-backend-server" \
        "docsum-backend-server" \
        "{\"type\": \"video\", \"messages\": \"$(input_data_for_test "video")\"}"
}

function stop_docker() {
    cd $WORKPATH/docker_compose/amd/gpu/rocm/
    docker compose -f compose_vllm.yaml stop && docker compose -f compose_vllm.yaml rm -f
}

function main() {
    echo "==========================================="
    echo ">>>> Stopping any running Docker containers..."
    stop_docker

    echo "==========================================="
    if [[ "$IMAGE_REPO" == "opea" ]]; then
        echo ">>>> Building Docker images..."
        build_docker_images
    fi

    echo "==========================================="
    echo ">>>> Starting Docker services..."
    start_services

    echo "==========================================="
    echo ">>>> Validating microservices..."
    validate_microservices

    echo "==========================================="
    echo ">>>> Validating megaservice..."
    validate_megaservice
    echo ">>>> Validating validate_megaservice_json..."
    validate_megaservice_json

    echo "==========================================="
    echo ">>>> Stopping Docker containers..."
    stop_docker

    echo "==========================================="
    echo ">>>> Pruning Docker system..."
    echo y | docker system prune
    echo ">>>> Docker system pruned successfully."
    echo "==========================================="
}

main
```
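The `get_base64_str` helper above produces the single-line base64 payloads that the test posts for audio and video inputs. A sketch of the same encoding against a throwaway file (path and contents are illustrative; `base64 -w 0` is the GNU coreutils flag for unwrapped output):

```bash
# Encode a file as one base64 line and wrap it in the JSON shape
# that validate_services posts for audio input.
sample=$(mktemp)
printf 'hello' > "$sample"
b64=$(base64 -w 0 "$sample")
payload="{\"type\": \"audio\", \"messages\": \"${b64}\"}"
echo "$payload"
```

The `-w 0` flag matters: wrapped base64 output would embed newlines inside the JSON string and break the request body.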