Add example for text2image (#920)

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
XinyuYe-Intel
2024-10-24 11:43:44 +08:00
committed by GitHub
parent 15cc457cea
commit 085d859a70
5 changed files with 201 additions and 0 deletions

Text2Image/README.md Normal file

@@ -0,0 +1,21 @@
# Text-to-Image Microservice
Text-to-Image is a task that generates an image conditioned on the provided text. This microservice supports the text-to-image task by using a Stable Diffusion (SD) model.
## Deploy Text-to-Image Service
### Deploy Text-to-Image Service on Xeon
Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for details.
### Deploy Text-to-Image Service on Gaudi
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) for details.
## Consume Text-to-Image Service
Use the command below to generate an image:
```bash
http_proxy="" curl http://localhost:9379/v1/text2image -XPOST -d '{"prompt":"An astronaut riding a green horse", "num_images_per_prompt":1}' -H 'Content-Type: application/json'
```
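If the service returns base64-encoded images in an `images` field (an assumption about the response schema; verify against your deployment's actual output), the first image can be saved to disk with `jq` and `base64`:

```bash
# Sketch only: assumes the response looks like {"images": ["<base64-encoded image>", ...]}.
http_proxy="" curl -s http://localhost:9379/v1/text2image -XPOST \
  -d '{"prompt":"An astronaut riding a green horse", "num_images_per_prompt":1}' \
  -H 'Content-Type: application/json' | jq -r '.images[0]' | base64 -d > astronaut.png
```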


@@ -0,0 +1,44 @@
# Deploy Text-to-Image Service on Xeon
This document outlines the deployment process for a text-to-image service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on an Intel Xeon server. The steps include Docker image creation and container deployment. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.
## 🚀 Build Docker Images
First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.
### 1. Build Docker Image
Build the text-to-image service image for Xeon with the command below:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/text2image:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/text2image/Dockerfile .
```
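Optionally confirm the image was built before moving on:

```bash
# The opea/text2image:latest image should appear in the local image list.
docker images | grep "opea/text2image"
```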
### 2. Run Docker with CLI
Select a Stable Diffusion (SD) model and assign its name to an environment variable as below:
```bash
# SD1.5
export MODEL=stable-diffusion-v1-5/stable-diffusion-v1-5
# SD2.1
export MODEL=stabilityai/stable-diffusion-2-1
# SDXL
export MODEL=stabilityai/stable-diffusion-xl-base-1.0
# SD3
export MODEL=stabilityai/stable-diffusion-3-medium-diffusers
```
Set your Hugging Face token:
```bash
export HF_TOKEN=<your huggingface token>
```
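Some checkpoints (for example SD3) are gated on Hugging Face, so the token must belong to an account that has accepted the model license. As an optional, hedged pre-check from the host (assuming the `huggingface_hub` CLI is installed), you can try fetching the pipeline index file:

```bash
# Optional access check; requires `pip install -U huggingface_hub` on the host.
# Downloading model_index.json confirms the token can reach the selected repo.
huggingface-cli download "$MODEL" model_index.json --token "$HF_TOKEN"
```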
Start the text-to-image service on Xeon with the command below:
```bash
docker run --ipc=host -p 9379:9379 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HF_TOKEN=$HF_TOKEN -e MODEL=$MODEL opea/text2image:latest
```
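The model is downloaded and loaded on first start, which can take a while. Once the container logs show the server is ready, send the same test request used in the top-level README:

```bash
# Quick smoke test against the running Xeon container.
curl http://localhost:9379/v1/text2image -XPOST \
  -d '{"prompt":"An astronaut riding a green horse", "num_images_per_prompt":1}' \
  -H 'Content-Type: application/json'
```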


@@ -0,0 +1,44 @@
# Deploy Text-to-Image Service on Gaudi
This document outlines the deployment process for a text-to-image service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on an Intel Gaudi accelerator. The steps include Docker image creation and container deployment. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.
## 🚀 Build Docker Images
First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.
### 1. Build Docker Image
Build the text-to-image service image for Gaudi with the command below:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/text2image-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/text2image/Dockerfile.intel_hpu .
```
### 2. Run Docker with CLI
Select a Stable Diffusion (SD) model and assign its name to an environment variable as below:
```bash
# SD1.5
export MODEL=stable-diffusion-v1-5/stable-diffusion-v1-5
# SD2.1
export MODEL=stabilityai/stable-diffusion-2-1
# SDXL
export MODEL=stabilityai/stable-diffusion-xl-base-1.0
# SD3
export MODEL=stabilityai/stable-diffusion-3-medium-diffusers
```
Set your Hugging Face token:
```bash
export HF_TOKEN=<your huggingface token>
```
Start the text-to-image service on Gaudi with the command below:
```bash
docker run -p 9379:9379 --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HF_TOKEN=$HF_TOKEN -e MODEL=$MODEL opea/text2image-gaudi:latest
```
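Running with `--runtime=habana` assumes the Habana container runtime is installed and registered with Docker. A quick, hedged way to check, followed by the same smoke test used on Xeon:

```bash
# Check that a "habana" entry shows up among Docker's registered runtimes
# (assumes habana-container-runtime is installed on the host).
docker info | grep -i runtime

# Once the model has finished loading, send a test request.
curl http://localhost:9379/v1/text2image -XPOST \
  -d '{"prompt":"An astronaut riding a green horse", "num_images_per_prompt":1}' \
  -H 'Content-Type: application/json'
```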


@@ -0,0 +1,13 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
  text2image:
    build:
      args:
        http_proxy: ${http_proxy}
        https_proxy: ${https_proxy}
        no_proxy: ${no_proxy}
      context: GenAIComps
      dockerfile: comps/text2image/Dockerfile
    image: ${REGISTRY:-opea}/text2image:${TAG:-latest}
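A usage sketch for this compose build file, based on how the test script below invokes it (the GenAIComps clone must sit next to `build.yaml`, since it is used as the build context; `REGISTRY` and `TAG` are optional overrides):

```bash
# Clone the build context, then build the image defined in build.yaml.
git clone https://github.com/opea-project/GenAIComps.git
REGISTRY=opea TAG=latest docker compose -f build.yaml build
```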


@@ -0,0 +1,79 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -x
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')
text2image_service_port=9379
MODEL=stabilityai/stable-diffusion-2-1
function build_docker_images() {
    cd $WORKPATH/docker_image_build
    if [ ! -d "GenAIComps" ] ; then
        git clone https://github.com/opea-project/GenAIComps.git
    fi
    docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
}

function start_service() {
    export no_proxy="localhost,127.0.0.1,"${ip_address}
    docker run -d --name="text2image-server" -p $text2image_service_port:$text2image_service_port --runtime=runc --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e MODEL=$MODEL -e no_proxy=$no_proxy ${IMAGE_REPO}/text2image:${IMAGE_TAG}
    sleep 30s
}

function validate_microservice() {
    cd $LOG_PATH
    export no_proxy="localhost,127.0.0.1,"${ip_address}

    # test /v1/text2image generate image
    URL="http://${ip_address}:$text2image_service_port/v1/text2image"
    HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d '{"prompt":"An astronaut riding a green horse", "num_images_per_prompt":1}' -H 'Content-Type: application/json' "$URL")
    HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
    RESPONSE_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
    SERVICE_NAME="text2image-server - generate image"

    if [ "$HTTP_STATUS" -ne "200" ]; then
        echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
        docker logs text2image-server >> ${LOG_PATH}/text2image-server_generate_image.log
        exit 1
    else
        echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
    fi

    # Check if the parsed values match the expected values
    if [[ $RESPONSE_BODY == *"images"* ]]; then
        echo "Content correct."
    else
        echo "Content wrong."
        docker logs text2image-server >> ${LOG_PATH}/text2image-server_generate_image.log
        exit 1
    fi
}

function stop_docker() {
    cid=$(docker ps -aq --filter "name=text2image-server*")
    if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
}

function main() {
    stop_docker
    build_docker_images
    start_service
    validate_microservice
    stop_docker
    echo y | docker system prune
}

main
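A hypothetical invocation of the test script above (the filename is illustrative only and not taken from this commit). Because `WORKPATH` is derived from the parent of the current directory, the script is expected to run from the example's `tests/` directory:

```bash
# Illustrative only: the actual script name/path is not shown in this view.
cd Text2Image/tests
IMAGE_REPO=opea IMAGE_TAG=latest bash test_text2image.sh
```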