Add MultimodalQnA as MMRAG usecase in Example (#751)

Signed-off-by: Tiep Le <tiep.le@intel.com>
Signed-off-by: siddhivelankar23 <siddhi.velankar@intel.com>
Signed-off-by: sjagtap1803 <siddhant.jagtap@intel.com>
Authored by Tiep Le on 2024-09-14 01:55:29 -07:00; committed by GitHub
parent 06696c8e58
commit b6cce35a93
21 changed files with 2558 additions and 0 deletions

MultimodalQnA/Dockerfile Normal file

@@ -0,0 +1,31 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
FROM python:3.11-slim
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev \
git
RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/
WORKDIR /home/user/
RUN git clone https://github.com/opea-project/GenAIComps.git
WORKDIR /home/user/GenAIComps
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r /home/user/GenAIComps/requirements.txt
COPY ./multimodalqna.py /home/user/multimodalqna.py
ENV PYTHONPATH=$PYTHONPATH:/home/user/GenAIComps
USER user
WORKDIR /home/user
ENTRYPOINT ["python", "multimodalqna.py"]
# ENTRYPOINT ["/usr/bin/sleep", "infinity"]
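For local experimentation, a minimal sketch of building and running this image on its own (the tag, port mapping, and environment here are assumptions; in practice the compose files below run this image with the full set of service host variables):

```bash
# Build the MegaService image from the MultimodalQnA directory (tag is an example)
docker build -t opea/multimodalqna:latest .

# Run it standalone; the gateway listens on 8888 by default (MEGA_SERVICE_PORT)
docker run --rm -p 8888:8888 \
  -e MEGA_SERVICE_HOST_IP=0.0.0.0 \
  opea/multimodalqna:latest
```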

MultimodalQnA/README.md Normal file

@@ -0,0 +1,188 @@
# MultimodalQnA Application
Suppose you have a set of videos and want to perform question-answering to extract insights from them. Answering your questions typically requires understanding visual cues in the videos, knowledge derived from the audio content, or, often, a mix of both. The MultimodalQnA framework offers an optimal solution for this purpose.
`MultimodalQnA` addresses your questions by dynamically fetching the most pertinent multimodal information (frames, transcripts, and/or captions) from your collection of videos. For this purpose, MultimodalQnA utilizes the [BridgeTower model](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi), a multimodal encoding transformer that merges visual and textual data into a unified semantic space. During the video ingestion phase, the BridgeTower model embeds both visual cues and auditory facts as texts, and those embeddings are then stored in a vector database. When answering a question, MultimodalQnA fetches the most relevant multimodal content from the vector store and feeds it into a downstream Large Vision-Language Model (LVM) as input context to generate a response for the user.
The MultimodalQnA architecture is shown below:
![architecture](./assets/img/MultimodalQnA.png)
MultimodalQnA is implemented on top of [GenAIComps](https://github.com/opea-project/GenAIComps); the MultimodalQnA flow chart is shown below:
```mermaid
---
config:
flowchart:
nodeSpacing: 100
rankSpacing: 100
curve: linear
theme: base
themeVariables:
fontSize: 42px
---
flowchart LR
%% Colors %%
classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
classDef orange fill:#FBAA60,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
classDef orchid fill:#C26DBC,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
classDef invisible fill:transparent,stroke:transparent;
style MultimodalQnA-MegaService stroke:#000000
%% Subgraphs %%
subgraph MultimodalQnA-MegaService["MultimodalQnA-MegaService"]
direction LR
EM([Embedding <br>]):::blue
RET([Retrieval <br>]):::blue
LVM([LVM <br>]):::blue
end
subgraph User Interface
direction TB
a([User Input Query]):::orchid
Ingest([Ingest data]):::orchid
UI([UI server<br>]):::orchid
end
subgraph MultimodalQnA GateWay
direction LR
invisible1[ ]:::invisible
GW([MultimodalQnA GateWay<br>]):::orange
end
subgraph .
X([OPEA Microservice]):::blue
Y{{Open Source Service}}
Z([OPEA Gateway]):::orange
Z1([UI]):::orchid
end
TEI_EM{{Embedding service <br>}}
VDB{{Vector DB<br><br>}}
R_RET{{Retriever service <br>}}
DP([Data Preparation<br>]):::blue
LVM_gen{{LVM Service <br>}}
%% Data Preparation flow
%% Ingest data flow
direction LR
Ingest[Ingest data] -->|a| UI
UI -->|b| DP
DP <-.->|c| TEI_EM
%% Questions interaction
direction LR
a[User Input Query] -->|1| UI
UI -->|2| GW
GW <==>|3| MultimodalQnA-MegaService
EM ==>|4| RET
RET ==>|5| LVM
%% Embedding service flow
direction TB
EM <-.->|3'| TEI_EM
RET <-.->|4'| R_RET
LVM <-.->|5'| LVM_gen
direction TB
%% Vector DB interaction
R_RET <-.->|d|VDB
DP <-.->|e|VDB
```
This MultimodalQnA use case performs multimodal RAG using LangChain, Redis VectorDB, and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
The table below describes, for each microservice component in the MultimodalQnA architecture, the default open source project, hardware, port, and endpoint.
<details>
<summary><b>Gaudi default compose.yaml</b></summary>
| MicroService | Open Source Project | HW | Port | Endpoint |
| ------------ | --------------------- | ----- | ---- | ----------------------------------------------- |
| Embedding | Langchain | Xeon | 6000 | /v1/embeddings |
| Retriever | Langchain, Redis | Xeon | 7000 | /v1/multimodal_retrieval |
| LVM | Langchain, TGI | Gaudi | 9399 | /v1/lvm |
| Dataprep | Redis, Langchain, TGI | Gaudi | 6007 | /v1/generate_transcripts, /v1/generate_captions |
</details>
## Required Models
By default, the embedding and LVM models are set to the values listed below:
| Service | Model |
| -------------------- | ------------------------------------------- |
| embedding-multimodal | BridgeTower/bridgetower-large-itm-mlm-gaudi |
| LVM | llava-hf/llava-v1.6-vicuna-13b-hf |
You can choose other LVM models, such as `llava-hf/llava-1.5-7b-hf` and `llava-hf/llava-1.5-13b-hf`, as needed.
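On Gaudi, the LVM is served by TGI and selected via the `LVM_MODEL_ID` variable (set in the Gaudi `set_env.sh` below); a sketch of overriding it before starting the stack:

```bash
# Pick an alternative LVM; the Gaudi set_env.sh defaults this to llava-v1.6-vicuna-13b-hf
export LVM_MODEL_ID="llava-hf/llava-1.5-7b-hf"
```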
## Deploy MultimodalQnA Service
The MultimodalQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
Currently we support deploying MultimodalQnA services with Docker Compose.
### Setup Environment Variable
To set up environment variables for deploying MultimodalQnA services, follow these steps:
1. Set the required environment variables:
```bash
# Example: export host_ip=$(hostname -I | awk '{print $1}')
export host_ip="External_Public_IP"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
```
2. If you are in a proxy environment, also set the proxy-related environment variables:
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
3. Set up other environment variables:
> Notice that you should choose only **one** of the commands below, according to your hardware. Otherwise, the port numbers may be set incorrectly.
```bash
# on Gaudi
source ./docker_compose/intel/hpu/gaudi/set_env.sh
# on Xeon
source ./docker_compose/intel/cpu/xeon/set_env.sh
```
### Deploy MultimodalQnA on Gaudi
Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml).
```bash
cd GenAIExamples/MultimodalQnA/docker_compose/intel/hpu/gaudi/
docker compose -f compose.yaml up -d
```
> Notice: Currently only the **Habana Driver 1.17.x** is supported for Gaudi.
### Deploy MultimodalQnA on Xeon
Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.
Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).
```bash
cd GenAIExamples/MultimodalQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml up -d
```
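Either way, once the containers are up you can do a quick sanity check (a sketch; container names come from the compose files, and the sample question is only an illustration):

```bash
# All services should show as running
docker compose -f compose.yaml ps

# Smoke-test the MegaService gateway on its default port
curl http://${host_ip}:8888/v1/multimodalqna \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is shown in the uploaded videos?"}'
```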
## MultimodalQnA Demo on Gaudi2
![MultimodalQnA-upload-waiting-screenshot](./assets/img/upload-gen-trans.png)
![MultimodalQnA-upload-done-screenshot](./assets/img/upload-gen-captions.png)
![MultimodalQnA-query-example-screenshot](./assets/img/example_query.png)

4 binary image files added under MultimodalQnA/assets/img/ (62 KiB, 404 KiB, 48 KiB, 395 KiB); contents not shown.

MultimodalQnA/docker_compose/intel/cpu/xeon/README.md Normal file

@@ -0,0 +1,345 @@
# Build Mega Service of MultimodalQnA on Xeon
This document outlines the deployment process for a MultimodalQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `multimodal_embedding` (which employs the [BridgeTower](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi) model for embedding), `multimodal_retriever`, `lvm`, and `multimodal-data-prep`. We will publish the Docker images to Docker Hub soon, which will simplify the deployment process for this service.
## 🚀 Apply Xeon Server on AWS
To apply a Xeon server on AWS, start by creating an AWS account if you don't have one already. Then, head to the [EC2 Console](https://console.aws.amazon.com/ec2/v2/home) to begin the process. Within the EC2 service, select the Amazon EC2 M7i or M7i-flex instance type to leverage the power of 4th Generation Intel Xeon Scalable processors. These instances are optimized for high-performance computing and demanding workloads.
For detailed information about these instance types, you can refer to this [link](https://aws.amazon.com/ec2/instance-types/m7i/). Once you've chosen the appropriate instance type, proceed with configuring your instance settings, including network configurations, security groups, and storage options.
After launching your instance, you can connect to it using SSH (for Linux instances) or Remote Desktop Protocol (RDP) (for Windows instances). From there, you'll have full access to your Xeon server, allowing you to install, configure, and manage your applications as needed.
**Certain ports in the EC2 instance need to be opened up in the security group for the microservices to work with the curl commands.**
> See one example below. Please open up these ports in the EC2 instance based on the IP addresses you want to allow.
```
redis-vector-db
===============
Port 6379 - Open to 0.0.0.0/0
Port 8001 - Open to 0.0.0.0/0
embedding-multimodal-bridgetower
=====================
Port 6006 - Open to 0.0.0.0/0
embedding-multimodal
=========
Port 6000 - Open to 0.0.0.0/0
retriever-multimodal-redis
=========
Port 7000 - Open to 0.0.0.0/0
lvm-llava
================
Port 8399 - Open to 0.0.0.0/0
lvm-llava-svc
===
Port 9399 - Open to 0.0.0.0/0
dataprep-multimodal-redis
===
Port 6007 - Open to 0.0.0.0/0
multimodalqna
==========================
Port 8888 - Open to 0.0.0.0/0
multimodalqna-ui
=====================
Port 5173 - Open to 0.0.0.0/0
```
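If you manage the security group with the AWS CLI rather than the console, each rule can be added like this (a sketch; the security group ID is a placeholder):

```bash
# Example: open the MegaService port (8888) to a CIDR you trust
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8888 \
  --cidr 0.0.0.0/0
```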
## Setup Environment Variables
Since the `compose.yaml` will consume some environment variables, you need to set them up in advance as below.
**Export the value of the public IP address of your Xeon server to the `host_ip` environment variable**
> Replace `External_Public_IP` below with the actual IPv4 value
```
export host_ip="External_Public_IP"
```
**Append the value of the public IP address to the no_proxy list**
```bash
export your_no_proxy=${your_no_proxy},"External_Public_IP"
```
```bash
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"
```
Note: Please replace `host_ip` with your external IP address; do not use localhost.
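On the server itself, the external IP can usually be detected automatically (the same one-liner shown in the main README):

```bash
export host_ip=$(hostname -I | awk '{print $1}')
```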
## 🚀 Build Docker Images
First of all, you need to build the Docker images locally. Begin by cloning the GenAIComps repository:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```
### 1. Build embedding-multimodal-bridgetower Image
Build embedding-multimodal-bridgetower docker image
```bash
docker build --no-cache -t opea/embedding-multimodal-bridgetower:latest --build-arg EMBEDDER_PORT=$EMBEDDER_PORT --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal/bridgetower/Dockerfile .
```
Build embedding-multimodal microservice image
```bash
docker build --no-cache -t opea/embedding-multimodal:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal/multimodal_langchain/Dockerfile .
```
### 2. Build retriever-multimodal-redis Image
```bash
docker build --no-cache -t opea/retriever-multimodal-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/multimodal/redis/langchain/Dockerfile .
```
### 3. Build LVM Images
Build lvm-llava image
```bash
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/dependency/Dockerfile .
```
Build lvm-llava-svc microservice image
```bash
docker build --no-cache -t opea/lvm-llava-svc:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/Dockerfile .
```
### 4. Build dataprep-multimodal-redis Image
```bash
docker build --no-cache -t opea/dataprep-multimodal-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/multimodal/redis/langchain/Dockerfile .
```
### 5. Build MegaService Docker Image
To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the [multimodalqna.py](../../../../multimodalqna.py) Python script. Build the MegaService Docker image via the command below:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/MultimodalQnA
docker build --no-cache -t opea/multimodalqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../..
```
### 6. Build UI Docker Image
Build the frontend Docker image via the command below:
```bash
cd GenAIExamples/MultimodalQnA/ui/
docker build --no-cache -t opea/multimodalqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../
```
Then run the command `docker images`; you should see the following 8 Docker images:
1. `opea/dataprep-multimodal-redis:latest`
2. `opea/lvm-llava-svc:latest`
3. `opea/lvm-llava:latest`
4. `opea/retriever-multimodal-redis:latest`
5. `opea/embedding-multimodal:latest`
6. `opea/embedding-multimodal-bridgetower:latest`
7. `opea/multimodalqna:latest`
8. `opea/multimodalqna-ui:latest`
## 🚀 Start Microservices
### Required Models
By default, the multimodal-embedding and LVM models are set to the values listed below:
| Service | Model |
| -------------------- | ------------------------------------------- |
| embedding-multimodal | BridgeTower/bridgetower-large-itm-mlm-gaudi |
| LVM | llava-hf/llava-1.5-7b-hf |
### Start all the services Docker Containers
> Before running the docker compose command, you need to be in the folder that has the docker compose yaml file
```bash
cd GenAIExamples/MultimodalQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml up -d
```
### Validate Microservices
1. embedding-multimodal-bridgetower
```bash
curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
-X POST \
-H "Content-Type:application/json" \
-d '{"text":"This is example"}'
```
```bash
curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
-X POST \
-H "Content-Type:application/json" \
-d '{"text":"This is example", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
```
2. embedding-multimodal
```bash
curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
-X POST \
-H "Content-Type: application/json" \
-d '{"text" : "This is some sample text."}'
```
```bash
curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": {"text" : "This is some sample text."}, "image" : {"url": "https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true"}}'
```
3. retriever-multimodal-redis
```bash
export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(512)]; print(embedding)")
curl http://${host_ip}:7000/v1/multimodal_retrieval \
-X POST \
-H "Content-Type: application/json" \
-d "{\"text\":\"test\",\"embedding\":${your_embedding}}"
```
4. lvm-llava
```bash
curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
-X POST \
-H "Content-Type:application/json" \
-d '{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
```
5. lvm-llava-svc
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
```
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}'
```
Also, validate the LVM microservice with empty retrieval results:
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
```
6. dataprep-multimodal-redis
Download a sample video
```bash
export video_fn="WeAreGoingOnBullrun.mp4"
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4 -O ${video_fn}
```
Test the dataprep microservice. This command updates the knowledge base by uploading a local .mp4 video and generating its transcript:
```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT} \
-H 'Content-Type: multipart/form-data' \
-X POST -F "files=@./${video_fn}"
```
Also, test the dataprep microservice with caption generation using the LVM microservice:
```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT} \
-H 'Content-Type: multipart/form-data' \
-X POST -F "files=@./${video_fn}"
```
You can also get the list of all uploaded videos:
```bash
curl -X POST \
-H "Content-Type: application/json" \
${DATAPREP_GET_VIDEO_ENDPOINT}
```
Then you will get a response containing a Python-style list like the one below. Notice that each uploaded video named e.g. `videoname.mp4` becomes `videoname_uuid.mp4`, where `uuid` is a unique ID assigned to each upload; the same video uploaded twice will receive different `uuid`s.
```bash
[
"WeAreGoingOnBullrun_7ac553a1-116c-40a2-9fc5-deccbb89b507.mp4",
"WeAreGoingOnBullrun_6d13cf26-8ba2-4026-a3a9-ab2e5eb73a29.mp4"
]
```
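If `jq` is installed, you can print just the stored filenames, one per line (a convenience sketch, not required by the service):

```bash
curl -s -X POST -H "Content-Type: application/json" ${DATAPREP_GET_VIDEO_ENDPOINT} | jq -r '.[]'
```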
To delete all uploaded videos along with the data indexed under `$INDEX_NAME` in Redis:
```bash
curl -X POST \
-H "Content-Type: application/json" \
${DATAPREP_DELETE_VIDEO_ENDPOINT}
```
7. MegaService
```bash
curl http://${host_ip}:8888/v1/multimodalqna \
-H "Content-Type: application/json" \
-X POST \
-d '{"messages": "What is the revenue of Nike in 2023?"}'
```
```bash
curl http://${host_ip}:8888/v1/multimodalqna \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": [{"type": "text", "text": "hello, "}, {"type": "image_url", "image_url": {"url": "https://www.ilankelman.org/stopsigns/australia.jpg"}}]}, {"role": "assistant", "content": "opea project! "}, {"role": "user", "content": "chao, "}], "max_tokens": 10}'
```
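When you are finished, the whole stack can be torn down from the same folder (a sketch; the CI tests below perform an equivalent cleanup with `stop` and `rm`):

```bash
cd GenAIExamples/MultimodalQnA/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml down
```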

MultimodalQnA/docker_compose/intel/cpu/xeon/compose.yaml Normal file

@@ -0,0 +1,135 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
redis-vector-db:
image: redis/redis-stack:7.2.0-v9
container_name: redis-vector-db
ports:
- "6379:6379"
- "8001:8001"
dataprep-multimodal-redis:
image: ${REGISTRY:-opea}/dataprep-multimodal-redis:${TAG:-latest}
container_name: dataprep-multimodal-redis
depends_on:
- redis-vector-db
- lvm-llava
ports:
- "6007:6007"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
REDIS_URL: ${REDIS_URL}
REDIS_HOST: ${REDIS_HOST}
INDEX_NAME: ${INDEX_NAME}
LVM_ENDPOINT: "http://${LVM_SERVICE_HOST_IP}:9399/v1/lvm"
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
restart: unless-stopped
embedding-multimodal-bridgetower:
image: ${REGISTRY:-opea}/embedding-multimodal-bridgetower:${TAG:-latest}
container_name: embedding-multimodal-bridgetower
ports:
- ${EMBEDDER_PORT}:${EMBEDDER_PORT}
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
PORT: ${EMBEDDER_PORT}
restart: unless-stopped
embedding-multimodal:
image: ${REGISTRY:-opea}/embedding-multimodal:${TAG:-latest}
container_name: embedding-multimodal
depends_on:
- embedding-multimodal-bridgetower
ports:
- ${MM_EMBEDDING_PORT_MICROSERVICE}:${MM_EMBEDDING_PORT_MICROSERVICE}
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
MMEI_EMBEDDING_ENDPOINT: ${MMEI_EMBEDDING_ENDPOINT}
MM_EMBEDDING_PORT_MICROSERVICE: ${MM_EMBEDDING_PORT_MICROSERVICE}
restart: unless-stopped
retriever-multimodal-redis:
image: ${REGISTRY:-opea}/retriever-multimodal-redis:${TAG:-latest}
container_name: retriever-multimodal-redis
depends_on:
- redis-vector-db
ports:
- "7000:7000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
REDIS_URL: ${REDIS_URL}
INDEX_NAME: ${INDEX_NAME}
restart: unless-stopped
lvm-llava:
image: ${REGISTRY:-opea}/lvm-llava:${TAG:-latest}
container_name: lvm-llava
ports:
- "8399:8399"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
restart: unless-stopped
lvm-llava-svc:
image: ${REGISTRY:-opea}/lvm-llava-svc:${TAG:-latest}
container_name: lvm-llava-svc
depends_on:
- lvm-llava
ports:
- "9399:9399"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LVM_ENDPOINT: ${LVM_ENDPOINT}
restart: unless-stopped
multimodalqna:
image: ${REGISTRY:-opea}/multimodalqna:${TAG:-latest}
container_name: multimodalqna-backend-server
depends_on:
- redis-vector-db
- dataprep-multimodal-redis
- embedding-multimodal
- retriever-multimodal-redis
- lvm-llava-svc
ports:
- "8888:8888"
environment:
no_proxy: ${no_proxy}
https_proxy: ${https_proxy}
http_proxy: ${http_proxy}
MEGA_SERVICE_HOST_IP: ${MEGA_SERVICE_HOST_IP}
MM_EMBEDDING_SERVICE_HOST_IP: ${MM_EMBEDDING_SERVICE_HOST_IP}
MM_EMBEDDING_PORT_MICROSERVICE: ${MM_EMBEDDING_PORT_MICROSERVICE}
MM_RETRIEVER_SERVICE_HOST_IP: ${MM_RETRIEVER_SERVICE_HOST_IP}
LVM_SERVICE_HOST_IP: ${LVM_SERVICE_HOST_IP}
ipc: host
restart: always
multimodalqna-ui:
image: ${REGISTRY:-opea}/multimodalqna-ui:${TAG:-latest}
container_name: multimodalqna-gradio-ui-server
depends_on:
- multimodalqna
ports:
- "5173:5173"
environment:
- no_proxy=${no_proxy}
- https_proxy=${https_proxy}
- http_proxy=${http_proxy}
- BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT}
- DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT=${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}
- DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT=${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}
ipc: host
restart: always
networks:
default:
driver: bridge

MultimodalQnA/docker_compose/intel/cpu/xeon/set_env.sh Normal file

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"
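Note that this script assumes `host_ip` is already exported, and the compose file additionally consumes `HUGGINGFACEHUB_API_TOKEN`, which is not set here; a minimal usage sketch:

```bash
export host_ip=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN="your_hf_token"   # consumed by dataprep-multimodal-redis
source ./set_env.sh
```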

MultimodalQnA/docker_compose/intel/hpu/gaudi/README.md Normal file

@@ -0,0 +1,297 @@
# Build Mega Service of MultimodalQnA on Gaudi
This document outlines the deployment process for a MultimodalQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `multimodal_embedding` (which employs the [BridgeTower](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi) model for embedding), `multimodal_retriever`, `lvm`, and `multimodal-data-prep`. We will publish the Docker images to Docker Hub soon, which will simplify the deployment process for this service.
## Setup Environment Variables
Since the `compose.yaml` will consume some environment variables, you need to set them up in advance as below.
**Export the value of the public IP address of your Gaudi server to the `host_ip` environment variable**
> Replace `External_Public_IP` below with the actual IPv4 value
```
export host_ip="External_Public_IP"
```
**Append the value of the public IP address to the no_proxy list**
```bash
export your_no_proxy=${your_no_proxy},"External_Public_IP"
```
```bash
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export LVM_MODEL_ID="llava-hf/llava-v1.6-vicuna-13b-hf"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"
```
Note: Please replace `host_ip` with your external IP address; do not use localhost.
## 🚀 Build Docker Images
First of all, you need to build the Docker images locally. Begin by cloning the GenAIComps repository:
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```
### 1. Build embedding-multimodal-bridgetower Image
Build embedding-multimodal-bridgetower docker image
```bash
docker build --no-cache -t opea/embedding-multimodal-bridgetower:latest --build-arg EMBEDDER_PORT=$EMBEDDER_PORT --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal/bridgetower/Dockerfile .
```
Build embedding-multimodal microservice image
```bash
docker build --no-cache -t opea/embedding-multimodal:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal/multimodal_langchain/Dockerfile .
```
### 2. Build retriever-multimodal-redis Image
```bash
docker build --no-cache -t opea/retriever-multimodal-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/multimodal/redis/langchain/Dockerfile .
```
### 3. Build LVM Images
Pull the TGI Gaudi image
```bash
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.4
```
Build lvm-tgi microservice image
```bash
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/tgi-llava/Dockerfile .
```
### 4. Build dataprep-multimodal-redis Image
```bash
docker build --no-cache -t opea/dataprep-multimodal-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/multimodal/redis/langchain/Dockerfile .
```
### 5. Build MegaService Docker Image
To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the [multimodalqna.py](../../../../multimodalqna.py) Python script. Build the MegaService Docker image via the command below:
```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/MultimodalQnA
docker build --no-cache -t opea/multimodalqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../..
```
### 6. Build UI Docker Image
Build the frontend Docker image via the command below:
```bash
cd GenAIExamples/MultimodalQnA/ui/
docker build --no-cache -t opea/multimodalqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../
```
Then run the command `docker images`; you should see the following 8 Docker images:
1. `opea/dataprep-multimodal-redis:latest`
2. `opea/lvm-tgi:latest`
3. `ghcr.io/huggingface/tgi-gaudi:2.0.4`
4. `opea/retriever-multimodal-redis:latest`
5. `opea/embedding-multimodal:latest`
6. `opea/embedding-multimodal-bridgetower:latest`
7. `opea/multimodalqna:latest`
8. `opea/multimodalqna-ui:latest`
## 🚀 Start Microservices
### Required Models
By default, the multimodal-embedding and LVM models are set to the values listed below:
| Service | Model |
| -------------------- | ------------------------------------------- |
| embedding-multimodal | BridgeTower/bridgetower-large-itm-mlm-gaudi |
| LVM | llava-hf/llava-v1.6-vicuna-13b-hf |
### Start all the services Docker Containers
> Before running the docker compose command, you need to be in the folder that has the docker compose yaml file
```bash
cd GenAIExamples/MultimodalQnA/docker_compose/intel/hpu/gaudi/
docker compose -f compose.yaml up -d
```
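If the TGI Gaudi container fails to start, first confirm that the Habana driver and devices are visible on the host (assuming the Habana tools are installed):

```bash
# Lists Gaudi devices and the installed driver version; all cards should be visible
hl-smi
```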
### Validate Microservices
1. embedding-multimodal-bridgetower
```bash
curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
-X POST \
-H "Content-Type:application/json" \
-d '{"text":"This is example"}'
```
```bash
curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
-X POST \
-H "Content-Type:application/json" \
-d '{"text":"This is example", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
```
2. embedding-multimodal
```bash
curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
-X POST \
-H "Content-Type: application/json" \
-d '{"text" : "This is some sample text."}'
```
```bash
curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": {"text" : "This is some sample text."}, "image" : {"url": "https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true"}}'
```
3. retriever-multimodal-redis
```bash
export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(512)]; print(embedding)")
curl http://${host_ip}:7000/v1/multimodal_retrieval \
-X POST \
-H "Content-Type: application/json" \
-d "{\"text\":\"test\",\"embedding\":${your_embedding}}"
```
4. TGI LLaVA Gaudi Server
```bash
curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
-X POST \
-d '{"inputs":"![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n","parameters":{"max_new_tokens":16, "seed": 42}}' \
-H 'Content-Type: application/json'
```
5. lvm-tgi
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
```
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}'
```
Also, validate the LVM TGI Gaudi Server with empty retrieval results:
```bash
curl http://${host_ip}:9399/v1/lvm \
-X POST \
-H 'Content-Type: application/json' \
-d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
```
6. Multimodal Dataprep Microservice
Download a sample video
```bash
export video_fn="WeAreGoingOnBullrun.mp4"
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4 -O ${video_fn}
```
Test the dataprep microservice. This command updates the knowledge base by uploading a local .mp4 video.
First, test the dataprep microservice with transcript generation using the whisper model:
```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT} \
-H 'Content-Type: multipart/form-data' \
-X POST -F "files=@./${video_fn}"
```
Also, test the dataprep microservice with caption generation using lvm-tgi:
```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT} \
-H 'Content-Type: multipart/form-data' \
-X POST -F "files=@./${video_fn}"
```
You can also get the list of all uploaded videos:
```bash
curl -X POST \
-H "Content-Type: application/json" \
${DATAPREP_GET_VIDEO_ENDPOINT}
```
Then you will get a response containing a Python-style list like the one below. Notice that each uploaded video named e.g. `videoname.mp4` becomes `videoname_uuid.mp4`, where `uuid` is a unique ID assigned to each upload; the same video uploaded twice will receive different `uuid`s.
```bash
[
"WeAreGoingOnBullrun_7ac553a1-116c-40a2-9fc5-deccbb89b507.mp4",
"WeAreGoingOnBullrun_6d13cf26-8ba2-4026-a3a9-ab2e5eb73a29.mp4"
]
```
To delete all uploaded videos along with the data indexed under `$INDEX_NAME` in Redis:
```bash
curl -X POST \
-H "Content-Type: application/json" \
${DATAPREP_DELETE_VIDEO_ENDPOINT}
```
7. MegaService
```bash
curl http://${host_ip}:8888/v1/multimodalqna \
-H "Content-Type: application/json" \
-X POST \
-d '{"messages": "What is the revenue of Nike in 2023?"}'
```
```bash
curl http://${host_ip}:8888/v1/multimodalqna \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": [{"type": "text", "text": "hello, "}, {"type": "image_url", "image_url": {"url": "https://www.ilankelman.org/stopsigns/australia.jpg"}}]}, {"role": "assistant", "content": "opea project! "}, {"role": "user", "content": "chao, "}], "max_tokens": 10}'
```

MultimodalQnA/docker_compose/intel/hpu/gaudi/compose.yaml Normal file

@@ -0,0 +1,149 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
redis-vector-db:
image: redis/redis-stack:7.2.0-v9
container_name: redis-vector-db
ports:
- "6379:6379"
- "8001:8001"
dataprep-multimodal-redis:
image: ${REGISTRY:-opea}/dataprep-multimodal-redis:${TAG:-latest}
container_name: dataprep-multimodal-redis
depends_on:
- redis-vector-db
- lvm-tgi
ports:
- "6007:6007"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
REDIS_URL: ${REDIS_URL}
REDIS_HOST: ${REDIS_HOST}
INDEX_NAME: ${INDEX_NAME}
LVM_ENDPOINT: "http://${LVM_SERVICE_HOST_IP}:9399/v1/lvm"
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
restart: unless-stopped
embedding-multimodal-bridgetower:
image: ${REGISTRY:-opea}/embedding-multimodal-bridgetower:${TAG:-latest}
container_name: embedding-multimodal-bridgetower
ports:
- ${EMBEDDER_PORT}:${EMBEDDER_PORT}
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
PORT: ${EMBEDDER_PORT}
restart: unless-stopped
embedding-multimodal:
image: ${REGISTRY:-opea}/embedding-multimodal:${TAG:-latest}
container_name: embedding-multimodal
depends_on:
- embedding-multimodal-bridgetower
ports:
- ${MM_EMBEDDING_PORT_MICROSERVICE}:${MM_EMBEDDING_PORT_MICROSERVICE}
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
MMEI_EMBEDDING_ENDPOINT: ${MMEI_EMBEDDING_ENDPOINT}
MM_EMBEDDING_PORT_MICROSERVICE: ${MM_EMBEDDING_PORT_MICROSERVICE}
restart: unless-stopped
retriever-multimodal-redis:
image: ${REGISTRY:-opea}/retriever-multimodal-redis:${TAG:-latest}
container_name: retriever-multimodal-redis
depends_on:
- redis-vector-db
ports:
- "7000:7000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
REDIS_URL: ${REDIS_URL}
INDEX_NAME: ${INDEX_NAME}
restart: unless-stopped
tgi-gaudi:
image: ghcr.io/huggingface/tgi-gaudi:2.0.4
container_name: tgi-llava-gaudi-server
ports:
- "8399:80"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0
HABANA_VISIBLE_DEVICES: all
OMPI_MCA_btl_vader_single_copy_mechanism: none
PREFILL_BATCH_BUCKET_SIZE: 1
BATCH_BUCKET_SIZE: 1
MAX_BATCH_TOTAL_TOKENS: 4096
runtime: habana
cap_add:
- SYS_NICE
ipc: host
command: --model-id ${LVM_MODEL_ID} --max-input-tokens 3048 --max-total-tokens 4096
restart: unless-stopped
lvm-tgi:
image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
container_name: lvm-tgi
depends_on:
- tgi-gaudi
ports:
- "9399:9399"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LVM_ENDPOINT: ${LVM_ENDPOINT}
HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0
restart: unless-stopped
multimodalqna:
image: ${REGISTRY:-opea}/multimodalqna:${TAG:-latest}
container_name: multimodalqna-backend-server
depends_on:
- redis-vector-db
- dataprep-multimodal-redis
- embedding-multimodal
- retriever-multimodal-redis
- lvm-tgi
ports:
- "8888:8888"
environment:
no_proxy: ${no_proxy}
https_proxy: ${https_proxy}
http_proxy: ${http_proxy}
MEGA_SERVICE_HOST_IP: ${MEGA_SERVICE_HOST_IP}
MM_EMBEDDING_SERVICE_HOST_IP: ${MM_EMBEDDING_SERVICE_HOST_IP}
MM_EMBEDDING_PORT_MICROSERVICE: ${MM_EMBEDDING_PORT_MICROSERVICE}
MM_RETRIEVER_SERVICE_HOST_IP: ${MM_RETRIEVER_SERVICE_HOST_IP}
LVM_SERVICE_HOST_IP: ${LVM_SERVICE_HOST_IP}
ipc: host
restart: always
multimodalqna-ui:
image: ${REGISTRY:-opea}/multimodalqna-ui:${TAG:-latest}
container_name: multimodalqna-gradio-ui-server
depends_on:
- multimodalqna
ports:
- "5173:5173"
environment:
- no_proxy=${no_proxy}
- https_proxy=${https_proxy}
- http_proxy=${http_proxy}
- BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT}
- DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT=${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}
- DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT=${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}
ipc: host
restart: always
networks:
default:
driver: bridge

MultimodalQnA/docker_compose/intel/hpu/gaudi/set_env.sh Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export LVM_MODEL_ID="llava-hf/llava-v1.6-vicuna-13b-hf"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"

MultimodalQnA/docker_image_build/build.yaml Normal file

@@ -0,0 +1,61 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
multimodalqna:
build:
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
no_proxy: ${no_proxy}
context: ../
dockerfile: ./Dockerfile
image: ${REGISTRY:-opea}/multimodalqna:${TAG:-latest}
multimodalqna-ui:
build:
context: ../ui
dockerfile: ./docker/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/multimodalqna-ui:${TAG:-latest}
embedding-multimodal-bridgetower:
build:
context: GenAIComps
dockerfile: comps/embeddings/multimodal/bridgetower/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/embedding-multimodal-bridgetower:${TAG:-latest}
embedding-multimodal:
build:
context: GenAIComps
dockerfile: comps/embeddings/multimodal/multimodal_langchain/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/embedding-multimodal:${TAG:-latest}
retriever-multimodal-redis:
build:
context: GenAIComps
dockerfile: comps/retrievers/multimodal/redis/langchain/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/retriever-multimodal-redis:${TAG:-latest}
lvm-llava:
build:
context: GenAIComps
dockerfile: comps/lvms/llava/dependency/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-llava:${TAG:-latest}
lvm-llava-svc:
build:
context: GenAIComps
dockerfile: comps/lvms/llava/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-llava-svc:${TAG:-latest}
lvm-tgi:
build:
context: GenAIComps
dockerfile: comps/lvms/tgi-llava/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
dataprep-multimodal-redis:
build:
context: GenAIComps
dockerfile: comps/dataprep/multimodal/redis/langchain/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/dataprep-multimodal-redis:${TAG:-latest}
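The CI test scripts below consume this build file; a sketch of using it manually (it expects a GenAIComps checkout inside the `docker_image_build` directory, matching the relative `context: GenAIComps` above):

```bash
cd GenAIExamples/MultimodalQnA/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git
# Build a subset of images; service names come from the compose file above
docker compose -f build.yaml build multimodalqna multimodalqna-ui --no-cache
```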

MultimodalQnA/multimodalqna.py Normal file

@@ -0,0 +1,70 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
from comps import MicroService, MultimodalQnAGateway, ServiceOrchestrator, ServiceType
MEGA_SERVICE_HOST_IP = os.getenv("MEGA_SERVICE_HOST_IP", "0.0.0.0")
MEGA_SERVICE_PORT = int(os.getenv("MEGA_SERVICE_PORT", 8888))
MM_EMBEDDING_SERVICE_HOST_IP = os.getenv("MM_EMBEDDING_SERVICE_HOST_IP", "0.0.0.0")
MM_EMBEDDING_PORT_MICROSERVICE = int(os.getenv("MM_EMBEDDING_PORT_MICROSERVICE", 6000))
MM_RETRIEVER_SERVICE_HOST_IP = os.getenv("MM_RETRIEVER_SERVICE_HOST_IP", "0.0.0.0")
MM_RETRIEVER_SERVICE_PORT = int(os.getenv("MM_RETRIEVER_SERVICE_PORT", 7000))
LVM_SERVICE_HOST_IP = os.getenv("LVM_SERVICE_HOST_IP", "0.0.0.0")
LVM_SERVICE_PORT = int(os.getenv("LVM_SERVICE_PORT", 9399))
class MultimodalQnAService:
def __init__(self, host="0.0.0.0", port=8000):
self.host = host
self.port = port
self.mmrag_megaservice = ServiceOrchestrator()
self.lvm_megaservice = ServiceOrchestrator()
def add_remote_service(self):
mm_embedding = MicroService(
name="embedding",
host=MM_EMBEDDING_SERVICE_HOST_IP,
port=MM_EMBEDDING_PORT_MICROSERVICE,
endpoint="/v1/embeddings",
use_remote_service=True,
service_type=ServiceType.EMBEDDING,
)
mm_retriever = MicroService(
name="retriever",
host=MM_RETRIEVER_SERVICE_HOST_IP,
port=MM_RETRIEVER_SERVICE_PORT,
endpoint="/v1/multimodal_retrieval",
use_remote_service=True,
service_type=ServiceType.RETRIEVER,
)
lvm = MicroService(
name="lvm",
host=LVM_SERVICE_HOST_IP,
port=LVM_SERVICE_PORT,
endpoint="/v1/lvm",
use_remote_service=True,
service_type=ServiceType.LVM,
)
# for mmrag megaservice
self.mmrag_megaservice.add(mm_embedding).add(mm_retriever).add(lvm)
self.mmrag_megaservice.flow_to(mm_embedding, mm_retriever)
self.mmrag_megaservice.flow_to(mm_retriever, lvm)
# for lvm megaservice
self.lvm_megaservice.add(lvm)
self.gateway = MultimodalQnAGateway(
multimodal_rag_megaservice=self.mmrag_megaservice,
lvm_megaservice=self.lvm_megaservice,
host="0.0.0.0",
port=self.port,
)
if __name__ == "__main__":
mmragwithvideos = MultimodalQnAService(host=MEGA_SERVICE_HOST_IP, port=MEGA_SERVICE_PORT)
mmragwithvideos.add_remote_service()
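Assuming the three dependent microservices are already reachable at the host/port variables above and GenAIComps is on `PYTHONPATH` (as the Dockerfile arranges), a minimal sketch of running and smoke-testing the megaservice outside Docker:

```bash
# From the MultimodalQnA directory; the gateway serves /v1/multimodalqna on MEGA_SERVICE_PORT (default 8888)
python multimodalqna.py &

curl http://localhost:8888/v1/multimodalqna \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is this video about?"}'
```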

MultimodalQnA/tests/ (Gaudi end-to-end test script; filename not shown) Normal file

@@ -0,0 +1,264 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')
export video_fn="WeAreGoingOnBullrun.mp4"
function build_docker_images() {
cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding-multimodal retriever-multimodal-redis lvm-tgi dataprep-multimodal-redis"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.4
docker images && sleep 1s
}
function setup_env() {
export host_ip=${ip_address}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export LVM_MODEL_ID="llava-hf/llava-v1.6-vicuna-13b-hf"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"
}
function start_services() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi
# Start Docker Containers
docker compose -f compose.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
sleep 2m
}
function prepare_data() {
cd $LOG_PATH
echo "Downloading video"
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4 -O ${video_fn}
sleep 30s
}
function validate_service() {
local URL="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
local DOCKER_NAME="$4"
local INPUT_DATA="$5"
if [[ $SERVICE_NAME == *"dataprep-multimodal-redis"* ]]; then
cd $LOG_PATH
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F "files=@./${video_fn}" -H 'Content-Type: multipart/form-data' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_get"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_del"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
else
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
fi
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
RESPONSE_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
# check response status
if [ "$HTTP_STATUS" -ne "200" ]; then
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
exit 1
else
echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
fi
# check response body
if [[ "$RESPONSE_BODY" != *"$EXPECTED_RESULT"* ]]; then
echo "[ $SERVICE_NAME ] Content does not match the expected result: $RESPONSE_BODY"
exit 1
else
echo "[ $SERVICE_NAME ] Content is as expected."
fi
sleep 1s
}
function validate_microservices() {
# Check if the microservices are running correctly.
# Bridgetower Embedding Server
echo "Validating embedding-multimodal-bridgetower"
validate_service \
"http://${host_ip}:${EMBEDDER_PORT}/v1/encode" \
'"embedding":[' \
"embedding-multimodal-bridgetower" \
"embedding-multimodal-bridgetower" \
'{"text":"This is example"}'
validate_service \
"http://${host_ip}:${EMBEDDER_PORT}/v1/encode" \
'"embedding":[' \
"embedding-multimodal-bridgetower" \
"embedding-multimodal-bridgetower" \
'{"text":"This is example", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
# embedding microservice
echo "Validating embedding-multimodal"
validate_service \
"http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings" \
'"embedding":[' \
"embedding-multimodal" \
"embedding-multimodal" \
'{"text" : "This is some sample text."}'
validate_service \
"http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings" \
'"embedding":[' \
"embedding-multimodal" \
"embedding-multimodal" \
'{"text": {"text" : "This is some sample text."}, "image" : {"url": "https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true"}}'
sleep 1m # the retriever may not be ready yet; wait longer before curling it
# test data prep
echo "Data Prep with Generating Transcript"
validate_service \
"${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}" \
"Data preparation succeeded" \
"dataprep-multimodal-redis" \
"dataprep-multimodal-redis"
echo "Data Prep with Generating Transcript"
validate_service \
"${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}" \
"Data preparation succeeded" \
"dataprep-multimodal-redis" \
"dataprep-multimodal-redis"
echo "Validating get file"
validate_service \
"${DATAPREP_GET_VIDEO_ENDPOINT}" \
'.mp4' \
"dataprep_get" \
"dataprep-multimodal-redis"
sleep 1m
# multimodal retrieval microservice
echo "Validating retriever-multimodal-redis"
your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(512)]; print(embedding)")
validate_service \
"http://${host_ip}:7000/v1/multimodal_retrieval" \
"retrieved_docs" \
"retriever-multimodal-redis" \
"retriever-multimodal-redis" \
"{\"text\":\"test\",\"embedding\":${your_embedding}}"
sleep 10s
# llava server
echo "Evaluating LLAVA tgi-gaudi"
validate_service \
"http://${host_ip}:${LLAVA_SERVER_PORT}/generate" \
'"generated_text":' \
"tgi-gaudi" \
"tgi-llava-gaudi-server" \
'{"inputs":"![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n","parameters":{"max_new_tokens":16, "seed": 42}}'
# lvm
echo "Evaluating lvm-tgi"
validate_service \
"http://${host_ip}:9399/v1/lvm" \
'"text":"' \
"lvm-tgi" \
"lvm-tgi" \
'{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
sleep 1m
}
function validate_megaservice() {
# Curl the Mega Service with retrieval
echo "Validate megaservice with first query"
validate_service \
"http://${host_ip}:8888/v1/multimodalqna" \
'"time_of_frame_ms":' \
"multimodalqna" \
"multimodalqna-backend-server" \
'{"messages": "What is the revenue of Nike in 2023?"}'
echo "Validate megaservice with follow-up query"
validate_service \
"http://${host_ip}:8888/v1/multimodalqna" \
'"content":"' \
"multimodalqna" \
"multimodalqna-backend-server" \
'{"messages": [{"role": "user", "content": [{"type": "text", "text": "hello, "}, {"type": "image_url", "image_url": {"url": "https://www.ilankelman.org/stopsigns/australia.jpg"}}]}, {"role": "assistant", "content": "opea project! "}, {"role": "user", "content": "chao, "}], "max_tokens": 10}'
}
function validate_delete {
echo "Validate data prep delete videos"
validate_service \
"${DATAPREP_DELETE_VIDEO_ENDPOINT}" \
'{"status":true}' \
"dataprep_del" \
"dataprep-multimodal-redis"
}
function stop_docker() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi
docker compose -f compose.yaml stop && docker compose -f compose.yaml rm -f
}
function main() {
setup_env
stop_docker
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
start_time=$(date +%s)
start_services
end_time=$(date +%s)
duration=$((end_time-start_time))
echo "Mega service start duration is $duration s" && sleep 1s
prepare_data
validate_microservices
echo "==== microservices validated ===="
validate_megaservice
echo "==== megaservice validated ===="
validate_delete
echo "==== delete validated ===="
stop_docker
echo y | docker system prune
}
main
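The script is driven entirely by environment variables; a sketch of invoking it (the filename here is an assumption, since the test file's name is not shown in this diff):

```bash
cd GenAIExamples/MultimodalQnA/tests
IMAGE_REPO=opea IMAGE_TAG=latest bash test_multimodalqna_on_gaudi.sh
```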

MultimodalQnA/tests/ (Xeon end-to-end test script; filename not shown) Normal file

@@ -0,0 +1,262 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')
export video_fn="WeAreGoingOnBullrun.mp4"
function build_docker_images() {
cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding-multimodal retriever-multimodal-redis lvm-llava lvm-llava-svc dataprep-multimodal-redis"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker images && sleep 1m
}
function setup_env() {
export host_ip=${ip_address}
export EMBEDDER_PORT=6006
export MMEI_EMBEDDING_ENDPOINT="http://${host_ip}:$EMBEDDER_PORT/v1/encode"
export MM_EMBEDDING_PORT_MICROSERVICE=6000
export REDIS_URL="redis://${host_ip}:6379"
export REDIS_HOST=${host_ip}
export INDEX_NAME="mm-rag-redis"
export LLAVA_SERVER_PORT=8399
export LVM_ENDPOINT="http://${host_ip}:8399"
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export WHISPER_MODEL="base"
export MM_EMBEDDING_SERVICE_HOST_IP=${host_ip}
export MM_RETRIEVER_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export MEGA_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/multimodalqna"
export DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_transcripts"
export DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/generate_captions"
export DATAPREP_GET_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_videos"
export DATAPREP_DELETE_VIDEO_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_videos"
}
function start_services() {
cd $WORKPATH/docker_compose/intel/cpu/xeon
# Start Docker Containers
docker compose -f compose.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
sleep 2m
}
function prepare_data() {
cd $LOG_PATH
echo "Downloading video"
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4 -O ${video_fn}
sleep 1m
}
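# validate_service <URL> <EXPECTED_RESULT> <SERVICE_NAME> <DOCKER_NAME> [INPUT_DATA]
# POSTs to the service, appends the container log to LOG_PATH, and fails the
# test unless the HTTP status is 200 and the body contains EXPECTED_RESULT.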
function validate_service() {
local URL="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
local DOCKER_NAME="$4"
local INPUT_DATA="$5"
if [[ $SERVICE_NAME == *"dataprep-multimodal-redis"* ]]; then
cd $LOG_PATH
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -F "files=@./${video_fn}" -H 'Content-Type: multipart/form-data' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_get"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
elif [[ $SERVICE_NAME == *"dataprep_del"* ]]; then
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -H 'Content-Type: application/json' "$URL")
else
HTTP_RESPONSE=$(curl --silent --write-out "HTTPSTATUS:%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
fi
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
RESPONSE_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
# check response status
if [ "$HTTP_STATUS" -ne "200" ]; then
echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
exit 1
else
echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."
fi
# check response body
if [[ "$RESPONSE_BODY" != *"$EXPECTED_RESULT"* ]]; then
echo "[ $SERVICE_NAME ] Content does not match the expected result: $RESPONSE_BODY"
exit 1
else
echo "[ $SERVICE_NAME ] Content is as expected."
fi
sleep 1s
}
function validate_microservices() {
# Check if the microservices are running correctly.
# Bridgetower Embedding Server
echo "Validating embedding-multimodal-bridgetower"
validate_service \
"http://${host_ip}:${EMBEDDER_PORT}/v1/encode" \
'"embedding":[' \
"embedding-multimodal-bridgetower" \
"embedding-multimodal-bridgetower" \
'{"text":"This is example"}'
validate_service \
"http://${host_ip}:${EMBEDDER_PORT}/v1/encode" \
'"embedding":[' \
"embedding-multimodal-bridgetower" \
"embedding-multimodal-bridgetower" \
'{"text":"This is example", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
# embedding microservice
echo "Validating embedding-multimodal"
validate_service \
"http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings" \
'"embedding":[' \
"embedding-multimodal" \
"embedding-multimodal" \
'{"text" : "This is some sample text."}'
validate_service \
"http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings" \
'"embedding":[' \
"embedding-multimodal" \
"embedding-multimodal" \
'{"text": {"text" : "This is some sample text."}, "image" : {"url": "https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true"}}'
sleep 1m # give the retriever time to index the ingested data before it is queried
# test data prep
echo "Data Prep with Generating Transcript"
validate_service \
"${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT}" \
"Data preparation succeeded" \
"dataprep-multimodal-redis" \
"dataprep-multimodal-redis"
# echo "Data Prep with Generating Caption"
# validate_service \
# "${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT}" \
# "Data preparation succeeded" \
# "dataprep-multimodal-redis" \
# "dataprep-multimodal-redis"
echo "Validating get file"
validate_service \
"${DATAPREP_GET_VIDEO_ENDPOINT}" \
'.mp4' \
"dataprep_get" \
"dataprep-multimodal-redis"
sleep 1m
# multimodal retrieval microservice
echo "Validating retriever-multimodal-redis"
your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(512)]; print(embedding)")
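# a random 512-dim vector stands in for a real BridgeTower embedding; the test
# only checks that the retriever responds with a "retrieved_docs" field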
validate_service \
"http://${host_ip}:7000/v1/multimodal_retrieval" \
"retrieved_docs" \
"retriever-multimodal-redis" \
"retriever-multimodal-redis" \
"{\"text\":\"test\",\"embedding\":${your_embedding}}"
sleep 10s
# llava server
echo "Evaluating lvm-llava"
validate_service \
"http://${host_ip}:${LLAVA_SERVER_PORT}/generate" \
'"text":' \
"lvm-llava" \
"lvm-llava" \
'{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
# lvm
echo "Evaluating lvm-llava-svc"
validate_service \
"http://${host_ip}:9399/v1/lvm" \
'"text":"' \
"lvm-llava-svc" \
"lvm-llava-svc" \
'{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
sleep 3m
}
function validate_megaservice() {
# Curl the Mega Service with retrieval
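# The first, plain-string query should come back with frame metadata from
# retrieval; the chat-style follow-up should return a plain assistant message.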
echo "Validate megaservice with first query"
validate_service \
"http://${host_ip}:8888/v1/multimodalqna" \
'"time_of_frame_ms":' \
"multimodalqna" \
"multimodalqna-backend-server" \
'{"messages": "What is the revenue of Nike in 2023?"}'
echo "Validate megaservice with follow-up query"
validate_service \
"http://${host_ip}:8888/v1/multimodalqna" \
'"content":"' \
"multimodalqna" \
"multimodalqna-backend-server" \
'{"messages": [{"role": "user", "content": [{"type": "text", "text": "hello, "}, {"type": "image_url", "image_url": {"url": "https://www.ilankelman.org/stopsigns/australia.jpg"}}]}, {"role": "assistant", "content": "opea project! "}, {"role": "user", "content": "chao, "}], "max_tokens": 10}'
}
function validate_delete {
echo "Validate data prep delete videos"
validate_service \
"${DATAPREP_DELETE_VIDEO_ENDPOINT}" \
'{"status":true}' \
"dataprep_del" \
"dataprep-multimodal-redis"
}
function stop_docker() {
cd $WORKPATH/docker_compose/intel/cpu/xeon
docker compose -f compose.yaml stop && docker compose -f compose.yaml rm -f
}
function main() {
setup_env
stop_docker
if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
start_time=$(date +%s)
start_services
end_time=$(date +%s)
duration=$((end_time-start_time))
echo "Mega service start duration is $duration s" && sleep 1s
prepare_data
validate_microservices
echo "==== microservices validated ===="
validate_megaservice
echo "==== megaservice validated ===="
validate_delete
echo "==== delete validated ===="
stop_docker
echo y | docker system prune
}
main

View File

@@ -0,0 +1,35 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
FROM python:3.11-slim
ENV LANG=C.UTF-8
ARG ARCH="cpu"
RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
build-essential \
libgl1-mesa-glx \
libjemalloc-dev \
default-jre \
wget \
vim
# Install ffmpeg static build
WORKDIR /root
RUN wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz && \
mkdir ffmpeg-git-amd64-static && tar -xvf ffmpeg-git-amd64-static.tar.xz -C ffmpeg-git-amd64-static --strip-components 1 && \
export PATH=/root/ffmpeg-git-amd64-static:$PATH && \
cp /root/ffmpeg-git-amd64-static/ffmpeg /usr/local/bin/ && \
cp /root/ffmpeg-git-amd64-static/ffprobe /usr/local/bin/
RUN mkdir -p /home/user
COPY gradio /home/user/gradio
RUN pip install --no-cache-dir --upgrade pip setuptools && \
pip install --no-cache-dir -r /home/user/gradio/requirements.txt
WORKDIR /home/user/gradio
ENTRYPOINT ["python", "multimodalqna_ui_gradio.py"]
# ENTRYPOINT ["/usr/bin/sleep", "infinity"]

View File

@@ -0,0 +1,155 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import dataclasses
from enum import Enum, auto
from typing import List
from utils import get_b64_frame_from_timestamp
class SeparatorStyle(Enum):
"""Different separator style."""
SINGLE = auto()
@dataclasses.dataclass
class Conversation:
"""A class that keeps all conversation history."""
system: str
roles: List[str]
messages: List[List[str]]
offset: int
sep_style: SeparatorStyle = SeparatorStyle.SINGLE
sep: str = "\n"
video_file: str = None
caption: str = None
time_of_frame_ms: str = None
base64_frame: str = None
skip_next: bool = False
split_video: str = None
def _template_caption(self):
out = ""
if self.caption is not None:
out = f"The caption associated with the image is '{self.caption}'. "
return out
def get_prompt(self):
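# Returns either the bare query string (first turn, retrieval pending) or a
# chat-completion style list of {"role", "content"} dicts; when a frame has
# been retrieved, it is attached to the first message as an image_url.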
messages = self.messages
if len(messages) > 1 and messages[1][1] is None:
# Need to do RAG; the prompt is just the query text
ret = messages[0][1]
else:
# No need to do RAG; build the prompt in chat-completion format
conv_dict = []
if self.sep_style == SeparatorStyle.SINGLE:
for i, (role, message) in enumerate(messages):
if message:
if i != 0:
dic = {"role": role, "content": message}
else:
dic = {"role": role}
if self.time_of_frame_ms and self.video_file:
content = [{"type": "text", "text": message}]
if self.base64_frame:
base64_frame = self.base64_frame
else:
base64_frame = get_b64_frame_from_timestamp(self.video_file, self.time_of_frame_ms)
self.base64_frame = base64_frame
content.append({"type": "image_url", "image_url": {"url": base64_frame}})
else:
content = message
dic["content"] = content
conv_dict.append(dic)
else:
raise ValueError(f"Invalid style: {self.sep_style}")
ret = conv_dict
return ret
def append_message(self, role, message):
self.messages.append([role, message])
def get_b64_image(self):
b64_img = None
if self.time_of_frame_ms and self.video_file:
time_of_frame_ms = self.time_of_frame_ms
video_file = self.video_file
b64_img = get_b64_frame_from_timestamp(video_file, time_of_frame_ms)
return b64_img
def to_gradio_chatbot(self):
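# Pair (user, assistant) turns for gr.Chatbot; a user message carrying an
# uploaded image is downscaled and inlined as a base64 <img> tag.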
ret = []
for i, (role, msg) in enumerate(self.messages[self.offset :]):
if i % 2 == 0:
if type(msg) is tuple:
import base64
from io import BytesIO
msg, image, image_process_mode = msg
max_hw, min_hw = max(image.size), min(image.size)
aspect_ratio = max_hw / min_hw
max_len, min_len = 800, 400
shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
longest_edge = int(shortest_edge * aspect_ratio)
W, H = image.size
if H > W:
H, W = longest_edge, shortest_edge
else:
H, W = shortest_edge, longest_edge
image = image.resize((W, H))
buffered = BytesIO()
image.save(buffered, format="JPEG")
img_b64_str = base64.b64encode(buffered.getvalue()).decode()
img_str = f'<img src="data:image/png;base64,{img_b64_str}" alt="user upload image" />'
msg = img_str + msg.replace("<image>", "").strip()
ret.append([msg, None])
else:
ret.append([msg, None])
else:
ret[-1][-1] = msg
return ret
def copy(self):
return Conversation(
system=self.system,
roles=self.roles,
messages=[[x, y] for x, y in self.messages],
offset=self.offset,
sep_style=self.sep_style,
sep=self.sep,
video_file=self.video_file,
caption=self.caption,
base64_frame=self.base64_frame,
)
def dict(self):
return {
"system": self.system,
"roles": self.roles,
"messages": self.messages,
"offset": self.offset,
"sep": self.sep,
"time_of_frame_ms": self.time_of_frame_ms,
"video_file": self.video_file,
"caption": self.caption,
"base64_frame": self.base64_frame,
"split_video": self.split_video,
}
multimodalqna_conv = Conversation(
system="",
roles=("user", "assistant"),
messages=(),
offset=0,
sep_style=SeparatorStyle.SINGLE,
sep="\n",
video_file=None,
caption=None,
time_of_frame_ms=None,
base64_frame=None,
split_video=None,
)

View File

@@ -0,0 +1,337 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os
import shutil
import time
from pathlib import Path
import gradio as gr
import requests
import uvicorn
from conversation import multimodalqna_conv
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
from utils import build_logger, moderation_msg, server_error_msg, split_video
logger = build_logger("gradio_web_server", "gradio_web_server.log")
headers = {"Content-Type": "application/json"}
css = """
h1 {
text-align: center;
display:block;
}
"""
# create a FastAPI app
app = FastAPI()
cur_dir = os.getcwd()
static_dir = Path(os.path.join(cur_dir, "static/"))
tmp_dir = Path(os.path.join(cur_dir, "split_tmp_videos/"))
Path(static_dir).mkdir(parents=True, exist_ok=True)
app.mount("/static", StaticFiles(directory=static_dir), name="static")
description = "This application lets you engage with MultimodalQnA on a video through a chat box."
no_change_btn = gr.Button()
enable_btn = gr.Button(interactive=True)
disable_btn = gr.Button(interactive=False)
def clear_history(state, request: gr.Request):
logger.info(f"clear_history. ip: {request.client.host}")
if state.split_video and os.path.exists(state.split_video):
os.remove(state.split_video)
state = multimodalqna_conv.copy()
return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 1
def add_text(state, text, request: gr.Request):
logger.info(f"add_text. ip: {request.client.host}. len: {len(text)}")
if len(text) <= 0:
state.skip_next = True
return (state, state.to_gradio_chatbot(), "", None) + (no_change_btn,) * 1
text = text[:2000] # Hard cut-off
state.append_message(state.roles[0], text)
state.append_message(state.roles[1], None)
state.skip_next = False
return (state, state.to_gradio_chatbot(), "") + (disable_btn,) * 1
def http_bot(state, request: gr.Request):
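# Send the conversation to the megaservice gateway and stream the answer into
# the chat; on the first retrieval-backed turn, also cut a short clip of the
# source video around the matched frame for display.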
global gateway_addr
logger.info(f"http_bot. ip: {request.client.host}")
url = gateway_addr
is_very_first_query = False
if state.skip_next:
# This generate call is skipped due to invalid inputs
path_to_sub_videos = state.split_video
yield (state, state.to_gradio_chatbot(), path_to_sub_videos) + (no_change_btn,) * 1
return
if len(state.messages) == state.offset + 2:
# First round of conversation
is_very_first_query = True
new_state = multimodalqna_conv.copy()
new_state.append_message(new_state.roles[0], state.messages[-2][1])
new_state.append_message(new_state.roles[1], None)
state = new_state
# Construct prompt
prompt = state.get_prompt()
# Make requests
pload = {
"messages": prompt,
}
logger.info(f"==== request ====\n{pload}")
logger.info(f"==== url request ====\n{gateway_addr}")
state.messages[-1][-1] = ""
yield (state, state.to_gradio_chatbot(), state.split_video) + (disable_btn,) * 1
try:
response = requests.post(
url,
headers=headers,
json=pload,
timeout=100,
)
print(response.status_code)
print(response.json())
if response.status_code == 200:
response = response.json()
choice = response["choices"][-1]
metadata = choice["metadata"]
message = choice["message"]["content"]
if (
is_very_first_query
and not state.video_file
and "source_video" in metadata
and not state.time_of_frame_ms
and "time_of_frame_ms" in metadata
):
video_file = metadata["source_video"]
state.video_file = os.path.join(static_dir, metadata["source_video"])
state.time_of_frame_ms = metadata["time_of_frame_ms"]
split_video_path = split_video(
state.video_file, state.time_of_frame_ms, tmp_dir, f"{state.time_of_frame_ms}__{video_file}"
)
state.split_video = split_video_path
print(split_video_path)
else:
raise requests.exceptions.RequestException
except requests.exceptions.RequestException as e:
state.messages[-1][-1] = server_error_msg
yield (state, state.to_gradio_chatbot(), None) + (enable_btn,)
return
state.messages[-1][-1] = message
yield (state, state.to_gradio_chatbot(), state.split_video) + (enable_btn,) * 1
logger.info(f"{state.messages[-1][-1]}")
return
def ingest_video_gen_transcript(filepath, request: gr.Request):
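# Copy the upload into the static dir so it can be served back, then POST it to
# the dataprep transcript endpoint; on success the local file is renamed to the
# video id returned by the service.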
yield (gr.Textbox(visible=True, value="Please wait for ingesting your uploaded video into database..."))
basename = os.path.basename(filepath)
dest = os.path.join(static_dir, basename)
shutil.copy(filepath, dest)
print("Done copy uploaded file to static folder!")
headers = {
# 'Content-Type': 'multipart/form-data'
}
files = {
"files": open(dest, "rb"),
}
response = requests.post(dataprep_gen_transcript_addr, headers=headers, files=files)
print(response.status_code)
if response.status_code == 200:
response = response.json()
print(response)
yield (gr.Textbox(visible=True, value="Video ingestion is done. Saving your uploaded video..."))
time.sleep(2)
fn_no_ext = Path(dest).stem
if "video_id_maps" in response and fn_no_ext in response["video_id_maps"]:
new_dst = os.path.join(static_dir, response["video_id_maps"][fn_no_ext])
print(response["video_id_maps"][fn_no_ext])
os.rename(dest, new_dst)
yield (
gr.Textbox(
visible=True,
value="Congratulation! Your upload is done!\nClick the X button on the top right of the video upload box to upload another video.",
)
)
return
else:
yield (
gr.Textbox(
visible=True,
value="Something wrong!\nPlease click the X button on the top right of the video upload boxreupload your video!",
)
)
time.sleep(2)
return
def ingest_video_gen_caption(filepath, request: gr.Request):
yield (gr.Textbox(visible=True, value="Please wait for ingesting your uploaded video into database..."))
basename = os.path.basename(filepath)
dest = os.path.join(static_dir, basename)
shutil.copy(filepath, dest)
print("Done copy uploaded file to static folder!")
headers = {
# 'Content-Type': 'multipart/form-data'
}
files = {
"files": open(dest, "rb"),
}
response = requests.post(dataprep_gen_caption_addr, headers=headers, files=files)
print(response.status_code)
if response.status_code == 200:
response = response.json()
print(response)
yield (gr.Textbox(visible=True, value="Video ingestion is done. Saving your uploaded video..."))
time.sleep(2)
fn_no_ext = Path(dest).stem
if "video_id_maps" in response and fn_no_ext in response["video_id_maps"]:
new_dst = os.path.join(static_dir, response["video_id_maps"][fn_no_ext])
print(response["video_id_maps"][fn_no_ext])
os.rename(dest, new_dst)
yield (
gr.Textbox(
visible=True,
value="Congratulation! Your upload is done!\nClick the X button on the top right of the video upload box to upload another video.",
)
)
return
else:
yield (
gr.Textbox(
visible=True,
value="Something wrong!\nPlease click the X button on the top right of the video upload boxreupload your video!",
)
)
time.sleep(2)
return
def clear_uploaded_video(request: gr.Request):
return gr.Textbox(visible=False)
with gr.Blocks() as upload_gen_trans:
gr.Markdown("# Ingest Your Own Video - Utilizing Generated Transcripts")
gr.Markdown(
"Please use this interface to ingest your own video if the video has meaningful audio (e.g., announcements, discussions, etc...)"
)
with gr.Row():
with gr.Column(scale=6):
video_upload = gr.Video(sources="upload", height=512, width=512, elem_id="video_upload")
with gr.Column(scale=3):
text_upload_result = gr.Textbox(visible=False, interactive=False, label="Upload Status")
video_upload.upload(ingest_video_gen_transcript, [video_upload], [text_upload_result])
video_upload.clear(clear_uploaded_video, [], [text_upload_result])
with gr.Blocks() as upload_gen_captions:
gr.Markdown("# Ingest Your Own Video - Utilizing Generated Captions")
gr.Markdown(
"Please use this interface to ingest your own video if the video has meaningless audio (e.g., background musics, etc...)"
)
with gr.Row():
with gr.Column(scale=6):
video_upload_cap = gr.Video(sources="upload", height=512, width=512, elem_id="video_upload_cap")
with gr.Column(scale=3):
text_upload_result_cap = gr.Textbox(visible=False, interactive=False, label="Upload Status")
video_upload_cap.upload(ingest_video_gen_caption, [video_upload_cap], [text_upload_result_cap])
video_upload_cap.clear(clear_uploaded_video, [], [text_upload_result_cap])
with gr.Blocks() as qna:
state = gr.State(multimodalqna_conv.copy())
with gr.Row():
with gr.Column(scale=4):
video = gr.Video(height=512, width=512, elem_id="video")
with gr.Column(scale=7):
chatbot = gr.Chatbot(elem_id="chatbot", label="MultimodalQnA Chatbot", height=390)
with gr.Row():
with gr.Column(scale=6):
# textbox.render()
textbox = gr.Textbox(
# show_label=False,
# container=False,
label="Query",
info="Enter your query here!",
)
with gr.Column(scale=1, min_width=100):
with gr.Row():
submit_btn = gr.Button(value="Send", variant="primary", interactive=True)
with gr.Row(elem_id="buttons") as button_row:
clear_btn = gr.Button(value="🗑️ Clear", interactive=False)
clear_btn.click(
clear_history,
[
state,
],
[state, chatbot, textbox, video, clear_btn],
)
submit_btn.click(
add_text,
[state, textbox],
[state, chatbot, textbox, clear_btn],
).then(
http_bot,
[
state,
],
[state, chatbot, video, clear_btn],
)
with gr.Blocks(css=css) as demo:
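# Top-level layout: one tab for chat over ingested videos, plus one ingestion
# tab per dataprep mode (generated transcripts vs. generated captions).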
gr.Markdown("# MultimodalQnA")
with gr.Tabs():
with gr.TabItem("MultimodalQnA With Your Videos"):
qna.render()
with gr.TabItem("Upload Your Own Videos"):
upload_gen_trans.render()
with gr.TabItem("Upload Your Own Videos"):
upload_gen_captions.render()
demo.queue()
app = gr.mount_gradio_app(app, demo, path="/")
share = False
enable_queue = True
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--host", type=str, default="0.0.0.0")
parser.add_argument("--port", type=int, default=5173)
parser.add_argument("--concurrency-count", type=int, default=20)
parser.add_argument("--share", action="store_true")
backend_service_endpoint = os.getenv("BACKEND_SERVICE_ENDPOINT", "http://localhost:8888/v1/multimodalqna")
dataprep_gen_transcript_endpoint = os.getenv(
"DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT", "http://localhost:6007/v1/generate_transcripts"
)
dataprep_gen_caption_endpoint = os.getenv(
"DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT", "http://localhost:6007/v1/generate_captions"
)
args = parser.parse_args()
logger.info(f"args: {args}")
global gateway_addr
gateway_addr = backend_service_endpoint
global dataprep_gen_transcript_addr
dataprep_gen_transcript_addr = dataprep_gen_transcript_endpoint
global dataprep_gen_caption_addr
dataprep_gen_caption_addr = dataprep_gen_caption_endpoint
uvicorn.run(app, host=args.host, port=args.port)

View File

@@ -0,0 +1,5 @@
gradio==4.44.0
moviepy==1.0.3
numpy==1.26.4
opencv-python==4.10.0.82
Pillow==10.3.0

View File

@@ -0,0 +1,169 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import base64
import logging
import logging.handlers
import os
import sys
from pathlib import Path
import cv2
from moviepy.video.io.VideoFileClip import VideoFileClip
LOGDIR = "."
server_error_msg = "**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**"
moderation_msg = "YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES. PLEASE TRY AGAIN."
handler = None
save_log = False
def build_logger(logger_name, logger_filename):
global handler
formatter = logging.Formatter(
fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
# Set the format of root handlers
if not logging.getLogger().handlers:
logging.basicConfig(level=logging.INFO)
logging.getLogger().handlers[0].setFormatter(formatter)
# Redirect stdout and stderr to loggers
stdout_logger = logging.getLogger("stdout")
stdout_logger.setLevel(logging.INFO)
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl
stderr_logger = logging.getLogger("stderr")
stderr_logger.setLevel(logging.ERROR)
sl = StreamToLogger(stderr_logger, logging.ERROR)
sys.stderr = sl
# Get logger
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)
# Add a file handler for all loggers
if save_log and handler is None:
os.makedirs(LOGDIR, exist_ok=True)
filename = os.path.join(LOGDIR, logger_filename)
handler = logging.handlers.TimedRotatingFileHandler(filename, when="D", utc=True)
handler.setFormatter(formatter)
for name, item in logging.root.manager.loggerDict.items():
if isinstance(item, logging.Logger):
item.addHandler(handler)
return logger
class StreamToLogger(object):
"""Fake file-like stream object that redirects writes to a logger instance."""
def __init__(self, logger, log_level=logging.INFO):
self.terminal = sys.stdout
self.logger = logger
self.log_level = log_level
self.linebuf = ""
def __getattr__(self, attr):
return getattr(self.terminal, attr)
def write(self, buf):
temp_linebuf = self.linebuf + buf
self.linebuf = ""
for line in temp_linebuf.splitlines(True):
# From the io.TextIOWrapper docs:
# On output, if newline is None, any '\n' characters written
# are translated to the system default line separator.
# By default sys.stdout.write() expects '\n' newlines and then
# translates them so this is still cross platform.
if line[-1] == "\n":
self.logger.log(self.log_level, line.rstrip())
else:
self.linebuf += line
def flush(self):
if self.linebuf != "":
self.logger.log(self.log_level, self.linebuf.rstrip())
self.linebuf = ""
def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
# Grab the image size and initialize dimensions
dim = None
(h, w) = image.shape[:2]
# Return original image if no need to resize
if width is None and height is None:
return image
# We are resizing height if width is none
if width is None:
# Calculate the ratio of the height and construct the dimensions
r = height / float(h)
dim = (int(w * r), height)
# We are resizing width if height is none
else:
# Calculate the ratio of the width and construct the dimensions
r = width / float(w)
dim = (width, int(h * r))
# Return the resized image
return cv2.resize(image, dim, interpolation=inter)
# function to split video at a timestamp
def split_video(
video_path,
timestamp_in_ms,
output_video_path: str = "./public/splitted_videos",
output_video_name: str = "video_tmp.mp4",
play_before_sec: int = 5,
play_after_sec: int = 5,
):
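# Cut a clip spanning [timestamp - play_before_sec, timestamp + play_after_sec],
# clamped to the video's duration, and write it under output_video_path.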
timestamp_in_sec = int(timestamp_in_ms) / 1000
# create output_video_name folder if not exist:
Path(output_video_path).mkdir(parents=True, exist_ok=True)
output_video = os.path.join(output_video_path, output_video_name)
with VideoFileClip(video_path) as video:
duration = video.duration
start_time = max(timestamp_in_sec - play_before_sec, 0)
end_time = min(timestamp_in_sec + play_after_sec, duration)
new = video.subclip(start_time, end_time)
new.write_videofile(output_video, audio_codec="aac")
return output_video
def delete_split_video(video_path):
if os.path.exists(video_path):
os.remove(video_path)
return True
else:
print("The file does not exist")
return False
def convert_img_to_base64(image):
"Convert image to base64 string"
_, buffer = cv2.imencode(".png", image)
encoded_string = base64.b64encode(buffer)
return encoded_string.decode("utf-8")
def get_b64_frame_from_timestamp(video_path, timestamp_in_ms, maintain_aspect_ratio: bool = False):
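# Seek to the requested millisecond offset and grab a single frame; returns a
# base64-encoded PNG string, or None if the frame could not be read.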
print(f"video path: {video_path}")
vidcap = cv2.VideoCapture(video_path)
vidcap.set(cv2.CAP_PROP_POS_MSEC, int(timestamp_in_ms))
success, frame = vidcap.read()
if success:
if maintain_aspect_ratio:
frame = maintain_aspect_ratio_resize(frame, height=350)
b64_img_str = convert_img_to_base64(frame)
return b64_img_str
return None