doc: fix headings (#656)

* doc: fix headings

* Fix incorrect uses of heading levels
* fix indenting within lists

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Author: David Kinder
Date: 2024-08-28 08:45:18 -04:00
Committed by: GitHub
Parent: beda609b4b
Commit: 7a0fca73e6
6 changed files with 206 additions and 192 deletions


@@ -26,50 +26,50 @@ This example showcases a hierarchical multi-agent system for question-answering
1. Build agent docker image <br/>
   First, clone the opea GenAIComps repo
   ```
   export WORKDIR=<your-work-directory>
   cd $WORKDIR
   git clone https://github.com/opea-project/GenAIComps.git
   ```
   Then build the agent docker image. Both the supervisor agent and the worker agent will use the same docker image, but when we launch the two agents we will specify different strategies and register different tools.
   ```
   cd GenAIComps
   docker build -t opea/comps-agent-langchain:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/langchain/docker/Dockerfile .
   ```
2. Launch tool services <br/>
   In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
   ```
   docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
   ```
3. Set up environment for this example <br/>
   First, clone this repo
   ```
   cd $WORKDIR
   git clone https://github.com/opea-project/GenAIExamples.git
   ```
   Second, set up env vars
   ```
   export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
   # optional: OPENAI_API_KEY
   export OPENAI_API_KEY=<your-openai-key>
   ```
4. Launch agent services <br/>
   The configurations of the supervisor agent and the worker agent are defined in the docker-compose yaml file. We currently use OpenAI GPT-4o-mini as the LLM, and we plan to add support for llama3.1-70B-instruct (served by TGI-Gaudi) in a subsequent release.
   To use the OpenAI LLM, run the commands below.
   ```
   cd docker/openai/
   bash launch_agent_service_openai.sh
   ```
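   Before moving on to validation, you can optionally confirm that the agent containers came up. This is just a quick sketch; the exact container names depend on the compose file, so adjust the filter and name accordingly:
   ```
   # list running containers whose names mention "agent"
   docker ps --filter "name=agent" --format "table {{.Names}}\t{{.Status}}"
   # if something looks wrong, inspect that container's startup logs
   # (replace <agent-container-name> with a name from the listing above)
   docker logs <agent-container-name>
   ```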
## Validate services


@@ -1,36 +1,36 @@
# DocRetriever Application with Docker

DocRetriever is the most widely adopted use case for leveraging different methodologies to match a user query against a set of free-text records. DocRetriever is essential to a RAG system, which bridges the knowledge gap by dynamically fetching relevant information from external sources, ensuring that generated responses remain factual and current. The core of this architecture is the vector database, which is instrumental in enabling efficient, semantic retrieval of information. These databases store data as vectors, allowing RAG to swiftly access the most pertinent documents or data points based on semantic similarity.
## 1. Build Images for necessary microservices (this step will not be needed after the docker images are released)
- Embedding TEI Image
  ```bash
  git clone https://github.com/opea-project/GenAIComps.git
  cd GenAIComps
  docker build -t opea/embedding-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/langchain/docker/Dockerfile .
  ```
- Retriever Vector store Image
  ```bash
  docker build -t opea/retriever-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/langchain/redis/docker/Dockerfile .
  ```
- Rerank TEI Image
  ```bash
  docker build -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/docker/Dockerfile .
  ```
- Dataprep Image
  ```bash
  docker build -t opea/dataprep-on-ray-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain_ray/docker/Dockerfile .
  ```
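
Once these builds complete, a quick way to confirm that all four images exist locally (using the image names from the build commands above):

```bash
docker images | grep -E 'embedding-tei|retriever-redis|reranking-tei|dataprep-on-ray-redis'
```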
## 2. Build Images for MegaService
```bash
cd ..
@@ -38,7 +38,7 @@ git clone https://github.com/opea-project/GenAIExamples.git
docker build --no-cache -t opea/doc-index-retriever:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f GenAIExamples/DocIndexRetriever/docker/Dockerfile .
```
## 3. Start all the services Docker Containers
```bash
export host_ip="YOUR IP ADDR"
@@ -62,7 +62,7 @@ cd GenAIExamples/DocIndexRetriever/docker/${llm_hardware}/
docker compose -f docker-compose.yaml up -d
```
## 4. Validation
Add Knowledge Base via HTTP Links:
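
For reference, ingestion of links is handled by the dataprep microservice; a typical invocation looks like this (a sketch — the port 6007 and the `link_list` form field are assumptions and may differ in your compose setup):

```bash
curl -X POST "http://${host_ip}:6007/v1/dataprep" \
  -H "Content-Type: multipart/form-data" \
  -F 'link_list=["https://opea.dev"]'
```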
@@ -86,41 +86,41 @@ curl http://${host_ip}:8889/v1/retrievaltool -X POST -H "Content-Type: applicati
{"id":"354e62c703caac8c547b3061433ec5e8","reranked_docs":[{"id":"06d5a5cefc06cf9a9e0b5fa74a9f233c","text":"Close SearchsearchMenu WikiNewsCommunity Daysx-twitter linkedin github searchStreamlining implementation of enterprise-grade Generative AIEfficiently integrate secure, performant, and cost-effective Generative AI workflows into business value.TODAYOPEA..."}],"initial_query":"Explain the OPEA project?"}
```
## 5. Troubleshooting
1. Check that all containers are alive
   ```bash
   # redis vector store
   docker container logs redis-vector-db
   # dataprep to redis microservice, input document files
   docker container logs dataprep-redis-server

   # embedding microservice
   curl http://${host_ip}:6000/v1/embeddings \
     -X POST \
     -d '{"text":"Explain the OPEA project"}' \
     -H 'Content-Type: application/json' > query
   docker container logs embedding-tei-server

   # if you used tei-gaudi
   docker container logs tei-embedding-gaudi-server

   # retriever microservice, input embedding output docs
   curl http://${host_ip}:7000/v1/retrieval \
     -X POST \
     -d @query \
     -H 'Content-Type: application/json' > rerank_query
   docker container logs retriever-redis-server

   # reranking microservice
   curl http://${host_ip}:8000/v1/reranking \
     -X POST \
     -d @rerank_query \
     -H 'Content-Type: application/json' > output
   docker container logs reranking-tei-server

   # megaservice gateway
   docker container logs doc-index-retriever-server
   ```


@@ -2,7 +2,7 @@
OPEA Productivity Suite is a powerful tool designed to streamline your workflow and boost productivity. This application leverages cutting-edge OPEA microservices to provide a comprehensive suite of features that cater to the diverse needs of modern enterprises.
## Key Features
- Chat with Documents: Engage in intelligent conversations with your documents using our advanced RAG Capabilities. Our Retrieval-Augmented Generation (RAG) model allows you to ask questions, receive relevant information, and gain insights from your documents in real-time.


@@ -1,66 +1,72 @@
# Productivity Suite React UI
## 📸 Project Screenshots
![project-screenshot](../../../assets/img/chat_qna_init.png)
![project-screenshot](../../../assets/img/Login_page.png)
## 🧐 Features

Here are some of the project's features:
### CHAT QNA

- Start a Text Chat: Initiate a text chat with the ability to input written conversations, where the dialogue content can also be customized based on uploaded files.
- Context Awareness: The AI assistant maintains the context of the conversation, understanding references to previous statements or questions. This allows for more natural and coherent exchanges.
#### DATA SOURCE

- Choose between uploading a local file or copying a remote link; chat according to the uploaded knowledge base.
- Uploaded files are listed, and the user can add or remove files/links.
##### Screen Shot

![project-screenshot](../../../assets/img/data_source.png)
- Clear: Clear the record of the current dialog box without retaining the contents of the dialog box.
- Chat history: Historical chat records can still be retained after refreshing, making it easier for users to view the context.
- Conversational Chat: The application maintains a history of the conversation, allowing users to review previous messages and the AI to refer back to earlier points in the dialogue when necessary.
##### Screen Shots

![project-screenshot](../../../assets/img/chat_qna_init.png)
![project-screenshot](../../../assets/img/chatqna_with_conversation.png)

### CODEGEN
- Generate code: generate the corresponding code based on the current user's input.
#### Screen Shot
![project-screenshot](../../../assets/img/codegen.png)
### DOC SUMMARY
- Summarizing Uploaded Files: Upload files from your local device, then click 'Generate Summary' to summarize the content of the uploaded file. The summary will be displayed in the 'Summary' box.
- Summarizing Text via Pasting: Paste the text to be summarized into the text box, then click 'Generate Summary' to produce a condensed summary of the content, which will be displayed in the 'Summary' box on the right.
- Scroll to Bottom: The summarized content will automatically scroll to the bottom.
#### Screen Shot

![project-screenshot](../../../assets/img/doc_summary_paste.png)
![project-screenshot](../../../assets/img/doc_summary_file.png)

### FAQ Generator
- Generate FAQs from Text via Pasting: Paste the text into the text box, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.
- Generate FAQs from Text via txt File Upload: Upload a txt file in the Upload bar, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.
#### Screen Shot

![project-screenshot](../../../assets/img/faq_generator.png)

## 🛠️ Get it Running:
1. Clone the repo.
2. `cd` into this folder.
3. Create a `.env` file and add the following variables and values:
```
VITE_BACKEND_SERVICE_ENDPOINT_CHATQNA=''
VITE_BACKEND_SERVICE_ENDPOINT_CODEGEN=''
VITE_BACKEND_SERVICE_ENDPOINT_DOCSUM=''


@@ -63,7 +63,7 @@ cd ..
The Productivity Suite is composed of multiple GenAIExamples reference solutions.

#### 8.1 Build ChatQnA MegaService Docker Images
```bash
git clone https://github.com/opea-project/GenAIExamples.git
@@ -72,7 +72,7 @@ docker build --no-cache -t opea/chatqna:latest --build-arg https_proxy=$https_pr
cd ../../..
```
#### 8.2 Build DocSum Megaservice Docker Images
```bash
cd GenAIExamples/DocSum/docker
@@ -80,7 +80,7 @@ docker build --no-cache -t opea/docsum:latest --build-arg https_proxy=$https_pro
cd ../../..
```
#### 8.3 Build CodeGen Megaservice Docker Images
```bash
cd GenAIExamples/CodeGen/docker
@@ -88,7 +88,7 @@ docker build --no-cache -t opea/codegen:latest --build-arg https_proxy=$https_pr
cd ../../..
```
#### 8.4 Build FAQGen Megaservice Docker Images
```bash
cd GenAIExamples/FaqGen/docker
@@ -206,84 +206,84 @@ Please refer to [keycloak_setup_guide](keycloak_setup_guide.md) for more detail
1. TEI Embedding Service
   ```bash
   curl ${host_ip}:6006/embed \
     -X POST \
     -d '{"inputs":"What is Deep Learning?"}' \
     -H 'Content-Type: application/json'
   ```
2. Embedding Microservice
   ```bash
   curl http://${host_ip}:6000/v1/embeddings \
     -X POST \
     -d '{"text":"hello"}' \
     -H 'Content-Type: application/json'
   ```
3. Retriever Microservice
   To consume the retriever microservice, you need to generate a mock embedding vector with a Python script. The length of the embedding vector is determined by the embedding model.
   Here we use the model `EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"`, whose vector size is 768.
   Check the vector dimension of your embedding model and set the `your_embedding` dimension to match it.
   ```bash
   export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
   curl http://${host_ip}:7000/v1/retrieval \
     -X POST \
     -d "{\"text\":\"test\",\"embedding\":${your_embedding}}" \
     -H 'Content-Type: application/json'
   ```
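   If you are unsure of your model's dimension, one way to check it (a sketch, assuming the TEI embedding service from step 1 is reachable on port 6006) is to embed a test string and count the returned values:

   ```bash
   # embed a test string and print the length of the returned vector
   curl -s ${host_ip}:6006/embed -X POST -d '{"inputs":"test"}' -H 'Content-Type: application/json' \
     | python3 -c "import sys, json; print(len(json.load(sys.stdin)[0]))"
   ```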
4. TEI Reranking Service
   ```bash
   curl http://${host_ip}:8808/rerank \
     -X POST \
     -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
     -H 'Content-Type: application/json'
   ```
5. Reranking Microservice
   ```bash
   curl http://${host_ip}:8000/v1/reranking \
     -X POST \
     -d '{"initial_query":"What is Deep Learning?", "retrieved_docs": [{"text":"Deep Learning is not..."}, {"text":"Deep learning is..."}]}' \
     -H 'Content-Type: application/json'
   ```
6. LLM backend Service (ChatQnA, DocSum, FAQGen)
   ```bash
   curl http://${host_ip}:9009/generate \
     -X POST \
     -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```
7. LLM backend Service (CodeGen)
   ```bash
   curl http://${host_ip}:8028/generate \
     -X POST \
     -d '{"inputs":"def print_hello_world():","parameters":{"max_new_tokens":256, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```
8. ChatQnA LLM Microservice
   ```bash
   curl http://${host_ip}:9000/v1/chat/completions \
     -X POST \
     -d '{"query":"What is Deep Learning?","max_new_tokens":17,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,"repetition_penalty":1.03,"streaming":true}' \
     -H 'Content-Type: application/json'
   ```
9. CodeGen LLM Microservice
   ```bash
   curl http://${host_ip}:9001/v1/chat/completions \
@@ -498,50 +498,56 @@ Here is an example of running Productivity Suite
![project-screenshot](../../assets/img/chat_qna_init.png)
![project-screenshot](../../assets/img/Login_page.png)
## 🧐 Features

Here are some of the project's features:
### CHAT QNA

- Start a Text Chat: Initiate a text chat with the ability to input written conversations, where the dialogue content can also be customized based on uploaded files.
- Context Awareness: The AI assistant maintains the context of the conversation, understanding references to previous statements or questions. This allows for more natural and coherent exchanges.
### DATA SOURCE

- Choose between uploading a local file or copying a remote link; chat according to the uploaded knowledge base.
- Uploaded files are listed, and the user can add or remove files/links.
#### Screen Shot

![project-screenshot](../../assets/img/data_source.png)
- Clear: Clear the record of the current dialog box without retaining the contents of the dialog box.
- Chat history: Historical chat records can still be retained after refreshing, making it easier for users to view the context.
- Conversational Chat: The application maintains a history of the conversation, allowing users to review previous messages and the AI to refer back to earlier points in the dialogue when necessary.
#### Screen Shots

![project-screenshot](../../assets/img/chat_qna_init.png)
![project-screenshot](../../assets/img/chatqna_with_conversation.png)

### CODEGEN
- Generate code: generate the corresponding code based on the current user's input.
#### Screen Shot
![project-screenshot](../../assets/img/codegen.png)
### DOC SUMMARY

- Summarizing Uploaded Files: Upload files from your local device, then click 'Generate Summary' to summarize the content of the uploaded file. The summary will be displayed in the 'Summary' box.
- Summarizing Text via Pasting: Paste the text to be summarized into the text box, then click 'Generate Summary' to produce a condensed summary of the content, which will be displayed in the 'Summary' box on the right.
- Scroll to Bottom: The summarized content will automatically scroll to the bottom.
#### Screen Shot

![project-screenshot](../../assets/img/doc_summary_paste.png)
![project-screenshot](../../assets/img/doc_summary_file.png)

### FAQ Generator
- Generate FAQs from Text via Pasting: Paste the text into the text box, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.
- Generate FAQs from Text via txt File Upload: Upload a txt file in the Upload bar, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.
#### Screen Shot

![project-screenshot](../../assets/img/faq_generator.png)


@@ -22,24 +22,26 @@ To begin with, ensure that you have the following prerequisites in place:
1. Kubernetes installation: Make sure that you have Kubernetes installed.
2. Images: Make sure you have all the images ready for the examples and components stated above. You may refer to [README](../../docker/xeon/README.md) for steps to build the images.
3. Configuration Values: Set the following values in all the yaml files before proceeding with the deployment:
   a. HUGGINGFACEHUB_API_TOKEN (your HuggingFace token to download your desired model from HuggingFace):

   ```
   # You may set the HUGGINGFACEHUB_API_TOKEN as follows:
   export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
   cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
   sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" *.yaml
   ```

   b. Set the proxies based on your network configuration:

   ```
   # Look for the http_proxy, https_proxy, and no_proxy keys and fill in the values in all the yaml files with your system proxy configuration.
   ```
   c. Set all the backend service endpoints for the REACT UI service:

   ```
   # Set up all the backend service endpoints in productivity_suite_reactui.yaml for the UI to consume.
   # Look for ENDPOINT in the yaml and insert the URL endpoints of all the required backend services.
   ```
4. MODEL_ID and model-volume (OPTIONAL): You may also customize "MODEL_ID" to use a different model, and "model-volume" to change the volume to be mounted.
5. After finishing the steps above, you can proceed with deploying the yaml files.
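
For illustration, the deployment itself might look like the following sketch (the manifest directory comes from step 3a above; applying every yaml in the directory at once is an assumption — apply files individually if your setup requires a specific order):

```
cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
# apply all manifests in the directory
kubectl apply -f .
# watch the pods come up
kubectl get pods -w
```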