Explain Default Model in ChatQnA and CodeTrans READMEs (#694)

* explain default model in CodeTrans READMEs

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* explain default model in ChatQnA READMEs

Signed-off-by: letonghan <letong.han@intel.com>

* add required models

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Commit 2a2ff45e2b (parent 6a679ba80f) by Letong Han, committed by GitHub on 2024-08-29 21:22:59 +08:00.
10 changed files with 112 additions and 5 deletions.


@@ -121,6 +121,18 @@ Currently we support two ways of deploying ChatQnA services with docker compose:
2. Start services using the docker images `built from source`: [Guide](./docker)
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the `xxx_MODEL_ID` in `docker/xxx/set_env.sh` to fit your needs.
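As a rough sketch, the relevant lines might look like this (the variable names are assumed from the table above; verify them against the actual `set_env.sh` for your platform before editing):
```bash
# Sketch only -- check docker/xxx/set_env.sh for the exact variable names.
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
```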
### Setup Environment Variable
To set up environment variables for deploying ChatQnA services, follow these steps:


@@ -159,6 +159,18 @@ If Guardrails docker image is built, you will find one more image:
## 🚀 Start MicroServices and MegaService
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the `xxx_MODEL_ID` below to fit your needs.
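For example, a minimal sketch of swapping in a different LLM while keeping the default embedding and reranking models (the replacement model ID is only illustrative):
```bash
# Illustrative override before running docker compose; the variable name
# comes from the table above, the model ID is only an example.
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.2"
```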
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below.


@@ -87,6 +87,18 @@ Then run the command `docker images`, you will have the following 7 Docker Image
## 🚀 Start MicroServices and MegaService
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the `xxx_MODEL_ID` below to fit your needs.
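Because the models are pulled from Hugging Face Hub on first startup, you can optionally warm the local cache beforehand; a sketch, assuming the `huggingface_hub` CLI is installed:
```bash
# Optional: pre-download the default models so the first start is faster.
pip install -U huggingface_hub
huggingface-cli download BAAI/bge-base-en-v1.5
huggingface-cli download BAAI/bge-reranker-base
huggingface-cli download Intel/neural-chat-7b-v3-3
```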
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below.


@@ -161,6 +161,18 @@ Then run the command `docker images`, you will have the following 7 Docker Image
## 🚀 Start Microservices
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the `xxx_MODEL_ID` below to fit your needs.
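Docker Compose also reads a `.env` file placed next to `compose.yaml`, so the overrides can be persisted there instead of re-exported in every shell; a sketch (variable names assumed from the table above):
```bash
# Sketch: persist model overrides in a .env file next to compose.yaml.
cat > .env <<'EOF'
EMBEDDING_MODEL_ID=BAAI/bge-base-en-v1.5
RERANK_MODEL_ID=BAAI/bge-reranker-base
LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
EOF
```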
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below.
@@ -183,7 +195,7 @@ export your_hf_api_token="Your_Huggingface_API_Token"
**Append the value of the public IP address to the no_proxy list**
```bash
export your_no_proxy=${your_no_proxy},"External_Public_IP"
```


@@ -148,6 +148,18 @@ Then run the command `docker images`, you will have the following 7 Docker Image
## 🚀 Start Microservices
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change the `xxx_MODEL_ID` below to fit your needs.
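Once the services are up, one way to double-check which model a container actually loaded is to grep its logs; a sketch (the container name is an assumption, list yours with `docker ps`):
```bash
# Sketch: replace tgi-service with your TGI container's actual name.
docker ps --format '{{.Names}}'
docker logs tgi-service 2>&1 | grep -i "model"
```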
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below.


@@ -22,6 +22,16 @@ Currently we support two ways of deploying Code Translation services on docker:
2. Start services using the docker images `built from source`: [Guide](./docker)
### Required Models
By default, the LLM model is set to the value listed below:
| Service | Model |
| ------- | ----------------------------- |
| LLM | HuggingFaceH4/mistral-7b-grok |
Change the `LLM_MODEL_ID` in `docker/set_env.sh` to fit your needs.
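For example, a sketch of pointing the service at a different code model before sourcing the script (this assumes `set_env.sh` defines `LLM_MODEL_ID` via an `export` line; the replacement model is illustrative):
```bash
# Sketch: rewrite LLM_MODEL_ID in docker/set_env.sh, then load it.
sed -i 's|^export LLM_MODEL_ID=.*|export LLM_MODEL_ID="deepseek-ai/deepseek-coder-6.7b-instruct"|' docker/set_env.sh
source docker/set_env.sh
echo "Using LLM: ${LLM_MODEL_ID}"
```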
### Setup Environment Variable
To set up environment variables for deploying Code Translation services, follow these steps:


@@ -42,9 +42,17 @@ Then run the command `docker images`, you will have the following Docker Images:
## 🚀 Start Microservices
### Required Models
By default, the LLM model is set to the value listed below:
| Service | Model |
| ------- | ----------------------------- |
| LLM | HuggingFaceH4/mistral-7b-grok |
Change the `LLM_MODEL_ID` below to fit your needs.
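For example (the model ID below is only illustrative; any code-capable model served by TGI can be substituted):
```bash
# Illustrative override -- export before running docker compose.
export LLM_MODEL_ID="deepseek-ai/deepseek-coder-6.7b-instruct"
```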
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below. Note that `LLM_MODEL_ID` indicates the LLM model used for the TGI service.
```bash
export no_proxy=${your_no_proxy}


@@ -50,9 +50,17 @@ Then run the command `docker images`, you will have the following Docker Images:
## 🚀 Start Microservices
### Required Models
By default, the LLM model is set to the value listed below:
| Service | Model |
| ------- | ----------------------------- |
| LLM | HuggingFaceH4/mistral-7b-grok |
Change the `LLM_MODEL_ID` below to fit your needs.
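One quick way to confirm an override is actually picked up is to render the final compose configuration; a sketch:
```bash
# Sketch: export the override, then inspect the rendered configuration.
export LLM_MODEL_ID="deepseek-ai/deepseek-coder-6.7b-instruct"  # illustrative
docker compose config | grep -i "model"
```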
### Setup Environment Variables
Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below. Note that `LLM_MODEL_ID` indicates the LLM model used for the TGI service.
```bash
export no_proxy=${your_no_proxy}


@@ -7,9 +7,20 @@ Please install GMC in your Kubernetes cluster, if you have not already done so,
If you have only Intel Xeon machines, you can use the `codetrans_xeon.yaml` file; if you have a Gaudi cluster, you can use `codetrans_gaudi.yaml`.
The example below illustrates deployment on Xeon.
## Required Models
By default, the LLM model is set to the value listed below:
| Service | Model                         |
| ------- | ----------------------------- |
| LLM     | HuggingFaceH4/mistral-7b-grok |
Change the `MODEL_ID` in `codetrans_xeon.yaml` to fit your needs.
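A sketch of swapping the model before applying the manifest (locate the exact field first, since the file layout may vary; the replacement model is illustrative):
```bash
# Sketch: find and replace the default MODEL_ID value in the GMC manifest.
grep -n "MODEL_ID" codetrans_xeon.yaml
sed -i 's|HuggingFaceH4/mistral-7b-grok|deepseek-ai/deepseek-coder-6.7b-instruct|' codetrans_xeon.yaml
```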
## Deploy the RAG application
1. Create the desired namespace if it does not already exist and deploy the application
```bash
export APP_NAMESPACE=CT
kubectl create ns $APP_NAMESPACE


@@ -8,6 +8,16 @@
> You need to make sure you have created the directory `/mnt/opea-models` to save the cached model on the node where the CodeTrans workload is running. Otherwise, you need to modify the `codetrans.yaml` file to change the `model-volume` to a directory that exists on the node.
## Required Models
By default, the LLM model is set to the value listed below:
| Service | Model                         |
| ------- | ----------------------------- |
| LLM     | HuggingFaceH4/mistral-7b-grok |
Change the `MODEL_ID` in `codetrans.yaml` to fit your needs.
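The same substitution works here; a sketch that keeps a backup so the change can be reviewed before deploying (the replacement model is illustrative):
```bash
# Sketch: replace the default model in codetrans.yaml and review the change.
sed -i.bak 's|HuggingFaceH4/mistral-7b-grok|deepseek-ai/deepseek-coder-6.7b-instruct|' codetrans.yaml
diff codetrans.yaml.bak codetrans.yaml || true
```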
## Deploy On Xeon
```bash