Explain Default Model in ChatQnA and CodeTrans READMEs (#694)

* explain default model in CodeTrans READMEs

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* explain default model in ChatQnA READMEs

Signed-off-by: letonghan <letong.han@intel.com>

* add required models

Signed-off-by: letonghan <letong.han@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
Letong Han
2024-08-29 21:22:59 +08:00
committed by GitHub
parent 6a679ba80f
commit 2a2ff45e2b
10 changed files with 112 additions and 5 deletions


@@ -121,6 +121,18 @@ Currently we support two ways of deploying ChatQnA services with docker compose:
2. Start services using the docker images `built from source`: [Guide](./docker)
### Required Models
By default, the embedding, reranking, and LLM models are set to the values listed below:
| Service | Model |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| LLM | Intel/neural-chat-7b-v3-3 |
Change `xxx_MODEL_ID` in `docker/xxx/set_env.sh` to suit your needs.
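As a sketch, overriding the defaults might look like the following; the exact variable names (`EMBEDDING_MODEL_ID`, `RERANK_MODEL_ID`, `LLM_MODEL_ID`) are assumptions based on the table above and should be checked against your `set_env.sh`:

```shell
# Hypothetical overrides for the default ChatQnA models.
# Export these (or edit set_env.sh) before starting the docker compose services.
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
```

Any Hugging Face model ID of the matching type can be substituted, provided the serving backend supports it.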
### Setup Environment Variable
To set up environment variables for deploying ChatQnA services, follow these steps: