Add a new section to change LLM model such as deepseek based on validated model table in LLM microservice (#1501)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Wang, Kai Lawrence <109344418+wangkl2@users.noreply.github.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
This commit is contained in:
Louie Tsai
2025-02-11 17:34:56 -08:00
committed by GitHub
parent 87ff149f61
commit 970b869838
2 changed files with 31 additions and 0 deletions


@@ -34,10 +34,22 @@ To set up environment variables for deploying ChatQnA services, follow these ste
```
3. Set up other environment variables:
```bash
source ./set_env.sh
```
4. Change the model for LLM serving
   By default, Meta-Llama-3-8B-Instruct is used for LLM serving; the default model can be changed to any other validated LLM model.
   Please pick a model from the [validated LLM models](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/text-generation#validated-llm-models) table.
   To change the default model defined in set_env.sh, either export LLM_MODEL_ID with the new model ID or modify set_env.sh, and then repeat step 3.
   For example, switch to Llama-2-7b-chat-hf with the following command:
```bash
export LLM_MODEL_ID="meta-llama/Llama-2-7b-chat-hf"
```
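Note that an exported LLM_MODEL_ID only survives step 3 if set_env.sh assigns the variable with a default expansion rather than unconditionally. A minimal sketch of that pattern (the expansion style inside set_env.sh is an assumption, not a quote from the script):

```bash
# Override exported before sourcing set_env.sh (step 3).
export LLM_MODEL_ID="meta-llama/Llama-2-7b-chat-hf"

# Inside set_env.sh, a default expansion keeps any pre-existing override:
# fall back to Meta-Llama-3-8B-Instruct only when nothing was exported.
export LLM_MODEL_ID="${LLM_MODEL_ID:-meta-llama/Meta-Llama-3-8B-Instruct}"

echo "LLM serving model: ${LLM_MODEL_ID}"
```

With the override in place, the echo prints the Llama-2 model ID; without it, the fallback default is used.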
## Quick Start: 2. Run Docker Compose
```bash


@@ -39,6 +39,25 @@ To set up environment variables for deploying ChatQnA services, follow these ste
source ./set_env.sh
```
4. Change the model for LLM serving
   By default, Meta-Llama-3-8B-Instruct is used for LLM serving; the default model can be changed to any other validated LLM model.
   Please pick a model from the [validated LLM models](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/text-generation#validated-llm-models) table.
   To change the default model defined in set_env.sh, either export LLM_MODEL_ID with the new model ID or modify set_env.sh, and then repeat step 3.
   For example, switch to DeepSeek-R1-Distill-Qwen-32B with the following command:
```bash
export LLM_MODEL_ID="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
```
   Please also check the [required Gaudi cards for different models](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/text-generation#system-requirements-for-llm-models) table when switching models.
   It might be necessary to increase the number of Gaudi cards for the model by exporting NUM_CARDS with a larger value or by modifying set_env.sh, and then repeating step 3. For example, set the number of Gaudi cards for DeepSeek-R1-Distill-Qwen-32B with the following command:
```bash
export NUM_CARDS=4
```
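The two overrides belong together: a larger model usually needs a matching card count. A minimal sketch of applying both before repeating step 3 (the value 4 follows the example above; confirm the exact count against the system-requirements table):

```bash
# Pick the model and the matching Gaudi card count, then re-apply step 3.
export LLM_MODEL_ID="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
export NUM_CARDS=4

# Guard against an obvious mismatch: a 32B model needs more than one card.
if [ "${NUM_CARDS}" -lt 2 ]; then
    echo "NUM_CARDS=${NUM_CARDS} is too low for ${LLM_MODEL_ID}" >&2
    exit 1
fi
echo "Serving ${LLM_MODEL_ID} on ${NUM_CARDS} Gaudi cards"
```

After exporting both variables, repeat step 3 (`source ./set_env.sh`) so the rest of the environment is derived from the new values.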
## Quick Start: 2. Run Docker Compose
```bash