Explain Default Model in ChatQnA and CodeTrans READMEs (#694)
* explain default model in CodeTrans READMEs
  Signed-off-by: letonghan <letong.han@intel.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* explain default model in ChatQnA READMEs
  Signed-off-by: letonghan <letong.han@intel.com>
* add required models
  Signed-off-by: letonghan <letong.han@intel.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
@@ -8,6 +8,16 @@
> Make sure the directory `/mnt/opea-models` exists on the node where the CodeTrans workload is running; it is used to cache the downloaded model. Otherwise, modify the `codetrans.yaml` file and point `model-volume` to a directory that does exist on that node.
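If the default path cannot be used, the volume definition in `codetrans.yaml` can be pointed at another directory. The snippet below is a minimal sketch only, assuming a hostPath-backed volume named `model-volume` as mentioned in the note above; verify the actual volume definition in `codetrans.yaml` before editing.

```yaml
# Sketch only: adjust the hostPath to a directory that exists on the node.
# The volume name "model-volume" follows the note above; confirm it matches codetrans.yaml.
volumes:
  - name: model-volume
    hostPath:
      path: /mnt/opea-models # change to an existing directory on the node
      type: Directory
```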
## Required Models
By default, the LLM model is set as listed below:
| Service | Model                         |
| ------- | ----------------------------- |
| LLM     | HuggingFaceH4/mistral-7b-grok |
Change the `MODEL_ID` in `codetrans.yaml` to suit your needs.
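For illustration, the sketch below assumes `MODEL_ID` is passed to the LLM container as an environment variable in `codetrans.yaml`; the actual manifest may set it through a ConfigMap instead, so adapt accordingly.

```yaml
# Sketch only: replace the default model with the one you want to serve.
env:
  - name: MODEL_ID
    value: "HuggingFaceH4/mistral-7b-grok" # default; change to your preferred model
```

Note that the chosen model is downloaded into the cached model directory (`/mnt/opea-models` by default) the first time the workload starts.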
## Deploy On Xeon
```bash