chen, suyue
c546d96e98
Downgrade TEI version from 1.6 to 1.5, fix the ChatQnA perf regression (#1886)
...
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-25 23:00:36 +08:00
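To reproduce the 1.5 pin from #1886 locally, the upstream TEI image can be pulled at that release; a minimal sketch, assuming the public text-embeddings-inference CPU image (the exact tag naming is an assumption, not confirmed by this log):

    # Pull the TEI image at the downgraded 1.5 release (tag shown is illustrative).
    docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5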
Liang Lv
1eb2e36a18
Refine ChatQnA READMEs (#1850)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-20 10:34:24 +08:00
Liang Lv
71fe886ce9
Replaced TGI with vLLM for guardrail serving (#1815)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-16 17:06:11 +08:00
Xiaotian Chen
1bd56af994
Update TGI image versions (#1625)
...
Signed-off-by: xiaotia3 <xiaotian.chen@intel.com>
2025-04-01 11:27:51 +08:00
xiguiw
87baeb833d
Update TEI docker image to 1.6 (#1650)
...
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-03-27 09:40:22 +08:00
XinyaoWa
6d24c1c77a
Merge FaqGen into ChatQnA (#1654)
...
1. Delete FaqGen.
2. Refactor FaqGen into ChatQnA, serving as an LLM selection option.
3. Combine all ChatQnA-related Dockerfiles into one.
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-03-20 17:40:00 +08:00
James Edwards
527b146a80
Add final README.md and set_env.sh script for quickstart review. Previous pull request was #1595. (#1662)
...
Signed-off-by: Edwards, James A <jaedwards@habana.ai>
Co-authored-by: Edwards, James A <jaedwards@habana.ai>
2025-03-14 16:05:01 -07:00
Louie Tsai
671dff7f51
[ChatQnA] Enable Prometheus and Grafana via the telemetry docker compose file. (#1623)
...
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-03-13 23:18:29 -07:00
Louie Tsai
970b869838
Add a new section on changing the LLM model (e.g., DeepSeek) based on the validated model table in the LLM microservice (#1501)
...
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Wang, Kai Lawrence <109344418+wangkl2@users.noreply.github.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2025-02-12 09:34:56 +08:00
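A minimal sketch of the model swap described in #1501, assuming the ChatQnA compose files read the model from an LLM_MODEL_ID environment variable; the variable name, model ID, and path below are illustrative, not taken from this log:

    # Pick a model from the validated model table (the ID shown is only an example).
    export LLM_MODEL_ID="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
    # Re-deploy so the LLM serving container pulls the newly selected model.
    cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi   # hypothetical path
    docker compose up -d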
chen, suyue
81b02bb947
Revert "HUGGINGFACEHUB_API_TOKEN environment is change to HF_TOKEN (#… ( #1521 )
...
Revert this PR since the test is not triggered properly due to the false merge of a WIP CI PR, 44a689b0bf , which block the CI test.
This change will be submitted in another PR.
2025-02-11 18:36:12 +08:00
Louie Tsai
ad5523bac7
Enable OpenTelemetry Tracing for ChatQnA on Xeon and Gaudi via the docker compose merge feature (#1488)
...
Signed-off-by: Louie, Tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-02-10 22:58:50 -08:00
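The merge feature referenced in #1488 layers a telemetry overlay on top of the base deployment by passing multiple -f flags to docker compose; a minimal sketch, assuming the overlay file is named compose.telemetry.yaml (file names and path are assumptions, not confirmed by this log):

    # Merge the base stack with the telemetry overlay (file names are illustrative).
    cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi   # hypothetical path
    docker compose -f compose.yaml -f compose.telemetry.yaml up -d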
xiguiw
45d5da2ddd
HUGGINGFACEHUB_API_TOKEN environment variable is changed to HF_TOKEN (#1503)
...
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-02-09 20:33:06 +08:00
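For deployments that previously exported HUGGINGFACEHUB_API_TOKEN, the rename in #1503 means exporting HF_TOKEN instead; a minimal sketch, assuming the usual export-then-source flow (the set_env.sh location is an assumption):

    # Before this change:
    #   export HUGGINGFACEHUB_API_TOKEN="<your Hugging Face token>"
    # After this change:
    export HF_TOKEN="<your Hugging Face token>"
    source ./set_env.sh   # hypothetical path, relative to the chosen docker_compose directory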
Liang Lv
9adf7a6af0
Add support for the latest DeepSeek models on Gaudi (#1491)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-02-05 08:30:04 +08:00
Wang, Kai Lawrence
3d3ac59bfb
[ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu (#1430)
...
Update the default LLM to llama3-8B on cpu/nvgpu/amdgpu/gaudi for docker-compose deployment to avoid the potential model-serving issue and the missing chat-template issue seen with neural-chat-7b.
Slow serving issue of neural-chat-7b on ICX: #1420
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-20 22:47:56 +08:00
Liang Lv
0f7e5a37ac
Adapt code for dataprep microservice refactor (#1408)
...
https://github.com/opea-project/GenAIComps/pull/1153
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-20 20:37:03 +08:00
Wang, Kai Lawrence
00e9da9ced
[ChatQnA] Switch to vLLM as default LLM backend on Gaudi (#1404)
...
Switching from TGI to vLLM as the default LLM serving backend on Gaudi for the ChatQnA example to improve performance.
https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:46:38 +08:00
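Following #1404, the default compose deployment on Gaudi brings up vLLM, with TGI kept as an alternative; a minimal sketch of selecting between them, assuming the alternative lives in a separate compose file (both file names are assumptions, not confirmed by this log):

    docker compose -f compose.yaml up -d       # default: vLLM-backed LLM serving
    docker compose -f compose_tgi.yaml up -d   # hypothetical TGI-backed alternative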
Letong Han
4cabd55778
Refactor Retrievers-related Examples (#1387)
...
Delete the redundant retrievers docker image entry in docker_images_list.md.
Refactor the retriever-related example READMEs.
Change all comps/retrievers/xxx/xxx/Dockerfile paths to comps/retrievers/src/Dockerfile.
Fix the examples CI issues from PR opea-project/GenAIComps#1138.
Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 14:21:48 +08:00
Liang Lv
3ca78867eb
Update example code for the embedding dependency move to 3rd_party (#1368)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-10 15:36:58 +08:00
Liang Lv
b3c405a5f6
Adapt example code for guardrails refactor (#1360)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-08 14:35:23 +08:00
chen, suyue
5c7a5bd850
Update Code and README for GenAIComps Refactor (#1285)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
Signed-off-by: WenjiaoYue <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>
2025-01-02 20:03:26 +08:00
Sihan Chen
907b30b7fe
Refactor service names (#1199)
2024-11-28 10:01:31 +08:00
Wang, Kai Lawrence
ac470421d0
Update the LLM backend ports (#1172)
...
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2024-11-22 09:20:09 +08:00
Louie Tsai
00d9bb6128
Enable vLLM Profiling for ChatQnA on Gaudi (#1128)
...
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2024-11-14 15:46:33 -08:00
lvliang-intel
1ff85f6a85
Upgrade TGI Gaudi version to v2.0.6 (#1088)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-12 14:38:22 +08:00
Letong Han
aa314f6757
[Readme] Update ChatQnA Readme for LLM Endpoint (#1086)
...
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-11 13:53:06 +08:00
XinyaoWa
40386d9bd6
Remove vllm-on-ray (#1084)
...
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2024-11-08 13:01:48 +08:00
xiguiw
a0921f127f
[Doc] Fix broken build instruction (#1063)
...
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2024-11-05 13:35:12 +08:00
Louie Tsai
a10b4a1f1d
Address request from Issue #971 (#1018)
2024-10-23 23:57:52 -07:00
lvliang-intel
0eedbbfce0
Update aipc ollama docker compose and readme (#984)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-10-22 10:30:47 +08:00
lvliang-intel
9438d392b4
Update README for some minor issues (#1000)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2024-10-22 10:30:18 +08:00
lvliang-intel
256b58c07e
Replace environment variables with service name for ChatQnA (#977)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2024-10-18 11:31:24 +08:00
lvliang-intel
619d941047
Set no-wrapper ChatQnA as default (#891)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-10-11 13:30:45 +08:00
Letong Han
c35fe0b429
[Doc] Update ChatQnA README for Nginx Docker Image (#862)
...
Signed-off-by: letonghan <letong.han@intel.com>
2024-09-23 12:25:30 +09:00
Letong Han
7eaab93d0b
[Doc] Refine ChatQnA README (#855)
...
Signed-off-by: letonghan <letong.han@intel.com>
2024-09-20 11:20:20 +08:00
Neo Zhang Jianyu
bc817700b9
Refactor the network port setting for AWS (#849)
...
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2024-09-19 21:58:56 +08:00
Letong Han
6c364487d3
[ChatQnA] Add Nginx in Docker Compose and README (#850)
...
Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-19 20:39:58 +08:00
XinyaoWa
2f03a3a894
Align parameters for "max_token, repetition_penalty, presence_penalty, frequency_penalty" (#726)
...
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-19 14:15:25 +08:00
kevinintel
3b70fb0d42
Refine the quick start of ChatQnA (#828)
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-18 22:23:22 +08:00
lvliang-intel
bceacdc804
Fix README issues (#817)
...
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-18 09:50:17 +08:00
XinyaoWa
d2bab99835
Refine README for reorg (#782)
...
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-11 14:57:29 +08:00
feng-intel
63406dc050
YAML: add comments to specify Gaudi device IDs. (#753)
...
Signed-off-by: fengding <feng1.ding@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-09-11 12:02:18 +08:00
XinyaoWa
d73129cbf0
Refactor folder to support different vendors (#743)
...
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-09-10 23:27:19 +08:00