ZePan110
581e954a8d
Integrate ChatQnA set_env into UT scripts and add README.md for UT scripts. ( #1971 )
Signed-off-by: ZePan110 <ze.pan@intel.com >
2025-05-20 13:42:18 +08:00
Razvan Liviu Varzaru
ebb7c24ca8
Add ChatQnA docker-compose example on Intel Xeon using MariaDB Vector ( #1916 )
Signed-off-by: Razvan-Liviu Varzaru <razvan@mariadb.org >
Co-authored-by: Liang Lv <liang1.lv@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-05-08 21:08:15 -07:00
chen, suyue
c546d96e98
Downgrade TEI version from 1.6 to 1.5 to fix the ChatQnA perf regression ( #1886 )
Signed-off-by: chensuyue <suyue.chen@intel.com >
2025-04-25 23:00:36 +08:00
Liang Lv
1eb2e36a18
Refine ChatQnA READMEs ( #1850 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
2025-04-20 10:34:24 +08:00
sri-intel
c63e2cd067
Remote inference support for examples in Productivity suite ( #1818 )
Signed-off-by: Srinarayan Srikanthan <srinarayan.srikanthan@intel.com >
2025-04-18 14:36:57 +08:00
Ying Hu
1b3f1f632a
Update README.md of ChatQnA for layout ( #1842 )
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2025-04-18 11:41:35 +08:00
sri-intel
90cfe89e21
New ChatQnA README template ( #1755 )
Signed-off-by: Srinarayan Srikanthan <srinarayan.srikanthan@intel.com >
2025-04-17 16:38:40 +08:00
Letong Han
ae31e4fb75
Enable health check for dataprep in ChatQnA ( #1799 )
Signed-off-by: letonghan <letong.han@intel.com >
2025-04-17 15:01:57 +08:00
Liang Lv
7b7728c6c3
Fix vLLM CPU engine initialization issue for DeepSeek models ( #1762 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
2025-04-09 09:47:08 +08:00
XinyaoWa
6917d5bdb1
Fix ChatQnA port to internal vLLM port ( #1763 )
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com >
2025-04-09 09:37:11 +08:00
Louie Tsai
e8cdf7d668
[ChatQnA] Update to the latest Grafana dashboard ( #1728 )
Signed-off-by: Tsai, Louie <louie.tsai@intel.com >
2025-04-03 12:14:55 -07:00
xiguiw
87baeb833d
Update TEI docker image to 1.6 ( #1650 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-03-27 09:40:22 +08:00
XinyaoWa
6d24c1c77a
Merge FaqGen into ChatQnA ( #1654 )
1. Delete FaqGen.
2. Refactor FaqGen into ChatQnA, serving as an LLM selection option.
3. Combine all ChatQnA-related Dockerfiles into one.
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com >
2025-03-20 17:40:00 +08:00
Louie Tsai
671dff7f51
[ChatQnA] Enable Prometheus and Grafana with telemetry docker compose file. ( #1623 )
Signed-off-by: Tsai, Louie <louie.tsai@intel.com >
2025-03-13 23:18:29 -07:00
Li Gang
0701b8cfff
[ChatQnA][docker] Check health of Redis to avoid dataprep failure ( #1591 )
Signed-off-by: Li Gang <gang.g.li@intel.com >
2025-03-13 10:52:33 +08:00
chen, suyue
43d0a18270
Enhance ChatQnA test scripts ( #1643 )
Signed-off-by: chensuyue <suyue.chen@intel.com >
2025-03-10 17:36:26 +08:00
Wang, Kai Lawrence
5362321d3a
Fix vLLM model cache directory ( #1642 )
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com >
2025-03-10 13:40:42 +08:00
ZePan110
785ffb9a1e
Update compose.yaml for ChatQnA ( #1621 )
Signed-off-by: ZePan110 <ze.pan@intel.com >
2025-03-07 09:19:39 +08:00
ZePan110
6ead1b12db
Enable ChatQnA model cache for docker compose test. ( #1605 )
Signed-off-by: ZePan110 <ze.pan@intel.com >
2025-03-05 11:30:04 +08:00
Eze Lanza (Eze)
fba0de45d2
ChatQnA Docker compose file for Milvus as vector DB ( #1548 )
Signed-off-by: Ezequiel Lanza <ezequiel.lanza@gmail.com >
Signed-off-by: Kendall González León <kendall.gonzalez.leon@intel.com >
Signed-off-by: chensuyue <suyue.chen@intel.com >
Signed-off-by: Spycsh <sihan.chen@intel.com >
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
Signed-off-by: ZePan110 <ze.pan@intel.com >
Signed-off-by: dependabot[bot] <support@github.com >
Signed-off-by: minmin-intel <minmin.hou@intel.com >
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com >
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com >
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com >
Signed-off-by: letonghan <letong.han@intel.com >
Signed-off-by: alexsin368 <alex.sin@intel.com >
Signed-off-by: WenjiaoYue <wenjiao.yue@intel.com >
Co-authored-by: Ezequiel Lanza <emlanza@CDQ242RKJDmac.local >
Co-authored-by: Kendall González León <kendallgonzalez@hotmail.es >
Co-authored-by: chen, suyue <suyue.chen@intel.com >
Co-authored-by: Spycsh <39623753+Spycsh@users.noreply.github.com >
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com >
Co-authored-by: jotpalch <49465120+jotpalch@users.noreply.github.com >
Co-authored-by: ZePan110 <ze.pan@intel.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: minmin-intel <minmin.hou@intel.com >
Co-authored-by: Ying Hu <ying.hu@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eero Tamminen <eero.t.tamminen@intel.com >
Co-authored-by: Liang Lv <liang1.lv@intel.com >
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com >
Co-authored-by: XinyaoWa <xinyao.wang@intel.com >
Co-authored-by: alexsin368 <109180236+alexsin368@users.noreply.github.com >
Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com >
2025-02-28 22:40:31 +08:00
Ying Hu
852bc7027c
Update README.md of AIPC quick start ( #1578 )
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-23 17:38:27 +08:00
xiguiw
d482554a6b
Fix mismatched environment variable ( #1575 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-02-19 19:24:10 +08:00
xiguiw
2ae6871fc5
Simplify ChatQnA AIPC user setting ( #1573 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-02-19 16:30:02 +08:00
Louie Tsai
970b869838
Add a new section on changing the LLM model (e.g., DeepSeek) based on the validated model table in the LLM microservice ( #1501 )
Signed-off-by: Tsai, Louie <louie.tsai@intel.com >
Co-authored-by: Wang, Kai Lawrence <109344418+wangkl2@users.noreply.github.com >
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com >
2025-02-12 09:34:56 +08:00
chen, suyue
81b02bb947
Revert "HUGGINGFACEHUB_API_TOKEN environment is change to HF_TOKEN (#… ( #1521 )
Revert this PR since its test was not triggered properly due to the mistaken merge of a WIP CI PR, 44a689b0bf, which blocked the CI test.
This change will be submitted in another PR.
2025-02-11 18:36:12 +08:00
Louie Tsai
ad5523bac7
Enable OpenTelemetry tracing for ChatQnA on Xeon and Gaudi via the docker compose merge feature ( #1488 )
Signed-off-by: Louie, Tsai <louie.tsai@intel.com >
Signed-off-by: Tsai, Louie <louie.tsai@intel.com >
2025-02-10 22:58:50 -08:00
xiguiw
45d5da2ddd
HUGGINGFACEHUB_API_TOKEN environment variable is changed to HF_TOKEN ( #1503 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-02-09 20:33:06 +08:00
Ervin Castelino
27fdbcab58
[chore/chatqna] Missing protocol in curl command ( #1447 )
This PR fixes the missing protocol in the curl command mentioned in the ChatQnA README for tei-embedding-service.
2025-01-22 21:41:47 +08:00
Wang, Kai Lawrence
3d3ac59bfb
[ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu ( #1430 )
Update the default LLM to llama3-8B on cpu/nvgpu/amdgpu/gaudi for docker-compose deployment to avoid the potential model serving issue or the missing chat-template issue seen with neural-chat-7b.
Slow serving issue of neural-chat-7b on ICX: #1420
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com >
2025-01-20 22:47:56 +08:00
Liang Lv
0f7e5a37ac
Adapt code for dataprep microservice refactor ( #1408 )
https://github.com/opea-project/GenAIComps/pull/1153
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
2025-01-20 20:37:03 +08:00
xiguiw
2d5898244c
Enhance health check in GenAIExamples docker-compose ( #1410 )
Fix service launch issue:
1. Update Gaudi TGI image from 2.0.6 to 2.3.1
2. Change the hpu-gaudi TGI health check condition.
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-01-20 20:13:13 +08:00
Wang, Kai Lawrence
742cb6ddd3
[ChatQnA] Switch to vLLM as default llm backend on Xeon ( #1403 )
Switch from TGI to vLLM as the default LLM serving backend on Xeon for the ChatQnA example to improve performance.
https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com >
2025-01-17 20:48:19 +08:00
Letong Han
4cabd55778
Refactor Retrievers-related Examples ( #1387 )
Delete the redundant retrievers docker image in docker_images_list.md.
Refactor the READMEs of Retrievers-related Examples.
Change all comps/retrievers/xxx/xxx/Dockerfile paths to comps/retrievers/src/Dockerfile.
Fix the Examples CI issues of PR opea-project/GenAIComps#1138.
Signed-off-by: letonghan <letong.han@intel.com >
2025-01-16 14:21:48 +08:00
xiguiw
698a06edbf
[DOC] Fix document issue ( #1395 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2025-01-16 11:30:07 +08:00
Liang Lv
3ca78867eb
Update example code for embedding dependency moving to 3rd_party ( #1368 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
2025-01-10 15:36:58 +08:00
chen, suyue
5c7a5bd850
Update Code and README for GenAIComps Refactor ( #1285 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
Signed-off-by: chensuyue <suyue.chen@intel.com >
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com >
Signed-off-by: letonghan <letong.han@intel.com >
Signed-off-by: ZePan110 <ze.pan@intel.com >
Signed-off-by: WenjiaoYue <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>
2025-01-02 20:03:26 +08:00
Ying Hu
597f17b979
Update set_env.sh to fix LOGFLAG warning ( #1319 )
2024-12-30 10:54:26 +08:00
pallavijaini0525
3a371ac102
Updated the Pinecone readme to reflect the new structure ( #1222 )
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-05 10:04:09 +08:00
Sihan Chen
907b30b7fe
Refactor service names ( #1199 )
2024-11-28 10:01:31 +08:00
Louie Tsai
152adf8012
Maintain version info for docker_compose YAML files across releases ( #1141 )
Signed-off-by: Tsai, Louie <louie.tsai@intel.com >
2024-11-17 22:39:41 -08:00
chen, suyue
393367e9f1
Fix remaining issue from the TGI version update ( #1121 )
Signed-off-by: chensuyue <suyue.chen@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-13 15:42:42 +08:00
Louie Tsai
7adbba6add
Enable vLLM Profiling for ChatQnA ( #1124 )
2024-11-13 11:26:31 +08:00
pallavijaini0525
0d52c2f003
Update Pinecone README and docker compose for ChatQnA ( #540 )
Signed-off-by: pallavi jaini <pallavi.jaini@intel.com >
Signed-off-by: AI Workloads <aigoldrush1@g2-r3-2.iind.intel.com >
Signed-off-by: Pallavi Jaini <pallavi,jaini@intel.com >
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com >
Signed-off-by: root <root@test-pjaini.535545281608.us-region-2.idcservice.net >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: AI Workloads <aigoldrush1@g2-r3-2.iind.intel.com >
Co-authored-by: Pallavi Jaini <pallavi,jaini@intel.com >
Co-authored-by: root <root@test-pjaini.535545281608.us-region-2.idcservice.net >
Co-authored-by: chen, suyue <suyue.chen@intel.com >
2024-11-13 09:32:37 +08:00
Letong Han
aa314f6757
[Readme] Update ChatQnA Readme for LLM Endpoint ( #1086 )
Signed-off-by: letonghan <letong.han@intel.com >
2024-11-11 13:53:06 +08:00
Arthur Leung
6263b517b9
[Doc] Add steps to deploy OPEA services using minikube ( #1058 )
Signed-off-by: Arthur Leung <arcyleung@gmail.com >
Co-authored-by: Arthur Leung <arcyleung@gmail.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-07 13:57:34 +08:00
lvliang-intel
0306c620b5
Update TGI CPU image to latest official release 2.4.0 ( #1035 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-04 11:28:43 +08:00
xiguiw
95b58b51fa
Fix AIPC docker container network issue ( #1021 )
Signed-off-by: Wang, Xigui <xigui.wang@intel.com >
2024-10-25 10:46:57 +08:00
Louie Tsai
a10b4a1f1d
Address request from issue #971 ( #1018 )
2024-10-23 23:57:52 -07:00
RuijingGuo
def39cfcdc
Set up Ollama service in AIPC docker compose ( #1008 )
Signed-off-by: Guo Ruijing <ruijing.guo@intel.com >
2024-10-23 14:22:48 +08:00
lvliang-intel
0eedbbfce0
Update AIPC Ollama docker compose and README ( #984 )
Signed-off-by: lvliang-intel <liang1.lv@intel.com >
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com >
2024-10-22 10:30:47 +08:00