chen, suyue
1095d88c5f
Group log lines in GHA outputs for more readable logs. (#1821)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-16 13:17:53 +08:00
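The grouping mentioned above relies on GitHub Actions workflow commands: anything printed between `::group::` and `::endgroup::` collapses into one expandable section in the GHA log viewer. A minimal sketch (the group title and log text here are illustrative, not taken from the PR):

```shell
# Illustrative GHA log grouping: the lines between the two workflow
# commands render as a single collapsible section in the Actions UI.
echo "::group::docker compose logs"
echo "...long service logs here..."
echo "::endgroup::"
```

Outside of a GitHub Actions runner these commands are just echoed text; the Actions log renderer is what interprets them.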
ZePan110
5f4b3a6d12
Adaptation to vllm v0.8.3 build paths (#1761)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-09 13:20:02 +08:00
ZePan110
42735d0d7d
Fix vllm and vllm-fork tags (#1766)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-07 22:58:50 +08:00
chen, suyue
c48cd651e4
[CICD enhance] ChatQnA run CI with latest base image, group logs in GHA outputs. (#1736)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-03 22:03:20 +08:00
chen, suyue
43d0a18270
Enhance ChatQnA test scripts (#1643)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-10 17:36:26 +08:00
ZePan110
6ead1b12db
Enable ChatQnA model cache for docker compose test. (#1605)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-05 11:30:04 +08:00
xiguiw
0c0edffc5b
Update vLLM CPU to the latest stable version (#1546)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-02-17 08:26:25 +08:00
chen, suyue
927698e23e
Simplify git clone code in CI test (#1434)
1. Simplify git clone code in CI test.
2. Replace git clone branch in Dockerfile.
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-21 23:00:08 +08:00
Wang, Kai Lawrence
3d3ac59bfb
[ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu (#1430)
Update the default LLM to llama3-8B on cpu/nvgpu/amdgpu/gaudi for docker-compose deployment, avoiding the potential model-serving issue and the missing chat-template issue seen with neural-chat-7b.
Slow serving issue of neural-chat-7b on ICX: #1420
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-20 22:47:56 +08:00
Liang Lv
0f7e5a37ac
Adapt code for dataprep microservice refactor (#1408)
https://github.com/opea-project/GenAIComps/pull/1153
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-20 20:37:03 +08:00
Wang, Kai Lawrence
742cb6ddd3
[ChatQnA] Switch to vLLM as default LLM backend on Xeon (#1403)
Switch from TGI to vLLM as the default LLM serving backend on Xeon for the ChatQnA example to improve performance.
https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:48:19 +08:00
Letong Han
4cabd55778
Refactor Retrievers related Examples (#1387)
Delete the redundant retrievers docker image entry in docker_images_list.md.
Refactor the READMEs of the Retrievers-related Examples.
Change all comps/retrievers/xxx/xxx/Dockerfile paths to comps/retrievers/src/Dockerfile.
Fix the Examples CI issues of PR opea-project/GenAIComps#1138.
Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 14:21:48 +08:00
lvliang-intel
256b58c07e
Replace environment variables with service name for ChatQnA (#977)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2024-10-18 11:31:24 +08:00
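The change above works because containers on a shared Docker Compose network can reach each other by service name via Docker's built-in DNS, so hard-coded host-IP environment variables can be dropped. A hypothetical fragment (the service and variable names below are my illustration, not necessarily the ones in the PR):

```yaml
# Hypothetical compose fragment: "retriever" resolves via Docker DNS,
# replacing a hard-coded host IP in the environment variable.
services:
  chatqna-backend:
    image: opea/chatqna:latest
    environment:
      - RETRIEVER_SERVICE_HOST_IP=retriever
  retriever:
    image: opea/retriever:latest
```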
lvliang-intel
c930bea172
Add missing nginx microservice and fix frontend test (#951)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2024-10-16 13:29:31 +08:00
lvliang-intel
619d941047
Set no wrapper ChatQnA as default (#891)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-10-11 13:30:45 +08:00
XinyaoWa
d73129cbf0
Refactor folder to support different vendors (#743)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-09-10 23:27:19 +08:00