commit c546d96e98
Author: chen, suyue
Date:   2025-04-25 23:00:36 +08:00

    Downgrade TEI version from 1.6 to 1.5 to fix the ChatQnA perf regression (#1886)

    Signed-off-by: chensuyue <suyue.chen@intel.com>
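
In compose terms, this pin amounts to changing the TEI image tag in ChatQnA's compose.yaml. A minimal sketch of the relevant service entry, assuming the stock Hugging Face TEI CPU image; the service name and port mapping are used here only for illustration:

    # Pin TEI at 1.5: the 1.6 image showed a ChatQnA perf regression (#1886).
    tei-embedding-service:
      image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
      ports:
        - "6006:80"                     # host:container, illustrative
      environment:
        HF_TOKEN: ${HF_TOKEN}
      command: --model-id ${EMBEDDING_MODEL_ID} --auto-truncate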

commit ae31e4fb75
Author: Letong Han
Date:   2025-04-17 15:01:57 +08:00

    Enable health check for dataprep in ChatQnA (#1799)

    Signed-off-by: letonghan <letong.han@intel.com>
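
Enabling a health check for dataprep means adding a compose healthcheck block to the service and gating dependents on it. A sketch under assumed names: the image tag, the /v1/health_check endpoint, and the internal port 5000 are illustrative, not confirmed by this log:

    dataprep-service:
      image: opea/dataprep:latest       # assumed image name
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost:5000/v1/health_check || exit 1"]
        interval: 10s
        timeout: 5s
        retries: 10
    chatqna-backend-server:
      depends_on:
        dataprep-service:
          condition: service_healthy    # wait until dataprep reports healthy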

commit 6917d5bdb1
Author: XinyaoWa
Date:   2025-04-09 09:37:11 +08:00

    Fix ChatQnA port to internal vLLM port (#1763)

    Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
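
The port fix reflects a general compose rule: containers on the same network reach vLLM via the service name and the port the server listens on inside the container, not the host-mapped port. A sketch, with the variable names and port numbers assumed for illustration:

    chatqna-backend-server:
      environment:
        LLM_SERVER_HOST_IP: vllm-service
        LLM_SERVER_PORT: 80             # vLLM's internal container port,
                                        # not the host-mapped port (e.g. 9009)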

commit 87baeb833d
Author: xiguiw
Date:   2025-03-27 09:40:22 +08:00

    Update TEI Docker image to 1.6 (#1650)

    Signed-off-by: Wang, Xigui <xigui.wang@intel.com>

commit 785ffb9a1e
Author: ZePan110
Date:   2025-03-07 09:19:39 +08:00

    Update compose.yaml for ChatQnA (#1621)

    Signed-off-by: ZePan110 <ze.pan@intel.com>

commit 6ead1b12db
Author: ZePan110
Date:   2025-03-05 11:30:04 +08:00

    Enable ChatQnA model cache for docker compose test (#1605)

    Signed-off-by: ZePan110 <ze.pan@intel.com>
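
A model cache for compose tests usually means bind-mounting a persistent host directory into each model-serving container so weights download once and are reused across runs. A sketch, with the MODEL_CACHE variable name assumed:

    tei-embedding-service:
      volumes:
        - "${MODEL_CACHE:-./data}:/data"   # reuse downloaded model weights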

commit 0f7e5a37ac
Author: Liang Lv
Date:   2025-01-20 20:37:03 +08:00

    Adapt code for dataprep microservice refactor (#1408)

    https://github.com/opea-project/GenAIComps/pull/1153

    Signed-off-by: lvliang-intel <liang1.lv@intel.com>

commit 742cb6ddd3
Author: Wang, Kai Lawrence
Date:   2025-01-17 20:48:19 +08:00

    [ChatQnA] Switch to vLLM as default LLM backend on Xeon (#1403)

    Switch from TGI to vLLM as the default LLM serving backend on Xeon
    for the ChatQnA example to improve performance.
    https://github.com/opea-project/GenAIExamples/issues/1213

    Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
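
Making vLLM the default on Xeon means the compose file ships a CPU vLLM serving container where TGI used to be. A minimal sketch of such a service, assuming an OPEA-built CPU image and illustrative parameters:

    vllm-service:
      image: opea/vllm:latest            # assumed image name
      ports:
        - "9009:80"
      environment:
        HF_TOKEN: ${HF_TOKEN}
        VLLM_CPU_KVCACHE_SPACE: 40       # GiB reserved for KV cache on CPU
      command: --model ${LLM_MODEL_ID} --host 0.0.0.0 --port 80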