GenAIExamples/ChatQnA/tests
Wang, Kai Lawrence 742cb6ddd3 [ChatQnA] Switch to vLLM as default LLM backend on Xeon (#1403)
Switch from TGI to vLLM as the default LLM serving backend for the ChatQnA example on Xeon to improve serving performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:48:19 +08:00
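
Since vLLM exposes an OpenAI-compatible API, a backend swap like this can be smoke-tested by sending a chat completion request directly to the serving endpoint. The sketch below assumes a vLLM server already running on localhost:9009; the port and model id are illustrative placeholders, not values taken from the ChatQnA compose files or tests.

    # Minimal smoke test against a vLLM OpenAI-compatible endpoint.
    # Assumption: vLLM is serving at localhost:9009 with some chat model loaded.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:9009/v1",  # vLLM's OpenAI-compatible API root
        api_key="EMPTY",                      # vLLM accepts a dummy key by default
    )

    completion = client.chat.completions.create(
        model="placeholder-model-id",         # replace with the model actually deployed
        messages=[{"role": "user", "content": "What is deep learning?"}],
        max_tokens=64,
    )
    print(completion.choices[0].message.content)

Because TGI also offers an OpenAI-compatible messages API, a request of this shape works against either backend, which is what makes the switch largely transparent to the rest of the ChatQnA pipeline.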