Switch from TGI to vLLM as the default LLM serving backend on Gaudi for the ChatQnA example to improve serving performance.

https://github.com/opea-project/GenAIExamples/issues/1213

Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
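For reviewers, vLLM serves an OpenAI-compatible API, so the swapped-in backend can be smoke-tested directly. The sketch below is illustrative only: the host/port and model name are assumptions, not values taken from this change, and should be adjusted to match the deployed ChatQnA compose configuration.

```python
# Minimal smoke test against a vLLM OpenAI-compatible endpoint.
# The endpoint URL and model name are illustrative assumptions;
# adjust them to the ports/models used in the ChatQnA compose files.
import requests

VLLM_ENDPOINT = "http://localhost:8007/v1/chat/completions"  # assumed port

payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model
    "messages": [{"role": "user", "content": "What is OPEA?"}],
    "max_tokens": 64,
}

resp = requests.post(VLLM_ENDPOINT, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```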