Remove the Ollama folder since the default OpenAI API component can consume the Ollama service; update the Ollama README and add a UT.
#998
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
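The change relies on Ollama exposing an OpenAI-compatible endpoint under `/v1`, so the generic OpenAI client path can be reused. A minimal sketch of the request shape (host, port, and model name here are assumptions, not values from the commit):

```python
import json

# Ollama serves an OpenAI-compatible API at /v1 (host/port assumed for illustration).
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI chat-completions payload accepted by Ollama's /v1 endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("llama3", "What is OPEA?")
# POST this JSON to f"{OLLAMA_BASE_URL}/chat/completions" with any HTTP client,
# e.g. the official `openai` package configured with base_url=OLLAMA_BASE_URL.
print(json.dumps(payload))
```

This is why a dedicated Ollama folder is unnecessary: only the base URL differs from a stock OpenAI-backed deployment.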
* Align OpenAI API for FaqGen, DocSum, TextGen-native
Align all the inputs to the OpenAI API format for FaqGen, DocSum, and TextGen-native; all the services in llm comps should now be OpenAI API compatible.
Related to issue https://github.com/opea-project/GenAIComps/issues/998
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
---------
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
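Aligning the inputs amounts to mapping the older comps request fields onto the OpenAI chat-completions schema. A hypothetical sketch of such a shim (the legacy field names `query`, `max_new_tokens`, and `stream` are assumptions for illustration, not the exact schema from the commit):

```python
def to_openai_format(legacy: dict) -> dict:
    """Map a legacy llm-comps request (hypothetical 'query'/'max_new_tokens'
    fields) onto an OpenAI chat-completions style payload."""
    return {
        "messages": [{"role": "user", "content": legacy["query"]}],
        "max_tokens": legacy.get("max_new_tokens", 1024),
        "stream": legacy.get("stream", False),
    }

req = to_openai_format({"query": "Summarize this document.", "max_new_tokens": 256})
```

With every service speaking this one schema, FaqGen, DocSum, and TextGen-native become interchangeable behind the same client code.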
There are risks with the vllm-fork main branch; switch to the latest stable release, v0.6.4.post2+Gaudi-1.19.0.
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
In the animation/wav2lip component on Gaudi, a module compiled against NumPy 1.x cannot run under NumPy 2.0.2 and may crash, so we pin numpy to version 1.23.5.
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Part of the code refactor to combine different text generation backends: remove the duplicated native langchain and llama_index folders, and condense the optimum-habana implementation into a native integration, OPEATextGen_Native.
Add feature for issue https://github.com/opea-project/GenAIComps/issues/998
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
vllm-openvino is a dependency of the text generation comps; move it to the third-parties folder and add UTs for both CPU and GPU.
Related to feature issue #998
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
* Build guardrail "Hallucination Detection" microservice.
Signed-off-by: Qun Gao <qun.gao@intel.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update comps/guardrails/hallucination_detection/README.md
Co-authored-by: Daniel De León <111013930+daniel-de-leon-user293@users.noreply.github.com>
* - register Dockerfile
- rename file for hpu
- update endpoints to be consistent
Signed-off-by: Qun Gao <qun.gao@intel.com>
* Update repo structure
Signed-off-by: Qun Gao <qun.gao@intel.com>
* refactor
Signed-off-by: Qun Gao <qun.gao@intel.com>
* Refactored the Hallucination Guardrail to wrap code under the new OpeaComponent class and leverage the OpeaComponentLoader class for serving inference.
Signed-off-by: Qun Gao <qun.gao@intel.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Updated copyright year to reflect the correct date
Signed-off-by: Qun Gao <qun.gao@intel.com>
---------
Signed-off-by: Qun Gao <qun.gao@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
Co-authored-by: Daniel De León <111013930+daniel-de-leon-user293@users.noreply.github.com>
Co-authored-by: ZePan110 <ze.pan@intel.com>
Building Dockerfile.intel_gpu fails because of a dependency conflict with vllm==v0.6.3.post1; upgrading vllm to v0.6.6.post1 solves the issue.
Fixes #1141
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
* make naming style consistent with the defined style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Automatically create an issue in CIInfra to track changes to the docker compose files for the corresponding helm charts.
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
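Such automation typically calls the GitHub REST API to open the tracking issue. A hypothetical sketch under assumed details (repo slug, label name, and issue wording are illustrations, not the actual CI script):

```python
import json
import urllib.request

def build_issue_request(chart: str, compose_path: str, token: str) -> urllib.request.Request:
    """Prepare a GitHub 'create issue' request for the CIInfra repo when a
    docker compose file backing a helm chart changes (details assumed)."""
    body = {
        "title": f"docker compose change affects helm chart {chart}",
        "body": f"`{compose_path}` changed; please review the corresponding helm chart.",
        "labels": ["helm-sync"],  # hypothetical label
    }
    return urllib.request.Request(
        "https://api.github.com/repos/opea-project/CIInfra/issues",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

req = build_issue_request("chatqna", "ChatQnA/docker_compose/compose.yaml", "TOKEN")
# Sending is a single call: urllib.request.urlopen(req)
```

Building the request separately from sending it keeps the payload easy to unit-test without network access.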