Compare commits

406 Commits

Author SHA1 Message Date
xiguiw
a03feb700b Merge branch 'main' into update_vLLM 2025-05-16 11:18:10 +08:00
Zhu Yongbo
bb9ec6e5d2 fix EdgeCraftRAG UI image build bug (#1964)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
2025-05-16 10:06:46 +08:00
xiguiw
94222d5783 Merge branch 'main' into update_vLLM 2025-05-16 09:04:30 +08:00
CICD-at-OPEA
274af9eabc Update vLLM version to v0.9.0
Signed-off-by: CICD-at-OPEA <CICD@opea.dev>
2025-05-15 22:41:49 +00:00
Daniel De León
3fb59a9769 Update DocSum README and environment configuration (#1917)
Signed-off-by: Daniel Deleon <daniel.de.leon@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Eero Tamminen <eero.t.tamminen@intel.com>
Co-authored-by: Zhenzhong Xu <zhenzhong.xu@intel.com>
2025-05-15 11:58:58 -07:00
chen, suyue
410df80925 [CICD enhance] AvatarChatbot run CI with latest base image, group logs in GHA outputs. (#1930)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-05-15 11:22:49 +08:00
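
Several of the [CICD enhance] commits in this range mention grouping log lines in GHA outputs. GitHub Actions supports this natively via the `::group::`/`::endgroup::` workflow commands; the step below is a minimal sketch (the step name and compose command are illustrative, not the exact scripts these commits touch):

```yaml
steps:
  - name: Run example test
    run: |
      echo "::group::docker compose logs"   # everything until endgroup collapses into one foldable section
      docker compose logs --no-color
      echo "::endgroup::"
```
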
chen, suyue
8eac02e58b [CICD enhance] DBQnA run CI with latest base image, group logs in GHA outputs. (#1931)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-05-14 17:12:09 +08:00
ZePan110
9f80a18cb5 Integrate GraphRAG set_env to ut scripts. (#1943)
Add README.md for UT scripts.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-05-14 13:12:35 +08:00
ZePan110
f2c8e0b4ff Integrate DocIndexRetriever set_env to ut scripts. (#1945)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-05-14 13:00:51 +08:00
alexsin368
fb53c536a3 AgentQnA - add support for remote server (#1900)
Signed-off-by: alexsin368 <alex.sin@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: ZePan110 <ze.pan@intel.com>
2025-05-14 11:12:57 +08:00
chen, suyue
26d07019d0 [CICD enhance] CodeTrans run CI with latest base image, group logs in GHA outputs. (#1929)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-05-14 11:11:54 +08:00
ZePan110
bd6726c53a Blocking link checks that require a login (#1946)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-14 10:57:16 +08:00
CICD-at-OPEA
238fb52a92 Update vLLM version to v0.8.5
Signed-off-by: CICD-at-OPEA <CICD@opea.dev>
2025-05-13 22:42:16 +00:00
Ying Hu
4a17638b5c Merge branch 'main' into update_vLLM 2025-05-13 16:00:56 +08:00
ZePan110
a0bdf8eab2 Add opea/vllm-rocm README.md link in docker_images_list.md (#1925)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-05-13 13:34:31 +08:00
ZePan110
99f2f940b6 Fix input check for helm test workflow (#1938)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-05-12 17:41:57 +08:00
Ying Hu
2596671d3f Update README.md for remove the docker installer (#1927)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-12 11:40:33 +08:00
Sun, Xuehao
7ffb4107e6 set fail-fast to false in vLLM update actions (#1926)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-05-12 11:30:29 +08:00
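
With the default `fail-fast: true`, one failing matrix job cancels the rest; the vLLM update actions run several version bumps in parallel, so disabling it lets each job finish independently. A minimal sketch, with the job and matrix names assumed:

```yaml
jobs:
  update-version:
    strategy:
      fail-fast: false        # a failure in one matrix job no longer cancels the others
      matrix:
        repo: [vllm, vllm-fork]
    runs-on: ubuntu-latest
    steps:
      - run: echo "Updating ${{ matrix.repo }} version"
```
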
ZePan110
7590b055aa Integrate DBQnA set_env to ut scripts and enhanced validation checks. (#1915)
Integrate DBQnA set_env to ut scripts.
Add README.md for ut scripts.
Enhanced validation checks.

Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-05-12 10:19:18 +08:00
Eero Tamminen
4efb1e0833 Update paths to GenAIInfra scripts (#1923)
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-05-10 21:57:52 +08:00
Razvan Liviu Varzaru
ebb7c24ca8 Add ChatQnA docker-compose example on Intel Xeon using MariaDB Vector (#1916)
Signed-off-by: Razvan-Liviu Varzaru <razvan@mariadb.org>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-05-08 21:08:15 -07:00
CICD-at-OPEA
2160d43a32 Update vLLM version to v0.8.5
Signed-off-by: CICD-at-OPEA <CICD@opea.dev>
2025-05-08 08:37:52 +00:00
Sun, Xuehao
bfefdfad34 Fix vllm version update workflow (#1919)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-05-08 16:36:37 +08:00
Sun, Xuehao
b467a13ec3 daily update vLLM&vLLM-fork version (#1914)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-05-08 10:34:36 +08:00
ZePan110
05011ebaac Integrate AudioQnA set_env to ut scripts. (#1897)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-05-08 09:14:44 +08:00
Melanie Hart Buehler
7bb05585b6 Move file processing from UI to DocSum backend service (#1899)
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
2025-05-08 09:05:30 +08:00
Sun, Xuehao
f6013b8679 Add exempt-issue-labels to stale check workflow (#1861)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-05-07 11:35:37 +08:00
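
For context, `exempt-issue-labels` is an input of the `actions/stale` action that excludes labeled issues from being marked stale. A sketch of how such a step could look (the label names are assumptions; the 30-day window matches #1661 further down):

```yaml
- uses: actions/stale@v9
  with:
    days-before-stale: 30                    # inactivity window before marking stale
    exempt-issue-labels: "pinned,roadmap"    # hypothetical labels that are never marked stale
```
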
chen, suyue
505ec6d4b6 update PR reviewers (#1913)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-05-07 11:28:04 +08:00
lkk
ff66600ab4 Fix ui dockerfile. (#1909)
Signed-off-by: lkk <33276950+lkk12014402@users.noreply.github.com>
2025-05-06 16:34:16 +08:00
ZePan110
5375332fb3 Fix security issues for helm test workflow (#1908)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-05-06 15:54:43 +08:00
Omar Khleif
df33800945 CodeGen Gradio UI Enhancements (#1904)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
2025-05-06 13:41:21 +08:00
Ying Hu
40e44dfcd6 Update README.md of ChatQnA for broken URL (#1907)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2025-05-06 13:21:31 +08:00
ZePan110
9259ba41a5 Remove invalid codeowner. (#1896)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-30 13:24:42 +08:00
ZePan110
5c7f5718ed Restore context in EdgeCraftRAG build.yaml. (#1895)
Restore context in EdgeCraftRAG build.yaml so that the Dockerfiles can be found.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-30 11:09:21 +08:00
lkk
d334f5c8fd build cpu agent ui docker image. (#1894) 2025-04-29 23:58:52 +08:00
ZePan110
670d9f3d18 Fix security issue. (#1892)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-29 19:44:48 +08:00
Zhu Yongbo
555c4100b3 Install cpu version for components (#1888)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
2025-04-29 10:08:23 +08:00
ZePan110
04d527d3b0 Integrate set_env to ut scripts for CodeTrans. (#1868)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-28 13:53:50 +08:00
ZePan110
13c4749ca3 Fix security issue (#1884)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-28 13:52:50 +08:00
ZePan110
99b62ae49e Integrate DocSum set_env to ut scripts. (#1860)
Add README.md for DocSum and InstructionTuning UT scripts.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-28 13:35:05 +08:00
chen, suyue
c546d96e98 downgrade tei version from 1.6 to 1.5, fix the chatqna perf regression (#1886)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-25 23:00:36 +08:00
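
The TEI up/downgrades in this log (CPU-1.6 in #1650/#1791, back to 1.5 in #1886) amount to changing a pinned image tag in the compose files. A hypothetical fragment, with the service name assumed:

```yaml
services:
  tei-embedding-service:
    # pinned back from cpu-1.6 to cpu-1.5 to avoid the ChatQnA perf regression
    image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
```
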
chen, suyue
be5933ad85 Update benchmark scripts (#1883)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-25 17:05:48 +08:00
rbrugaro
18b4f39f27 README fixes Finance Example (#1882)
Signed-off-by: Rita Brugarolas <rita.brugarolas.brufau@intel.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
2025-04-24 23:58:08 -07:00
chyundunovDatamonsters
ef9290f245 DocSum - refactoring README.md for deploying the application on ROCm (#1881)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-25 13:36:40 +08:00
chyundunovDatamonsters
3b0bcb80a8 DocSum - Adding files to deploy an application in the K8S environment using Helm (#1758)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Chingis Yundunov <c.yundunov@datamonsters.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-04-25 13:33:08 +08:00
Artem Astafev
ccc145ea1a Refine README.MD for SearchQnA on AMD ROCm platform (#1876)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-25 10:16:03 +08:00
chyundunovDatamonsters
bb7a675665 ChatQnA - refactoring README.md for deploying the application on ROCm (#1857)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Chingis Yundunov <c.yundunov@datamonsters.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-25 08:52:24 +08:00
chen, suyue
f90a6d2a8e [CICD enhance] EdgeCraftRAG run CI with latest base image, group logs in GHA outputs. (#1877)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-24 16:18:44 +08:00
chyundunovDatamonsters
1fdab591d9 CodeTrans - refactoring README.md for deploying the application on ROCm with Docker Compose (#1875)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-24 15:28:57 +08:00
chen, suyue
13ea13862a Remove proxy in CodeTrans test (#1874)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-24 13:47:56 +08:00
ZePan110
1787d1ee98 Update image links. (#1866)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-24 13:34:41 +08:00
Artem Astafev
db4bf1a4c3 Refine README.MD for AMD ROCm docker compose deployment (#1856)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-24 11:00:51 +08:00
chen, suyue
f7002fcb70 Set opea_branch for CD test (#1870)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-24 09:49:20 +08:00
Artem Astafev
c39c875211 Fix compose file and functional tests for Avatarchatbot on AMD ROCm platform (#1872)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-23 22:58:25 +08:00
Artem Astafev
c2e9a259fe Refine AudioQnA README.MD for AMD ROCm docker compose deployment (#1862)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-23 13:55:01 +08:00
Omar Khleif
48eaf9c1c9 Added CodeGen Gradio README link to Docker Images List (#1864)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2025-04-22 15:28:49 -07:00
Ervin Castelino
a39824f142 Update README.md of DBQnA (#1855)
Co-authored-by: Ying Hu <ying.hu@intel.com>
2025-04-22 15:56:37 -04:00
Dina Suehiro Jones
e10e6dd002 Fixes for MultimodalQnA with the Milvus vector db (#1859)
Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>
2025-04-21 16:05:11 -07:00
chen, suyue
ea17b38ac5 [CICD enhance] AudioQnA run CI with latest base image, group logs in GHA outputs. (#1854)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-21 21:58:02 +08:00
sgurunat
ea9d444bbf New Productivity Suite react UI and Bug Fixes (#1834)
Signed-off-by: Gurunath S <gurunath.s@intel.com>
2025-04-21 18:33:25 +08:00
Yao Qing
262ad7d6ec Refine readme of CodeGen (#1797)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
2025-04-21 17:49:15 +08:00
Spycsh
608dc963c9 Refine readme of AudioQnA (#1804) 2025-04-21 17:30:14 +08:00
chen, suyue
ef2156fbf4 Enable more flexible support for test HWs (#1816)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-21 17:25:01 +08:00
WenjiaoYue
52c4db2fc6 [ SearchQnA ] Refine documents (#1803)
Signed-off-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-04-21 17:16:41 +08:00
Letong Han
697f78ea71 Refine the READMEs of CodeTrans (#1796)
Signed-off-by: letonghan <letong.han@intel.com>
2025-04-21 17:14:46 +08:00
chen, suyue
e96f5a1ac5 AgentQnA group log lines in test outputs for more readable logs. (#1817)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-21 15:27:28 +08:00
Zhu Yongbo
82ef639ee3 hot fix for permission issue (#1849)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
2025-04-21 10:32:46 +08:00
vrantala
29d449b3ca Added Initial version of DocSum support for benchmarking scripts for OPEA (#1840)
Signed-off-by: Valtteri Rantala <valtteri.rantala@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
Co-authored-by: ZePan110 <ze.pan@intel.com>
2025-04-21 10:32:28 +08:00
Shifani Rajabose
338f81430d [Bug: 900] Create a version of MultimodalQnA example with Zilliz/Milvus as Vector DB (#1639)
Signed-off-by: Shifani Rajabose <srajabose@habana.ai>
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com>
2025-04-21 10:11:39 +08:00
dolpher
87e3c0f59f Update chatqna values file changes (#1844)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-04-21 09:38:07 +08:00
Spycsh
27813b3bf9 add AudioQnA key parameters to comply with the image size reduction (#1833) 2025-04-20 16:34:19 +08:00
XinyaoWa
c7f06d5e54 Refine documents for DocSum (#1802)
Signed-off-by: Xinyao <xinyao.wang@intel.com>
2025-04-20 16:20:20 +08:00
ZePan110
0967fcac86 [ Translation ] Refine documents (#1795)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-20 16:18:38 +08:00
Omar Khleif
a3eba01879 CodeGen Gradio UI Updates for new delete endpoint features (#1851)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
2025-04-20 16:17:32 +08:00
XinyuYe-Intel
bc168f1732 Refine readme of InstructionTuning (#1794)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-04-20 16:17:13 +08:00
Liang Lv
1eb2e36a18 Refine ChatQnA READMEs (#1850)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-20 10:34:24 +08:00
Louie Tsai
1a9a2dd53c Redirect users to new github.io sections for AgentQnA opentelemetry materials (#1846)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-04-17 23:40:15 -07:00
sri-intel
c63e2cd067 Remote inference support for examples in Productivity suite (#1818)
Signed-off-by: Srinarayan Srikanthan <srinarayan.srikanthan@intel.com>
2025-04-18 14:36:57 +08:00
Louie Tsai
c793dd0b51 Redirect Users to github.io for ChatQnA telemetry materials (#1845)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-04-17 23:35:30 -07:00
Ying Hu
1b3f1f632a Update README.md of ChatQnA for layout (#1842)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-18 11:41:35 +08:00
Zhu Yongbo
4c05e7fd1c fix missing package (#1841)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-17 22:57:53 +08:00
sri-intel
90cfe89e21 new chatqna readme template (#1755)
Signed-off-by: Srinarayan Srikanthan <srinarayan.srikanthan@intel.com>
2025-04-17 16:38:40 +08:00
ZePan110
62f7f5bd34 Update docker images list. (#1835)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-17 16:26:50 +08:00
Letong Han
7c6189cf43 Enable dataprep health check for examples (#1800)
Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-17 15:52:06 +08:00
Letong Han
ae31e4fb75 Enable health check for dataprep in ChatQnA (#1799)
Signed-off-by: letonghan <letong.han@intel.com>
2025-04-17 15:01:57 +08:00
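
Enabling a health check for dataprep typically means adding a `healthcheck` block to its compose service so dependents can wait for readiness. A minimal sketch; the port, endpoint path, and timings are assumptions, not the exact values from #1799/#1800:

```yaml
services:
  dataprep-redis-service:
    healthcheck:
      test: ["CMD-SHELL", "curl -fs http://localhost:6007/v1/health_check || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 10
```
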
xiguiw
4fc19c7d73 Update TEI docker images to CPU-1.6 (#1791)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-04-17 15:00:06 +08:00
Letong Han
b80449b8ab Fix Multimodal & ProductivitySuite Issue (#1820)
1. add data-prep health check
2. add create conda env

Signed-off-by: letonghan <letong.han@intel.com>
2025-04-17 09:30:15 +08:00
minmin-intel
8aa96c6278 Update FinanceAgent v1.3 (#1819)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-16 15:44:46 -07:00
Abolfazl Shahbazi
a7ef8333ee Adding the two missing packages for ingest script (#1822)
Signed-off-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2025-04-16 09:46:45 -07:00
Liang Lv
71fe886ce9 Replaced TGI with vLLM for guardrail serving (#1815)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-16 17:06:11 +08:00
XinyuYe-Intel
1a6f821c95 Remove template_llava.jinja in command (#1831)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-04-16 17:05:55 +08:00
chen, suyue
1095d88c5f Group log lines in GHA outputs for more readable logs. (#1821)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-16 13:17:53 +08:00
mahathis
c73b09a758 Update AgentQnA and DocSum for Gaudi Compatibility (#1777)
Signed-off-by: Mahathi Vatsal <mahathi.vatsal.salopanthula@intel.com>
2025-04-15 22:01:27 -07:00
Liang Lv
13dd27e6d5 Update vLLM parameter max-seq-len-to-capture (#1809)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-15 14:27:12 +08:00
chen, suyue
a222d1cfbb Optimize the nightly/weekly example test (#1806)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-14 17:46:49 +08:00
minmin-intel
1852e6bcc3 Add Finance Agent Example (#1752)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Signed-off-by: Rita Brugarolas <rita.brugarolas.brufau@intel.com>
Signed-off-by: rbrugaro <rita.brugarolas.brufau@intel.com>
Co-authored-by: rbrugaro <rita.brugarolas.brufau@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <33276950+lkk12014402@users.noreply.github.com>
Co-authored-by: lkk12014402 <kaokao.lv@intel.com>
2025-04-14 14:27:07 +08:00
Neo Zhang Jianyu
72ce335663 add 'N/A' to option (#1801)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2025-04-14 11:05:56 +08:00
chen, suyue
15d76c0889 support rocm helm charts test (#1787)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-13 22:36:16 +08:00
Chaunte W. Lacewell
c4763434b8 Fix VideoQnA (#1696)
This PR fixes the VideoQnA example.

Fixes Issues #1476 #1478 #1477

Signed-off-by: zhanmyz <yazhan.ma@intel.com>
Signed-off-by: Lacewell, Chaunte W <chaunte.w.lacewell@intel.com>
2025-04-12 18:15:02 +08:00
minmin-intel
58b47c15c6 update AgentQnA (#1790)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-11 13:33:19 -07:00
ZePan110
8d421b7912 [Translation] Integrate set_env.sh into test scripts. (#1785)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-04-11 09:31:40 +08:00
ZePan110
e9cafb3343 Redefine docker images list. (#1743)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-10 16:38:46 +08:00
ZePan110
1737d4b2b4 Update model cache for MultimodalQnA (#1618)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-10 16:25:41 +08:00
chen, suyue
177da5e6fc Add new secrets for docker compose test (#1786)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-10 16:12:51 +08:00
XinyaoWa
063547fb66 Align DocSum env to vllm (#1784)
Signed-off-by: sys-lpot-val <sys_lpot_val@intel.com>
Co-authored-by: sys-lpot-val <sys_lpot_val@intel.com>
2025-04-10 11:38:24 +08:00
ZePan110
c3bb59a354 Unified build.yaml file writing style (#1781)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2025-04-10 10:58:45 +08:00
ZePan110
8c763cbe11 Enable AvatarChatbot model cache for docker compose test. (#1604)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-04-10 09:54:30 +08:00
minmin-intel
411bb28f41 fix bugs in DocIndexRetriever (#1770)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
2025-04-10 09:45:46 +08:00
ZePan110
00d7a65dd8 Enable model cache for Rocm docker compose test. (#1614)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-10 09:40:37 +08:00
Artem Astafev
795c29fe87 Adding files to deploy MultimodalQnA application on ROCm vLLM (#1737)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-10 09:34:58 +08:00
pre-commit-ci[bot]
094ca7aefe [pre-commit.ci] pre-commit autoupdate (#1771)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2025-04-09 11:51:57 -07:00
Liang Lv
398441a10c Fix typo in CodeGen README (#1783)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-09 16:43:53 +08:00
Mustafa
892624f539 CodeGen Examples using-RAG-and-Agents (#1757)
Signed-off-by: Mustafa <mustafa.cetin@intel.com>
2025-04-09 16:12:20 +08:00
Eero Tamminen
8b7cb3539e Use GenAIComp base image to simplify Dockerfiles & reduce image sizes (#1369)
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-04-09 14:51:10 +08:00
ZePan110
5f4b3a6d12 Adaptation to vllm v0.8.3 build paths (#1761)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-09 13:20:02 +08:00
Yazhan Ma
0392610776 Iteratively add image docker hub description (#1768)
Signed-off-by: zhanmyz <yazhan.ma@intel.com>
2025-04-09 12:00:45 +08:00
Lucas Melo
2d8a7e25f6 Update ChatQna & CodeGen README.md with new Automated Terraform Deployment Options (#1731)
Signed-off-by: lucasmelogithub <lucas.melo@intel.com>
2025-04-09 10:54:01 +08:00
Chun Tao
4d652719c2 Fix GenAIExamples #1607 (#1776)
Fix issue #1607

Signed-off-by: Chun Tao <chun.tao@intel.com>
2025-04-09 10:10:07 +08:00
Liang Lv
7b7728c6c3 Fix vLLM CPU initialize engine issue for DeepSeek models (#1762)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-04-09 09:47:08 +08:00
XinyaoWa
6917d5bdb1 Fix ChatQnA port to internal vllm port (#1763)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-04-09 09:37:11 +08:00
dolpher
46ebb78aa3 Sync values yaml file for 1.3 release (#1748)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-04-08 22:39:40 +08:00
chen, suyue
b14db6dbd3 fix docker image clean up issue (#1773)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-08 22:26:37 +08:00
lkk
ff8008b6d0 Make open-webui compatible with opea agent. (#1765) 2025-04-08 21:54:01 +08:00
Spycsh
d4952d1e7c Refine third parties links (#1764)
Signed-off-by: Spycsh <sihan.chen@intel.com>
2025-04-08 18:39:13 +08:00
chen, suyue
12932477ee Add dockerhub login step to avoid 429 Too Many Requests (#1772)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-08 14:29:36 +08:00
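
Authenticated pulls get a much higher Docker Hub rate limit than anonymous ones, which is what this login step buys the CI. A sketch using the standard `docker/login-action` (the secret names are assumptions):

```yaml
- name: Login to Docker Hub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USER }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
```
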
ZePan110
42735d0d7d Fix vllm and vllm-fork tags (#1766)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-07 22:58:50 +08:00
Artem Astafev
073e5443ec Adding files to deploy VisualQnA application on ROCm vLLM (#1751)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-04-07 09:27:19 +08:00
Louie Tsai
e8cdf7d668 [ChatQnA] update to the latest Grafana Dashboard (#1728)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-04-03 12:14:55 -07:00
chen, suyue
c48cd651e4 [CICD enhance] ChatQnA run CI with latest base image, group logs in GHA outputs. (#1736)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-04-03 22:03:20 +08:00
Spycsh
d627209ee3 Add AudioQnA multilang tts test (#1746) 2025-04-03 21:29:40 +08:00
chyundunovDatamonsters
c50dfb2510 Adding files to deploy ChatQnA application on ROCm vLLM (#1560)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-03 17:19:26 +08:00
ZePan110
4ce847cdb7 Fix relative path validity issue (#1750)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-03 17:08:36 +08:00
chyundunovDatamonsters
319dbdaa6b Adding files to deploy DocSum application on ROCm vLLM (#1572)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-03 14:20:23 +08:00
Zhu Yongbo
1a0c5f03c6 Code Enhancement for vllm inference (#1729)
Signed-off-by: Yongbozzz <yongbo.zhu@intel.com>
2025-04-03 13:37:49 +08:00
Melanie Hart Buehler
bbd53443ab MultimodalQnA audio features completion (#1698)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Signed-off-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Signed-off-by: dmsuehir <dina.s.jones@intel.com>
Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>
Co-authored-by: Omar Khleif <omar.khleif@intel.com>
Co-authored-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2025-04-02 21:45:01 -07:00
chyundunovDatamonsters
2764a6dcd8 Fix README for deploying the AgentQnA application on ROCm vLLM (#1742)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-03 11:09:11 +08:00
Louie Tsai
11fa7d5e99 Add Telemetry support for AgentQnA using Grafana, Prometheus and Jaeger (#1732)
Signed-off-by: louie tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-04-02 17:37:13 -07:00
ZePan110
76c088dc0b Add model environment variable (#1660)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-04-02 22:21:11 +08:00
dolpher
cee24a083c Fix model cache path and use Random to avoid ns conflict (#1734)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-04-02 13:40:25 +08:00
chyundunovDatamonsters
5cc047ce34 Adding files to deploy AgentQnA application on ROCm vLLM (#1613)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-04-02 11:17:07 +08:00
Yazhan Ma
46a29cc253 Add short descriptions to the images OPEA publishes on Docker Hub (#1740)
Signed-off-by: zhanmyz <yazhan.ma@intel.com>
2025-04-02 10:32:20 +08:00
Louie Tsai
8fe2d5d0be Update README.md to have Table for contents (#1721)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-04-01 10:31:05 -07:00
Yazhan Ma
68747a9688 Add short descriptions to the images OPEA publishes on Docker Hub (#1637)
Signed-off-by: zhanmyz <yazhan.ma@intel.com>
2025-04-01 15:48:49 +08:00
Xiaotian Chen
1bd56af994 Update TGI image versions (#1625)
Signed-off-by: xiaotia3 <xiaotian.chen@intel.com>
2025-04-01 11:27:51 +08:00
Dina Suehiro Jones
583428c6a7 Update MMQnA tgi-gaudi version to match compose.yaml (#1663)
Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>
2025-03-31 11:13:19 -07:00
chyundunovDatamonsters
853f1302af Adding files to deploy SearchQnA application on ROCm vLLM (#1649)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-03-31 17:51:51 +08:00
chyundunovDatamonsters
340fa075bd Adding files to deploy Translation application on ROCm vLLM (#1648)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-03-31 13:49:33 +08:00
chen, suyue
b7f24762a3 Expand example running timeout for the new test cluster with k8s runner set (#1723)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-31 11:48:47 +08:00
Letong Han
d4dcbd18ef Enable vllm for DocSum (#1716)
Set vLLM as the default LLM serving backend, and add the related docker compose files, READMEs, and test scripts.

Fix issue #1436

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-28 17:15:01 +08:00
xiguiw
87baeb833d Update TEI docker image to 1.6 (#1650)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-03-27 09:40:22 +08:00
Shifani Rajabose
03179296b4 [Bug: 899] Create a version of DocIndexRetriever example with Zilliz/Milvus as Vector DB (#1616)
Signed-off-by: Shifani Rajabose <srajabose@habana.ai>
Co-authored-by: pallavi jaini <pallavi.jaini@intel.com>
2025-03-26 15:19:38 +08:00
Louie Tsai
139f2aeeeb typo for docker image (#1717)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-03-25 08:34:59 -07:00
Pranav Singh
61a8befe05 [docs] Multimodal Endpoints Issue (#1700)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
2025-03-25 14:35:12 +08:00
XinyaoWa
4582e53b8a Remove FaqGen from ProductivitySuite (#1709)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-03-24 17:42:02 +08:00
lkk
566ffb2edc remove 3 useless environments. (#1708) 2025-03-24 15:34:45 +08:00
chyundunovDatamonsters
a04463d5e3 Adding files to deploy CodeTrans application on ROCm vLLM (#1545)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-03-24 15:33:35 +08:00
chyundunovDatamonsters
31b1d69e40 Adding files to deploy CodeGen application on ROCm vLLM (#1544)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-03-24 14:45:17 +08:00
ZePan110
fe2a6674e0 Fix CD cancel issue (#1706)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-24 13:58:07 +08:00
chyundunovDatamonsters
60591d8d56 Adding files to deploy AudioQnA application on ROCm vLLM (#1655)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com>
2025-03-24 10:03:37 +08:00
chen, suyue
7636de02e4 Enhance port release before CI test (#1704)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-24 09:24:43 +08:00
Eero Tamminen
d397e3f631 Use GenAIComp base image to simplify Dockerfiles - part 3/4 (#1671)
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-03-24 09:17:12 +08:00
Louie Tsai
0736912c69 change gaudi node exporter from default one to 41612 (#1702)
Signed-off-by: Louie Tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-03-20 21:38:24 -07:00
Louie Tsai
e8f2313e07 Integrate docker images into compose yaml file to simplify the run instructions; fix UI IP issue and add web search tool support (#1656)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: alexsin368 <alex.sin@intel.com>
2025-03-21 09:42:20 +08:00
XinyaoWa
6d24c1c77a Merge FaqGen into ChatQnA (#1654)
1. Delete FaqGen.
2. Refactor FaqGen into ChatQnA, serving as an LLM selection option.
3. Combine all ChatQnA related Dockerfiles into one.

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-03-20 17:40:00 +08:00
Zhu Yongbo
5a50ae0471 Add new UI/new features for EC-RAG (#1665)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
2025-03-20 10:46:01 +08:00
minmin-intel
fecc22719a fix errors for running AgentQnA on xeon with openai and update readme (#1664)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-20 09:57:18 +08:00
chen, suyue
2204fe8e36 Enable base image build in CI/CD (#1669)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-19 09:21:51 +08:00
ZePan110
b50dd8f47a Fix workflow issues. (#1691)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-19 09:21:27 +08:00
Spycsh
bf8d03425c Set vLLM as default model for VisualQnA (#1644) 2025-03-18 15:29:49 +08:00
chen, suyue
1b6342aa5b Fix input issue for manual-image-build.yml (#1666)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-17 13:11:53 +08:00
James Edwards
527b146a80 Add final README.md and set_env.sh script for quickstart review. Previous pull request was 1595. (#1662)
Signed-off-by: Edwards, James A <jaedwards@habana.ai>
Co-authored-by: Edwards, James A <jaedwards@habana.ai>
2025-03-14 16:05:01 -07:00
Sun, Xuehao
7159ce3731 Update stale issue and PR settings to 30 days for inactivity (#1661)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-03-14 17:55:49 +08:00
Louie Tsai
671dff7f51 [ChatQnA] Enable Prometheus and Grafana with telemetry docker compose file. (#1623)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-03-13 23:18:29 -07:00
Wang, Kai Lawrence
8fe19291c8 [AudioQnA] Enable vLLM and set it as default LLM serving (#1657)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-14 09:56:33 +08:00
CharleneHu-42
35c5cf5de8 Refine README with highlighted examples and updated support info (#1006)
Signed-off-by: CharleneHu-42 <yabai.hu@intel.com>
Co-authored-by: Yi Yao <yi.a.yao@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
2025-03-13 13:50:28 +08:00
ZePan110
63b789ae91 Enable Gaudi3, ROCm and Arc in the manual release test. (#1615)
1. Enable Gaudi3, ROCm and Arc in the manual release test.
2. Fix the issue that the manual workflow can't be canceled.

Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-13 13:38:53 +08:00
ZePan110
d670dbf0aa Enable GraphRAG and ProductivitySuite model cache for docker compose test. (#1608)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-13 11:23:03 +08:00
Li Gang
0701b8cfff [ChatQnA][docker]Check healthy of redis to avoid dataprep failure (#1591)
Signed-off-by: Li Gang <gang.g.li@intel.com>
2025-03-13 10:52:33 +08:00
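
The redis health gate works by giving redis a `healthcheck` and having dataprep declare `depends_on` with `condition: service_healthy`, so dataprep only starts once redis answers. A sketch with assumed service names and image tag:

```yaml
services:
  redis-vector-db:
    image: redis/redis-stack:7.2.0-v9      # illustrative tag
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]   # healthy once redis answers PONG
      interval: 5s
      retries: 10
  dataprep-redis-service:
    depends_on:
      redis-vector-db:
        condition: service_healthy         # wait for the probe, not just container start
```
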
xiguiw
effa2a28cf Enable CodeGen vLLM (#1636)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-13 10:38:47 +08:00
ZePan110
adcd113f53 Enable inject_commit to docker image feature. (#1653)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-13 09:39:42 +08:00
Eero Tamminen
4269669f73 Use GenAIComp base image to simplify Dockerfiles & reduce image sizes - part 2 (#1638)
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-03-13 08:23:07 +08:00
Sun, Xuehao
12657ac945 Add GitHub Action to check and close stale issues and PRs (#1646)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2025-03-12 10:56:07 +08:00
chen, suyue
43d0a18270 Enhance ChatQnA test scripts (#1643)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-10 17:36:26 +08:00
Wang, Kai Lawrence
5362321d3a Fix vllm model cache directory (#1642)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-03-10 13:40:42 +08:00
XinyaoWa
eb245fd085 Set vLLM as default model for FaqGen (#1580)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-03-10 09:39:35 +08:00
chen, suyue
4cab86260f Use the latest HabanaAI/vllm-fork release tag to build vllm-gaudi image (#1635)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-03-07 20:40:32 +08:00
wangleflex
694207f76b [ChatQnA] Show spinner after query to improve user experience (#1003) (#1628)
Signed-off-by: Wang,Le3 <le3.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-07 17:08:53 +08:00
chen, suyue
555e2405b9 Fix corner CI issue when the example path deleted (#1634)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-07 15:05:08 +08:00
Shifani Rajabose
7a92435269 [Bug: 112] Fix introduction in GenAIExamples main README (#1631) 2025-03-07 14:31:34 +08:00
Eero Tamminen
c9085c3c68 Use GenAIComp base image to simplify Dockerfiles (#1612)
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-03-07 13:13:29 +08:00
ZePan110
36aaed748b Update model cache for AgentQnA (#1627)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-07 11:00:48 +08:00
Letong Han
9180f1066d Enable vllm for CodeTrans (#1626)
Set vLLM as the default LLM serving backend, and add the related docker compose files, READMEs, and test scripts.

Issue: https://github.com/opea-project/GenAIExamples/issues/1436

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-07 10:56:21 +08:00
ZePan110
5aecea8e47 Update compose.yaml (#1619)
Update compose.yaml for CodeGen, CodeTrans and DocSum

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-07 09:20:28 +08:00
ZePan110
6723395e31 Update compose.yaml (#1620)
Update compose.yaml for AudioQnA, DBQnA, DocIndexRetriever, FaqGen, Translation and VisualQnA.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-07 09:20:08 +08:00
ZePan110
785ffb9a1e Update compose.yaml for ChatQnA (#1621)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-07 09:19:39 +08:00
ZePan110
428ba481b2 Update compose.yaml for SearchQnA (#1622)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-07 08:38:59 +08:00
Wang, Kai Lawrence
2dfcfa0436 [AudioQnA] Fix the LLM model field for inputs alignment (#1611)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-03-05 22:15:07 +08:00
Zhu Yongbo
8a5ad1fc72 Fix docker image opea/edgecraftrag security issue #1577 (#1617)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
2025-03-05 22:13:53 +08:00
ZePan110
24cacaaa48 Enable SearchQnA model cache for docker compose test. (#1606)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-05 17:13:24 +08:00
ZePan110
6ead1b12db Enable ChatQnA model cache for docker compose test. (#1605)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-05 11:30:04 +08:00
rbrugaro
8dac9d1035 bugfix GraphRAG updated docker compose and env settings to fix issues post refactor (#1567)
Signed-off-by: rbrugaro <rita.brugarolas.brufau@intel.com>
Signed-off-by: Rita Brugarolas Brufau <rita.brugarolas.brufau@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-03-04 09:44:13 -08:00
ZePan110
c1b5ba281f Enable CodeGen,CodeTrans and DocSum model cache for docker compose test. (#1599)
1. Add cache path check.
2. Enable CodeGen, CodeTrans and DocSum model cache for docker compose test.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-04 16:10:20 +08:00
chen, suyue
8f8d3af7c3 open chatqna frontend test (#1594)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-03-04 10:41:22 +08:00
ZePan110
e4de76da78 Use model cache for docker compose test (#1582)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-03-04 09:48:27 +08:00
Spycsh
ce38a84372 Revert chatqna async and enhance tests (#1598)
align with opea-project/GenAIComps#1354
2025-03-03 23:03:44 +08:00
Ying Hu
e8b07c28ec Update DBQnA tgi docker image to latest tgi 2.4.0 (#1593) 2025-03-03 16:17:19 +08:00
chen, suyue
7b3a125bdf Fix cd workflow condition (#1588)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: ZePan110 <ze.pan@intel.com>
2025-03-03 08:45:10 +08:00
Eze Lanza (Eze)
fba0de45d2 ChatQnA Docker compose file for Milvus as vdb (#1548)
Signed-off-by: Ezequiel Lanza <ezequiel.lanza@gmail.com>
Signed-off-by: Kendall González León <kendall.gonzalez.leon@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Spycsh <sihan.chen@intel.com>
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: alexsin368 <alex.sin@intel.com>
Signed-off-by: WenjiaoYue <wenjiao.yue@intel.com>
Co-authored-by: Ezequiel Lanza <emlanza@CDQ242RKJDmac.local>
Co-authored-by: Kendall González León <kendallgonzalez@hotmail.es>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Spycsh <39623753+Spycsh@users.noreply.github.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
Co-authored-by: jotpalch <49465120+jotpalch@users.noreply.github.com>
Co-authored-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eero Tamminen <eero.t.tamminen@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
Co-authored-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: XinyaoWa <xinyao.wang@intel.com>
Co-authored-by: alexsin368 <109180236+alexsin368@users.noreply.github.com>
Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-02-28 22:40:31 +08:00
WenjiaoYue
f2a5644d9c fix click example button issue (#1586)
Signed-off-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-02-28 16:10:58 +08:00
alexsin368
6cd7827365 Top level README: add link to github.io documentation (#1584)
Signed-off-by: alexsin368 <alex.sin@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-28 13:43:43 +08:00
chen, suyue
3d8009aa91 Fix benchmark scripts (#1517)
- Align benchmark default config:
1. Update default helm charts version.
2. Add `# mandatory` comment.
3. Update default model ID for LLM.
- Fix deploy issues:
1. Support different `replicaCount` for with/without rerank tests.
2. Add `max_num_seqs` for vllm.
3. Add resource settings for tune mode.
- Fix benchmark issues:
1. Update `user_queries` and `concurrency` settings.
2. Remove invalid parameters.
3. Fix `dataset` and `prompt` settings, and ingest the dataset into the db.
4. Fix the benchmark hang with large user queries; setting `"processes": 16` fixes this.
5. Update the eval_path setting logic.
- Optimize benchmark readme.
- Optimize the log path to make the logs more readable.

Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
2025-02-28 10:30:54 +08:00
XinyaoWa
78f8ae524d Fix async in chatqna bug (#1589)
Align async with comps; related PR: opea-project/GenAIComps#1300

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-02-27 23:32:29 +08:00
Artem Astafev
6abf7652e8 Fix ChatQnA ROCm compose Readme file and absolute path for ROCM CI test (#1159)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
2025-02-27 15:26:45 +08:00
Spycsh
25c1aefc27 Align mongo related image names with comps (#1543)
- chathistory-mongo-server -> chathistory-mongo (except container names)
- feedbackmanagement -> feedbackmanagement-mongo
- promptregistry-server/promptregistry-mongo-server -> promptregistry-mongo (except container names)

Signed-off-by: Spycsh <sihan.chen@intel.com>
2025-02-27 09:25:49 +08:00
dependabot[bot]
d46df4331d Bump gradio from 5.5.0 to 5.11.0 in /DocSum/ui/gradio (#1576)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-02-25 14:32:03 +08:00
Eero Tamminen
23a77df302 Fix "OpenAI" & "response" spelling (#1561) 2025-02-25 12:45:21 +08:00
Ying Hu
852bc7027c Update README.md of AIPC quick start (#1578)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-23 17:38:27 +08:00
minmin-intel
a7eced4161 Update AgentQnA and DocIndexRetriever (#1564)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
2025-02-22 09:51:26 +08:00
ZePan110
caec354324 Fix trivy issue (#1569)
Fix docker image security issue

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-20 14:41:52 +08:00
xiguiw
d482554a6b Fix mismatched environment variable (#1575)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-02-19 19:24:10 +08:00
xiguiw
2ae6871fc5 Simplify ChatQnA AIPC user setting (#1573)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-02-19 16:30:02 +08:00
dependabot[bot]
2ac5be9921 Bump gradio from 5.5.0 to 5.11.0 in /MultimodalQnA/ui/gradio (#1391)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-02-19 15:58:46 +08:00
ZePan110
799881a3fa Remove perf test code from test scripts. (#1510)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-18 16:23:49 +08:00
jotpalch
e5c6418c81 Fix minor typo in README (#1559)
Change Docker Compost<br/>Deployment on ROCm to Docker Compose<br/>Deployment on ROCm
2025-02-17 12:07:31 +08:00
xiguiw
0c0edffc5b update vLLM CPU to the latest stable version (#1546)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-02-17 08:26:25 +08:00
Spycsh
9f36e84c1c Refactor AudioQnA README (#1508)
Signed-off-by: Spycsh <sihan.chen@intel.com>
2025-02-15 11:30:16 +08:00
chen, suyue
8c547c2ba5 Expand CI test scope for common test scripts (#1554)
Expand CI test scope: trigger all hardware tests when the common test scripts change.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-02-14 18:17:03 +08:00
Kendall González León
80dd86f122 Make a fix in the main README.md of the ChatQnA. (#1551)
Signed-off-by: Kendall González León <kendall.gonzalez.leon@intel.com>
2025-02-14 17:00:44 +08:00
ZePan110
6d781f7b2b Fix CICD workflow strategy running condition (#1533)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-13 16:10:00 +08:00
WenjiaoYue
abafd5de20 Update UI of the three demos: faqGen, VisualQnA, and DocSum. (#1528)
Signed-off-by: WenjiaoYue <wenjiao.yue@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-12 15:57:51 +08:00
Louie Tsai
970b869838 Add a new section to change LLM model such as deepseek based on validated model table in LLM microservice (#1501)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Wang, Kai Lawrence <109344418+wangkl2@users.noreply.github.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2025-02-12 09:34:56 +08:00
XinyaoWa
87ff149f61 Remove vllm hpu triton version fix (#1515)
vllm-fork has fixed the triton version issue, so the duplicated code is removed: https://github.com/HabanaAI/vllm-fork/blob/habana_main/requirements-hpu.txt

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-02-12 09:24:38 +08:00
chen, suyue
c39a569ab2 Update workflow condition and env (#1522)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-02-12 09:08:22 +08:00
chen, suyue
81b02bb947 Revert "HUGGINGFACEHUB_API_TOKEN environment is change to HF_TOKEN (#… (#1521)
Revert this PR since its test was not triggered properly due to the erroneous merge of a WIP CI PR, 44a689b0bf, which blocked the CI test.

This change will be submitted in another PR.
2025-02-11 18:36:12 +08:00
Louie Tsai
47069ac70c fix a test script issue due to name change for telemetry yaml files (#1516)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-02-11 17:58:42 +08:00
chen, suyue
6ce7730863 Update CI/CD workflow (#1520)
1. Update auto commit account.
2. Fix test condition.

Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-11 17:48:37 +08:00
Louie Tsai
ad5523bac7 Enable OpenTelemtry Tracing for ChatQnA on Xeon and Gaudi by docker compose merge feature (#1488)
Signed-off-by: Louie, Tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-02-10 22:58:50 -08:00
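
The "docker compose merge feature" here refers to passing multiple `-f` files, with later files overlaying earlier ones, so telemetry can be switched on without editing the base compose file. A hypothetical overlay (the service and variable names are assumptions):

```yaml
# compose.telemetry.yaml - merged via:
#   docker compose -f compose.yaml -f compose.telemetry.yaml up -d
services:
  chatqna-backend-server:
    environment:
      TELEMETRY_ENDPOINT: http://jaeger:4318/v1/traces   # assumed OTLP/HTTP endpoint variable
```
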
Louie Tsai
88a8235f21 Update README.md for Agent UI (#1495)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-02-10 22:22:55 -08:00
ZePan110
63ad850052 Update docker image list (#1513)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-11 13:18:22 +08:00
ZePan110
9a0c547112 Fix publish issue (#1514)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-11 11:43:00 +08:00
ZePan110
26a6da4123 Fix nightly triggered exceptions (#1505)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-10 16:51:34 +08:00
xiguiw
45d5da2ddd HUGGINGFACEHUB_API_TOKEN environment variable is changed to HF_TOKEN (#1503)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-02-09 20:33:06 +08:00
xiguiw
1b3291a1c8 Fix docker compose.yaml error (#1496)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-02-07 09:53:20 +08:00
ZePan110
7ac8cf517a Restore test code. (#1502)
Remove nightly test code.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-02-07 09:50:21 +08:00
ZePan110
44a689b0bf Fix null value_file judgment (#1470)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: Malini Bhandaru <malini.bhandaru@intel.com>
2025-02-06 17:09:01 +08:00
xiguiw
388d3eb5c5 [Doc] Clean empty document (#1497)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-06 10:53:25 +08:00
chyundunovDatamonsters
ef9ad61440 DBQnA - Adding files to deploy DBQnA application on AMD GPU (#1273)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Malini Bhandaru <malini.bhandaru@intel.com>
2025-02-06 09:41:59 +08:00
Louie Tsai
4c41a5db83 Update README.md for OPEA OTLP tracing (#1406)
Signed-off-by: louie-tsai <louie.tsai@intel.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-02-05 13:03:15 -08:00
Liang Lv
9adf7a6af0 Add support for latest deepseek models on Gaudi (#1491)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-02-05 08:30:04 +08:00
chen, suyue
a4d028e8ea update image release workflow (#1303)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: Malini Bhandaru <malini.bhandaru@intel.com>
2025-02-03 17:07:07 -08:00
Omar Khleif
32d4f714fd Fix for NLTK related import failure (#1487)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-02-01 10:04:37 +08:00
chyundunovDatamonsters
fdbc27a9b5 AvatarChatbot - Adding files to deploy AvatarChatbot application on AMD GPU (#1288)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-01-27 11:30:52 +08:00
XinyuYe-Intel
5f4b1828a5 Added UT for rerank finetuning on Gaudi (#1472)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-27 11:24:05 +08:00
chyundunovDatamonsters
39abef8be8 SearchQnA App - Adding files to deploy SearchQnA application on AMD GPU (#1193)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-01-27 10:58:55 +08:00
bjzhjing
ed163087ba Provide unified scalable deployment and benchmarking support for exam… (#1315)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-24 22:27:49 +08:00
chen, suyue
259099d19f Remove kubernetes manifest related code and tests (#1466)
Remove deprecated kubernetes manifest related code and tests.
The k8s implementation for those examples, based on helm charts, will target the next release.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-24 15:23:12 +08:00
chen, suyue
9a1118730b Freeze the triton version in vllm-gaudi image to 3.1.0 (#1463)
The new triton version 3.2.0 can't work with vllm-gaudi. Freeze the triton version in vllm-gaudi image to 3.1.0.

Issue created for vllm-fork: HabanaAI/vllm-fork#732
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-24 09:50:59 +08:00
chen, suyue
ffce7068aa Fix image on push action due to manifest test removal (#1460)
1. Fix image on push action due to manifest test removal.
2. Fix the get-test-matrix step of the helm test CD workflow.
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-23 14:30:09 +08:00
dolpher
9b0f98be8b Update ChatQnA helm chart README. (#1459)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-23 10:54:39 +08:00
XinyuYe-Intel
f0fea7b706 Add docker compose yaml for text2image example (#1418)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-23 09:57:54 +08:00
Melanie Hart Buehler
1864fac978 Fixes MultimodalQnA dataprep endpoint and port in the UI (#1457)
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-22 17:11:09 -08:00
Lianhao Lu
94f71f2322 Update top level readme (#1458)
Add helm support of SearchQnA and Text2Image to the top level readme.

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2025-01-23 09:07:33 +08:00
chen, suyue
6600c32a9b remove image build condition (#1456)
The compose test CD workflow depends on image build, so to run both compose and helm chart deployments in the CD workflow, this condition has to be removed.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-23 00:17:04 +08:00
Liang Lv
d953332f43 Fix multimodal docker image issue for MutimodalQnA on Gaudi (#1455)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-23 00:12:06 +08:00
chyundunovDatamonsters
cbe5805f47 AgentQnA - add README file for deploy on ROCm (#1379)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-01-22 21:57:15 +08:00
Ervin Castelino
27fdbcab58 [chore/chatqna] Missing protocol in curl command (#1447)
This PR fixes the missing protocol in the curl command mentioned in the chatqna readme for tei-embedding-service.
2025-01-22 21:41:47 +08:00
lkk
f07cf1dad2 Fix wrong vllm repo. (#1454)
Use vllm-fork for Gaudi.

Fixes issue #1451
2025-01-22 21:22:56 +08:00
dolpher
ee0e5cc8d9 Sync value files from GenAIInfra (#1428)
All gaudi values updated with extra flags.
Added helm support for two new examples, Text2Image and SearchQnA. Minor fix for llm-uservice.

Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-22 17:44:11 +08:00
chen, suyue
5c36443b11 Use local hub cache for AgentQnA test (#1450)
Use local hub cache for AgentQnA test to save workspace.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-22 13:23:00 +08:00
Lianhao Lu
62cea74a23 CI: improve helm CI (#1452)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2025-01-22 09:18:35 +08:00
WenjiaoYue
b721c256f9 Fix Domain Access Issue in Latest Vite Version (#1444)
Fix the restriction on accessing the UI via domain names when using the latest version of Vite.

With the new version of Vite, the UI cannot be accessed via domain names due to Vite's new rules. This fix adds the corresponding parameters per those rules, ensuring that users can reach the frontend via domain names when building the UI.

Fixes #1441

Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-01-21 23:28:37 +08:00
chen, suyue
927698e23e Simplify git clone code in CI test (#1434)
1. Simplify git clone code in CI test.
2. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-21 23:00:08 +08:00
ZePan110
c3e84b5ffa Fix test matrix for helm charts (#1449)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 22:28:31 +08:00
ZePan110
6b2a041f25 Fix Helm-chart workflow issues. (#1448)
Fix matrix errors and the issue where CD test files could not be obtained.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 21:48:57 +08:00
ZePan110
842f46326b Switch helm-chart test runs-on label. (#1446)
Switch helm-chart test runs-on label from ${{ inputs.hardware }} to k8s-${{ inputs.hardware }}.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 18:07:03 +08:00
Wang, Kai Lawrence
284db982be [ROCm] Fix the hf-token setting for TGI and TEI in ChatQnA (#1432)
This PR corrects the env variable names passed to the TGI and TEI docker containers in the chatqna example on the ROCm platform. TGI parses either HF_TOKEN or HUGGING_FACE_HUB_TOKEN, while TEI parses HF_API_TOKEN.

TGI: https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs#L1700C1-L1702C15
TEI: https://github.com/huggingface/text-embeddings-inference/blob/main/router/src/main.rs#L112

Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-21 14:22:39 +08:00
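
In compose terms, the fix above boils down to passing each container the variable name it actually reads. A sketch of that wiring (the service names are illustrative):

```yaml
services:
  tgi-service:
    environment:
      HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}   # TGI parses HF_TOKEN or HUGGING_FACE_HUB_TOKEN
  tei-embedding-service:
    environment:
      HF_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}             # TEI parses HF_API_TOKEN
```
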
ZePan110
fc96fe83e2 Fix CD workflow issue (#1443)
Fix the issue of CD workflow values_files errors.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 11:54:12 +08:00
Hoong Tee, Yeoh
0316114c4b ProductivitySuite: Fix FaqGen Microservice CI test fail (#1437)
A change in the FaqGen microservice's content-type header resulted in a CI failure.

#1431
Signed-off-by: Yeoh, Hoong Tee <hoong.tee.yeoh@intel.com>
2025-01-21 10:23:35 +08:00
chen, suyue
0408453fa2 Unify the yaml name to fix the CD workflow (#1435)
Fix the issue in #1372

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-21 01:10:41 +08:00
XinyaoWa
d0cd0aaf53 Update GraphRAG to be compatible with latest component changes (#1427)
- Updated ENV VARS to align with recent changes in neo4j dataprep and retriever.
- Upgraded tgi-gaudi image version.
Related to GenAIComps repo issue #1025 (opea-project/GenAIComps#1025)

Original PR #1384
Original contributor is @rbrugaro

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-01-21 00:18:01 +08:00
chen, suyue
0ba3decb6b Simplify git clone code in CI test (#1422)
1. Simplify git clone code in CI test. 
2. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 23:55:20 +08:00
Wang, Kai Lawrence
3d3ac59bfb [ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu (#1430)
Update the default LLM to llama3-8B on cpu/nvgpu/amdgpu/gaudi for docker-compose deployment, avoiding the potential model-serving and missing-chat-template issues seen with neural-chat-7b.

Slow serving issue of neural-chat-7b on ICX: #1420
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-20 22:47:56 +08:00
Melanie Hart Buehler
f11ab458d8 MultimodalQnA image query, pdf, dynamic ports, and UI updates (#1381)
Per the proposed changes in this [RFC](https://github.com/opea-project/docs/blob/main/community/rfcs/24-10-02-GenAIExamples-001-Image_and_Audio_Support_in_MultimodalQnA.md)'s Phase 2 plan, this PR adds support for image queries, PDF ingestion and display, and dynamic ports. There are also some bug fixes. This PR goes with [this one in GenAIComps](https://github.com/opea-project/GenAIComps/pull/1134).

Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-01-20 22:41:52 +08:00
ZePan110
f3562bef36 Add helm e2e test workflow (#1372)
Add both CICD workflow for helm charts values test. 

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-20 21:04:11 +08:00
chen, suyue
7a54064d65 remove Dockerfile.wrapper (#1429)
Remove Dockerfile.wrapper; it is no longer used and no test covers it, so removing it avoids regressions.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 20:49:18 +08:00
Liang Lv
0f7e5a37ac Adapt code for dataprep microservice refactor (#1408)
https://github.com/opea-project/GenAIComps/pull/1153

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-20 20:37:03 +08:00
xiguiw
2d5898244c Enhance health check in GenAIExample docker-compose (#1410)
Fix service launch issue

1. Update Gaudi TGI image from 2.0.6 to 2.3.1
2. Change the hpu-gaudi TGI health check condition.

Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-01-20 20:13:13 +08:00
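A hedged sketch of the kind of compose health check this commit describes (port and intervals are assumptions; TGI exposes a /health route):

    tgi-gaudi-service:
      image: ghcr.io/huggingface/tgi-gaudi:2.3.1
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
        interval: 10s
        timeout: 10s
        retries: 100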
Neo Zhang Jianyu
59722d2bc9 [Bug] Enhance the template (#1396)
Enhance the bug & feature template according to the issue #1002.
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2025-01-20 17:56:14 +08:00
chen, suyue
6bfd156573 Clean up test scripts and enhance git clone (#1417)
1. Clean up test code in scripts.
2. Simplify git clone code.
3. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 16:34:28 +08:00
XinyuYe-Intel
528770a8d7 Add UT for Text2Image on Gaudi (#1424)
Add UT for Text2Image on Gaudi.

#1421
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-20 16:01:35 +08:00
chen, suyue
239995da16 Update DocIndexRetriever CI test scripts (#1416)
1. Add image build condition.
2. Update single branch clone.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 11:16:38 +08:00
chen, suyue
f65e8d8668 Add port 5000 checking and warning (#1414)
Port 5000 is used by the local docker registry; please DO NOT use it in docker compose deployments!

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 09:09:31 +08:00
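A minimal sketch of such a pre-flight check as a workflow step (step name and wording are illustrative):

    - name: Warn if port 5000 is in use
      run: |
        # port 5000 is reserved for the local docker registry on the CI runners
        if ss -ltn | grep -q ':5000 '; then
          echo "::warning::Port 5000 is already bound (local docker registry); do not use it in docker compose."
        fi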
chen, suyue
a49a36cebc Add secrets OPENAI_API_KEY (#1412)
Add secrets OPENAI_API_KEY for AMD GPU CI test. 

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-19 19:39:45 +08:00
Wang, Kai Lawrence
742cb6ddd3 [ChatQnA] Switch to vLLM as default llm backend on Xeon (#1403)
Switch from TGI to vLLM as the default LLM serving backend on Xeon for the ChatQnA example to improve performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:48:19 +08:00
Wang, Kai Lawrence
00e9da9ced [ChatQnA] Switch to vLLM as default llm backend on Gaudi (#1404)
Switch from TGI to vLLM as the default LLM serving backend on Gaudi for the ChatQnA example to improve performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:46:38 +08:00
chyundunovDatamonsters
277222a922 General README.md - add deploy on AMD info (#1409)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-17 20:26:59 +08:00
lkk
5c68effc9f update agent example for the GenAIComps changes. (#1407)
Update build.yaml and compose_vllm.yaml for the GenAIComps refactoring.

Fixes an issue left by https://github.com/opea-project/GenAIExamples/pull/1353
2025-01-17 11:29:11 +08:00
XinyaoWa
39409d7f61 Align OpenAI API for FaqGen, DocSum (#1401)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-17 11:19:35 +08:00
XinyaoWa
71e3c57366 Standardize name for LLM comps (#1402)
Update all the names for classes and files in llm comps to follow the standard format, related GenAIComps PR opea-project/GenAIComps#1162

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 23:10:27 +08:00
Letong Han
5ad24af2ee Fix Vectorstores Path Issue of Refactor (#1399)
Fix vectorstores path issue caused by refactor in PR opea-project/GenAIComps#1159.
Modify docker image name and file path in docker_images_list.md.

Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 19:50:59 +08:00
WenjiaoYue
3a9a24a51a Agent ui (#1389)
Signed-off-by: WenjiaoYue <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>
Co-authored-by: WenjiaoYue <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>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-16 18:47:46 +08:00
XinyaoWa
301b5e9a69 Fix vllm hpu to a stable release (#1398)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 16:35:32 +08:00
Yao Qing
b4269d6c4f Modify the corresponding path based on the refactor of chathistory in GenAIComps. (#1397)
GenAIComps has refactored chathistory based on the E-RAG code structure. The related paths in GenAIExamples have been modified.

Fix GenAIComps Issue https://github.com/opea-project/GenAIComps/issues/989 
Signed-off-by: Yao, Qing <qing.yao@intel.com>
2025-01-16 14:26:17 +08:00
Letong Han
4cabd55778 Refactor Retrievers related Examples (#1387)
Delete redundant retrievers docker image in docker_images_list.md.
Refactor Retrievers related Examples READMEs.
Change all comps/retrievers/xxx/xxx/Dockerfile paths to comps/retrievers/src/Dockerfile.

Fix the Examples CI issues of PR opea-project/GenAIComps#1138.
Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 14:21:48 +08:00
xiguiw
698a06edbf [DOC] Fix document issue (#1395)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-01-16 11:30:07 +08:00
Eero Tamminen
0eae391fda Use staged builds to minimize final image sizes (#1031)
Staged image builds so that final images do not have redundant things like:
- Git tool and its deps
- Git repo history
- Test directories

Fixes: #225
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-01-16 11:14:47 +08:00
XinyaoWa
23d885bf60 Refactor vllm openvino to third parties (#1388)
vllm-openvino is a dependency of the text generation comps. GenAIComps PR opea-project/GenAIComps#1141 moved it to the third-parties folder, so the paths are updated accordingly.

#998 
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 10:07:56 +08:00
minmin-intel
287f03a834 Add SQL agent to AgentQnA (#1370)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-01-15 09:31:13 -08:00
ZePan110
a65a1e5598 Fix CI filter issue (#1393)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-15 11:39:51 +08:00
Neo Zhang Jianyu
9812c2fb45 Update check-online-doc-build.yml (#1390) 2025-01-15 09:07:02 +08:00
XinyaoWa
7d218b9f36 Remove vllm hpu commit id limit (#1386)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-14 11:05:32 +08:00
Zhu Yongbo
ba9892f8ee minor bug fix for EC-RAG (#1378)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
2025-01-14 10:45:15 +08:00
XinyaoWa
ff1310b11a Refactor docsum (#1336)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-13 15:49:48 +08:00
Sihan Chen
ca15fe9bdb Refactor lvm related examples (#1333) 2025-01-13 13:42:06 +08:00
XinyaoWa
f48bd8e74f Refactor Faqgen (#1323)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-13 13:01:04 +08:00
Ying Hu
91ff520baa Update README.md for add K8S cluster link for Gaudi (#1380) 2025-01-13 09:33:58 +08:00
Liang Lv
3ca78867eb Update example code for embedding dependency moving to 3rd_party (#1368)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-10 15:36:58 +08:00
Yao Qing
7a3dfa90ca Fix for animation dockerfile path. (#1371)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-01-10 11:44:57 +08:00
dolpher
c795ef2203 Add helm deployment instructions for GenAIExamples (#1373)
Add helm deployment instructions for ChatQnA, AgentQnA, AudioQnA, CodeTrans, DocSum, FaqGen and VisualQnA

Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-10 09:55:31 +08:00
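A hedged sketch of the helm deployment flow those instructions cover (release name and values are illustrative; the OCI chart location matches the one used by the helm test workflow later in this compare):

    - name: Deploy ChatQnA via helm (illustrative)
      run: |
        helm install chatqna oci://ghcr.io/opea-project/charts/chatqna \
          --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN}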
chen, suyue
99120f4cd2 Update action token for CI (#1374)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-09 17:19:07 +08:00
XinyuYe-Intel
9fe480b010 Update dockerfile path for text2image (#1307)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-09 12:03:27 +08:00
XinyuYe-Intel
113281d073 Update path for finetuning (#1306)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-09 12:01:59 +08:00
Liang Lv
370d6928c1 Update example code for prompt registry refactor (#1362)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-09 11:59:32 +08:00
Liang Lv
2b26450bb9 Update docker file path for feedback management refactor (#1364)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-09 11:21:25 +08:00
Louie Tsai
81022355a7 Enable OpenTelemetry Tracing for ChatQnA TGI serving on Gaudi (#1316)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-01-08 17:20:13 -08:00
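A hedged sketch of how TGI tracing is typically switched on in compose (--otlp-endpoint is TGI's own launcher flag; the collector address is an assumption):

    tgi-gaudi-service:
      command: --model-id ${LLM_MODEL_ID} --otlp-endpoint http://jaeger:4317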
Jaswanth Karani
ddacb7e86d fixed build issue (#1367) 2025-01-08 22:19:23 +08:00
Sihan Chen
5128c2d650 Refactor web retrievers links (#1338) 2025-01-08 16:19:50 +08:00
Liang Lv
b3c405a5f6 Adapt example code for guardrails refactor (#1360)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-08 14:35:23 +08:00
dolpher
5638075d65 Add helm deployment instructions for codegen (#1351)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-08 13:20:32 +08:00
chen, suyue
23117871c2 remove chatqna-conversation-ui build in CI test (#1361)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-08 12:09:33 +08:00
WenjiaoYue
9970605460 Adapt refactor comps (#1340)
Signed-off-by: WenjiaoYue
2025-01-08 10:36:24 +08:00
dolpher
28206311fd Disable GMC CI temporarily (#1359)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-08 09:55:53 +08:00
ZePan110
589bfb2b7a Change license template from 2024 to 2025 (#1358)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-07 19:29:55 +08:00
Pranav Singh
d2b49bbc82 [ChatQNA] Fix K8s Deployment for CPU/HPU (#1274)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 13:45:09 +08:00
Ying Hu
41374d865b Update README.md for support matrix (#983)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2025-01-07 11:45:42 +08:00
pre-commit-ci[bot]
2c624e1f5f [pre-commit.ci] pre-commit autoupdate (#1356)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 11:13:07 +08:00
Ying Hu
00241d01d2 Update README.md for quick start guide (#1355)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 10:08:16 +08:00
ZePan110
ed2b8ed983 Exclude Dockerfiles under tests and skip Dockerfile checks under tests. (#1354)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 09:05:01 +08:00
lkk
a6e702e4d5 refine agent directories. (#1353) 2025-01-06 17:40:24 +08:00
ZePan110
aa5c91d7ee Check duplicated dockerfile (#1289)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-06 17:30:12 +08:00
chen, suyue
b88d09e23f Fix code owner list (#1352)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-06 14:00:13 +08:00
XinyaoWa
464e2d3125 Rename streaming to stream to align with OpenAI API (#1332)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-06 13:25:55 +08:00
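A hedged sketch of the request shape after this rename (the endpoint and port follow the usual ChatQnA compose defaults, which is an assumption here):

    - name: Query megaservice with streaming enabled
      run: |
        # the field is now "stream", matching the OpenAI API, instead of "streaming"
        curl http://localhost:8888/v1/chatqna \
          -H 'Content-Type: application/json' \
          -d '{"messages": "What is OPEA?", "stream": true}'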
chen, suyue
1f29eca288 fix chatqna benchmark without rerank config issue (#1341)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-06 09:16:20 +08:00
chen, suyue
1d7ac82979 Fix changed-file detection issue (#1339)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-03 11:24:02 +08:00
chen, suyue
5c7a5bd850 Update Code and README for GenAIComps Refactor (#1285)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
Signed-off-by: WenjiaoYue <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>
2025-01-02 20:03:26 +08:00
Yao Qing
72f8079289 Refactor text2sql. (#1304)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-02 10:52:21 +08:00
Zhu Yongbo
6169ea4921 add new feature and bug fix for EC-RAG (#1324)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-02 09:25:20 +08:00
chyundunovDatamonsters
75b0961a48 Translation App - Adding files to deploy Translation application on AMD GPU (#1191)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2025-01-02 09:19:44 +08:00
Sihan Chen
cc1d97f816 Refactor AudioQnA/MultiModalQnA/AvatarChatbot (#1310)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chensuyue <suyue.chen@intel.com>
2024-12-31 12:47:30 +08:00
xiguiw
250ffb8b66 [DOC] Fix docker build command in document (#1287)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2024-12-31 00:02:22 +08:00
ZePan110
1e9d111982 Block the manifest test for now; restore it after the refactor work is completed. (#1321)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-12-30 16:24:19 +08:00
Ying Hu
597f17b979 Update set_env.sh to fix LOGFLAG warning (#1319) 2024-12-30 10:54:26 +08:00
Yao Qing
b9790d809b Refactoring animation. (#1301)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-27 17:21:47 +08:00
Daniel De León
b27b48c488 Add microservice resources to no_proxy in the main ChatQnA README (#1269)
Signed-off-by: Daniel Deleon <daniel.de.leon@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-12-27 16:14:28 +08:00
Dina Suehiro Jones
0bf1d0be65 Bug fix to add missing BRIDGE_TOWER_EMBEDDING env var for MultimodalQnA (#1280)
Signed-off-by: dmsuehir <dina.s.jones@intel.com>
2024-12-26 23:30:57 -08:00
Sihan Chen
a01729a5c2 Refactor DocSum example (#1286)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-26 14:45:17 +08:00
chen, suyue
6b6a08df78 Add minimal container and port clean-up before tests (#1291)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-26 10:59:26 +08:00
chen, suyue
0b23cba505 Add a manual container clean-up action (#1296)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-26 09:26:53 +08:00
XinyaoWa
50dd959d60 Support Long context for DocSum (#1255)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <33276950+lkk12014402@users.noreply.github.com>
2024-12-20 19:17:10 +08:00
XinyaoWa
05365b6140 FaqGen param fix (#1277)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2024-12-20 11:30:36 +08:00
Sihan Chen
fd706d1a70 Align DocIndexRetriever Xeon tests with Gaudi (#1272) 2024-12-20 10:30:51 +08:00
Sihan Chen
3b9e55cb8e Minor fix DocIndexRetriever test (#1266) 2024-12-19 12:12:33 +08:00
bjzhjing
7d9b34cf5e Chatqna/benchmark: Remove the deprecated directory (#1261)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-12-19 10:51:01 +08:00
Mustafa
84a6a6e9bc Adding URL summary option to DocSum Gradio-UI (#1248)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <33276950+lkk12014402@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2024-12-19 10:49:03 +08:00
chen, suyue
89a7f9e001 Update CODEOWNERS list for PR review (#1262)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2024-12-19 10:01:52 +08:00
Artem Astafev
236ea6bcce Added compose example for MultimodalQnA deployment on AMD ROCm systems (#1233)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2024-12-18 17:43:32 +08:00
chyundunovDatamonsters
67634dfd22 DocSum - Solving the problem of running DocSum on ROCm (#1268)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2024-12-18 17:38:38 +08:00
Artem Astafev
df7c192835 Added docker compose example for AgentQnA deployment on AMD ROCm (#1166)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2024-12-18 10:21:00 +08:00
Letong Han
f930638844 Update Multimodal Docker File Path (#1252)
Signed-off-by: letonghan <letong.han@intel.com>
2024-12-17 17:30:29 +08:00
Sun, Xuehao
5613add4dd Change to pull_request_target for dependency review workflow (#1256)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2024-12-17 12:05:02 +08:00
lkk
e18369ba0d remove examples gateway. (#1250)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-14 13:19:51 +08:00
lkk
2af1ea0f8e remove examples gateway. (#1243)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-13 15:16:11 +08:00
Melanie Hart Buehler
c760cac2f4 Adds audio querying to MultimodalQ&A Example (#1225)
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Signed-off-by: dmsuehir <dina.s.jones@intel.com>
Co-authored-by: Omar Khleif <omar.khleif@intel.com>
Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-12-12 16:05:14 +08:00
Li Gang
a50e4e6f9f [DocIndexRetriever] enable the without-rerank flavor (#1223)
Signed-off-by: Li Gang <gang.g.li@intel.com>
Co-authored-by: ligang <ligang@ligang-nuc9v.bj.intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-12 09:34:21 +08:00
Omar Khleif
00b526c8e5 Changed Default UI to Gradio (#1246)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
2024-12-11 11:04:10 -08:00
Wang, Kai Lawrence
4c01e14642 [ChatQnA] Remove enforce-eager to enable HPU graphs for better vLLM perf (#1210)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2024-12-10 13:19:15 +08:00
Lianhao Lu
6f9f6f0bad Remove deprecated docker compose files (#1238)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2024-12-10 09:43:19 +08:00
Pranav Singh
893f324d07 [ChatQNA] Fixes Embedding Endpoint (#1230)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
2024-12-09 10:12:16 +08:00
Artem Astafev
77e640e2f3 Added compose example for VisualQnA deployment on AMD ROCm systems (#1201)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-07 18:58:40 +08:00
Mustafa
07e47a1f38 Update tests for issue 1229 (#1231)
Signed-off-by: Mustafa <mustafa.cetin@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-07 09:07:52 +08:00
lkk
bde285dfce move examples gateway (#992)
Co-authored-by: root <root@idc708073.jf.intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
2024-12-06 14:40:25 +08:00
WenjiaoYue
f5c08d4fbb Update audioQnA compose (#1227)
Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
2024-12-05 16:23:47 +08:00
pallavijaini0525
3a371ac102 Updated the Pinecone readme to reflect the new structure (#1222)
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-05 10:04:09 +08:00
sgurunat
031cf6e1ff ChatQnA: Update kubernetes xeon chatqna remote inference and svelte UI (#1215)
Signed-off-by: sgurunat <gurunath.s@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-04 22:40:03 +08:00
sgurunat
3299e5c9f5 ChatQnA: Update chatqna-vllm-remote-inference (#1224)
Signed-off-by: sgurunat <gurunath.s@intel.com>
2024-12-04 22:33:27 +08:00
ZePan110
340796bbae Split ChatQnA manifest test (#1190)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-04 15:17:46 +08:00
Lianhao Lu
8182a83382 CI: Add check for conflict image build definition (#1184)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2024-12-03 10:46:16 +08:00
WenjiaoYue
8192c3166f Update OPEA example package.json version (#1211)
Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-02 21:33:30 +08:00
chen, suyue
240054ac52 CD workflow update (#1221)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-02 17:42:02 +08:00
Neo Zhang Jianyu
c9caf1c083 fix file name (#1219)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2024-12-02 14:29:20 +08:00
Neo Zhang Jianyu
a426a9a51d Add label automatically when an issue is created (#1217)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2024-12-02 13:41:22 +08:00
Zhu Yongbo
bb466b3791 EdgeCraft RAG UI bug fix (#1189)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-02 11:47:04 +08:00
chen, suyue
0f8344e4f5 Update test params (#1182)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-29 15:47:15 +08:00
ZePan110
ed8dbaac47 Revert "WA for the issue of vllm Dockerfile.cpu build failure (#1195)" (#1206) 2024-11-28 13:36:14 +08:00
ZePan110
e8cffc6146 Check image and service names and Dockerfile in build.yaml (#1209)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-28 13:14:11 +08:00
Sihan Chen
907b30b7fe Refactor service names (#1199) 2024-11-28 10:01:31 +08:00
Letong Han
545aa571bf [ChatQnA] Update Benchmark E2E Parameters (#1200)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-27 17:11:11 +08:00
ZePan110
5422bcb970 WA for the issue of vllm Dockerfile.cpu build failure (#1195)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-11-27 14:51:19 +08:00
VincyZhang
736155ca95 Detect dangerous command (#1179)
Signed-off-by: Wenxin Zhang <wenxin.zhang@intel.com>
2024-11-27 11:43:56 +08:00
ZePan110
39fa25e03a Limit the version of vllm to avoid docker build failures. (#1183)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-11-25 10:33:33 +08:00
Wang, Kai Lawrence
ac470421d0 Update the llm backend ports (#1172)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2024-11-22 09:20:09 +08:00
Mingyuan Qi
edcd7c9d6a Fix code scanning alert no. 21: Uncontrolled data used in path expression (#1171)
Signed-off-by: Mingyuan Qi <mingyuan.qi@intel.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2024-11-21 20:36:28 +08:00
bjzhjing
ef2047b070 Adjustments for helm release change (#1173)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-11-21 14:14:27 +08:00
Letong Han
94231584aa Fix Translation Manifest CI with MODEL_ID (#1169)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-21 10:48:52 +08:00
minmin-intel
c5177c5e2f Fix DocIndexRetriever CI error on Xeon (#1167)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-21 09:30:11 +08:00
Artem Astafev
006c61bcbb Add example for AudioQnA deploy in AMD ROCm (#1147)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2024-11-20 20:46:27 +08:00
1186 changed files with 90000 additions and 49368 deletions

.github/CODEOWNERS vendored (37 lines changed)

@@ -1,17 +1,26 @@
/AgentQnA/ kaokao.lv@intel.com
/AudioQnA/ sihan.chen@intel.com
/ChatQnA/ liang1.lv@intel.com
# Code owners will review PRs within their respective folders.
* liang1.lv@intel.com feng.tian@intel.com suyue.chen@intel.com kaokao.lv@intel.com minmin.hou@intel.com rita.brugarolas.brufau@intel.com
/.github/ suyue.chen@intel.com ze.pan@intel.com
/AgentQnA/ abolfazl.shahbazi@intel.com kaokao.lv@intel.com minmin.hou@intel.com
/AudioQnA/ sihan.chen@intel.com wenjiao.yue@intel.com
/AvatarChatbot/ chun.tao@intel.com kaokao.lv@intel.com
/ChatQnA/ liang1.lv@intel.com letong.han@intel.com
/CodeGen/ liang1.lv@intel.com
/CodeTrans/ sihan.chen@intel.com
/DBQnA/ supriya.krishnamurthi@intel.com liang1.lv@intel.com
/DocIndexRetriever/ abolfazl.shahbazi@intel.com kaokao.lv@intel.com chendi.xue@intel.com
/DocSum/ letong.han@intel.com
/DocIndexRetriever/ kaokao.lv@intel.com chendi.xue@intel.com
/InstructionTuning xinyu.ye@intel.com
/RerankFinetuning xinyu.ye@intel.com
/MultimodalQnA tiep.le@intel.com
/FaqGen/ xinyao.wang@intel.com
/SearchQnA/ sihan.chen@intel.com
/Translation/ liang1.lv@intel.com
/VisualQnA/ liang1.lv@intel.com
/ProductivitySuite/ hoong.tee.yeoh@intel.com
/VideoQnA huiling.bao@intel.com
/*/ liang1.lv@intel.com
/EdgeCraftRAG/ yongbo.zhu@intel.com mingyuan.qi@intel.com
/FinanceAgent/ abolfazl.shahbazi@intel.com kaokao.lv@intel.com minmin.hou@intel.com rita.brugarolas.brufau@intel.com
/GraphRAG/ rita.brugarolas.brufau@intel.com abolfazl.shahbazi@intel.com
/InstructionTuning/ xinyu.ye@intel.com kaokao.lv@intel.com
/MultimodalQnA/ melanie.h.buehler@intel.com tiep.le@intel.com
/ProductivitySuite/ jaswanth.karani@intel.com hoong.tee.yeoh@intel.com
/RerankFinetuning/ xinyu.ye@intel.com kaokao.lv@intel.com
/SearchQnA/ sihan.chen@intel.com letong.han@intel.com
/Text2Image/ wenjiao.yue@intel.com xinyu.ye@intel.com
/Translation/ liang1.lv@intel.com sihan.chen@intel.com
/VideoQnA/ huiling.bao@intel.com
/VisualQnA/ liang1.lv@intel.com sihan.chen@intel.com
/WorkflowExecAgent/ joshua.jian.ern.liew@intel.com kaokao.lv@intel.com


@@ -4,6 +4,7 @@
name: Report Bug
description: Used to report bug
title: "[Bug]"
labels: ["bug"]
body:
- type: dropdown
id: priority
@@ -31,6 +32,7 @@ body:
- Mac
- BSD
- Other (Please let us know in description)
- N/A
validations:
required: true
@@ -55,6 +57,7 @@ body:
- GPU-Nvidia
- GPU-AMD
- GPU-other (Please let us know in description)
- N/A
validations:
required: true
@@ -65,6 +68,8 @@ body:
options:
- label: Pull docker images from hub.docker.com
- label: Build docker images from source
- label: Other
- label: N/A
validations:
required: true
@@ -73,10 +78,12 @@ body:
attributes:
label: Deploy method
options:
- label: Docker compose
- label: Docker
- label: Kubernetes
- label: Helm
- label: Docker Compose
- label: Kubernetes Helm Charts
- label: Kubernetes GMC
- label: Other
- label: N/A
validations:
required: true
@@ -87,6 +94,8 @@ body:
options:
- Single Node
- Multiple Nodes
- Other
- N/A
default: 0
validations:
required: true
@@ -126,3 +135,12 @@ body:
render: shell
validations:
required: false
- type: textarea
id: attachments
attributes:
label: Attachments
description: Attach any relevant files or screenshots.
validations:
required: false


@@ -4,6 +4,7 @@
name: Report Feature
description: Used to report feature
title: "[Feature]"
labels: ["feature"]
body:
- type: dropdown
id: priority
@@ -31,6 +32,7 @@ body:
- Mac
- BSD
- Other (Please let us know in description)
- N/A
validations:
required: true
@@ -55,6 +57,7 @@ body:
- GPU-Nvidia
- GPU-AMD
- GPU-other (Please let us know in description)
- N/A
validations:
required: true
@@ -65,6 +68,8 @@ body:
options:
- Single Node
- Multiple Nodes
- Other
- N/A
default: 0
validations:
required: true


@@ -1,2 +1,3 @@
ModelIn
modelin
pressEnter

.github/env/_build_image.sh vendored (new file, 5 lines)

@@ -0,0 +1,5 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
export VLLM_VER=v0.9.0
export VLLM_FORK_VER=v0.6.6.post1+Gaudi-1.20.0


@@ -1,2 +1,2 @@
Copyright (C) 2024 Intel Corporation
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0


@@ -0,0 +1,65 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Build Comps Base Image
permissions: read-all
on:
workflow_call:
inputs:
node:
required: true
type: string
build:
default: true
required: false
type: boolean
tag:
default: "latest"
required: false
type: string
opea_branch:
default: "main"
required: false
type: string
inject_commit:
default: false
required: false
type: boolean
jobs:
pre-build-image-check:
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.check-skip.outputs.should_skip }}
steps:
- name: Check if job should be skipped
id: check-skip
run: |
should_skip=true
if [[ "${{ inputs.node }}" == "gaudi" || "${{ inputs.node }}" == "xeon" ]]; then
should_skip=false
fi
echo "should_skip=$should_skip"
echo "should_skip=$should_skip" >> $GITHUB_OUTPUT
build-images:
needs: [ pre-build-image-check ]
if: ${{ needs.pre-build-image-check.outputs.should_skip == 'false' && fromJSON(inputs.build) }}
runs-on: "docker-build-${{ inputs.node }}"
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Clone Required Repo
run: |
git clone --depth 1 --branch ${{ inputs.opea_branch }} https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git rev-parse HEAD && cd ../ && ls -l
- name: Build Image
uses: opea-project/validation/actions/image-build@main
with:
work_dir: ${{ github.workspace }}/GenAIComps
docker_compose_path: ${{ github.workspace }}/GenAIComps/.github/workflows/docker/compose/base-compose.yaml
registry: ${OPEA_IMAGE_REPO}opea
inject_commit: ${{ inputs.inject_commit }}
tag: ${{ inputs.tag }}
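A hedged sketch of a caller for this reusable workflow, using only the inputs it declares above (the workflow file name is an assumption, since the dump dropped it):

    build-comps-base:
      uses: ./.github/workflows/_build_comps_base_image.yml  # assumed path
      with:
        node: gaudi
        tag: latest
        inject_commit: true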

.github/workflows/_build_image.yml vendored (new file, 96 lines)

@@ -0,0 +1,96 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Build Images
permissions: read-all
on:
workflow_call:
inputs:
node:
required: true
type: string
build:
default: true
required: false
type: boolean
example:
required: true
type: string
services:
default: ""
required: false
type: string
tag:
default: "latest"
required: false
type: string
opea_branch:
default: "main"
required: false
type: string
inject_commit:
default: false
required: false
type: boolean
jobs:
pre-build-image-check:
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.check-skip.outputs.should_skip }}
steps:
- name: Check if job should be skipped
id: check-skip
run: |
should_skip=true
if [[ "${{ inputs.node }}" == "gaudi" || "${{ inputs.node }}" == "xeon" ]]; then
should_skip=false
fi
echo "should_skip=$should_skip"
echo "should_skip=$should_skip" >> $GITHUB_OUTPUT
build-images:
needs: [ pre-build-image-check ]
if: ${{ needs.pre-build-image-check.outputs.should_skip == 'false' && fromJSON(inputs.build) }}
runs-on: "docker-build-${{ inputs.node }}"
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Get Checkout Ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
echo "CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge" >> $GITHUB_ENV
else
echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV
fi
- name: Checkout out GenAIExamples
uses: actions/checkout@v4
with:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Clone Required Repo
run: |
cd ${{ github.workspace }}/${{ inputs.example }}/docker_image_build
docker_compose_path=${{ github.workspace }}/${{ inputs.example }}/docker_image_build/build.yaml
source ${{ github.workspace }}/.github/env/_build_image.sh
if [[ $(grep -c "vllm:" ${docker_compose_path}) != 0 ]]; then
git clone -b ${VLLM_VER} --single-branch https://github.com/vllm-project/vllm.git
fi
if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
git clone -b ${VLLM_FORK_VER} --single-branch https://github.com/HabanaAI/vllm-fork.git
fi
git clone --depth 1 --branch ${{ inputs.opea_branch }} https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git rev-parse HEAD && cd ../
- name: Build Image
uses: opea-project/validation/actions/image-build@main
with:
work_dir: ${{ github.workspace }}/${{ inputs.example }}/docker_image_build
docker_compose_path: ${{ github.workspace }}/${{ inputs.example }}/docker_image_build/build.yaml
service_list: ${{ inputs.services }}
registry: ${OPEA_IMAGE_REPO}opea
inject_commit: ${{ inputs.inject_commit }}
tag: ${{ inputs.tag }}


@@ -28,7 +28,7 @@ on:
default: false
required: false
type: boolean
test_k8s:
test_helmchart:
default: false
required: false
type: boolean
@@ -43,83 +43,54 @@ on:
inject_commit:
default: false
required: false
type: string
type: boolean
use_model_cache:
default: false
required: false
type: boolean
jobs:
####################################################################################################
# Image Build
####################################################################################################
build-images:
runs-on: "docker-build-${{ inputs.node }}"
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Get Checkout Ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
echo "CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge" >> $GITHUB_ENV
else
echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV
fi
- name: Checkout out GenAIExamples
uses: actions/checkout@v4
with:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Clone Required Repo
run: |
cd ${{ github.workspace }}/${{ inputs.example }}/docker_image_build
docker_compose_path=${{ github.workspace }}/${{ inputs.example }}/docker_image_build/build.yaml
if [[ $(grep -c "vllm:" ${docker_compose_path}) != 0 ]]; then
git clone https://github.com/vllm-project/vllm.git
cd vllm && git rev-parse HEAD && cd ../
fi
if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
git clone https://github.com/HabanaAI/vllm-fork.git
cd vllm-fork && git checkout 3c39626 && cd ../
fi
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git checkout ${{ inputs.opea_branch }} && git rev-parse HEAD && cd ../
- name: Build Image
if: ${{ fromJSON(inputs.build) }}
uses: opea-project/validation/actions/image-build@main
with:
work_dir: ${{ github.workspace }}/${{ inputs.example }}/docker_image_build
docker_compose_path: ${{ github.workspace }}/${{ inputs.example }}/docker_image_build/build.yaml
service_list: ${{ inputs.services }}
registry: ${OPEA_IMAGE_REPO}opea
inject_commit: ${{ inputs.inject_commit }}
tag: ${{ inputs.tag }}
uses: ./.github/workflows/_build_image.yml
with:
node: ${{ inputs.node }}
build: ${{ fromJSON(inputs.build) }}
example: ${{ inputs.example }}
services: ${{ inputs.services }}
tag: ${{ inputs.tag }}
opea_branch: ${{ inputs.opea_branch }}
inject_commit: ${{ inputs.inject_commit }}
####################################################################################################
# Docker Compose Test
####################################################################################################
test-example-compose:
needs: [build-images]
if: ${{ fromJSON(inputs.test_compose) }}
if: ${{ inputs.test_compose }}
uses: ./.github/workflows/_run-docker-compose.yml
with:
tag: ${{ inputs.tag }}
example: ${{ inputs.example }}
hardware: ${{ inputs.node }}
use_model_cache: ${{ inputs.use_model_cache }}
opea_branch: ${{ inputs.opea_branch }}
secrets: inherit
####################################################################################################
# K8S Test
# helmchart Test
####################################################################################################
test-k8s-manifest:
needs: [build-images]
if: ${{ fromJSON(inputs.test_k8s) }}
uses: ./.github/workflows/_manifest-e2e.yml
test-helmchart:
if: ${{ fromJSON(inputs.test_helmchart) }}
uses: ./.github/workflows/_helm-e2e.yml
with:
example: ${{ inputs.example }}
hardware: ${{ inputs.node }}
tag: ${{ inputs.tag }}
mode: "CD"
secrets: inherit
####################################################################################################
@@ -127,7 +98,7 @@ jobs:
####################################################################################################
test-gmc-pipeline:
needs: [build-images]
if: ${{ fromJSON(inputs.test_gmc) }}
if: false # ${{ fromJSON(inputs.test_gmc) }}
uses: ./.github/workflows/_gmc-e2e.yml
with:
example: ${{ inputs.example }}


@@ -42,6 +42,12 @@ jobs:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Check Dangerous Command Injection
if: github.event_name == 'pull_request' || github.event_name == 'pull_request_target'
uses: opea-project/validation/actions/check-cmd@main
with:
work_dir: ${{ github.workspace }}
- name: Get test matrix
id: get-test-matrix
run: |
@@ -54,9 +60,11 @@ jobs:
base_commit=$(git rev-parse HEAD~1) # push event
fi
merged_commit=$(git log -1 --format='%H')
echo "print all changed files..."
git diff --name-only ${base_commit} ${merged_commit}
changed_files="$(git diff --name-only ${base_commit} ${merged_commit} | \
grep -vE '${{ inputs.diff_excluded_files }}')" || true
echo "changed_files=$changed_files"
echo "filtered changed_files=$changed_files"
export changed_files=$changed_files
export test_mode=${{ inputs.test_mode }}
export WORKSPACE=${{ github.workspace }}


@@ -67,36 +67,6 @@ jobs:
make docker.build
make docker.push
- name: Scan gmcmanager
if: ${{ inputs.node == 'gaudi' }}
uses: opea-project/validation/actions/trivy-scan@main
with:
image-ref: ${{ env.DOCKER_REGISTRY }}/gmcmanager:${{ env.VERSION }}
output: gmcmanager-scan.txt
- name: Upload gmcmanager scan result
if: ${{ inputs.node == 'gaudi' }}
uses: actions/upload-artifact@v4.3.4
with:
name: gmcmanager-scan
path: gmcmanager-scan.txt
overwrite: true
- name: Scan gmcrouter
if: ${{ inputs.node == 'gaudi' }}
uses: opea-project/validation/actions/trivy-scan@main
with:
image-ref: ${{ env.DOCKER_REGISTRY }}/gmcrouter:${{ env.VERSION }}
output: gmcrouter-scan.txt
- name: Upload gmcrouter scan result
if: ${{ inputs.node == 'gaudi' }}
uses: actions/upload-artifact@v4.3.4
with:
name: gmcrouter-scan
path: gmcrouter-scan.txt
overwrite: true
- name: Clean up images
if: always()
run: |

.github/workflows/_helm-e2e.yml vendored (new file, 252 lines)

@@ -0,0 +1,252 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Helm Chart E2e Test For Call
permissions:
contents: read
on:
workflow_call:
inputs:
example:
default: "chatqna"
required: true
type: string
description: "example to test, chatqna or common/asr"
hardware:
default: "xeon"
required: true
type: string
dockerhub:
default: "false"
required: false
type: string
description: "Set to true if you want to use released docker images at dockerhub. By default using internal docker registry."
mode:
default: "CD"
description: "Whether the test range is CI, CD or CICD"
required: false
type: string
tag:
default: "latest"
required: false
type: string
version:
default: "0-latest"
required: false
type: string
jobs:
get-test-case:
runs-on: ubuntu-latest
outputs:
value_files: ${{ steps.get-test-files.outputs.value_files }}
CHECKOUT_REF: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
steps:
- name: Get checkout ref
id: get-checkout-ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge
else
CHECKOUT_REF=${{ github.ref }}
fi
echo "CHECKOUT_REF=${CHECKOUT_REF}" >> $GITHUB_OUTPUT
echo "checkout ref ${CHECKOUT_REF}"
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Get test Services
id: get-test-files
run: |
set -x
if [ "${{ inputs.mode }}" = "CI" ]; then
base_commit=${{ github.event.pull_request.base.sha }}
merged_commit=$(git log -1 --format='%H')
values_files=$(git diff --name-only ${base_commit} ${merged_commit} | \
grep "${{ inputs.example }}/kubernetes/helm" | \
grep "values.yaml" |\
sort -u)
echo $values_files
elif [ "${{ inputs.mode }}" = "CD" ]; then
values_files=$(ls ${{ inputs.example }}/kubernetes/helm/*values.yaml || true)
fi
value_files="["
for file in ${values_files}; do
if [ -f "$file" ]; then
filename=$(basename "$file")
if [[ "$filename" == *"gaudi"* ]]; then
if [[ "${{ inputs.hardware }}" == "gaudi" ]]; then
value_files="${value_files}\"${filename}\","
fi
elif [[ "$filename" == *"rocm"* ]]; then
if [[ "${{ inputs.hardware }}" == "rocm" ]]; then
value_files="${value_files}\"${filename}\","
fi
elif [[ "$filename" == *"nv"* ]]; then
continue
else
if [[ "${{ inputs.hardware }}" == "xeon" ]]; then
value_files="${value_files}\"${filename}\","
fi
fi
fi
done
value_files="${value_files%,}]"
echo "value_files=${value_files}"
echo "value_files=${value_files}" >> $GITHUB_OUTPUT
helm-test:
needs: [get-test-case]
if: ${{ needs.get-test-case.outputs.value_files != '[]' }}
strategy:
matrix:
value_file: ${{ fromJSON(needs.get-test-case.outputs.value_files) }}
fail-fast: false
runs-on: k8s-${{ inputs.hardware }}
continue-on-error: true
steps:
- name: Clean Up Working Directory
run: |
echo "value_file=${{ matrix.value_file }}"
sudo rm -rf ${{github.workspace}}/*
- name: Get checkout ref
id: get-checkout-ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge
else
CHECKOUT_REF=${{ github.ref }}
fi
echo "CHECKOUT_REF=${CHECKOUT_REF}" >> $GITHUB_OUTPUT
echo "checkout ref ${CHECKOUT_REF}"
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Set variables
env:
example: ${{ inputs.example }}
run: |
if [[ ! "$example" =~ ^[a-zA-Z0-9]{1,20}$ ]] || [[ "$example" =~ \.\. ]] || [[ "$example" == -* || "$example" == *- ]]; then
echo "Error: Invalid input - only lowercase alphanumeric and internal hyphens allowed"
exit 1
fi
# SAFE_PREFIX="kb-"
CHART_NAME="${SAFE_PREFIX}$(echo "$example" | tr '[:upper:]' '[:lower:]')"
RAND_SUFFIX=$(openssl rand -hex 2 | tr -dc 'a-f0-9')
cat <<EOF >> $GITHUB_ENV
CHART_NAME=${CHART_NAME}
RELEASE_NAME=${CHART_NAME}-$(date +%s)
NAMESPACE=ns-${CHART_NAME}-${RAND_SUFFIX}
ROLLOUT_TIMEOUT_SECONDS=600s
TEST_TIMEOUT_SECONDS=600s
KUBECTL_TIMEOUT_SECONDS=60s
should_cleanup=false
skip_validate=false
CHART_FOLDER=${example}/kubernetes/helm
EOF
echo "Generated safe variables:" >> $GITHUB_STEP_SUMMARY
echo "- CHART_NAME: ${CHART_NAME}" >> $GITHUB_STEP_SUMMARY
- name: Helm install
id: install
env:
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
HFTOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
value_file: ${{ matrix.value_file }}
run: |
set -xe
echo "should_cleanup=true" >> $GITHUB_ENV
if [[ ! -f ${{ github.workspace }}/${{ env.CHART_FOLDER }}/${value_file} ]]; then
echo "No value file found, exiting test!"
echo "skip_validate=true" >> $GITHUB_ENV
echo "should_cleanup=false" >> $GITHUB_ENV
exit 0
fi
for img in `helm template -n $NAMESPACE $RELEASE_NAME oci://ghcr.io/opea-project/charts/${CHART_NAME} -f ${{ inputs.example }}/kubernetes/helm/${value_file} --version ${{ inputs.version }} | grep 'image:' | grep 'opea/' | awk '{print $2}' | xargs`;
do
# increase helm install wait time for the vllm-gaudi case
if [[ $img == *"vllm-gaudi"* ]]; then
ROLLOUT_TIMEOUT_SECONDS=900s
fi
done
if ! helm install \
--create-namespace \
--namespace $NAMESPACE \
$RELEASE_NAME \
oci://ghcr.io/opea-project/charts/${CHART_NAME} \
--set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} \
--set global.modelUseHostPath=/data2/hf_model \
--set GOOGLE_API_KEY=${{ env.GOOGLE_API_KEY}} \
--set GOOGLE_CSE_ID=${{ env.GOOGLE_CSE_ID}} \
--set web-retriever.GOOGLE_API_KEY=${{ env.GOOGLE_API_KEY}} \
--set web-retriever.GOOGLE_CSE_ID=${{ env.GOOGLE_CSE_ID}} \
-f ${{ inputs.example }}/kubernetes/helm/${value_file} \
--version ${{ inputs.version }} \
--wait --timeout "$ROLLOUT_TIMEOUT_SECONDS"; then
echo "Failed to install chart ${{ inputs.example }}"
echo "skip_validate=true" >> $GITHUB_ENV
.github/workflows/scripts/k8s-utils.sh dump_pods_status $NAMESPACE
exit 1
fi
- name: Validate e2e test
if: always()
run: |
set -xe
if $skip_validate; then
echo "Skip validate"
else
LOG_PATH=/home/$(whoami)/helm-logs
chart=${{ env.CHART_NAME }}
helm test -n $NAMESPACE $RELEASE_NAME --logs --timeout "$TEST_TIMEOUT_SECONDS" | tee ${LOG_PATH}/charts-${chart}.log
exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "Chart ${chart} test failed, please check the logs in ${LOG_PATH}!"
exit 1
fi
echo "Checking response results, make sure the output is reasonable. "
teststatus=false
if [[ -f $LOG_PATH/charts-${chart}.log ]] && \
[[ $(grep -c "^Phase:.*Failed" $LOG_PATH/charts-${chart}.log) != 0 ]]; then
teststatus=false
${{ github.workspace }}/.github/workflows/scripts/k8s-utils.sh dump_all_pod_logs $NAMESPACE
else
teststatus=true
fi
if [ $teststatus == false ]; then
echo "Response check failed, please check the logs in artifacts!"
exit 1
else
echo "Response check succeeded!"
exit 0
fi
fi
- name: Helm uninstall
if: always()
run: |
if $should_cleanup; then
helm uninstall $RELEASE_NAME --namespace $NAMESPACE
if ! kubectl delete ns $NAMESPACE --timeout=$KUBECTL_TIMEOUT_SECONDS; then
kubectl delete pods --namespace $NAMESPACE --force --grace-period=0 --all
kubectl delete ns $NAMESPACE --force --grace-period=0 --timeout=$KUBECTL_TIMEOUT_SECONDS
fi
fi


@@ -1,111 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Single Kubernetes Manifest E2e Test For Call
on:
workflow_call:
inputs:
example:
default: "ChatQnA"
description: "The example to test on K8s"
required: true
type: string
hardware:
default: "xeon"
description: "Nodes to run the test, xeon or gaudi"
required: true
type: string
tag:
default: "latest"
description: "Tag to apply to images, default is latest"
required: false
type: string
jobs:
manifest-test:
runs-on: "k8s-${{ inputs.hardware }}"
continue-on-error: true
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Get checkout ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
echo "CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge" >> $GITHUB_ENV
else
echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV
fi
echo "checkout ref ${{ env.CHECKOUT_REF }}"
- name: Checkout out Repo
uses: actions/checkout@v4
with:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Set variables
run: |
echo "IMAGE_REPO=${OPEA_IMAGE_REPO}opea" >> $GITHUB_ENV
echo "IMAGE_TAG=${{ inputs.tag }}" >> $GITHUB_ENV
lower_example=$(echo "${{ inputs.example }}" | tr '[:upper:]' '[:lower:]')
echo "NAMESPACE=$lower_example-$(tr -dc a-z0-9 </dev/urandom | head -c 16)" >> $GITHUB_ENV
echo "ROLLOUT_TIMEOUT_SECONDS=1800s" >> $GITHUB_ENV
echo "KUBECTL_TIMEOUT_SECONDS=60s" >> $GITHUB_ENV
echo "continue_test=true" >> $GITHUB_ENV
echo "should_cleanup=false" >> $GITHUB_ENV
echo "skip_validate=true" >> $GITHUB_ENV
echo "NAMESPACE=$NAMESPACE"
- name: Kubectl install
id: install
run: |
if [[ ! -f ${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh ]]; then
echo "No test script found, exist test!"
exit 0
else
${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh init_${{ inputs.example }}
echo "should_cleanup=true" >> $GITHUB_ENV
kubectl create ns $NAMESPACE
${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh install_${{ inputs.example }} $NAMESPACE
echo "Testing ${{ inputs.example }}, waiting for pod ready..."
if kubectl rollout status deployment --namespace "$NAMESPACE" --timeout "$ROLLOUT_TIMEOUT_SECONDS"; then
echo "Testing manifests ${{ inputs.example }}, waiting for pod ready done!"
echo "skip_validate=false" >> $GITHUB_ENV
else
echo "Timeout waiting for pods in namespace $NAMESPACE to be ready!"
.github/workflows/scripts/k8s-utils.sh dump_pods_status $NAMESPACE
exit 1
fi
sleep 60
fi
- name: Validate e2e test
if: always()
run: |
if $skip_validate; then
echo "Skip validate"
else
if ${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh validate_${{ inputs.example }} $NAMESPACE ; then
echo "Validate ${{ inputs.example }} successful!"
else
echo "Validate ${{ inputs.example }} failure!!!"
echo "Check the logs in 'Dump logs when e2e test failed' step!!!"
exit 1
fi
fi
- name: Dump logs when e2e test failed
if: failure()
run: |
.github/workflows/scripts/k8s-utils.sh dump_all_pod_logs $NAMESPACE
- name: Kubectl uninstall
if: always()
run: |
if $should_cleanup; then
if ! kubectl delete ns $NAMESPACE --timeout=$KUBECTL_TIMEOUT_SECONDS; then
kubectl delete pods --namespace $NAMESPACE --force --grace-period=0 --all
kubectl delete ns $NAMESPACE --force --grace-period=0 --timeout=$KUBECTL_TIMEOUT_SECONDS
fi
fi


@@ -28,6 +28,14 @@ on:
required: false
type: string
default: ""
use_model_cache:
required: false
type: boolean
default: false
opea_branch:
default: "main"
required: false
type: string
jobs:
get-test-case:
runs-on: ubuntu-latest
@@ -60,9 +68,16 @@ jobs:
cd ${{ github.workspace }}/${{ inputs.example }}/tests
run_test_cases=""
default_test_case=$(find . -type f -name "test_compose_on_${{ inputs.hardware }}.sh" | cut -d/ -f2)
if [[ "${{ inputs.hardware }}" == "gaudi"* ]]; then
hardware="gaudi"
elif [[ "${{ inputs.hardware }}" == "xeon"* ]]; then
hardware="xeon"
else
hardware="${{ inputs.hardware }}"
fi
default_test_case=$(find . -type f -name "test_compose_on_$hardware.sh" | cut -d/ -f2)
if [ "$default_test_case" ]; then run_test_cases="$default_test_case"; fi
other_test_cases=$(find . -type f -name "test_compose_*_on_${{ inputs.hardware }}.sh" | cut -d/ -f2)
other_test_cases=$(find . -type f -name "test_compose_*_on_$hardware.sh" | cut -d/ -f2)
echo "default_test_case=$default_test_case"
echo "other_test_cases=$other_test_cases"
@@ -85,12 +100,17 @@ jobs:
fi
done
if [ -z "$run_test_cases" ] && [[ $(printf '%s\n' "${changed_files[@]}" | grep ${{ inputs.example }} | grep /tests/) ]]; then
run_test_cases=$other_test_cases
fi
test_cases=$(echo $run_test_cases | tr ' ' '\n' | sort -u | jq -R '.' | jq -sc '.')
echo "test_cases=$test_cases"
echo "test_cases=$test_cases" >> $GITHUB_OUTPUT
run-test:
compose-test:
needs: [get-test-case]
if: ${{ needs.get-test-case.outputs.test_cases != '[""]' }}
strategy:
matrix:
test_case: ${{ fromJSON(needs.get-test-case.outputs.test_cases) }}
@@ -101,9 +121,18 @@ jobs:
- name: Clean up Working Directory
run: |
sudo rm -rf ${{github.workspace}}/* || true
echo "Cleaning up containers using ports..."
cid=$(docker ps --format '{{.Names}} : {{.Ports}}' | grep -v ' : $' | grep -v 0.0.0.0:5000 | awk -F' : ' '{print $1}')
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
docker system prune -f
docker rmi $(docker images --filter reference="*/*/*:latest" -q) || true
docker rmi $(docker images --filter reference="*/*:ci" -q) || true
echo "Cleaning up images ..."
docker images --filter reference="*/*/*:latest" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="*/*:ci" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="*:5000/*/*" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="opea/comps-base" -q | xargs -r docker rmi && sleep 1s
docker images
- name: Checkout out Repo
uses: actions/checkout@v4
@@ -111,47 +140,84 @@ jobs:
ref: ${{ needs.get-test-case.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Clean up container before test
shell: bash
run: |
docker ps
cd ${{ github.workspace }}/${{ inputs.example }}
export test_case=${{ matrix.test_case }}
export hardware=${{ inputs.hardware }}
bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "containers"
bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "ports"
docker ps
- name: Log in DockerHub
uses: docker/login-action@v3.2.0
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Run test
shell: bash
env:
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
HF_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
PINECONE_KEY: ${{ secrets.PINECONE_KEY }}
PINECONE_KEY_LANGCHAIN_TEST: ${{ secrets.PINECONE_KEY_LANGCHAIN_TEST }}
SDK_BASE_URL: ${{ secrets.SDK_BASE_URL }}
SERVING_TOKEN: ${{ secrets.SERVING_TOKEN }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
FINNHUB_API_KEY: ${{ secrets.FINNHUB_API_KEY }}
FINANCIAL_DATASETS_API_KEY: ${{ secrets.FINANCIAL_DATASETS_API_KEY }}
IMAGE_REPO: ${{ inputs.registry }}
IMAGE_TAG: ${{ inputs.tag }}
opea_branch: ${{ inputs.opea_branch }}
example: ${{ inputs.example }}
hardware: ${{ inputs.hardware }}
test_case: ${{ matrix.test_case }}
use_model_cache: ${{ inputs.use_model_cache }}
run: |
cd ${{ github.workspace }}/$example/tests
if [[ "$IMAGE_REPO" == "" ]]; then export IMAGE_REPO="${OPEA_IMAGE_REPO}opea"; fi
if [ -f ${test_case} ]; then timeout 30m bash ${test_case}; else echo "Test script {${test_case}} not found, skip test!"; fi
if [[ "$use_model_cache" == "true" ]]; then
if [ -d "/data2/hf_model" ]; then
export model_cache="/data2/hf_model"
else
echo "Model cache directory /data2/hf_model does not exist"
export model_cache="$HOME/.cache/huggingface/hub"
fi
if [[ "$test_case" == *"rocm"* ]]; then
export model_cache="/var/lib/GenAI/data"
fi
fi
if [ -f "${test_case}" ]; then timeout 60m bash "${test_case}"; else echo "Test script {${test_case}} not found, skip test!"; fi
- name: Clean up container
shell: bash
if: cancelled() || failure()
- name: Clean up container after test
if: always()
run: |
cd ${{ github.workspace }}/${{ inputs.example }}/docker_compose
test_case=${{ matrix.test_case }}
flag=${test_case%_on_*}
flag=${flag#test_}
yaml_file=$(find . -type f -wholename "*${{ inputs.hardware }}/${flag}.yaml")
echo $yaml_file
container_list=$(cat $yaml_file | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
done
docker system prune -f
docker rmi $(docker images --filter reference="*:5000/*/*" -q) || true
set -x
echo "Cleaning up containers using ports..."
cid=$(docker ps --format '{{.Names}} : {{.Ports}}' | grep -v ' : $' | grep -v 0.0.0.0:5000 | awk -F' : ' '{print $1}')
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
echo "Cleaning up images ..."
if [[ "${{ inputs.hardware }}" == "xeon"* ]]; then
docker system prune -a -f
else
docker images --filter reference="*/*/*:latest" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="*/*:ci" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="*:5000/*/*" -q | xargs -r docker rmi && sleep 1s
docker images --filter reference="opea/comps-base" -q | xargs -r docker rmi && sleep 1s
docker system prune -f
fi
docker images
- name: Publish pipeline artifact
if: ${{ !cancelled() }}
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.example }}_${{ matrix.test_case }}
name: ${{ inputs.hardware }}_${{ inputs.example }}_${{ matrix.test_case }}
path: ${{ github.workspace }}/${{ inputs.example }}/tests/*.log
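The cleanup step above recovers the compose file name from the test script name with two bash parameter expansions. A minimal sketch of that derivation, using an illustrative test case name:
```bash
#!/bin/bash
# Derive the compose yaml from a test case named test_<flag>_on_<hardware>.sh.
# The values below are illustrative.
test_case="test_compose_on_gaudi.sh"
hardware="gaudi"

flag=${test_case%_on_*}   # strip "_on_gaudi.sh" -> "test_compose"
flag=${flag#test_}        # strip "test_"        -> "compose"

# The step then looks up the matching compose file for that hardware:
yaml_file=$(find . -type f -wholename "*${hardware}/${flag}.yaml")
echo "flag=${flag} yaml_file=${yaml_file}"
```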


@@ -13,7 +13,7 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Checkout


@@ -0,0 +1,94 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Daily update vLLM & vLLM-fork version
on:
schedule:
- cron: "30 22 * * *"
workflow_dispatch:
env:
BRANCH_NAME: "update"
USER_NAME: "CICD-at-OPEA"
USER_EMAIL: "CICD@opea.dev"
jobs:
freeze-tag:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- repo: vLLM
repo_name: vllm-project/vllm
ver_name: VLLM_VER
- repo: vLLM-fork
repo_name: HabanaAI/vllm-fork
ver_name: VLLM_FORK_VER
fail-fast: false
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Set up Git
run: |
git config --global user.name ${{ env.USER_NAME }}
git config --global user.email ${{ env.USER_EMAIL }}
git remote set-url origin https://${{ env.USER_NAME }}:"${{ secrets.ACTION_TOKEN }}"@github.com/${{ github.repository }}.git
git fetch
if git ls-remote https://github.com/${{ github.repository }}.git "refs/heads/${{ env.BRANCH_NAME }}_${{ matrix.repo }}" | grep -q "refs/heads/${{ env.BRANCH_NAME }}_${{ matrix.repo }}"; then
echo "branch ${{ env.BRANCH_NAME }}_${{ matrix.repo }} exists"
git checkout ${{ env.BRANCH_NAME }}_${{ matrix.repo }}
else
echo "branch ${{ env.BRANCH_NAME }}_${{ matrix.repo }} not exists"
git checkout -b ${{ env.BRANCH_NAME }}_${{ matrix.repo }}
git push origin ${{ env.BRANCH_NAME }}_${{ matrix.repo }}
echo "branch ${{ env.BRANCH_NAME }}_${{ matrix.repo }} created successfully"
fi
- name: Run script
run: |
latest_vllm_ver=$(curl -s "https://api.github.com/repos/${{ matrix.repo_name }}/tags" | jq '.[0].name' -)
latest_vllm_ver=$(echo "$latest_vllm_ver" | sed 's/"//g')
echo "latest_vllm_ver=${latest_vllm_ver}" >> "$GITHUB_ENV"
find . -type f -name "*.sh" -exec sed -i "s/${{ matrix.ver_name }}=.*/${{ matrix.ver_name }}=${latest_vllm_ver}/" {} \;
- name: Commit changes
run: |
git add .
if git diff-index --quiet HEAD --; then
echo "No changes detected, skipping commit."
exit 1
else
git commit -s -m "Update ${{ matrix.repo }} version to ${latest_vllm_ver}"
git push --set-upstream origin ${{ env.BRANCH_NAME }}_${{ matrix.repo }}
fi
- name: Create Pull Request
env:
GH_TOKEN: ${{ secrets.ACTION_TOKEN }}
run: |
pr_count=$(curl -H "Authorization: token ${{ secrets.ACTION_TOKEN }}" -s "https://api.github.com/repos/${{ github.repository }}/pulls?state=all&head=${{ env.USER_NAME }}:${{ env.BRANCH_NAME }}_${{ matrix.repo }}" | jq '. | length')
if [ $pr_count -gt 0 ]; then
echo "Pull Request exists"
pr_number=$(curl -H "Authorization: token ${{ secrets.ACTION_TOKEN }}" -s "https://api.github.com/repos/${{ github.repository }}/pulls?state=all&head=${{ env.USER_NAME }}:${{ env.BRANCH_NAME }}_${{ matrix.repo }}" | jq '.[0].number')
gh pr edit ${pr_number} \
--title "Update ${{ matrix.repo }} version to ${latest_vllm_ver}" \
--body "Update ${{ matrix.repo }} version to ${latest_vllm_ver}"
echo "Pull Request updated successfully"
else
echo "Pull Request does not exists..."
gh pr create \
-B main \
-H ${{ env.BRANCH_NAME }}_${{ matrix.repo }} \
--title "Update ${{ matrix.repo }} version to ${latest_vllm_ver}" \
--body "Update ${{ matrix.repo }} version to ${latest_vllm_ver}"
echo "Pull Request created successfully"
fi
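The update job reduces to three shell operations: query the GitHub tags API, take the first entry (which the job assumes is the newest tag), and rewrite every `VLLM_VER=` or `VLLM_FORK_VER=` assignment in the repo's shell scripts. A standalone sketch of the same logic, with the repo and variable name taken from one matrix entry:
```bash
#!/bin/bash
# Pin the latest upstream tag in all shell scripts (sketch of the job above).
repo_name="vllm-project/vllm"   # matrix.repo_name
ver_name="VLLM_VER"             # matrix.ver_name

# The job assumes the first entry of the tags endpoint is the latest tag, e.g. "v0.9.0".
latest_ver=$(curl -s "https://api.github.com/repos/${repo_name}/tags" | jq -r '.[0].name')
echo "latest tag: ${latest_ver}"

# Rewrite every VLLM_VER=... assignment so tests build against the new tag.
find . -type f -name "*.sh" -exec sed -i "s/${ver_name}=.*/${ver_name}=${latest_ver}/" {} \;
```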


@@ -0,0 +1,29 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Check stale issue and pr
on:
schedule:
- cron: "30 22 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:
days-before-issue-stale: 30
days-before-pr-stale: 30
days-before-issue-close: 7
days-before-pr-close: 7
stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 7 days."
stale-pr-message: "This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 7 days."
close-issue-message: "This issue was closed because it has been stalled for 7 days with no activity."
close-pr-message: "This PR was closed because it has been stalled for 7 days with no activity."
repo-token: ${{ secrets.ACTION_TOKEN }}
start-date: "2025-03-01T00:00:00Z"
exempt-issue-labels: "Backlog"


@@ -0,0 +1,117 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Update Docker Hub Description
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
get-images-matrix:
runs-on: ubuntu-latest
outputs:
examples_json: ${{ steps.extract.outputs.examples_json }}
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Extract images info and generate JSON matrix
id: extract
run: |
#!/bin/bash
set -e
images=$(awk -F'|' '/^\| *\[opea\// {
gsub(/^ +| +$/, "", $2);
gsub(/^ +| +$/, "", $4);
gsub(/^ +| +$/, "", $5);
# Extract the path portion of the dockerHub link from the Example Images column
match($2, /\(https:\/\/hub\.docker\.com\/r\/[^)]*\)/);
repository = substr($2, RSTART, RLENGTH);
# Remove the prefix and the trailing right bracket
sub(/^\(https:\/\/hub\.docker\.com\/r\//, "", repository);
sub(/\)$/, "", repository);
# Description Direct assignment
description = $4;
# Extract the content of the github link from the Readme column
match($5, /\(https:\/\/github\.com\/[^)]*\)/);
readme_url = substr($5, RSTART, RLENGTH);
# Remove the prefix and the trailing right bracket
sub(/^\(https:\/\/github\.com\//, "", readme_url);
sub(/\)$/, "", readme_url);
# Remove blob information, such as "blob/main/" or "blob/habana_main/"
gsub(/blob\/[^/]+\//, "", readme_url);
# Remove the organization name and keep only the file path, such as changing "opea-project/GenAIExamples/AudioQnA/README.md" to "GenAIExamples/AudioQnA/README.md"
sub(/^[^\/]+\//, "", readme_url);
# Generate JSON object string
printf "{\"repository\":\"%s\",\"short-description\":\"%s\",\"readme-filepath\":\"%s\"}\n", repository, description, readme_url;
}' docker_images_list.md)
# Concatenate all JSON objects into a JSON array, using paste to separate them with commas
json="[$(echo "$images" | paste -sd, -)]"
echo "$json"
# Set as output variable for subsequent jobs to use
echo "::set-output name=examples_json::$json"
check-images-matrix:
runs-on: ubuntu-latest
needs: get-images-matrix
if: ${{ needs.get-images-matrix.outputs.examples_json != '' }}
strategy:
matrix:
image: ${{ fromJSON(needs.get-images-matrix.outputs.examples_json) }}
fail-fast: false
steps:
- name: Check dockerhub description
run: |
echo "dockerhub description for ${{ matrix.image.repository }}"
echo "short-description: ${{ matrix.image.short-description }}"
echo "readme-filepath: ${{ matrix.image.readme-filepath }}"
dockerHubDescription:
runs-on: ubuntu-latest
needs: get-images-matrix
if: ${{ needs.get-images-matrix.outputs.examples_json != '' }}
strategy:
matrix:
image: ${{ fromJSON(needs.get-images-matrix.outputs.examples_json) }}
fail-fast: false
steps:
- name: Checkout GenAIExamples
uses: actions/checkout@v4
with:
repository: opea-project/GenAIExamples
path: GenAIExamples
- name: Checkout GenAIComps
uses: actions/checkout@v4
with:
repository: opea-project/GenAIComps
path: GenAIComps
- name: Checkout vllm-openvino
uses: actions/checkout@v4
with:
repository: vllm-project/vllm
path: vllm
- name: Checkout vllm-gaudi
uses: actions/checkout@v4
with:
repository: HabanaAI/vllm-fork
ref: habana_main
path: vllm-fork
- name: add dockerhub description
uses: peter-evans/dockerhub-description@v4
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ matrix.image.repository }}
short-description: ${{ matrix.image.short-description }}
readme-filepath: ${{ matrix.image.readme-filepath }}
enable-url-completion: false
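For reference, the awk extraction in the first job turns each qualifying table row of `docker_images_list.md` into one JSON object. Assuming a hypothetical row such as `| [opea/chatqna](https://hub.docker.com/r/opea/chatqna) | - | ChatQnA MegaService | [README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/README.md) |`, the resulting matrix entry would look like this:
```bash
#!/bin/bash
# Shape of the matrix consumed by fromJSON() above; the row and values are hypothetical.
json='[{"repository":"opea/chatqna","short-description":"ChatQnA MegaService","readme-filepath":"GenAIExamples/ChatQnA/README.md"}]'
echo "$json" | jq -r '.[0].repository'   # -> opea/chatqna
```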


@@ -0,0 +1,31 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Clean up container on manual event
on:
workflow_dispatch:
inputs:
node:
default: "rocm"
description: "Hardware to clean"
required: true
type: string
clean_list:
default: ""
description: "docker command to clean"
required: false
type: string
jobs:
clean:
runs-on: "${{ inputs.node }}"
steps:
- name: Clean up container
run: |
docker ps
if [ "${{ inputs.clean_list }}" ]; then
echo "----------stop and remove containers----------"
docker stop ${{ inputs.clean_list }} && docker rm ${{ inputs.clean_list }}
echo "----------container removed----------"
docker ps
fi


@@ -41,9 +41,11 @@ jobs:
publish:
needs: [get-image-list]
if: ${{ needs.get-image-list.outputs.matrix != '' }}
strategy:
matrix:
image: ${{ fromJSON(needs.get-image-list.outputs.matrix) }}
fail-fast: false
runs-on: "docker-build-${{ inputs.node }}"
steps:
- uses: docker/login-action@v3.2.0


@@ -12,7 +12,7 @@ on:
type: string
examples:
default: ""
description: 'List of examples to publish "AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA"'
description: 'List of examples to publish "AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA"'
required: false
type: string
images:
@@ -47,6 +47,7 @@ jobs:
scan-docker:
needs: get-image-list
runs-on: "docker-build-${{ inputs.node }}"
if: ${{ needs.get-image-list.outputs.matrix != '' }}
strategy:
matrix:
image: ${{ fromJson(needs.get-image-list.outputs.matrix) }}


@@ -7,12 +7,12 @@ on:
inputs:
nodes:
default: "gaudi,xeon"
description: "Hardware to run test"
description: "Hardware to run test gaudi,xeon,rocm,arc,gaudi3,xeon-gnr"
required: true
type: string
examples:
default: "ChatQnA"
description: 'List of examples to test [AudioQnA,ChatQnA,CodeGen,CodeTrans,DocSum,FaqGen,SearchQnA,Translation]'
description: 'List of examples to test [AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA,AvatarChatbot,Text2Image,WorkflowExecAgent,DBQnA,EdgeCraftRAG,GraphRAG]'
required: true
type: string
tag:
@@ -20,11 +20,6 @@ on:
description: "Tag to apply to images"
required: true
type: string
deploy_gmc:
default: false
description: 'Whether to deploy gmc'
required: true
type: boolean
build:
default: true
description: 'Build test required images for Examples'
@@ -35,14 +30,9 @@ on:
description: 'Test examples with docker compose'
required: false
type: boolean
test_k8s:
default: false
description: 'Test examples with k8s'
required: false
type: boolean
test_gmc:
default: false
description: 'Test examples with gmc'
test_helmchart:
default: true
description: 'Test examples with helm charts'
required: false
type: boolean
opea_branch:
@@ -51,10 +41,15 @@ on:
required: false
type: string
inject_commit:
default: true
description: "inject commit to docker images true or false"
default: false
description: "inject commit to docker images"
required: false
type: string
type: boolean
use_model_cache:
default: false
description: "use model cache"
required: false
type: boolean
permissions: read-all
jobs:
@@ -74,23 +69,20 @@ jobs:
nodes_json=$(printf '%s\n' "${nodes[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "nodes=$nodes_json" >> $GITHUB_OUTPUT
build-deploy-gmc:
build-comps-base:
needs: [get-test-matrix]
if: ${{ fromJSON(inputs.deploy_gmc) }}
strategy:
matrix:
node: ${{ fromJson(needs.get-test-matrix.outputs.nodes) }}
fail-fast: false
uses: ./.github/workflows/_gmc-workflow.yml
uses: ./.github/workflows/_build_comps_base_image.yml
with:
node: ${{ matrix.node }}
build: ${{ fromJSON(inputs.build) }}
tag: ${{ inputs.tag }}
opea_branch: ${{ inputs.opea_branch }}
secrets: inherit
run-examples:
needs: [get-test-matrix, build-deploy-gmc]
if: always()
needs: [get-test-matrix, build-comps-base]
strategy:
matrix:
example: ${{ fromJson(needs.get-test-matrix.outputs.examples) }}
@@ -103,8 +95,8 @@ jobs:
tag: ${{ inputs.tag }}
build: ${{ fromJSON(inputs.build) }}
test_compose: ${{ fromJSON(inputs.test_compose) }}
test_k8s: ${{ fromJSON(inputs.test_k8s) }}
test_gmc: ${{ fromJSON(inputs.test_gmc) }}
test_helmchart: ${{ fromJSON(inputs.test_helmchart) }}
opea_branch: ${{ inputs.opea_branch }}
inject_commit: ${{ inputs.inject_commit }}
use_model_cache: ${{ inputs.use_model_cache }}
secrets: inherit


@@ -25,9 +25,9 @@ jobs:
- name: Set up Git
run: |
git config --global user.name "NeuralChatBot"
git config --global user.email "grp_neural_chat_bot@intel.com"
git remote set-url origin https://NeuralChatBot:"${{ secrets.ACTION_TOKEN }}"@github.com/opea-project/GenAIExamples.git
git config --global user.name "CICD-at-OPEA"
git config --global user.email "CICD@opea.dev"
git remote set-url origin https://CICD-at-OPEA:"${{ secrets.ACTION_TOKEN }}"@github.com/opea-project/GenAIExamples.git
- name: Run script
run: |


@@ -12,7 +12,7 @@ on:
type: string
example:
default: "ChatQnA"
description: 'Build images belong to which example?'
description: 'Build images belong to which example? [AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA,AvatarChatbot,Text2Image,WorkflowExecAgent,DBQnA,EdgeCraftRAG,GraphRAG]'
required: true
type: string
services:
@@ -31,10 +31,10 @@ on:
required: false
type: string
inject_commit:
default: true
description: "inject commit to docker images true or false"
default: false
description: "inject commit to docker images"
required: false
type: string
type: boolean
jobs:
get-test-matrix:
@@ -51,6 +51,7 @@ jobs:
image-build:
needs: get-test-matrix
if: ${{ needs.get-test-matrix.outputs.nodes != '' }}
strategy:
matrix:
node: ${{ fromJson(needs.get-test-matrix.outputs.nodes) }}


@@ -0,0 +1,61 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Clean up Local Registry on manual event
on:
workflow_dispatch:
inputs:
nodes:
default: "gaudi,xeon"
description: "Hardware to clean up"
required: true
type: string
env:
EXAMPLES: ${{ vars.NIGHTLY_RELEASE_EXAMPLES }}
jobs:
get-build-matrix:
runs-on: ubuntu-latest
outputs:
examples: ${{ steps.get-matrix.outputs.examples }}
nodes: ${{ steps.get-matrix.outputs.nodes }}
steps:
- name: Create Matrix
id: get-matrix
run: |
examples=($(echo ${EXAMPLES} | tr ',' ' '))
examples_json=$(printf '%s\n' "${examples[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "examples=$examples_json" >> $GITHUB_OUTPUT
nodes=($(echo ${{ inputs.nodes }} | tr ',' ' '))
nodes_json=$(printf '%s\n' "${nodes[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "nodes=$nodes_json" >> $GITHUB_OUTPUT
clean-up:
needs: get-build-matrix
if: ${{ needs.get-build-matrix.outputs.nodes != '' }}
strategy:
matrix:
node: ${{ fromJson(needs.get-build-matrix.outputs.nodes) }}
fail-fast: false
runs-on: "docker-build-${{ matrix.node }}"
steps:
- name: Clean Up Local Registry
run: |
echo "Cleaning up local registry on ${{ matrix.node }}"
bash /home/sdp/workspace/fully_registry_cleanup.sh
docker ps | grep registry
build:
needs: [get-build-matrix, clean-up]
if: ${{ needs.get-build-matrix.outputs.examples != '' }}
strategy:
matrix:
example: ${{ fromJson(needs.get-build-matrix.outputs.examples) }}
node: ${{ fromJson(needs.get-build-matrix.outputs.nodes) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: ${{ matrix.node }}
example: ${{ matrix.example }}
secrets: inherit


@@ -5,11 +5,11 @@ name: Nightly build/publish latest docker images
on:
schedule:
- cron: "30 13 * * *" # UTC time
- cron: "30 14 * * 1-5" # UTC time
workflow_dispatch:
env:
EXAMPLES: "AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA"
EXAMPLES: ${{ vars.NIGHTLY_RELEASE_EXAMPLES }}
TAG: "latest"
PUBLISH_TAGS: "latest"
@@ -32,29 +32,54 @@ jobs:
echo "TAG=$TAG" >> $GITHUB_OUTPUT
echo "PUBLISH_TAGS=$PUBLISH_TAGS" >> $GITHUB_OUTPUT
build:
needs: get-build-matrix
build-comps-base:
needs: [get-build-matrix]
uses: ./.github/workflows/_build_comps_base_image.yml
with:
node: gaudi
build-images:
needs: [get-build-matrix, build-comps-base]
strategy:
matrix:
example: ${{ fromJSON(needs.get-build-matrix.outputs.examples_json) }}
fail-fast: false
uses: ./.github/workflows/_build_image.yml
with:
node: gaudi
example: ${{ matrix.example }}
inject_commit: true
secrets: inherit
test-example:
needs: [get-build-matrix]
if: ${{ needs.get-build-matrix.outputs.examples_json != '' }}
strategy:
matrix:
example: ${{ fromJSON(needs.get-build-matrix.outputs.examples_json) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: gaudi
node: xeon
build: false
example: ${{ matrix.example }}
test_compose: true
inject_commit: true
secrets: inherit
get-image-list:
needs: get-build-matrix
needs: [get-build-matrix]
uses: ./.github/workflows/_get-image-list.yml
with:
examples: ${{ needs.get-build-matrix.outputs.EXAMPLES }}
publish:
needs: [get-build-matrix, get-image-list, build]
needs: [get-build-matrix, get-image-list, build-images]
if: always()
strategy:
matrix:
image: ${{ fromJSON(needs.get-image-list.outputs.matrix) }}
fail-fast: false
runs-on: "docker-build-gaudi"
steps:
- uses: docker/login-action@v3.2.0

.github/workflows/pr-chart-e2e.yml

@@ -0,0 +1,81 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: E2E Test with Helm Charts
on:
pull_request_target:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "!**.md"
- "**/helm/**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
job1:
name: Get-Test-Matrix
permissions:
contents: read
pull-requests: read
runs-on: ubuntu-latest
outputs:
run_matrix: ${{ steps.get-test-matrix.outputs.run_matrix }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: "refs/pull/${{ github.event.number }}/merge"
fetch-depth: 0
- name: Get Test Matrix
id: get-test-matrix
run: |
set -x
echo "base_commit=${{ github.event.pull_request.base.sha }}"
base_commit=${{ github.event.pull_request.base.sha }}
merged_commit=$(git log -1 --format='%H')
values_files=$(git diff --name-only ${base_commit} ${merged_commit} | \
grep "values.yaml" | \
sort -u ) #CodeGen/kubernetes/helm/cpu-values.yaml
run_matrix="{\"include\":["
for values_file in ${values_files}; do
if [ -f "$values_file" ]; then
valuefile=$(basename "$values_file") # cpu-values.yaml
example=$(echo "$values_file" | cut -d'/' -f1) # CodeGen
if [[ "$valuefile" == *"gaudi"* ]]; then
hardware="gaudi"
elif [[ "$valuefile" == *"rocm"* ]]; then
hardware="rocm"
elif [[ "$valuefile" == *"nv"* ]]; then
continue
else
hardware="xeon"
fi
echo "example=${example}, hardware=${hardware}, valuefile=${valuefile}"
if [[ $(echo ${run_matrix} | grep -c "{\"example\":\"${example}\",\"hardware\":\"${hardware}\"},") == 0 ]]; then
run_matrix="${run_matrix}{\"example\":\"${example}\",\"hardware\":\"${hardware}\"},"
echo "------------------ add one values file ------------------"
fi
fi
done
run_matrix="${run_matrix%,}"
run_matrix=$run_matrix"]}"
echo "run_matrix="${run_matrix}""
echo "run_matrix="${run_matrix}"" >> $GITHUB_OUTPUT
helm-chart-test:
needs: [job1]
if: always() && ${{ fromJSON(needs.job1.outputs.run_matrix).length != 0 }}
uses: ./.github/workflows/_helm-e2e.yml
strategy:
matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
with:
example: ${{ matrix.example }}
hardware: ${{ matrix.hardware }}
mode: "CI"
secrets: inherit
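The matrix construction in `Get Test Matrix` maps each changed helm values file to an `{example, hardware}` pair based on its file name. A sketch of that mapping over a few hypothetical paths:
```bash
#!/bin/bash
# Map values files to {example, hardware} the way the step above does; paths are hypothetical.
for values_file in "CodeGen/kubernetes/helm/gaudi-values.yaml" \
                   "ChatQnA/kubernetes/helm/cpu-values.yaml"; do
  valuefile=$(basename "$values_file")           # e.g. gaudi-values.yaml
  example=$(echo "$values_file" | cut -d'/' -f1) # e.g. CodeGen
  case "$valuefile" in
    *gaudi*) hardware="gaudi" ;;
    *rocm*)  hardware="rocm" ;;
    *nv*)    continue ;;                         # NVIDIA values files are skipped
    *)       hardware="xeon" ;;
  esac
  echo "example=${example}, hardware=${hardware}"
done
# -> example=CodeGen, hardware=gaudi
# -> example=ChatQnA, hardware=xeon
```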


@@ -0,0 +1,40 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Check Duplicated Images
on:
pull_request:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize]
paths:
- "**/docker_image_build/*.yaml"
- ".github/workflows/pr-check-duplicated-image.yml"
- ".github/workflows/scripts/check_duplicated_image.py"
workflow_dispatch:
# If there is a new commit, the previous jobs will be canceled
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-duplicated-image:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo
uses: actions/checkout@v4
- name: Check all the docker image build files
run: |
pip install PyYAML
cd ${{github.workspace}}
build_files=""
for f in `find . -path "*/docker_image_build/build.yaml"`; do
build_files="$build_files $f"
done
python3 .github/workflows/scripts/check_duplicated_image.py $build_files
shell: bash


@@ -34,6 +34,11 @@ jobs:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Check Dangerous Command Injection
uses: opea-project/validation/actions/check-cmd@main
with:
work_dir: ${{ github.workspace }}
- name: Docker Build
run: |
docker build -f ${{ github.workspace }}/.github/workflows/docker/${{ env.DOCKER_FILE_NAME }}.dockerfile -t ${{ env.REPO_NAME }}:${{ env.REPO_TAG }} .


@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
name: "Dependency Review"
on: [pull_request]
on: [pull_request_target]
permissions:
contents: read


@@ -28,19 +28,20 @@ jobs:
if: ${{ !github.event.pull_request.draft }}
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|*.md|*.txt|kubernetes|manifest|gmc|assets|benchmark'
diff_excluded_files: '\.github|\.md|\.txt|kubernetes|gmc|assets|benchmark'
example-test:
needs: [get-test-matrix]
if: ${{ needs.get-test-matrix.outputs.run_matrix != '' }}
strategy:
matrix: ${{ fromJSON(needs.get-test-matrix.outputs.run_matrix) }}
fail-fast: false
if: ${{ !github.event.pull_request.draft }}
uses: ./.github/workflows/_run-docker-compose.yml
with:
registry: "opea"
tag: "ci"
example: ${{ matrix.example }}
hardware: ${{ matrix.hardware }}
diff_excluded_files: '.github|*.md|*.txt|kubernetes|manifest|gmc|assets|benchmark'
use_model_cache: true
diff_excluded_files: '\.github|\.md|\.txt|kubernetes|gmc|assets|benchmark'
secrets: inherit


@@ -0,0 +1,109 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Compose file and dockerfile path checking
on:
pull_request:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize]
jobs:
check-dockerfile-paths-in-README:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Clone Repo GenAIComps
run: |
cd ..
git clone --depth 1 https://github.com/opea-project/GenAIComps.git
- name: Check for Missing Dockerfile Paths in GenAIComps
run: |
cd ${{github.workspace}}
miss="FALSE"
while IFS=: read -r file line content; do
dockerfile_path=$(echo "$content" | awk -F '-f ' '{print $2}' | awk '{print $1}')
if [[ ! -f "../GenAIComps/${dockerfile_path}" ]]; then
miss="TRUE"
echo "Missing Dockerfile: GenAIComps/${dockerfile_path} (Referenced in GenAIExamples/${file}:${line})"
fi
done < <(grep -Ern 'docker build .* -f comps/.+/Dockerfile' --include='*.md' .)
if [[ "$miss" == "TRUE" ]]; then
exit 1
fi
shell: bash
check-Dockerfile-in-build-yamls:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check Dockerfile path included in image build yaml
if: always()
run: |
set -e
shopt -s globstar
no_add="FALSE"
cd ${{github.workspace}}
Dockerfiles=$(realpath $(find ./ -name '*Dockerfile*' ! -path '*/tests/*'))
if [ -n "$Dockerfiles" ]; then
for dockerfile in $Dockerfiles; do
service=$(echo "$dockerfile" | awk -F '/GenAIExamples/' '{print $2}' | awk -F '/' '{print $2}')
cd ${{github.workspace}}/$service/docker_image_build
all_paths=$(realpath $(awk ' /context:/ { context = $2 } /dockerfile:/ { dockerfile = $2; combined = context "/" dockerfile; gsub(/\/+/, "/", combined); if (index(context, ".") > 0) {print combined}}' build.yaml) 2> /dev/null || true )
if ! echo "$all_paths" | grep -q "$dockerfile"; then
echo "AR: Update $dockerfile to GenAIExamples/$service/docker_image_build/build.yaml. The yaml is used for release images build."
no_add="TRUE"
fi
done
fi
if [[ "$no_add" == "TRUE" ]]; then
exit 1
fi
check-image-and-service-names-in-build-yaml:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Check name agreement in build.yaml
run: |
pip install ruamel.yaml
cd ${{github.workspace}}
consistency="TRUE"
build_yamls=$(find . -name 'build.yaml')
for build_yaml in $build_yamls; do
message=$(python3 .github/workflows/scripts/check-name-agreement.py "$build_yaml")
if [[ "$message" != *"consistent"* ]]; then
consistency="FALSE"
echo "Inconsistent service name and image name found in file $build_yaml."
echo "$message"
fi
done
if [[ "$consistency" == "FALSE" ]]; then
echo "Please ensure that the service and image names are consistent in build.yaml, otherwise we cannot guarantee that your image will be published correctly."
exit 1
fi
shell: bash
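The `check-dockerfile-paths-in-README` job above pulls the Dockerfile path out of each `docker build` command found in the markdown files with a grep/awk pair. A minimal sketch of the extraction on an illustrative command line:
```bash
#!/bin/bash
# Extract the "-f <path>" argument from a docker build line; the line is illustrative.
content='docker build -t opea/llm:latest -f comps/llms/src/text-generation/Dockerfile .'
dockerfile_path=$(echo "$content" | awk -F '-f ' '{print $2}' | awk '{print $1}')
echo "$dockerfile_path"   # -> comps/llms/src/text-generation/Dockerfile
```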


@@ -8,11 +8,10 @@ on:
branches: ["main", "*rc"]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "**/kubernetes/**/gmc/**"
- "**/kubernetes/gmc/**"
- "**/tests/test_gmc**"
- "!**.md"
- "!**.txt"
- "!**/kubernetes/**/manifest/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -22,7 +21,7 @@ jobs:
job1:
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|docker_compose|manifest|assets|*.md|*.txt'
diff_excluded_files: '\.github|docker_compose|assets|\.md|\.txt'
test_mode: "gmc"
gmc-test:


@@ -1,7 +1,7 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Check Paths and Hyperlinks
name: Check hyperlinks and relative path validity
on:
pull_request:
@@ -9,39 +9,6 @@ on:
types: [opened, reopened, ready_for_review, synchronize]
jobs:
check-dockerfile-paths:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Clone Repo GenAIComps
run: |
cd ..
git clone https://github.com/opea-project/GenAIComps.git
- name: Check for Missing Dockerfile Paths in GenAIComps
run: |
cd ${{github.workspace}}
miss="FALSE"
while IFS=: read -r file line content; do
dockerfile_path=$(echo "$content" | awk -F '-f ' '{print $2}' | awk '{print $1}')
if [[ ! -f "../GenAIComps/${dockerfile_path}" ]]; then
miss="TRUE"
echo "Missing Dockerfile: GenAIComps/${dockerfile_path} (Referenced in GenAIExamples/${file}:${line})"
fi
done < <(grep -Ern 'docker build .* -f comps/.+/Dockerfile' --include='*.md' .)
if [[ "$miss" == "TRUE" ]]; then
exit 1
fi
shell: bash
check-the-validity-of-hyperlinks-in-README:
runs-on: ubuntu-latest
steps:
@@ -56,6 +23,7 @@ jobs:
- name: Check the Validity of Hyperlinks
run: |
cd ${{github.workspace}}
delay=15
fail="FALSE"
merged_commit=$(git log -1 --format='%H')
changed_files="$(git diff --name-status --diff-filter=ARM ${{ github.event.pull_request.base.sha }} ${merged_commit} | awk '/\.md$/ {print $NF}')"
@@ -68,15 +36,20 @@ jobs:
# echo $url_line
url=$(echo "$url_line"|cut -d '(' -f2 | cut -d ')' -f1|sed 's/\.git$//')
path=$(echo "$url_line"|cut -d':' -f1 | cut -d'/' -f2-)
response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url")|| true
if [ "$response" -ne 200 ]; then
echo "**********Validation failed, try again**********"
response_retry=$(curl -s -o /dev/null -w "%{http_code}" "$url")
if [ "$response_retry" -eq 200 ]; then
echo "*****Retry successfully*****"
else
echo "Invalid link from ${{github.workspace}}/$path: $url"
fail="TRUE"
if [[ "$url" == "https://platform.openai.com/api-keys"* ]]; then
echo "Link "$url" from ${{github.workspace}}/$path needs to be verified by a real person."
else
sleep $delay
response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url")|| true
if [ "$response" -ne 200 ]; then
echo "**********Validation failed ($response), try again**********"
response_retry=$(curl -s -o /dev/null -w "%{http_code}" "$url")
if [ "$response_retry" -eq 200 ]; then
echo "*****Retry successfully*****"
else
echo "Invalid link ($response_retry) from ${{github.workspace}}/$path: $url"
fail="TRUE"
fi
fi
fi
done
@@ -109,13 +82,7 @@ jobs:
cd ${{github.workspace}}
fail="FALSE"
repo_name=${{ github.event.pull_request.head.repo.full_name }}
if [ "$(echo "$repo_name"|cut -d'/' -f1)" != "opea-project" ]; then
owner=$(echo "${{ github.event.pull_request.head.repo.full_name }}" |cut -d'/' -f1)
branch="https://github.com/$owner/GenAIExamples/tree/${{ github.event.pull_request.head.ref }}"
else
branch="https://github.com/opea-project/GenAIExamples/blob/${{ github.event.pull_request.head.ref }}"
fi
link_head="https://github.com/opea-project/GenAIExamples/blob/main"
branch="https://github.com/$repo_name/blob/${{ github.event.pull_request.head.ref }}"
merged_commit=$(git log -1 --format='%H')
changed_files="$(git diff --name-status --diff-filter=ARM ${{ github.event.pull_request.base.sha }} ${merged_commit} | awk '/\.md$/ {print $NF}')"


@@ -1,42 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: E2E test with manifests
on:
pull_request_target:
branches: ["main", "*rc"]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "**/Dockerfile**"
- "**.py"
- "**/kubernetes/**/manifest/**"
- "**/tests/test_manifest**"
- "!**.md"
- "!**.txt"
- "!**/kubernetes/**/gmc/**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
job1:
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|docker_compose|gmc|assets|*.md|*.txt|benchmark'
test_mode: "manifest"
run-example:
needs: job1
strategy:
matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: ${{ matrix.hardware }}
example: ${{ matrix.example }}
tag: ${{ github.event.pull_request.head.sha }}
test_k8s: true
secrets: inherit


@@ -10,6 +10,7 @@ on:
- "**.py"
- "**Dockerfile*"
- "**docker_image_build/build.yaml"
- "**/ui/**"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-on-push
@@ -23,6 +24,7 @@ jobs:
image-build:
needs: job1
if: ${{ needs.job1.outputs.run_matrix != '{"include":[]}' }}
strategy:
matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
fail-fast: false

View File

@@ -40,7 +40,7 @@ jobs:
- name: Create Issue
uses: daisy-ycguo/create-issue-action@stable
with:
token: ${{ secrets.Infra_Issue_Token }}
token: ${{ secrets.ACTION_TOKEN }}
owner: opea-project
repo: GenAIInfra
title: |
@@ -54,6 +54,6 @@ jobs:
${{ env.changed_files }}
Please verify if the helm charts and manifests need to be changed accordingly.
Please verify if the helm charts need to be changed accordingly.
> This issue was created automatically by CI.


@@ -0,0 +1,46 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
from ruamel.yaml import YAML
def parse_yaml_file(file_path):
yaml = YAML()
with open(file_path, "r") as file:
data = yaml.load(file)
return data
def check_service_image_consistency(data):
inconsistencies = []
for service_name, service_details in data.get("services", {}).items():
image_name = service_details.get("image", "")
# Extract the image name part after the last '/'
image_name_part = image_name.split("/")[-1].split(":")[0]
# Check if the service name is a substring of the image name part
if service_name not in image_name_part:
# Get the line number of the service name
line_number = service_details.lc.line + 1
inconsistencies.append((service_name, image_name, line_number))
return inconsistencies
def main():
parser = argparse.ArgumentParser(description="Check service name and image name consistency in a YAML file.")
parser.add_argument("file_path", type=str, help="The path to the YAML file.")
args = parser.parse_args()
data = parse_yaml_file(args.file_path)
inconsistencies = check_service_image_consistency(data)
if inconsistencies:
for service_name, image_name, line_number in inconsistencies:
print(f"Service name: {service_name}, Image name: {image_name}, Line number: {line_number}")
else:
print("All consistent")
if __name__ == "__main__":
main()
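A possible invocation of the script, assuming a `build.yaml` at an illustrative path:
```bash
# Check one build file; prints "All consistent" when every service name is a
# substring of its image name, otherwise one line per mismatch with the
# service name, image name, and line number.
python3 .github/workflows/scripts/check-name-agreement.py ChatQnA/docker_image_build/build.yaml
```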


@@ -0,0 +1,79 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os.path
import subprocess
import sys
import yaml
images = {}
dockerfiles = {}
errors = []
def check_docker_compose_build_definition(file_path):
with open(file_path, "r") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
for service in data["services"]:
if "build" in data["services"][service] and "image" in data["services"][service]:
bash_command = "echo " + data["services"][service]["image"]
image = (
subprocess.run(["bash", "-c", bash_command], check=True, capture_output=True)
.stdout.decode("utf-8")
.strip()
)
build = data["services"][service]["build"]
context = build.get("context", "")
dockerfile = os.path.normpath(
os.path.join(os.path.dirname(file_path), context, build.get("dockerfile", ""))
)
if not os.path.isfile(dockerfile):
# dockerfile not exists in the current repo context, assume it's in 3rd party context
dockerfile = os.path.normpath(os.path.join(context, build.get("dockerfile", "")))
item = {"file_path": file_path, "service": service, "dockerfile": dockerfile, "image": image}
if image in images and dockerfile != images[image]["dockerfile"]:
errors.append(
f"ERROR: !!! Found Conflicts !!!\n"
f"Image: {image}, Dockerfile: {dockerfile}, defined in Service: {service}, File: {file_path}\n"
f"Image: {image}, Dockerfile: {images[image]['dockerfile']}, defined in Service: {images[image]['service']}, File: {images[image]['file_path']}"
)
else:
# print(f"Add Image: {image} Dockerfile: {dockerfile}")
images[image] = item
if dockerfile in dockerfiles and image != dockerfiles[dockerfile]["image"]:
errors.append(
f"WARNING: Different images using the same Dockerfile\n"
f"Dockerfile: {dockerfile}, Image: {image}, defined in Service: {service}, File: {file_path}\n"
f"Dockerfile: {dockerfile}, Image: {dockerfiles[dockerfile]['image']}, defined in Service: {dockerfiles[dockerfile]['service']}, File: {dockerfiles[dockerfile]['file_path']}"
)
else:
dockerfiles[dockerfile] = item
def parse_arg():
parser = argparse.ArgumentParser(
description="Check for conflicts in image build definition in docker-compose.yml files"
)
parser.add_argument("files", nargs="+", help="list of files to be checked")
return parser.parse_args()
def main():
args = parse_arg()
for file_path in args.files:
check_docker_compose_build_definition(file_path)
print("SUCCESS: No Conlicts Found.")
if errors:
for error in errors:
print(error)
sys.exit(1)
else:
print("SUCCESS: No Conflicts Found.")
return 0
if __name__ == "__main__":
main()


@@ -7,7 +7,7 @@ source /GenAIExamples/.github/workflows/scripts/change_color
log_dir=/GenAIExamples/.github/workflows/scripts/codeScan
ERROR_WARN=false
find . -type f \( -name "Dockerfile*" \) -print -exec hadolint --ignore DL3006 --ignore DL3007 --ignore DL3008 --ignore DL3013 {} \; > ${log_dir}/hadolint.log
find . -type f \( -name "Dockerfile*" \) -print -exec hadolint --ignore DL3006 --ignore DL3007 --ignore DL3008 --ignore DL3013 --ignore DL3018 --ignore DL3016 {} \; > ${log_dir}/hadolint.log
if [[ $(grep -c "error" ${log_dir}/hadolint.log) != 0 ]]; then
$BOLD_RED && echo "Error!! Please Click on the artifact button to download and check error details." && $RESET


@@ -0,0 +1,55 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# The test machine is used by several OPEA projects, so the test scripts can't use `docker compose down` to clean up
# all the containers, ports, and networks directly.
# This script is used instead to minimize the impact of the cleanup.
test_case=${test_case:-"test_compose_on_gaudi.sh"}
hardware=${hardware:-"gaudi"}
flag=${test_case%_on_*}
flag=${flag#test_}
yaml_file=$(find . -type f -wholename "*${hardware}/${flag}.yaml")
echo $yaml_file
case "$1" in
containers)
echo "Stop and remove all containers used by the services in $yaml_file ..."
containers=$(cat $yaml_file | grep container_name | cut -d':' -f2)
for container_name in $containers; do
cid=$(docker ps -aq --filter "name=$container_name")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
done
;;
ports)
echo "Release all ports used by the services in $yaml_file ..."
pip install jq yq
ports=$(yq '.services[].ports[] | split(":")[0]' $yaml_file | grep -o '[0-9a-zA-Z_-]\+')
echo "All ports list..."
echo "$ports"
for port in $ports; do
if [[ $port =~ [a-zA-Z_-] ]]; then
echo "Search port value $port from the test case..."
port_fix=$(grep -E "export $port=" tests/$test_case | cut -d'=' -f2)
if [[ "$port_fix" == "" ]]; then
echo "Can't find the port value from the test case, use the default value in yaml..."
port_fix=$(yq '.services[].ports[]' $yaml_file | grep $port | cut -d':' -f2 | grep -o '[0-9a-zA-Z]\+')
fi
port=$port_fix
fi
if [[ $port =~ [0-9] ]]; then
if [[ $port == 5000 ]]; then
echo "Error: Port 5000 is used by local docker registry, please DO NOT use it in docker compose deployment!!!"
exit 1
fi
echo "Check port $port..."
cid=$(docker ps --filter "publish=${port}" --format "{{.ID}}")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && echo "release $port"; fi
fi
done
;;
*)
echo "Unknown function: $1"
;;
esac
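A typical manual invocation, mirroring the environment the CI step exports (values and paths are illustrative):
```bash
#!/bin/bash
# Run from an example directory so the compose yaml lookup succeeds.
cd GenAIExamples/ChatQnA
export test_case="test_compose_on_gaudi.sh"
export hardware="gaudi"
bash ../.github/workflows/scripts/docker_compose_clean_up.sh "containers"
bash ../.github/workflows/scripts/docker_compose_clean_up.sh "ports"
```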


@@ -12,21 +12,25 @@ run_matrix="{\"include\":["
examples=$(printf '%s\n' "${changed_files[@]}" | grep '/' | cut -d'/' -f1 | sort -u)
for example in ${examples}; do
if [[ ! -d $WORKSPACE/$example ]]; then continue; fi
cd $WORKSPACE/$example
if [[ ! $(find . -type f | grep ${test_mode}) ]]; then continue; fi
cd tests
ls -l
if [[ "$test_mode" == "docker_image_build" ]]; then
find_name="test_manifest_on_*.sh"
hardware_list="gaudi xeon"
else
find_name="test_${test_mode}*_on_*.sh"
hardware_list=$(find . -type f -name "${find_name}" | cut -d/ -f2 | cut -d. -f1 | awk -F'_on_' '{print $2}'| sort -u)
fi
hardware_list=$(find . -type f -name "${find_name}" | cut -d/ -f2 | cut -d. -f1 | awk -F'_on_' '{print $2}'| sort -u)
echo -e "Test supported hardware list: \n${hardware_list}"
run_hardware=""
if [[ $(printf '%s\n' "${changed_files[@]}" | grep ${example} | cut -d'/' -f2 | grep -E '*.py|Dockerfile*|ui|docker_image_build' ) ]]; then
# run test on all hardware if megaservice or ui code change
if [[ $(printf '%s\n' "${changed_files[@]}" | grep ${example} | cut -d'/' -f2 | grep -E '\.py|Dockerfile*|ui|docker_image_build' ) ]]; then
echo "run test on all hardware if megaservice or ui code change..."
run_hardware=$hardware_list
elif [[ $(printf '%s\n' "${changed_files[@]}" | grep ${example} | grep 'tests'| cut -d'/' -f3 | grep -vE '^test_|^_test' ) ]]; then
echo "run test on all hardware if common test scripts change..."
run_hardware=$hardware_list
else
for hardware in ${hardware_list}; do


@@ -2,7 +2,7 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#set -xe
set -e
function dump_pod_log() {
pod_name=$1
@@ -12,7 +12,7 @@ function dump_pod_log() {
kubectl describe pod $pod_name -n $namespace
echo "-----------------------------------"
echo "#kubectl logs $pod_name -n $namespace"
kubectl logs $pod_name -n $namespace
kubectl logs $pod_name -n $namespace --all-containers --prefix=true
echo "-----------------------------------"
}
@@ -44,8 +44,13 @@ function dump_pods_status() {
function dump_all_pod_logs() {
namespace=$1
echo "------SUMMARY of POD STATUS in NS $namespace------"
kubectl get pods -n $namespace -o wide
echo "------SUMMARY of SVC STATUS in NS $namespace------"
kubectl get services -n $namespace -o wide
echo "------SUMMARY of endpoint STATUS in NS $namespace------"
kubectl get endpoints -n $namespace -o wide
echo "-----DUMP POD STATUS AND LOG in NS $namespace------"
pods=$(kubectl get pods -n $namespace -o jsonpath='{.items[*].metadata.name}')
for pod_name in $pods
do


@@ -0,0 +1,55 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Weekly test all examples on multiple HWs
on:
schedule:
- cron: "30 2 * * 6" # UTC time
workflow_dispatch:
env:
EXAMPLES: ${{ vars.NIGHTLY_RELEASE_EXAMPLES }}
NODES: "gaudi,xeon,rocm,arc"
jobs:
get-test-matrix:
runs-on: ubuntu-latest
outputs:
examples: ${{ steps.get-matrix.outputs.examples }}
nodes: ${{ steps.get-matrix.outputs.nodes }}
steps:
- name: Create Matrix
id: get-matrix
run: |
examples=($(echo ${EXAMPLES} | tr ',' ' '))
examples_json=$(printf '%s\n' "${examples[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "examples=$examples_json" >> $GITHUB_OUTPUT
nodes=($(echo ${NODES} | tr ',' ' '))
nodes_json=$(printf '%s\n' "${nodes[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "nodes=$nodes_json" >> $GITHUB_OUTPUT
build-comps-base:
needs: [get-test-matrix]
strategy:
matrix:
node: ${{ fromJson(needs.get-test-matrix.outputs.nodes) }}
uses: ./.github/workflows/_build_comps_base_image.yml
with:
node: ${{ matrix.node }}
run-examples:
needs: [get-test-matrix, build-comps-base]
strategy:
matrix:
example: ${{ fromJson(needs.get-test-matrix.outputs.examples) }}
node: ${{ fromJson(needs.get-test-matrix.outputs.nodes) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: ${{ matrix.node }}
example: ${{ matrix.example }}
build: true
test_compose: true
test_helmchart: true
secrets: inherit


@@ -1,11 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Weekly update base images and 3rd party images
name: Weekly update 3rd party images
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
permissions:
@@ -16,8 +14,8 @@ jobs:
freeze-images:
runs-on: ubuntu-latest
env:
USER_NAME: "NeuralChatBot"
USER_EMAIL: "grp_neural_chat_bot@intel.com"
USER_NAME: "CICD-at-OPEA"
USER_EMAIL: "CICD@opea.dev"
BRANCH_NAME: "update_images_tag"
steps:
- name: Checkout repository


@@ -7,7 +7,7 @@ ci:
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
rev: v5.0.0
hooks:
- id: end-of-file-fixer
files: (.*\.(py|md|rst|yaml|yml|json|ts|js|html|svelte|sh))$
@@ -74,7 +74,7 @@ repos:
name: Unused noqa
- repo: https://github.com/pycqa/isort
rev: 5.13.2
rev: 6.0.1
hooks:
- id: isort
@@ -100,21 +100,21 @@ repos:
- prettier@3.2.5
- repo: https://github.com/psf/black.git
rev: 24.4.2
rev: 25.1.0
hooks:
- id: black
files: (.*\.py)$
- repo: https://github.com/asottile/blacken-docs
rev: 1.18.0
rev: 1.19.1
hooks:
- id: blacken-docs
args: [--line-length=120, --skip-errors]
additional_dependencies:
- black==24.4.2
- black==24.10.0
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
rev: v2.4.1
hooks:
- id: codespell
args: [-w]
@@ -122,7 +122,7 @@ repos:
- tomli
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.5.0
rev: v0.11.4
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix, --no-cache]


@@ -1,9 +1,18 @@
# Agents for Question Answering
## Table of contents
1. [Overview](#overview)
2. [Deploy with Docker](#deploy-with-docker)
3. [How to interact with the agent system with UI](#how-to-interact-with-the-agent-system-with-ui)
4. [Validate Services](#validate-services)
5. [Register Tools](#how-to-register-other-tools-with-the-ai-agent)
6. [Monitoring and Tracing](#monitor-and-tracing)
## Overview
This example showcases a hierarchical multi-agent system for question-answering applications. The architecture diagram is shown below. The supervisor agent interfaces with the user and dispatch tasks to the worker agent and other tools to gather information and come up with answers. The worker agent uses the retrieval tool to generate answers to the queries posted by the supervisor agent. Other tools used by the supervisor agent may include APIs to interface knowledge graphs, SQL databases, external knowledge bases, etc.
![Architecture Overview](assets/agent_qna_arch.png)
This example showcases a hierarchical multi-agent system for question-answering applications. The architecture diagram below shows a supervisor agent that interfaces with the user and dispatches tasks to two worker agents to gather information and come up with answers. The worker RAG agent uses the retrieval tool to retrieve relevant documents from a knowledge base - a vector database. The worker SQL agent retrieves relevant data from a SQL database. Although not included in this example by default, other tools such as a web search tool or a knowledge graph query tool can be used by the supervisor agent to gather information from additional sources.
![Architecture Overview](assets/img/agent_qna_arch.png)
The AgentQnA example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The flow chart below shows the information flow between different microservices for this example.
@@ -38,6 +47,7 @@ flowchart LR
end
AG_REACT([Agent MicroService - react]):::blue
AG_RAG([Agent MicroService - rag]):::blue
AG_SQL([Agent MicroService - sql]):::blue
LLM_gen{{LLM Service <br>}}
DP([Data Preparation MicroService]):::blue
TEI_RER{{Reranking service<br>}}
@@ -51,6 +61,7 @@ flowchart LR
direction LR
a[User Input Query] --> AG_REACT
AG_REACT --> AG_RAG
AG_REACT --> AG_SQL
AG_RAG --> DocIndexRetriever-MegaService
EM ==> RET
RET ==> RER
@@ -59,6 +70,7 @@ flowchart LR
%% Embedding service flow
direction LR
AG_RAG <-.-> LLM_gen
AG_SQL <-.-> LLM_gen
AG_REACT <-.-> LLM_gen
EM <-.-> TEI_EM
RET <-.-> R_RET
@@ -72,152 +84,219 @@ flowchart LR
```
### Why Agent for question answering?
### Why should AI Agents be used for question-answering?
1. Improve relevancy of retrieved context.
Agent can rephrase user queries, decompose user queries, and iterate to get the most relevant context for answering user's questions. Compared to conventional RAG, RAG agent can significantly improve the correctness and relevancy of the answer.
2. Use tools to get additional knowledge.
For example, knowledge graphs and SQL databases can be exposed as APIs for Agents to gather knowledge that may be missing in the retrieval vector database.
3. Hierarchical agent can further improve performance.
Expert worker agents, such as retrieval agent, knowledge graph agent, SQL agent, etc., can provide high-quality output for different aspects of a complex query, and the supervisor agent can aggregate the information together to provide a comprehensive answer.
1. **Improve relevancy of retrieved context.**
RAG agents can rephrase user queries, decompose user queries, and iterate to get the most relevant context for answering a user's question. Compared to conventional RAG, RAG agents significantly improve the correctness and relevancy of the answer because of the iterations it goes through.
2. **Expand scope of skills.**
The supervisor agent interacts with multiple worker agents that specialize in different skills (e.g., retrieve documents, write SQL queries, etc.). Thus, it can answer questions with different methods.
3. **Hierarchical multi-agents improve performance.**
Expert worker agents, such as RAG agents and SQL agents, can provide high-quality output for different aspects of a complex query, and the supervisor agent can aggregate the information to provide a comprehensive answer. If only one agent is used and all tools are provided to this single agent, it can lead to large overhead or not use the best tool to provide accurate answers.
## Deployment with docker
## Deploy with docker
1. Build agent docker image [Optional]
### 1. Set up environment </br>
> [!NOTE]
> This step is optional. The docker images will be pulled automatically when running the docker compose commands; building is only needed if pulling the images fails.
#### First, clone the `GenAIExamples` repo.
First, clone the opea GenAIComps repo.
```
```bash
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIComps.git
git clone https://github.com/opea-project/GenAIExamples.git
```
Then build the agent docker image. Both the supervisor agent and the worker agent will use the same docker image, but when we launch the two agents we will specify different strategies and register different tools.
#### Second, set up environment variables.
```
cd GenAIComps
docker build -t opea/agent-langchain:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/langchain/Dockerfile .
##### For proxy environments only
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
```
2. Set up environment for this example </br>
##### For using open-source llms
First, clone this repo.
Set up a [HuggingFace](https://huggingface.co/) account and generate a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
Second, set up env vars.
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# for using open-source llms
export HUGGINGFACEHUB_API_TOKEN=<your-HF-token>
export HF_CACHE_DIR=<directory-where-llms-are-downloaded> #so that no need to redownload every time
# optional: OPENAI_API_KEY if you want to use OpenAI models
export OPENAI_API_KEY=<your-openai-key>
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. Here we provide an example. You can ingest your own data.
```
bash run_ingest_data.sh
```
4. Launch other tools. </br>
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
5. Launch agent services</br>
We provide two options for `llm_engine` of the agents: 1. open-source LLMs, 2. OpenAI models via API calls.
Deploy it on Gaudi or Xeon respectively
::::{tab-set}
:::{tab-item} Gaudi
:sync: Gaudi
To use open-source LLMs on Gaudi2, run commands below.
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_tgi_gaudi.sh
bash launch_agent_service_tgi_gaudi.sh
```
:::
:::{tab-item} Xeon
:sync: Xeon
To use OpenAI models, run commands below.
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```
:::
::::
## Validate services
First look at logs of the agent docker containers:
Then set an environment variable with the token and another for a directory to download the models:
```bash
export HUGGINGFACEHUB_API_TOKEN=<your-HF-token>
export HF_CACHE_DIR=<directory-where-llms-are-downloaded> # to avoid redownloading models
```
# worker agent
##### [Optional] OPENAI_API_KEY to use OpenAI models or Intel® AI for Enterprise Inference
To use OpenAI models, generate a key following these [instructions](https://platform.openai.com/api-keys).
To use a remote server running Intel® AI for Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.
Then set the environment variable `OPENAI_API_KEY` with the key contents:
```bash
export OPENAI_API_KEY=<your-openai-key>
```
#### Third, set up environment variables for the selected hardware using the corresponding `set_env.sh`
##### Gaudi
```bash
source $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/set_env.sh
```
##### Xeon
```bash
source $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon/set_env.sh
```
### 2. Launch the multi-agent system. </br>
We make it convenient to launch the whole system with docker compose, which includes microservices for LLM, agents, UI, retrieval tool, vector database, dataprep, and telemetry. There are 3 docker compose files, which make it easy for users to pick and choose: users can choose a retrieval tool other than the `DocIndexRetriever` example provided in our GenAIExamples repo, and can choose not to launch the telemetry containers.
#### Launch on Gaudi
On Gaudi, `meta-llama/Meta-Llama-3.3-70B-Instruct` will be served using vLLM. The command below will launch the multi-agent system with the `DocIndexRetriever` as the retrieval tool for the Worker RAG agent.
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose.yaml up -d
```
> **Note**: To enable the web search tool, skip this step and proceed to the "[Optional] Web Search Tool Support" section.
To enable Open Telemetry Tracing, the `compose.telemetry.yaml` file needs to be merged along with the default `compose.yaml` file.
Gaudi example with the Open Telemetry feature:
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose.yaml -f compose.telemetry.yaml up -d
```
##### [Optional] Web Search Tool Support
<details>
<summary> Instructions </summary>
A web search tool is supported in this example and can be enabled by running docker compose with the `compose.webtool.yaml` file.
The Google Search API is used. Follow the [instructions](https://python.langchain.com/docs/integrations/tools/google_search) to create an API key and enable the Custom Search API on a Google account. The environment variables `GOOGLE_CSE_ID` and `GOOGLE_API_KEY` need to be set.
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/
export GOOGLE_CSE_ID="YOUR_ID"
export GOOGLE_API_KEY="YOUR_API_KEY"
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose.yaml -f compose.webtool.yaml up -d
```
</details>
#### Launch on Xeon
On Xeon, OpenAI models and models deployed on a remote server are supported. Both methods require an API key.
```bash
export OPENAI_API_KEY=<your-openai-key>
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
```
##### OpenAI Models
The command below will launch the multi-agent system with the `DocIndexRetriever` as the retrieval tool for the Worker RAG agent.
```bash
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_openai.yaml up -d
```
##### Models on Remote Server
When models are deployed on a remote server with Intel® AI for Enterprise Inference, a base URL and an API key are required to access them. To run the Agent microservice on Xeon while using models deployed on a remote server, add `compose_remote.yaml` to the `docker compose` command and set additional environment variables.
###### Notes
- `OPENAI_API_KEY` is already set in a previous step.
- `model` overrides the value set for this environment variable in `set_env.sh`.
- `LLM_ENDPOINT_URL` is the base URL given from the owner of the on-prem machine or cloud service provider. It will follow this format: "https://<DNS>". Here is an example: "https://api.inference.example.com".
```bash
export model=<name-of-model-card>
export LLM_ENDPOINT_URL=<http-endpoint-of-remote-server>
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_openai.yaml -f compose_remote.yaml up -d
```
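Before launching, connectivity to the remote endpoint can be sanity-checked by listing the available models (a sketch; assumes the remote server exposes the standard OpenAI-compatible `/v1/models` route):
```bash
# List models served by the remote endpoint; the model card name should appear
curl ${LLM_ENDPOINT_URL}/v1/models -H "Authorization: Bearer ${OPENAI_API_KEY}"
```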
### 3. Ingest Data into the vector database
The `run_ingest_data.sh` script ingests the documents in an example jsonl file into the vector database. Other ways to ingest data, and the other supported document types, are described in the OPEA dataprep microservice in the opea-project/GenAIComps repo.
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool/
bash run_ingest_data.sh
```
> **Note**: This is a one-time operation.
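For reference, a single local file can also be ingested directly through the dataprep microservice (a sketch; assumes the default dataprep ingest endpoint on port 6007 and a hypothetical local file `your_doc.pdf`):
```bash
# Upload a local document to the dataprep ingest endpoint
curl -X POST "http://${host_ip}:6007/v1/dataprep/ingest" \
  -H "Content-Type: multipart/form-data" \
  -F "files=@./your_doc.pdf"
```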
## How to interact with the agent system via the UI
The UI microservice is launched in the previous step with the other microservices.
Open a web browser to `http://${ip_address}:5173` to access the UI. Note that `ip_address` here is the host IP of the UI microservice.
1. Click on the arrow above `Get started`. Create an admin account with a name, email, and password.
2. Add an OpenAI-compatible API endpoint. In the upper right, click on the circle button with the user's initial, go to `Admin Settings`->`Connections`. Under `Manage OpenAI API Connections`, click on the `+` to add a connection. Fill in these fields:
- **URL**: `http://${ip_address}:9090/v1`, do not forget the `v1`
- **Key**: any value
- **Model IDs**: any name, e.g. `opea-agent`, then press `+` to add it
Click "Save".
![opea-agent-setting](assets/img/opea-agent-setting.png)
3. Test the OPEA agent with the UI. Return to `New Chat` and ensure the model (e.g. `opea-agent`) is selected near the upper left. Enter any prompt to interact with the agent.
![opea-agent-test](assets/img/opea-agent-test.png)
## [Optional] Deploy using Helm Charts
Refer to the [AgentQnA helm chart](./kubernetes/helm/README.md) for instructions on deploying AgentQnA on Kubernetes.
## Validate Services
1. First look at logs for each of the agent docker containers:
```bash
# worker RAG agent
docker logs rag-agent-endpoint
# worker SQL agent
docker logs sql-agent-endpoint
# supervisor agent
docker logs react-agent-endpoint
```
Look for the message "HTTP server setup successful" to confirm each agent docker container has started successfully.
2. Use python to validate that each agent is working properly:
```bash
# RAG worker agent
python $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "Tell me about Michael Jackson song Thriller" --agent_role "worker" --ext_port 9095
# SQL agent
python $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "How many employees are there in the company?" --agent_role "worker" --ext_port 9096
# supervisor agent: this will test a two-turn conversation
python $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ext_port 9090
```
Alternatively, the worker RAG agent and the supervisor agent can be validated with curl:
```bash
# worker RAG agent
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
    "query": "Most recent album by Taylor Swift"
}'
# supervisor agent
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
    "query": "Most recent album by Taylor Swift"
}'
```
## How to register other tools with the AI agent
The [tools](./tools) folder contains YAML and Python files for additional tools for the supervisor and worker agents. Refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/src/README.md) to add tools and customize the AI agents.
## Monitor and Tracing
Follow the [OpenTelemetry OPEA Guide](https://opea-project.github.io/latest/tutorial/OpenTelemetry/OpenTelemetry_OPEA_Guide.html) to understand how to use OpenTelemetry tracing and metrics in OPEA.
For AgentQnA-specific tracing and metrics monitoring, follow the [OpenTelemetry on AgentQnA](https://opea-project.github.io/latest/tutorial/OpenTelemetry/deploy/AgentQnA.html) section.

(6 image files changed; binary diffs not shown)

View File

@@ -0,0 +1,342 @@
# Build Mega Service of AgentQnA on AMD ROCm GPU
## Build Docker Images
### 1. Build Docker Image
- #### Create application install directory and go to it:
```bash
mkdir ~/agentqna-install && cd ~/agentqna-install
```
- #### Clone the repository GenAIExamples (the default repository branch "main" is used here):
```bash
git clone https://github.com/opea-project/GenAIExamples.git
```
If you need to use a specific branch/tag of the GenAIExamples repository, check it out after cloning (replace v1.3 with the desired version):
```bash
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
```
Keep in mind that when using a specific version of the code, you should follow the README from that version.
- #### Go to build directory:
```bash
cd ~/agentqna-install/GenAIExamples/AgentQnA/docker_image_build
```
- #### Clean up the GenAIComps repository if it was previously cloned in this directory
This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:
```bash
rm -rf GenAIComps
```
- #### Clone the repository GenAIComps (the default repository branch "main" is used here):
```bash
git clone https://github.com/opea-project/GenAIComps.git
```
Keep in mind that when using a specific version of the code, you should follow the README from that version.
- #### Set the list of images for the build (from the `build.yaml` file)
If you want to deploy a vLLM-based or a TGI-based application, set the list of services as follows:
#### vLLM-based application
```bash
service_list="vllm-rocm agent agent-ui"
```
#### TGI-based application
```bash
service_list="agent agent-ui"
```
- #### Optional. Pull TGI Docker Image (Do this if you want to use TGI)
```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
```
- #### Build Docker Images
```bash
docker compose -f build.yaml build ${service_list} --no-cache
```
- #### Build DocIndexRetriever Docker Images
```bash
cd ~/agentqna-install/GenAIExamples/DocIndexRetriever/docker_image_build/
git clone https://github.com/opea-project/GenAIComps.git
service_list="doc-index-retriever dataprep embedding retriever reranking"
docker compose -f build.yaml build ${service_list} --no-cache
```
- #### Pull DocIndexRetriever Docker Images
```bash
docker pull redis/redis-stack:7.2.0-v9
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
```
After the build, check the list of images with the command:
```bash
docker image ls
```
The list of images should include:
##### vLLM-based application:
- opea/vllm-rocm:latest
- opea/agent:latest
- redis/redis-stack:7.2.0-v9
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/embedding:latest
- opea/retriever:latest
- opea/reranking:latest
- opea/doc-index-retriever:latest
##### TGI-based application:
- ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
- opea/agent:latest
- redis/redis-stack:7.2.0-v9
- ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- opea/embedding:latest
- opea/retriever:latest
- opea/reranking:latest
- opea/doc-index-retriever:latest
---
## Deploy the AgentQnA Application
### Docker Compose Configuration for AMD GPUs
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
- compose_vllm.yaml - for vLLM-based application
- compose.yaml - for TGI-based
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderDN` device IDs. For example:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
**How to Identify GPU Device IDs:**
Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs for your GPU.
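For example, the device nodes and GPU hardware info can be listed to map GPUs to IDs (a sketch; `rocm-smi` ships with the ROCm stack, and device numbering varies by system):
```bash
# List DRI device nodes (cardN / renderDN)
ls -l /dev/dri
# Show hardware details to help map GPUs to device IDs
rocm-smi --showhw
```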
### Set deploy environment variables
#### Setting variables in the operating system environment:
```bash
### Replace the string 'server_address' with your local server IP address
export host_ip='server_address'
### Replace the string 'your_huggingfacehub_token' with your HuggingFacehub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
### Replace the string 'your_langchain_api_key' with your LANGCHAIN API KEY.
export LANGCHAIN_API_KEY='your_langchain_api_key'
export LANGCHAIN_TRACING_V2=""
```
### Start the services:
#### If you use vLLM
```bash
cd ~/agentqna-install/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_vllm_rocm.sh
```
#### If you use TGI
```bash
cd ~/agentqna-install/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_tgi_rocm.sh
```
All containers should be running and should not restart (a quick status check is shown after the lists below):
##### If you use vLLM:
- dataprep-redis-server
- doc-index-retriever-server
- embedding-server
- rag-agent-endpoint
- react-agent-endpoint
- redis-vector-db
- reranking-tei-xeon-server
- retriever-redis-server
- sql-agent-endpoint
- tei-embedding-server
- tei-reranking-server
- vllm-service
##### If you use TGI:
- dataprep-redis-server
- doc-index-retriever-server
- embedding-server
- rag-agent-endpoint
- react-agent-endpoint
- redis-vector-db
- reranking-tei-xeon-server
- retriever-redis-server
- sql-agent-endpoint
- tei-embedding-server
- tei-reranking-server
- tgi-service
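A quick way to verify this (a sketch; the `STATUS` column should show `Up`, not `Restarting`):
```bash
docker ps --format "table {{.Names}}\t{{.Status}}"
```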
---
## Validate the Services
### 1. Validate the vLLM/TGI Service
#### If you use vLLM:
```bash
DATA='{"model": "Intel/neural-chat-7b-v3-3t", '\
'"messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 256}'
curl http://${HOST_IP}:${VLLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{
"id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
"object": "chat.completion",
"created": 1742270316,
"model": "Intel/neural-chat-7b-v3-3",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "length",
"stop_reason": null
}
],
"usage": { "prompt_tokens": 66, "total_tokens": 322, "completion_tokens": 256, "prompt_tokens_details": null },
"prompt_logprobs": null
}
```
If the service response contains meaningful text in the value of the "choices.message.content" key, then the vLLM service is considered successfully launched.
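For a quick scripted check, the generated text can be extracted from the JSON response (a sketch; assumes the `jq` utility is installed):
```bash
# Print only the generated text from the vLLM chat completion response
curl -s http://${HOST_IP}:${VLLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d "$DATA" \
  -H 'Content-Type: application/json' | jq -r '.choices[0].message.content'
```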
#### If you use TGI:
```bash
DATA='{"inputs":"What is Deep Learning?",'\
'"parameters":{"max_new_tokens":256,"do_sample": true}}'
curl http://${HOST_IP}:${TGI_SERVICE_PORT}/generate \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
```
Check the response from the service. It should be similar to the following JSON:
```json
{
"generated_text": " "
}
```
If the service response contains meaningful text in the value of the "generated_text" key, then the TGI service is considered successfully launched.
### 2. Validate Agent Services
#### Validate Rag Agent Service
```bash
export agent_port=${WORKER_RAG_AGENT_PORT}
prompt="Tell me about Michael Jackson song Thriller"
python3 ~/agentqna-install/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port
```
The response must contain meaningful text answering the request in the "prompt" variable.
#### Validate Sql Agent Service
```bash
export agent_port=${WORKER_SQL_AGENT_PORT}
prompt="How many employees are there in the company?"
python3 ~/agentqna-install/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port
```
The answer should be meaningful, e.g. "There are 8 employees in the company."
#### Validate React (Supervisor) Agent Service
```bash
export agent_port=${SUPERVISOR_REACT_AGENT_PORT}
python3 ~/agentqna-install/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ext_port $agent_port --stream
```
The response should contain "Iron Maiden".
### 3. Stop application
#### If you use vLLM
```bash
cd ~/agentqna-install/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash stop_agent_service_vllm_rocm.sh
```
#### If you use TGI
```bash
cd ~/agentqna-install/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash stop_agent_service_tgi_rocm.sh
```

View File

@@ -0,0 +1,124 @@
# Copyright (C) 2025 Advanced Micro Devices, Inc.
services:
tgi-service:
image: ghcr.io/huggingface/text-generation-inference:3.0.0-rocm
container_name: tgi-service
ports:
- "${TGI_SERVICE_PORT-8085}:80"
volumes:
- "${MODEL_CACHE:-./data}:/data"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
TGI_LLM_ENDPOINT: "http://${ip_address}:${TGI_SERVICE_PORT}"
HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
shm_size: 32g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
ipc: host
command: --model-id ${LLM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
worker-rag-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: rag-agent-endpoint
volumes:
- "${TOOLSET_PATH}:/home/user/tools/"
ports:
- "${WORKER_RAG_AGENT_PORT:-9095}:9095"
ipc: host
environment:
ip_address: ${ip_address}
strategy: rag_agent_llama
with_memory: false
recursion_limit: ${recursion_limit_worker}
llm_engine: tgi
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: sql-agent-endpoint
volumes:
- "${WORKDIR}/tests/Chinook_Sqlite.sqlite:/home/user/chinook-db/Chinook_Sqlite.sqlite:rw"
ports:
- "${WORKER_SQL_AGENT_PORT:-9096}:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent_llama
with_memory: false
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
recursion_limit: ${recursion_limit_worker}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
volumes:
- "${TOOLSET_PATH}:/home/user/tools/"
ports:
- "${SUPERVISOR_REACT_AGENT_PORT:-9090}:9090"
ipc: host
environment:
ip_address: ${ip_address}
strategy: react_llama
with_memory: true
recursion_limit: ${recursion_limit_supervisor}
llm_engine: tgi
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: true
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: ${CRAG_SERVER}
WORKER_AGENT_URL: ${WORKER_AGENT_URL}
SQL_AGENT_URL: ${SQL_AGENT_URL}
port: 9090

View File

@@ -0,0 +1,128 @@
# Copyright (C) 2025 Advanced Micro Devices, Inc.
services:
vllm-service:
image: ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest}
container_name: vllm-service
ports:
- "${VLLM_SERVICE_PORT:-8081}:8011"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0
VLLM_USE_TRITON_FLASH_ATTENTION: 0
PYTORCH_JIT: 0
volumes:
- "${MODEL_CACHE:-./data}:/data"
shm_size: 20G
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
- apparmor=unconfined
command: "--model ${VLLM_LLM_MODEL_ID} --swap-space 16 --disable-log-requests --dtype float16 --tensor-parallel-size 4 --host 0.0.0.0 --port 8011 --num-scheduler-steps 1 --distributed-executor-backend \"mp\""
ipc: host
worker-rag-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: rag-agent-endpoint
volumes:
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "${WORKER_RAG_AGENT_PORT:-9095}:9095"
ipc: host
environment:
ip_address: ${ip_address}
strategy: rag_agent_llama
with_memory: false
recursion_limit: ${recursion_limit_worker}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: sql-agent-endpoint
volumes:
- "${WORKDIR}/tests/Chinook_Sqlite.sqlite:/home/user/chinook-db/Chinook_Sqlite.sqlite:rw"
ports:
- "${WORKER_SQL_AGENT_PORT:-9096}:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent_llama
with_memory: false
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
recursion_limit: ${recursion_limit_worker}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
volumes:
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "${SUPERVISOR_REACT_AGENT_PORT:-9090}:9090"
ipc: host
environment:
ip_address: ${ip_address}
strategy: react_llama
with_memory: true
recursion_limit: ${recursion_limit_supervisor}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: true
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: ${CRAG_SERVER}
WORKER_AGENT_URL: ${WORKER_AGENT_URL}
SQL_AGENT_URL: ${SQL_AGENT_URL}
port: 9090

View File

@@ -0,0 +1,87 @@
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
# Before start script:
# export host_ip="your_host_ip_or_host_name"
# export HUGGINGFACEHUB_API_TOKEN="your_huggingface_api_token"
# export LANGCHAIN_API_KEY="your_langchain_api_key"
# export LANGCHAIN_TRACING_V2=""
# Set server hostname or IP address
export ip_address=${host_ip}
# Set services IP ports
export TGI_SERVICE_PORT="18110"
export WORKER_RAG_AGENT_PORT="18111"
export WORKER_SQL_AGENT_PORT="18112"
export SUPERVISOR_REACT_AGENT_PORT="18113"
export CRAG_SERVER_PORT="18114"
export WORKPATH=$(dirname "$PWD")
export WORKDIR=${WORKPATH}/../../../
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export HF_CACHE_DIR="./data"
export MODEL_CACHE="./data"
export TOOLSET_PATH=${WORKPATH}/../../../tools/
export recursion_limit_worker=12
export LLM_ENDPOINT_URL=http://${ip_address}:${TGI_SERVICE_PORT}
export temperature=0.01
export max_new_tokens=512
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
export LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2}
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
export recursion_limit_supervisor=10
export CRAG_SERVER=http://${ip_address}:${CRAG_SERVER_PORT}
export WORKER_AGENT_URL="http://${ip_address}:${WORKER_RAG_AGENT_PORT}/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:${WORKER_SQL_AGENT_PORT}/v1/chat/completions"
export HF_CACHE_DIR=${HF_CACHE_DIR}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
echo ${WORKER_RAG_AGENT_PORT} > ${WORKPATH}/WORKER_RAG_AGENT_PORT_tmp
echo ${WORKER_SQL_AGENT_PORT} > ${WORKPATH}/WORKER_SQL_AGENT_PORT_tmp
echo ${SUPERVISOR_REACT_AGENT_PORT} > ${WORKPATH}/SUPERVISOR_REACT_AGENT_PORT_tmp
echo ${CRAG_SERVER_PORT} > ${WORKPATH}/CRAG_SERVER_PORT_tmp
echo "Downloading chinook data..."
rm -rf chinook-database
git clone https://github.com/lerocha/chinook-database.git
rm -f ../../../../../AgentQnA/tests/Chinook_Sqlite.sqlite
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ../../../../../AgentQnA/tests
docker compose -f ../../../../../DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml up -d
docker compose -f compose.yaml up -d
n=0
until [[ "$n" -ge 100 ]]; do
docker logs tgi-service > ${WORKPATH}/tgi_service_start.log
if grep -q Connected ${WORKPATH}/tgi_service_start.log; then
break
fi
sleep 10s
n=$((n+1))
done
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=${CRAG_SERVER_PORT}:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0

View File

@@ -0,0 +1,88 @@
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
# Before start script:
# export host_ip="your_host_ip_or_host_name"
# export HUGGINGFACEHUB_API_TOKEN="your_huggingface_api_token"
# export LANGCHAIN_API_KEY="your_langchain_api_key"
# export LANGCHAIN_TRACING_V2=""
# Set server hostname or IP address
export ip_address=${host_ip}
# Set services IP ports
export VLLM_SERVICE_PORT="18110"
export WORKER_RAG_AGENT_PORT="18111"
export WORKER_SQL_AGENT_PORT="18112"
export SUPERVISOR_REACT_AGENT_PORT="18113"
export CRAG_SERVER_PORT="18114"
export WORKPATH=$(dirname "$PWD")
export WORKDIR=${WORKPATH}/../../../
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export VLLM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export HF_CACHE_DIR="./data"
export MODEL_CACHE="./data"
export TOOLSET_PATH=${WORKPATH}/../../../tools/
export recursion_limit_worker=12
export LLM_ENDPOINT_URL=http://${ip_address}:${VLLM_SERVICE_PORT}
export LLM_MODEL_ID=${VLLM_LLM_MODEL_ID}
export temperature=0.01
export max_new_tokens=512
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
export LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2}
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
export recursion_limit_supervisor=10
export CRAG_SERVER=http://${ip_address}:${CRAG_SERVER_PORT}
export WORKER_AGENT_URL="http://${ip_address}:${WORKER_RAG_AGENT_PORT}/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:${WORKER_SQL_AGENT_PORT}/v1/chat/completions"
export HF_CACHE_DIR=${HF_CACHE_DIR}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
echo ${WORKER_RAG_AGENT_PORT} > ${WORKPATH}/WORKER_RAG_AGENT_PORT_tmp
echo ${WORKER_SQL_AGENT_PORT} > ${WORKPATH}/WORKER_SQL_AGENT_PORT_tmp
echo ${SUPERVISOR_REACT_AGENT_PORT} > ${WORKPATH}/SUPERVISOR_REACT_AGENT_PORT_tmp
echo ${CRAG_SERVER_PORT} > ${WORKPATH}/CRAG_SERVER_PORT_tmp
echo "Downloading chinook data..."
rm -rf chinook-database
git clone https://github.com/lerocha/chinook-database.git
rm -f ../../../../../AgentQnA/tests/Chinook_Sqlite.sqlite
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ../../../../../AgentQnA/tests
docker compose -f ../../../../../DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml up -d
docker compose -f compose_vllm.yaml up -d
n=0
until [[ "$n" -ge 500 ]]; do
docker logs vllm-service >& "${WORKPATH}"/vllm-service_start.log
if grep -q "Application startup complete" "${WORKPATH}"/vllm-service_start.log; then
break
fi
sleep 20s
n=$((n+1))
done
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=${CRAG_SERVER_PORT}:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0

View File

@@ -0,0 +1,62 @@
#!/usr/bin/env bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
WORKPATH=$(dirname "$PWD")/..
export ip_address=${host_ip}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export AGENTQNA_TGI_IMAGE=ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
export AGENTQNA_TGI_SERVICE_PORT="19001"
# LLM related environment variables
export AGENTQNA_CARD_ID="card1"
export AGENTQNA_RENDER_ID="renderD136"
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:${AGENTQNA_TGI_SERVICE_PORT}"
export temperature=0.01
export max_new_tokens=512
# agent related environment variables
export AGENTQNA_WORKER_AGENT_SERVICE_PORT="9095"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:${AGENTQNA_WORKER_AGENT_SERVICE_PORT}/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:18881
export AGENTQNA_FRONTEND_PORT="15557"
#retrieval_tool
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:26379"
export INDEX_NAME="rag-redis"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete"
echo "Removing chinook data..."
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
echo "Stopping CRAG server"
docker rm kdd-cup-24-crag-service --force
echo "Stopping Agent services"
docker compose -f compose.yaml down
echo "Stopping Retrieval services"
docker compose -f ../../../../../DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml down

View File

@@ -0,0 +1,84 @@
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
# Before start script:
# export host_ip="your_host_ip_or_host_name"
# export HUGGINGFACEHUB_API_TOKEN="your_huggingface_api_token"
# export LANGCHAIN_API_KEY="your_langchain_api_key"
# export LANGCHAIN_TRACING_V2=""
# Set server hostname or IP address
export ip_address=${host_ip}
# Set services IP ports
export VLLM_SERVICE_PORT="18110"
export WORKER_RAG_AGENT_PORT="18111"
export WORKER_SQL_AGENT_PORT="18112"
export SUPERVISOR_REACT_AGENT_PORT="18113"
export CRAG_SERVER_PORT="18114"
export WORKPATH=$(dirname "$PWD")
export WORKDIR=${WORKPATH}/../../../
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export VLLM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export HF_CACHE_DIR="./data"
export MODEL_CACHE="./data"
export TOOLSET_PATH=${WORKPATH}/../../../tools/
export recursion_limit_worker=12
export LLM_ENDPOINT_URL=http://${ip_address}:${VLLM_SERVICE_PORT}
export LLM_MODEL_ID=${VLLM_LLM_MODEL_ID}
export temperature=0.01
export max_new_tokens=512
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
export LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2}
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export CRAG_SERVER=http://${ip_address}:${CRAG_SERVER_PORT}
export WORKER_AGENT_URL="http://${ip_address}:${WORKER_RAG_AGENT_PORT}/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:${WORKER_SQL_AGENT_PORT}/v1/chat/completions"
export HF_CACHE_DIR=${HF_CACHE_DIR}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
echo ${WORKER_RAG_AGENT_PORT} > ${WORKPATH}/WORKER_RAG_AGENT_PORT_tmp
echo ${WORKER_SQL_AGENT_PORT} > ${WORKPATH}/WORKER_SQL_AGENT_PORT_tmp
echo ${SUPERVISOR_REACT_AGENT_PORT} > ${WORKPATH}/SUPERVISOR_REACT_AGENT_PORT_tmp
echo ${CRAG_SERVER_PORT} > ${WORKPATH}/CRAG_SERVER_PORT_tmp
echo "Removing chinook data..."
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
echo "Stopping CRAG server"
docker rm kdd-cup-24-crag-service --force
echo "Stopping Agent services"
docker compose -f compose_vllm.yaml down
echo "Stopping Retrieval services"
docker compose -f ../../../../../DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml down

View File

@@ -1,100 +1,3 @@
# Single node on-prem deployment with Docker Compose on Xeon Scalable processors
This example showcases a hierarchical multi-agent system for question-answering applications. We deploy the example on Xeon. For LLMs, we use OpenAI models via API calls. For instructions on using open-source LLMs, please refer to the deployment guide [here](../../../../README.md).
## Deployment with docker
1. First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
2. Set up environment for this example </br>
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# OPENAI_API_KEY is needed if you want to use OpenAI models
export OPENAI_API_KEY=<your-openai-key>
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. Here we provide an example. You can ingest your own data.
```
bash run_ingest_data.sh
```
4. Launch Tool service
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
5. Launch `Agent` service
The configurations of the supervisor agent and the worker agent are defined in the docker compose yaml file. We currently use OpenAI GPT-4o-mini as the LLM here, and Llama-3.1-70B-Instruct (served by TGI-Gaudi) in the Gaudi example. To use the OpenAI LLM, run the command below.
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```
6. [Optional] Build `Agent` docker image if pulling images failed.
```
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent-langchain:latest -f comps/agent/langchain/Dockerfile .
```
## Validate services
First look at logs of the agent docker containers:
```
# worker agent
docker logs rag-agent-endpoint
```
```
# supervisor agent
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers are started successfully.</p>
Second, validate worker agent:
```
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
Third, validate supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md).
This example showcases a hierarchical multi-agent system for question-answering applications. To deploy the example on Xeon, OpenAI LLM models via API calls are used. For instructions, refer to the deployment guide [here](../../../../README.md).

View File

@@ -3,7 +3,7 @@
services:
worker-rag-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: rag-agent-endpoint
volumes:
- ${TOOLSET_PATH}:/home/user/tools/
@@ -13,13 +13,14 @@ services:
environment:
ip_address: ${ip_address}
strategy: rag_agent
with_memory: false
recursion_limit: ${recursion_limit_worker}
llm_engine: openai
OPENAI_API_KEY: ${OPENAI_API_KEY}
model: ${model}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
@@ -31,12 +32,40 @@ services:
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: opea/agent:latest
container_name: sql-agent-endpoint
volumes:
- ${WORKDIR}/GenAIExamples/AgentQnA/tests:/home/user/chinook-db # SQL database
ports:
- "9096:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent
with_memory: false
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
recursion_limit: ${recursion_limit_worker}
llm_engine: openai
OPENAI_API_KEY: ${OPENAI_API_KEY}
model: ${model}
temperature: 0
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
- worker-sql-agent
volumes:
- ${TOOLSET_PATH}:/home/user/tools/
ports:
@@ -44,14 +73,15 @@ services:
ipc: host
environment:
ip_address: ${ip_address}
strategy: react_langgraph
strategy: react_llama
with_memory: true
recursion_limit: ${recursion_limit_supervisor}
llm_engine: openai
OPENAI_API_KEY: ${OPENAI_API_KEY}
model: ${model}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: true
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
@@ -62,4 +92,21 @@ services:
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: $CRAG_SERVER
WORKER_AGENT_URL: $WORKER_AGENT_URL
SQL_AGENT_URL: $SQL_AGENT_URL
port: 9090
mock-api:
image: docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
container_name: mock-api
ports:
- "8080:8000"
ipc: host
agent-ui:
image: opea/agent-ui
container_name: agent-ui
ports:
- "5173:8080"
ipc: host
networks:
default:
driver: bridge

View File

@@ -0,0 +1,18 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
worker-rag-agent:
environment:
llm_endpoint_url: ${LLM_ENDPOINT_URL}
api_key: ${OPENAI_API_KEY}
worker-sql-agent:
environment:
llm_endpoint_url: ${LLM_ENDPOINT_URL}
api_key: ${OPENAI_API_KEY}
supervisor-react-agent:
environment:
llm_endpoint_url: ${LLM_ENDPOINT_URL}
api_key: ${OPENAI_API_KEY}

View File

@@ -1,19 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export ip_address=$(hostname -I | awk '{print $1}')
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export model="gpt-4o-mini-2024-07-18"
export temperature=0
export max_new_tokens=4096
export OPENAI_API_KEY=${OPENAI_API_KEY}
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
docker compose -f compose_openai.yaml up -d

View File

@@ -0,0 +1,57 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
if [[ -z "${WORKDIR}" ]]; then
echo "Please set WORKDIR environment variable"
exit 1
fi
echo "WORKDIR=${WORKDIR}"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export ip_address=$(hostname -I | awk '{print $1}')
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export model="gpt-4o-mini-2024-07-18"
export temperature=0
export max_new_tokens=4096
export OPENAI_API_KEY=${OPENAI_API_KEY}
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:9096/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
if [ ! -f $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite ]; then
echo "Download Chinook_Sqlite!"
wget -O $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite https://github.com/lerocha/chinook-database/releases/download/v1.4.5/Chinook_Sqlite.sqlite
fi
# retriever
export host_ip=$(hostname -I | awk '{print $1}')
export HF_CACHE_DIR=${HF_CACHE_DIR}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
export no_proxy="$no_proxy,rag-agent-endpoint,sql-agent-endpoint,react-agent-endpoint,agent-ui"

View File

@@ -1,105 +1,3 @@
# Single node on-prem deployment AgentQnA on Gaudi
This example showcases a hierarchical multi-agent system for question-answering applications. We deploy the example on Gaudi using open-source LLMs.
For more details, please refer to the deployment guide [here](../../../../README.md).
## Deployment with docker
1. First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
2. Set up environment for this example </br>
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# for using open-source llms
export HUGGINGFACEHUB_API_TOKEN=<your-HF-token>
# Example export HF_CACHE_DIR=$WORKDIR so that no need to redownload every time
export HF_CACHE_DIR=<directory-where-llms-are-downloaded>
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. Here we provide an example. You can ingest your own data.
```
bash run_ingest_data.sh
```
4. Launch Tool service
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
5. Launch `Agent` service
To use open-source LLMs on Gaudi2, run commands below.
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_tgi_gaudi.sh
bash launch_agent_service_tgi_gaudi.sh
```
6. [Optional] Build `Agent` docker image if pulling images failed.
```
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent-langchain:latest -f comps/agent/langchain/Dockerfile .
```
## Validate services
First look at logs of the agent docker containers:
```
# worker agent
docker logs rag-agent-endpoint
```
```
# supervisor agent
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers are started successfully.</p>
Second, validate worker agent:
```
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
Third, validate supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md).
This example showcases a hierarchical multi-agent system for question-answering applications. To deploy the example on Gaudi using open-source LLMs, refer to the deployment guide [here](../../../../README.md).

View File

@@ -0,0 +1,93 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
tei-embedding-service:
command: --model-id ${EMBEDDING_MODEL_ID} --auto-truncate --otlp-endpoint $OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
tei-reranking-service:
command: --model-id ${RERANK_MODEL_ID} --auto-truncate --otlp-endpoint $OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
jaeger:
image: jaegertracing/all-in-one:1.67.0
container_name: jaeger
ports:
- "16686:16686"
- "4317:4317"
- "4318:4318"
- "9411:9411"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
COLLECTOR_ZIPKIN_HOST_PORT: 9411
restart: unless-stopped
prometheus:
image: prom/prometheus:v2.52.0
container_name: prometheus
user: root
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yaml
- ./prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yaml'
ports:
- '9091:9090'
ipc: host
restart: unless-stopped
grafana:
image: grafana/grafana:11.0.0
container_name: grafana
volumes:
- ./grafana_data:/var/lib/grafana
- ./grafana/dashboards:/var/lib/grafana/dashboards
- ./grafana/provisioning:/etc/grafana/provisioning
user: root
environment:
GF_SECURITY_ADMIN_PASSWORD: admin
GF_RENDERING_CALLBACK_URL: http://grafana:3000/
GF_LOG_FILTERS: rendering:debug
depends_on:
- prometheus
ports:
- '3000:3000'
ipc: host
restart: unless-stopped
node-exporter:
image: prom/node-exporter
container_name: node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
restart: always
deploy:
mode: global
gaudi-exporter:
image: vault.habana.ai/gaudi-metric-exporter/metric-exporter:1.19.2-32
container_name: gaudi-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /dev:/dev
ports:
- 41612:41611
restart: always
deploy:
mode: global
worker-rag-agent:
environment:
- TELEMETRY_ENDPOINT=${TELEMETRY_ENDPOINT}
worker-sql-agent:
environment:
- TELEMETRY_ENDPOINT=${TELEMETRY_ENDPOINT}
supervisor-react-agent:
environment:
- TELEMETRY_ENDPOINT=${TELEMETRY_ENDPOINT}

View File

@@ -0,0 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
supervisor-react-agent:
environment:
- tools=/home/user/tools/supervisor_agent_webtools.yaml
- GOOGLE_CSE_ID=${GOOGLE_CSE_ID}
- GOOGLE_API_KEY=${GOOGLE_API_KEY}

View File

@@ -3,10 +3,9 @@
services:
worker-rag-agent:
image: opea/agent-langchain:latest
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: rag-agent-endpoint
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9095:9095"
@@ -14,14 +13,15 @@ services:
environment:
ip_address: ${ip_address}
strategy: rag_agent_llama
with_memory: false
recursion_limit: ${recursion_limit_worker}
llm_engine: tgi
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
@@ -33,14 +33,42 @@ services:
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: sql-agent-endpoint
volumes:
- ${WORKDIR}/GenAIExamples/AgentQnA/tests:/home/user/chinook-db # test db
ports:
- "9096:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent_llama
with_memory: false
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
recursion_limit: ${recursion_limit_worker}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: opea/agent-langchain:latest
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
- worker-sql-agent
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9090:9090"
@@ -48,14 +76,15 @@ services:
environment:
ip_address: ${ip_address}
strategy: react_llama
with_memory: true
recursion_limit: ${recursion_limit_supervisor}
llm_engine: tgi
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: true
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
@@ -66,4 +95,47 @@ services:
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: $CRAG_SERVER
WORKER_AGENT_URL: $WORKER_AGENT_URL
SQL_AGENT_URL: $SQL_AGENT_URL
port: 9090
mock-api:
image: docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
container_name: mock-api
ports:
- "8080:8000"
ipc: host
agent-ui:
image: ${REGISTRY:-opea}/agent-ui:${TAG:-latest}
container_name: agent-ui
environment:
host_ip: ${host_ip}
ports:
- "5173:8080"
ipc: host
vllm-service:
image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
container_name: vllm-gaudi-server
ports:
- "8086:8000"
volumes:
- "${MODEL_CACHE:-./data}:/data"
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HABANA_VISIBLE_DEVICES: all
OMPI_MCA_btl_vader_single_copy_mechanism: none
LLM_MODEL_ID: ${LLM_MODEL_ID}
VLLM_TORCH_PROFILER_DIR: "/mnt"
VLLM_SKIP_WARMUP: true
PT_HPU_ENABLE_LAZY_COLLECTIVES: true
healthcheck:
test: ["CMD-SHELL", "curl -f http://$host_ip:8086/health || exit 1"]
interval: 10s
timeout: 10s
retries: 100
runtime: habana
cap_add:
- SYS_NICE
ipc: host
command: --model $LLM_MODEL_ID --tensor-parallel-size 4 --host 0.0.0.0 --port 8000 --block-size 128 --max-num-seqs 256 --max-seq-len-to-capture 16384

View File

@@ -0,0 +1,10 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
rm *.json
wget https://raw.githubusercontent.com/opea-project/GenAIEval/refs/heads/main/evals/benchmark/grafana/chatqna_megaservice_grafana.json
mv chatqna_megaservice_grafana.json agentqna_microservices_grafana.json
wget https://raw.githubusercontent.com/opea-project/GenAIEval/refs/heads/main/evals/benchmark/grafana/vllm_grafana.json
wget https://raw.githubusercontent.com/opea-project/GenAIEval/refs/heads/main/evals/benchmark/grafana/tgi_grafana.json
wget https://raw.githubusercontent.com/opea-project/GenAIEval/refs/heads/main/evals/benchmark/grafana/node_grafana.json
wget https://raw.githubusercontent.com/opea-project/GenAIEval/refs/heads/main/evals/benchmark/grafana/gaudi_grafana.json

View File

@@ -0,0 +1,14 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards
options:
path: /var/lib/grafana/dashboards

View File

@@ -0,0 +1,54 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# config file version
apiVersion: 1
# list of datasources that should be deleted from the database
deleteDatasources:
- name: Prometheus
orgId: 1
# list of datasources to insert/update depending
# what's available in the database
datasources:
# <string, required> name of the datasource. Required
- name: Prometheus
# <string, required> datasource type. Required
type: prometheus
# <string, required> access mode. direct or proxy. Required
access: proxy
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
url: http://prometheus:9090
# <string> database password, if used
password:
# <string> database user, if used
user:
# <string> database name, if used
database:
# <bool> enable/disable basic auth
basicAuth: false
# <string> basic auth username, if used
basicAuthUser:
# <string> basic auth password, if used
basicAuthPassword:
# <bool> enable/disable with credentials headers
withCredentials:
# <bool> mark as default datasource. Max one per org
isDefault: true
# <map> fields that will be converted to json and stored in json_data
jsonData:
httpMethod: GET
graphiteVersion: "1.1"
tlsAuth: false
tlsAuthWithCACert: false
# <string> json object of data that will be encrypted.
secureJsonData:
tlsCACert: "..."
tlsClientCert: "..."
tlsClientKey: "..."
version: 1
# <bool> allow users to edit datasources from the UI.
editable: true

View File

@@ -1,32 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
WORKPATH=$(dirname "$PWD")/..
# export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
# LLM related environment variables
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export LLM_MODEL_ID="meta-llama/Meta-Llama-3.1-70B-Instruct"
export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:8085"
export temperature=0.01
export max_new_tokens=4096
# agent related environment variables
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
echo "TOOLSET_PATH=${TOOLSET_PATH}"
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
docker compose -f compose.yaml up -d

View File

@@ -1,25 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# LLM related environment variables
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export LLM_MODEL_ID="meta-llama/Meta-Llama-3.1-70B-Instruct"
export NUM_SHARDS=4
docker compose -f tgi_gaudi.yaml up -d
sleep 5s
echo "Waiting tgi gaudi ready"
n=0
until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
docker logs tgi-server &> tgi-gaudi-service.log
n=$((n+1))
if grep -q Connected tgi-gaudi-service.log; then
break
fi
sleep 5s
done
sleep 5s
echo "Service started successfully"

View File

@@ -0,0 +1,55 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
global:
scrape_interval: 5s
external_labels:
monitor: "my-monitor"
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["prometheus:9090"]
- job_name: "vllm"
metrics_path: /metrics
static_configs:
- targets: ["vllm-gaudi-server:8000"]
- job_name: "tgi"
metrics_path: /metrics
static_configs:
- targets: ["tgi-gaudi-server:80"]
- job_name: "tei-embedding"
metrics_path: /metrics
static_configs:
- targets: ["tei-embedding-server:80"]
- job_name: "tei-reranking"
metrics_path: /metrics
static_configs:
- targets: ["tei-reranking-server:80"]
- job_name: "retriever"
metrics_path: /metrics
static_configs:
- targets: ["retriever:7000"]
- job_name: "dataprep-redis-service"
metrics_path: /metrics
static_configs:
- targets: ["dataprep-redis-service:5000"]
- job_name: "prometheus-node-exporter"
metrics_path: /metrics
static_configs:
- targets: ["node-exporter:9100"]
- job_name: "prometheus-gaudi-exporter"
metrics_path: /metrics
static_configs:
- targets: ["gaudi-exporter:41611"]
- job_name: "supervisor-react-agent"
metrics_path: /metrics
static_configs:
- targets: ["react-agent-endpoint:9090"]
- job_name: "worker-rag-agent"
metrics_path: /metrics
static_configs:
- targets: ["rag-agent-endpoint:9095"]
- job_name: "worker-sql-agent"
metrics_path: /metrics
static_configs:
- targets: ["sql-agent-endpoint:9096"]
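After the stack is up, Prometheus's targets endpoint is a quick way to confirm every scrape job above is healthy; a sketch, assuming Prometheus is published on its default port:

```
curl -s http://localhost:9090/api/v1/targets | \
  jq '.data.activeTargets[] | {job: .labels.job, health}'
```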

View File

@@ -0,0 +1,72 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
WORKPATH=$(dirname "$PWD")/..
# export WORKDIR=$WORKPATH/../../
if [[ -z "${WORKDIR}" ]]; then
echo "Please set WORKDIR environment variable"
exit 1
fi
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
# LLM related environment variables
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export LLM_MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"
export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:8086"
export temperature=0
export max_new_tokens=4096
# agent related environment variables
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
echo "TOOLSET_PATH=${TOOLSET_PATH}"
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:9096/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
if [ ! -f $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite ]; then
echo "Download Chinook_Sqlite!"
wget -O $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite https://github.com/lerocha/chinook-database/releases/download/v1.4.5/Chinook_Sqlite.sqlite
fi
# configure agent ui
# echo "AGENT_URL = 'http://$ip_address:9090/v1/chat/completions'" | tee ${WORKDIR}/GenAIExamples/AgentQnA/ui/svelte/.env
# retriever
export host_ip=$(hostname -I | awk '{print $1}')
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
# Set OpenTelemetry Tracing Endpoint
export JAEGER_IP=$(ip route get 8.8.8.8 | grep -oP 'src \K[^ ]+')
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=grpc://$JAEGER_IP:4317
export TELEMETRY_ENDPOINT=http://$JAEGER_IP:4318/v1/traces
export no_proxy="$no_proxy,rag-agent-endpoint,sql-agent-endpoint,react-agent-endpoint,agent-ui,vllm-gaudi-server,jaeger,grafana,prometheus,node-exporter,gaudi-exporter,127.0.0.1,localhost,0.0.0.0,$host_ip,$JAEGER_IP"
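A minimal invocation sketch for this script, assuming `WORKDIR` is the parent directory of the GenAIExamples checkout and a valid HuggingFace token is at hand:

```
export WORKDIR=$HOME/opea                     # assumption: contains GenAIExamples/
export HUGGINGFACEHUB_API_TOKEN=<your-token>
source set_env.sh
```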

View File

@@ -3,7 +3,7 @@
services:
tgi-server:
image: ghcr.io/huggingface/tgi-gaudi:2.0.6
image: ghcr.io/huggingface/tgi-gaudi:2.3.1
container_name: tgi-server
ports:
- "8085:80"

View File

@@ -2,12 +2,30 @@
# SPDX-License-Identifier: Apache-2.0
services:
agent-langchain:
agent:
build:
context: GenAIComps
dockerfile: comps/agent/langchain/Dockerfile
dockerfile: comps/agent/src/Dockerfile
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
no_proxy: ${no_proxy}
image: ${REGISTRY:-opea}/agent-langchain:${TAG:-latest}
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
agent-ui:
build:
context: ../ui
dockerfile: ./docker/Dockerfile
extends: agent
image: ${REGISTRY:-opea}/agent-ui:${TAG:-latest}
vllm-gaudi:
build:
context: vllm-fork
dockerfile: Dockerfile.hpu
extends: agent
image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
vllm-rocm:
build:
context: GenAIComps
dockerfile: comps/third_parties/vllm/src/Dockerfile.amd_gpu
extends: agent
image: ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest}
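Since `agent-ui`, `vllm-gaudi`, and `vllm-rocm` all use `extends: agent`, they inherit the proxy build args from the base `agent` service, and images can be built selectively by service name, e.g.:

```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_image_build
docker compose -f build.yaml build agent agent-ui
```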

View File

@@ -0,0 +1,11 @@
# Deploy AgentQnA on Kubernetes cluster
- You should have Helm (version >= 3.15) installed. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.
- For more deployment options, refer to the [helm charts README](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts#readme).
## Deploy on Gaudi
```
export HFTOKEN="insert-your-huggingface-token-here"
helm install agentqna oci://ghcr.io/opea-project/charts/agentqna --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} -f gaudi-values.yaml
```
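To confirm the release came up, a quick check (assuming the default namespace):

```
helm status agentqna
kubectl get pods
```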

View File

@@ -0,0 +1,22 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
tgi:
enabled: false
vllm:
enabled: true
LLM_MODEL_ID: "meta-llama/Meta-Llama-3-8B-Instruct"
extraCmdArgs: ["--max-seq-len-to-capture", "16384", "--enable-auto-tool-choice", "--tool-call-parser", "llama3_json"]
supervisor:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Meta-Llama-3-8B-Instruct"
ragagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Meta-Llama-3-8B-Instruct"
sqlagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Meta-Llama-3-8B-Instruct"
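These overrides target the same chart as the Gaudi example; a deployment sketch, assuming this file is saved locally as `cpu-values.yaml`:

```
helm install agentqna oci://ghcr.io/opea-project/charts/agentqna \
  --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} -f cpu-values.yaml
```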

View File

@@ -0,0 +1,35 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Accelerate inference in the heaviest components to improve performance
# by overriding their subchart values
tgi:
enabled: false
vllm:
enabled: true
accelDevice: "gaudi"
image:
repository: opea/vllm-gaudi
resources:
limits:
habana.ai/gaudi: 4
LLM_MODEL_ID: "meta-llama/Llama-3.3-70B-Instruct"
OMPI_MCA_btl_vader_single_copy_mechanism: none
PT_HPU_ENABLE_LAZY_COLLECTIVES: true
VLLM_SKIP_WARMUP: true
shmSize: 16Gi
extraCmdArgs: ["--tensor-parallel-size", "4", "--max-seq-len-to-capture", "16384", "--enable-auto-tool-choice", "--tool-call-parser", "llama3_json"]
supervisor:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Llama-3.3-70B-Instruct"
ragagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Llama-3.3-70B-Instruct"
sqlagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
llm_engine: vllm
model: "meta-llama/Llama-3.3-70B-Instruct"

View File

@@ -53,7 +53,7 @@ def main():
host_ip = args.host_ip
port = args.port
proxies = {"http": ""}
url = "http://{host_ip}:{port}/v1/dataprep".format(host_ip=host_ip, port=port)
url = "http://{host_ip}:{port}/v1/dataprep/ingest".format(host_ip=host_ip, port=port)
# Split jsonl file into json files
files = split_jsonl_into_txts(os.path.join(args.filedir, args.filename))

View File

@@ -13,13 +13,14 @@ export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get_file"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete_file"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml up -d

View File

@@ -1,7 +1,22 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
host_ip=$(hostname -I | awk '{print $1}')
port=6007
FILEDIR=${WORKDIR}/GenAIExamples/AgentQnA/example_data/
FILENAME=test_docs_music.jsonl
python3 index_data.py --filedir ${FILEDIR} --filename ${FILENAME} --host_ip $host_ip
# The AgentQnA ingestion script requires the following packages
packages=("requests" "tqdm")
# Check if packages are installed
for package in "${packages[@]}"; do
if pip freeze | grep -q "^$package=="; then
echo "$package is installed"
else
echo "$package is not installed"
pip install --no-cache-dir "$package"
fi
done
python3 index_data.py --filedir ${FILEDIR} --filename ${FILENAME} --host_ip $host_ip --port $port

View File

@@ -15,28 +15,35 @@ function stop_agent_and_api_server() {
echo "Stopping CRAG server"
docker stop $(docker ps -q --filter ancestor=docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0)
echo "Stopping Agent services"
docker stop $(docker ps -q --filter ancestor=opea/agent-langchain:latest)
docker stop $(docker ps -q --filter ancestor=opea/agent:latest)
}
function stop_retrieval_tool() {
echo "Stopping Retrieval tool"
docker compose -f $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool/docker/docker-compose-retrieval-tool.yaml down
local RETRIEVAL_TOOL_PATH=$WORKPATH/../DocIndexRetriever
cd $RETRIEVAL_TOOL_PATH/docker_compose/intel/cpu/xeon/
container_list=$(cat compose.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
echo "Stopping container $container_name"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
}
echo "=================== #1 Building docker images===================="
bash 1_build_images.sh
bash step1_build_images.sh xeon
echo "=================== #1 Building docker images completed===================="
echo "=================== #2 Start retrieval tool===================="
bash 2_start_retrieval_tool.sh
bash step2_start_retrieval_tool.sh
echo "=================== #2 Retrieval tool started===================="
echo "=================== #3 Ingest data and validate retrieval===================="
bash 3_ingest_data_and_validate_retrieval.sh
bash step3_ingest_data_and_validate_retrieval.sh
echo "=================== #3 Data ingestion and validation completed===================="
echo "=================== #4 Start agent and API server===================="
bash 4_launch_and_validate_agent_openai.sh
bash step4_launch_and_validate_agent_openai.sh
echo "=================== #4 Agent test passed ===================="
echo "=================== #5 Stop agent and API server===================="

View File

@@ -0,0 +1,6 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
DATAPATH=$WORKDIR/TAG-Bench/tag_queries.csv
OUTFOLDER=$WORKDIR/TAG-Bench/query_by_db
python3 split_data.py --path $DATAPATH --output $OUTFOLDER
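If the split succeeds, `$OUTFOLDER` holds one CSV per database referenced in the query file:

```
ls $WORKDIR/TAG-Bench/query_by_db
# query_<db_name>.csv  (one file per distinct "DB used" value)
```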

View File

@@ -0,0 +1,27 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os
import pandas as pd
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--path", type=str, required=True)
parser.add_argument("--output", type=str, required=True)
args = parser.parse_args()
# if output folder does not exist, create it
if not os.path.exists(args.output):
os.makedirs(args.output)
# Load the data
data = pd.read_csv(args.path)
# Split the data by domain
domains = data["DB used"].unique()
for domain in domains:
domain_data = data[data["DB used"] == domain]
out = os.path.join(args.output, f"query_{domain}.csv")
domain_data.to_csv(out, index=False)

View File

@@ -11,38 +11,90 @@ export ip_address=$(hostname -I | awk '{print $1}')
function get_genai_comps() {
if [ ! -d "GenAIComps" ] ; then
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone --depth 1 --branch ${opea_branch:-"main"} https://github.com/opea-project/GenAIComps.git
fi
}
function build_docker_images_for_retrieval_tool(){
cd $WORKDIR/GenAIExamples/DocIndexRetriever/docker_image_build/
# git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
get_genai_comps
echo "Build all the images with --no-cache..."
service_list="doc-index-retriever dataprep-redis embedding-tei retriever-redis reranking-tei"
docker compose -f build.yaml build ${service_list} --no-cache
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker compose -f build.yaml build --no-cache
docker images && sleep 1s
}
function build_agent_docker_image() {
function build_agent_docker_image_xeon() {
cd $WORKDIR/GenAIExamples/AgentQnA/docker_image_build/
get_genai_comps
echo "Build agent image with --no-cache..."
docker compose -f build.yaml build --no-cache
service_list="agent agent-ui"
docker compose -f build.yaml build ${service_list} --no-cache
}
function build_agent_docker_image_gaudi_vllm() {
cd $WORKDIR/GenAIExamples/AgentQnA/docker_image_build/
get_genai_comps
git clone https://github.com/HabanaAI/vllm-fork.git && cd vllm-fork
VLLM_FORK_VER=v0.6.6.post1+Gaudi-1.20.0
git checkout ${VLLM_FORK_VER} &> /dev/null && cd ../
echo "Build agent image with --no-cache..."
service_list="agent agent-ui vllm-gaudi"
docker compose -f build.yaml build ${service_list} --no-cache
}
function build_agent_docker_image_rocm() {
cd $WORKDIR/GenAIExamples/AgentQnA/docker_image_build/
get_genai_comps
echo "Build agent image with --no-cache..."
service_list="agent agent-ui"
docker compose -f build.yaml build ${service_list} --no-cache
}
function build_agent_docker_image_rocm_vllm() {
cd $WORKDIR/GenAIExamples/AgentQnA/docker_image_build/
get_genai_comps
echo "Build agent image with --no-cache..."
service_list="agent agent-ui vllm-rocm"
docker compose -f build.yaml build ${service_list} --no-cache
}
function main() {
echo "==================== Build docker images for retrieval tool ===================="
build_docker_images_for_retrieval_tool
echo "==================== Build docker images for retrieval tool completed ===================="
echo "==================== Build agent docker image ===================="
build_agent_docker_image
echo "==================== Build agent docker image completed ===================="
sleep 3s
case $1 in
"rocm")
echo "==================== Build agent docker image for ROCm ===================="
build_agent_docker_image_rocm
;;
"rocm_vllm")
echo "==================== Build agent docker image for ROCm VLLM ===================="
build_agent_docker_image_rocm_vllm
;;
"gaudi_vllm")
echo "==================== Build agent docker image for Gaudi ===================="
build_agent_docker_image_gaudi_vllm
;;
"xeon")
echo "==================== Build agent docker image for Xeon ===================="
build_agent_docker_image_xeon
;;
*)
echo "Invalid argument"
exit 1
;;
esac
docker image ls | grep vllm
}
main
main $1
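The hardware target is passed straight through to `main`, so a typical invocation matches the CI wrappers later in this diff, e.g.:

```
bash step1_build_images.sh gaudi_vllm
```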

View File

@@ -7,8 +7,9 @@ WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=${ip_address}
export HF_CACHE_DIR=$WORKDIR/hf_cache
export HF_CACHE_DIR=${model_cache:-"$WORKDIR/hf_cache"}
if [ ! -d "$HF_CACHE_DIR" ]; then
echo "Creating HF_CACHE directory"
mkdir -p "$HF_CACHE_DIR"

View File

@@ -0,0 +1,49 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=${ip_address}
export HF_CACHE_DIR=$WORKPATH/hf_cache
if [ ! -d "$HF_CACHE_DIR" ]; then
echo "Creating HF_CACHE directory"
mkdir -p "$HF_CACHE_DIR"
fi
function start_retrieval_tool() {
echo "Starting Retrieval tool"
cd $WORKPATH/../DocIndexRetriever/docker_compose/intel/cpu/xeon
host_ip=$(hostname -I | awk '{print $1}')
export HF_CACHE_DIR=${HF_CACHE_DIR}
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export no_proxy=${no_proxy}
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
docker compose -f compose.yaml up -d
}
echo "==================== Start retrieval tool ===================="
start_retrieval_tool
sleep 20 # needed for downloading the models
echo "==================== Retrieval tool started ===================="

View File

@@ -0,0 +1,68 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=$ip_address
echo "ip_address=${ip_address}"
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function ingest_data_and_validate() {
echo "Ingesting data"
cd $WORKPATH/retrieval_tool/
echo $PWD
local CONTENT=$(bash run_ingest_data.sh)
local EXIT_CODE=$(validate "$CONTENT" "Data preparation succeeded" "dataprep-redis-server")
echo "$EXIT_CODE"
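# validate() echoes the matched content plus a trailing 0 (match) or 1 (mismatch); the substring below keeps only that last character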
local EXIT_CODE="${EXIT_CODE:0-1}"
echo "return value is $EXIT_CODE"
if [ "$EXIT_CODE" == "1" ]; then
docker logs dataprep-redis-server
return 1
fi
}
function validate_retrieval_tool() {
echo "----------------Test retrieval tool ----------------"
local CONTENT=$(http_proxy="" curl http://${ip_address}:8889/v1/retrievaltool -X POST -H "Content-Type: application/json" -d '{
"text": "Who sang Thriller"
}')
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "retrieval-tool")
if [ "$EXIT_CODE" == "1" ]; then
docker logs retrievaltool-xeon-backend-server
exit 1
fi
}
function main(){
echo "==================== Ingest data ===================="
ingest_data_and_validate
echo "==================== Data ingestion completed ===================="
echo "==================== Validate retrieval tool ===================="
validate_retrieval_tool
echo "==================== Retrieval tool validated ===================="
}
main

View File

@@ -0,0 +1,197 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=$ip_address
echo "ip_address=${ip_address}"
export TOOLSET_PATH=$WORKPATH/tools/
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
model="meta-llama/Llama-3.3-70B-Instruct" #"meta-llama/Meta-Llama-3.1-70B-Instruct"
export HF_CACHE_DIR=${model_cache:-"/data2/huggingface"}
if [ ! -d "$HF_CACHE_DIR" ]; then
HF_CACHE_DIR=$WORKDIR/hf_cache
mkdir -p "$HF_CACHE_DIR"
fi
echo "HF_CACHE_DIR=$HF_CACHE_DIR"
ls $HF_CACHE_DIR
vllm_port=8086
vllm_volume=${HF_CACHE_DIR}
function start_agent_service() {
echo "Starting agent service"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
source set_env.sh
docker compose -f compose.yaml up -d
}
function start_all_services() {
echo "token is ${HF_TOKEN}"
echo "start vllm gaudi service"
echo "**************model is $model**************"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
source set_env.sh
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose.yaml -f compose.telemetry.yaml up -d
sleep 5s
echo "Waiting for vllm gaudi service to be ready"
n=0
LOG_PATH=$PWD
until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
docker logs vllm-gaudi-server
docker logs vllm-gaudi-server &> ${LOG_PATH}/vllm-gaudi-service.log
n=$((n+1))
if grep -q "Uvicorn running on" ${LOG_PATH}/vllm-gaudi-service.log; then
break
fi
if grep -q "No such container" ${LOG_PATH}/vllm-gaudi-service.log; then
echo "container vllm-gaudi-server not found"
exit 1
fi
sleep 5s
done
sleep 5s
echo "Service started successfully"
}
function download_chinook_data(){
echo "Downloading chinook data..."
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite $WORKDIR/GenAIExamples/AgentQnA/tests/
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function validate_agent_service() {
# # test worker rag agent
echo "======================Testing worker rag agent======================"
export agent_port="9095"
export agent_ip="127.0.0.1"
prompt="Tell me about Michael Jackson song Thriller"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ip_addr $agent_ip --ext_port $agent_port)
# echo $CONTENT
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs rag-agent-endpoint
exit 1
fi
# # test worker sql agent
echo "======================Testing worker sql agent======================"
export agent_port="9096"
prompt="How many employees are there in the company?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ip_addr $agent_ip --ext_port $agent_port)
local EXIT_CODE=$(validate "$CONTENT" "8" "sql-agent-endpoint")
echo $CONTENT
# echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs sql-agent-endpoint
exit 1
fi
# test supervisor react agent
echo "======================Testing supervisor react agent======================"
export agent_port="9090"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ip_addr $agent_ip --ext_port $agent_port --stream)
local EXIT_CODE=$(validate "$CONTENT" "Iron" "react-agent-endpoint")
# echo $CONTENT
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs react-agent-endpoint
exit 1
fi
}
function remove_chinook_data(){
echo "Removing chinook data..."
cd $WORKDIR
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
}
function ingest_data_and_validate() {
echo "Ingesting data"
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool/
echo $PWD
local CONTENT=$(bash run_ingest_data.sh)
local EXIT_CODE=$(validate "$CONTENT" "Data preparation succeeded" "dataprep-redis-server")
echo "$EXIT_CODE"
local EXIT_CODE="${EXIT_CODE:0-1}"
echo "return value is $EXIT_CODE"
if [ "$EXIT_CODE" == "1" ]; then
docker logs dataprep-redis-server
return 1
fi
}
function validate_retrieval_tool() {
echo "----------------Test retrieval tool ----------------"
local CONTENT=$(http_proxy="" curl http://${ip_address}:8889/v1/retrievaltool -X POST -H "Content-Type: application/json" -d '{
"text": "Who sang Thriller"
}')
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "retrieval-tool")
if [ "$EXIT_CODE" == "1" ]; then
docker logs retrievaltool-xeon-backend-server
exit 1
fi
}
function main() {
echo "==================== Prepare data ===================="
download_chinook_data
echo "==================== Data prepare done ===================="
echo "==================== Start all services ===================="
start_all_services
echo "==================== all services started ===================="
echo "==================== Ingest data ===================="
ingest_data_and_validate
echo "==================== Data ingestion completed ===================="
echo "==================== Validate retrieval tool ===================="
validate_retrieval_tool
echo "==================== Retrieval tool validated ===================="
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}
remove_chinook_data
main
remove_chinook_data

View File

@@ -11,13 +11,22 @@ echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
function download_chinook_data(){
echo "Downloading chinook data..."
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite $WORKDIR/GenAIExamples/AgentQnA/tests/
}
function start_agent_and_api_server() {
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon/
bash launch_agent_service_openai.sh
sleep 2m
}
function validate() {
@@ -35,19 +44,64 @@ function validate() {
}
function validate_agent_service() {
echo "----------------Test agent ----------------"
local CONTENT=$(http_proxy="" curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Tell me about Michael Jackson song thriller"
}')
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "react-agent-endpoint")
docker logs react-agent-endpoint
# # test worker rag agent
echo "======================Testing worker rag agent======================"
export agent_port="9095"
prompt="Tell me about Michael Jackson song Thriller"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
# echo $CONTENT
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs rag-agent-endpoint
exit 1
fi
# # test worker sql agent
echo "======================Testing worker sql agent======================"
export agent_port="9096"
prompt="How many employees are there in the company?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
local EXIT_CODE=$(validate "$CONTENT" "8" "sql-agent-endpoint")
echo $CONTENT
# echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs sql-agent-endpoint
exit 1
fi
# test supervisor react agent
echo "======================Testing supervisor react agent======================"
export agent_port="9090"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ext_port $agent_port --stream)
local EXIT_CODE=$(validate "$CONTENT" "Iron" "react-agent-endpoint")
# echo $CONTENT
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs react-agent-endpoint
exit 1
fi
}
function remove_chinook_data(){
echo "Removing chinook data..."
cd $WORKDIR
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
}
function main() {
echo "==================== Prepare data ===================="
download_chinook_data
echo "==================== Data prepare done ===================="
echo "==================== Start agent ===================="
start_agent_and_api_server
echo "==================== Agent started ===================="
@@ -57,4 +111,9 @@ function main() {
echo "==================== Agent service validated ===================="
}
remove_chinook_data
main
remove_chinook_data

View File

@@ -0,0 +1,120 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export LOG_PATH=${WORKPATH}
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=${ip_address}
export TOOLSET_PATH=$WORKPATH/tools/
export HF_CACHE_DIR=$WORKPATH/data2/huggingface
if [ ! -d "$HF_CACHE_DIR" ]; then
HF_CACHE_DIR=$WORKDIR/hf_cache
mkdir -p "$HF_CACHE_DIR"
fi
function download_chinook_data(){
echo "Downloading chinook data..."
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ${WORKPATH}/tests/
}
function start_agent_and_api_server() {
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_vllm_rocm.sh
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function validate_agent_service() {
# # test worker rag agent
echo "======================Testing worker rag agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/WORKER_RAG_AGENT_PORT_tmp)
prompt="Tell me about Michael Jackson song Thriller"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
# echo $CONTENT
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs rag-agent-endpoint
exit 1
fi
# test worker sql agent
echo "======================Testing worker sql agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/WORKER_SQL_AGENT_PORT_tmp)
prompt="How many employees are there in the company?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
local EXIT_CODE=$(validate "$CONTENT" "8" "sql-agent-endpoint")
echo $CONTENT
# echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs sql-agent-endpoint
exit 1
fi
# test supervisor react agent
echo "======================Testing supervisor react agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/SUPERVISOR_REACT_AGENT_PORT_tmp)
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ext_port $agent_port --stream)
local EXIT_CODE=$(validate "$CONTENT" "Iron" "react-agent-endpoint")
# echo $CONTENT
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs react-agent-endpoint
exit 1
fi
}
function remove_chinook_data(){
echo "Removing chinook data..."
cd $WORKDIR
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
}
function main() {
echo "==================== Prepare data ===================="
download_chinook_data
echo "==================== Data prepare done ===================="
echo "==================== Start agent ===================="
start_agent_and_api_server
echo "==================== Agent started ===================="
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}
remove_chinook_data
main
remove_chinook_data

View File

@@ -1,91 +0,0 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export HF_CACHE_DIR=$WORKDIR/hf_cache
if [ ! -d "$HF_CACHE_DIR" ]; then
mkdir -p "$HF_CACHE_DIR"
fi
ls $HF_CACHE_DIR
function start_tgi(){
echo "Starting tgi-gaudi server"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_tgi_gaudi.sh
}
function start_agent_and_api_server() {
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_agent_service_tgi_gaudi.sh
sleep 10
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function validate_agent_service() {
echo "----------------Test agent ----------------"
# local CONTENT=$(http_proxy="" curl http://${ip_address}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
# "query": "Tell me about Michael Jackson song thriller"
# }')
export agent_port="9095"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py)
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
docker logs rag-agent-endpoint
if [ "$EXIT_CODE" == "1" ]; then
exit 1
fi
# local CONTENT=$(http_proxy="" curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
# "query": "Tell me about Michael Jackson song thriller"
# }')
export agent_port="9090"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py)
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "react-agent-endpoint")
docker logs react-agent-endpoint
if [ "$EXIT_CODE" == "1" ]; then
exit 1
fi
}
function main() {
echo "==================== Start TGI ===================="
start_tgi
echo "==================== TGI started ===================="
echo "==================== Start agent ===================="
start_agent_and_api_server
echo "==================== Agent started ===================="
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}
main

View File

@@ -0,0 +1,120 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export LOG_PATH=${WORKPATH}
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=${ip_address}
export TOOLSET_PATH=$WORKPATH/tools/
export HF_CACHE_DIR=$WORKPATH/data2/huggingface
if [ ! -d "$HF_CACHE_DIR" ]; then
HF_CACHE_DIR=$WORKDIR/hf_cache
mkdir -p "$HF_CACHE_DIR"
fi
function download_chinook_data(){
echo "Downloading chinook data..."
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ${WORKPATH}/tests/
}
function start_agent_and_api_server() {
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_tgi_rocm.sh
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function validate_agent_service() {
# # test worker rag agent
echo "======================Testing worker rag agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/WORKER_RAG_AGENT_PORT_tmp)
prompt="Tell me about Michael Jackson song Thriller"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
# echo $CONTENT
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs rag-agent-endpoint
exit 1
fi
# test worker sql agent
echo "======================Testing worker sql agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/WORKER_SQL_AGENT_PORT_tmp)
prompt="How many employees are there in the company?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port)
local EXIT_CODE=$(validate "$CONTENT" "8" "sql-agent-endpoint")
echo $CONTENT
# echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs sql-agent-endpoint
exit 1
fi
# test supervisor react agent
echo "======================Testing supervisor react agent======================"
export agent_port=$(cat ${WORKPATH}/docker_compose/amd/gpu/SUPERVISOR_REACT_AGENT_PORT_tmp)
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --agent_role "supervisor" --ext_port $agent_port --stream)
local EXIT_CODE=$(validate "$CONTENT" "Iron" "react-agent-endpoint")
# echo $CONTENT
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs react-agent-endpoint
exit 1
fi
}
function remove_chinook_data(){
echo "Removing chinook data..."
cd $WORKDIR
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
}
function main() {
echo "==================== Prepare data ===================="
download_chinook_data
echo "==================== Data prepare done ===================="
echo "==================== Start agent ===================="
start_agent_and_api_server
echo "==================== Agent started ===================="
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}
remove_chinook_data
main
remove_chinook_data

View File

@@ -1,25 +1,77 @@
# Copyright (C) 2024 Intel Corporation
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
import argparse
import json
import uuid
import requests
def generate_answer_agent_api(url, prompt):
def process_request(url, query, is_stream=False):
proxies = {"http": ""}
payload = {
"query": prompt,
}
response = requests.post(url, json=payload, proxies=proxies)
answer = response.json()["text"]
return answer
content = json.dumps(query) if query is not None else None
try:
resp = requests.post(url=url, data=content, proxies=proxies, stream=is_stream)
resp.raise_for_status()  # raise an exception for unsuccessful HTTP status codes before parsing the body
if not is_stream:
ret = resp.json()["text"]
else:
for line in resp.iter_lines(decode_unicode=True):
print(line)
ret = None
return ret
except requests.exceptions.RequestException as e:
print(f"An error occurred: {e}")
return None
def test_worker_agent(args):
url = f"http://{args.ip_addr}:{args.ext_port}/v1/chat/completions"
query = {"role": "user", "messages": args.prompt, "stream": "false"}
ret = process_request(url, query)
print("Response: ", ret)
def add_message_and_run(url, user_message, thread_id, stream=False):
print("User message: ", user_message)
query = {"role": "user", "messages": user_message, "thread_id": thread_id, "stream": stream}
ret = process_request(url, query, is_stream=stream)
print("Response: ", ret)
def test_chat_completion_multi_turn(args):
url = f"http://{args.ip_addr}:{args.ext_port}/v1/chat/completions"
thread_id = f"{uuid.uuid4()}"
# first turn
print("===============First turn==================")
user_message = "Which artist has the most albums in the database?"
add_message_and_run(url, user_message, thread_id, stream=args.stream)
print("===============End of first turn==================")
# second turn
print("===============Second turn==================")
user_message = "Give me a few examples of the artist's albums?"
add_message_and_run(url, user_message, thread_id, stream=args.stream)
print("===============End of second turn==================")
if __name__ == "__main__":
ip_address = os.getenv("ip_address", "localhost")
agent_port = os.getenv("agent_port", "9095")
url = f"http://{ip_address}:{agent_port}/v1/chat/completions"
prompt = "Tell me about Michael Jackson song thriller"
answer = generate_answer_agent_api(url, prompt)
print(answer)
parser = argparse.ArgumentParser()
parser.add_argument("--ip_addr", type=str, default="127.0.0.1", help="endpoint ip address")
parser.add_argument("--ext_port", type=str, default="9090", help="endpoint port")
parser.add_argument("--stream", action="store_true", help="streaming mode")
parser.add_argument("--prompt", type=str, help="prompt message")
parser.add_argument("--agent_role", type=str, default="supervisor", help="supervisor or worker")
args, _ = parser.parse_known_args()
print(args)
if args.agent_role == "supervisor":
test_chat_completion_multi_turn(args)
elif args.agent_role == "worker":
test_worker_agent(args)
else:
raise ValueError("Invalid agent role")
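A typical worker-agent invocation of this script, mirroring how the test scripts in this diff call it:

```
export agent_port=9095
python3 test.py --prompt "Tell me about Michael Jackson song Thriller" \
  --agent_role worker --ip_addr 127.0.0.1 --ext_port $agent_port
```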

View File

@@ -1,8 +1,7 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
set -xe
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
@@ -10,6 +9,15 @@ echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export no_proxy="$no_proxy,rag-agent-endpoint,sql-agent-endpoint,react-agent-endpoint,agent-ui,vllm-gaudi-server,jaeger,grafana,prometheus,127.0.0.1,localhost,0.0.0.0,$ip_address"
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}
function stop_crag() {
cid=$(docker ps -aq --filter "name=kdd-cup-24-crag-service")
@@ -17,7 +25,7 @@ function stop_crag() {
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_agent_docker() {
function stop_agent_containers() {
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
container_list=$(cat compose.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
@@ -27,7 +35,19 @@ function stop_agent_docker() {
done
}
function stop_tgi(){
function stop_telemetry_containers(){
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
container_list=$(cat compose.telemetry.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
echo "Stopping container $container_name"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
}
function stop_llm(){
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
container_list=$(cat tgi_gaudi.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
@@ -36,6 +56,14 @@ function stop_tgi(){
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
cid=$(docker ps -aq --filter "name=vllm-gaudi-server")
echo "Stopping container $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
cid=$(docker ps -aq --filter "name=test-comps-vllm-gaudi-service")
echo "Stopping container $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_retrieval_tool() {
@@ -50,36 +78,31 @@ function stop_retrieval_tool() {
done
}
echo "workpath: $WORKPATH"
echo "=================== Stop containers ===================="
echo "::group::=================== Stop containers ===================="
stop_llm
stop_crag
stop_tgi
stop_agent_docker
stop_agent_containers
stop_retrieval_tool
stop_telemetry_containers
echo "::endgroup::"
cd $WORKPATH/tests
echo "=================== #1 Building docker images===================="
bash step1_build_images.sh
echo "=================== #1 Building docker images completed===================="
echo "::group::=================== Building docker images===================="
bash step1_build_images.sh gaudi_vllm > docker_image_build.log
echo "::endgroup::"
echo "=================== #2 Start retrieval tool===================="
bash step2_start_retrieval_tool.sh
echo "=================== #2 Retrieval tool started===================="
echo "::group::=================== Start agent, API server, retrieval, and ingest data===================="
bash step4_launch_and_validate_agent_gaudi.sh
echo "::endgroup::"
echo "=================== #3 Ingest data and validate retrieval===================="
bash step3_ingest_data_and_validate_retrieval.sh
echo "=================== #3 Data ingestion and validation completed===================="
echo "=================== #4 Start agent and API server===================="
bash step4_launch_and_validate_agent_tgi.sh
echo "=================== #4 Agent test passed ===================="
echo "=================== #5 Stop agent and API server===================="
echo "::group::=================== Stop agent and API server===================="
stop_llm
stop_crag
stop_agent_docker
stop_agent_containers
stop_retrieval_tool
echo "=================== #5 Agent and API server stopped===================="
stop_telemetry_containers
echo y | docker system prune
echo "::endgroup::"
echo "ALL DONE!"
echo "ALL DONE!!"

View File

@@ -0,0 +1,78 @@
#!/bin/bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
set -xe
WORKPATH=$(dirname "$PWD")
ls $WORKPATH
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export TOOLSET_PATH=$WORKPATH/tools/
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}
function stop_crag() {
cid=$(docker ps -aq --filter "name=kdd-cup-24-crag-service")
echo "Stopping container kdd-cup-24-crag-service with cid $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_agent_docker() {
cd $WORKPATH/docker_compose/amd/gpu/rocm
bash stop_agent_service_tgi_rocm.sh
}
function stop_retrieval_tool() {
echo "Stopping Retrieval tool"
local RETRIEVAL_TOOL_PATH=$WORKPATH/../DocIndexRetriever
cd $RETRIEVAL_TOOL_PATH/docker_compose/intel/cpu/xeon/
# docker compose -f compose.yaml down
container_list=$(cat compose.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
echo "Stopping container $container_name"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
}
echo "workpath: $WORKPATH"
echo "::group::=================== Stop containers ===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
echo "::endgroup::=================== Stop containers completed ===================="
cd $WORKPATH/tests
echo "::group::=================== #1 Building docker images===================="
bash step1_build_images.sh rocm > docker_image_build.log
echo "::endgroup::=================== #1 Building docker images completed===================="
echo "::group::=================== #2 Start retrieval tool===================="
bash step2_start_retrieval_tool.sh
echo "::endgroup::=================== #2 Retrieval tool started===================="
echo "::group::=================== #3 Ingest data and validate retrieval===================="
bash step3_ingest_data_and_validate_retrieval.sh
echo "::endgroup::=================== #3 Data ingestion and validation completed===================="
echo "::group::=================== #4 Start agent and API server===================="
bash step4a_launch_and_validate_agent_tgi_on_rocm.sh
echo "::endgroup::=================== #4 Agent test passed ===================="
echo "::group::=================== #5 Stop agent and API server===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
echo "::endgroup::=================== #5 Agent and API server stopped===================="
echo y | docker system prune
echo "ALL DONE!!"

View File

@@ -0,0 +1,72 @@
#!/bin/bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
set -e
WORKPATH=$(dirname "$PWD")
export WORKDIR=${WORKPATH}/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export TOOLSET_PATH=$WORKPATH/tools/
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}
function stop_crag() {
cid=$(docker ps -aq --filter "name=kdd-cup-24-crag-service")
echo "Stopping container kdd-cup-24-crag-service with cid $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_agent_docker() {
cd $WORKPATH/docker_compose/amd/gpu/rocm
bash stop_agent_service_vllm_rocm.sh
}
function stop_retrieval_tool() {
echo "Stopping Retrieval tool"
local RETRIEVAL_TOOL_PATH=$WORKDIR/GenAIExamples/DocIndexRetriever
cd $RETRIEVAL_TOOL_PATH/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml down
}
echo "workpath: $WORKPATH"
echo "::group::=================== Stop containers ===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
echo "::endgroup::"
cd $WORKPATH/tests
echo "::group::=================== #1 Building docker images===================="
bash step1_build_images.sh rocm_vllm > docker_image_build.log
echo "::endgroup::=================== #1 Building docker images completed===================="
echo "::group::=================== #2 Start retrieval tool===================="
bash step2_start_retrieval_tool_rocm_vllm.sh
echo "::endgroup::=================== #2 Retrieval tool started===================="
echo "::group::=================== #3 Ingest data and validate retrieval===================="
bash step3_ingest_data_and_validate_retrieval_rocm_vllm.sh
echo "::endgroup::=================== #3 Data ingestion and validation completed===================="
echo "::group::=================== #4 Start agent and API server===================="
bash step4_launch_and_validate_agent_rocm_vllm.sh
echo "::endgroup::=================== #4 Agent test passed ===================="
echo "::group::=================== #5 Stop agent and API server===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
echo "::endgroup::=================== #5 Agent and API server stopped===================="
echo y | docker system prune
echo "ALL DONE!!"

Some files were not shown because too many files have changed in this diff.