Compare commits


201 Commits

Author SHA1 Message Date
bjzhjing
c8c6fa2e3e Provide unified scalable deployment and benchmarking support for exam… (#1315)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit ed163087ba)
2025-01-24 22:55:38 +08:00
NeuralChatBot
905a5100f9 Freeze OPEA images tag
Signed-off-by: NeuralChatBot <grp_neural_chat_bot@intel.com>
2025-01-24 08:31:22 +00:00
chen, suyue
259099d19f Remove kubernetes manifest related code and tests (#1466)
Remove deprecated kubernetes manifest related code and tests.
The k8s implementation for these examples, based on Helm charts, will target the next release.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-24 15:23:12 +08:00
chen, suyue
9a1118730b Freeze the triton version in vllm-gaudi image to 3.1.0 (#1463)
The new triton version 3.2.0 does not work with vllm-gaudi, so freeze the triton version in the vllm-gaudi image to 3.1.0.

Issue created for vllm-fork: HabanaAI/vllm-fork#732
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-24 09:50:59 +08:00
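
For reference, the pin is applied before the vllm-gaudi image is built; a minimal sketch matching the sed in the build-workflow diff further below (branch and file name come from the HabanaAI/vllm-fork repo):

```bash
# Clone the pinned vllm-fork release and pin triton to 3.1.0 before the image build
git clone --depth 1 --branch v0.6.4.post2+Gaudi-1.19.0 https://github.com/HabanaAI/vllm-fork.git
sed -i 's/triton/triton==3.1.0/g' vllm-fork/requirements-hpu.txt
```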
chen, suyue
ffce7068aa Fix image on push action due to manifest test remove (#1460)
1. Fix the image-on-push action after the manifest test removal.
2. Fix the get-test-matrix step in the helm test CD workflow.
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-23 14:30:09 +08:00
dolpher
9b0f98be8b Update ChatQnA helm chart README. (#1459)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-23 10:54:39 +08:00
XinyuYe-Intel
f0fea7b706 Add docker compose yaml for text2image example (#1418)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-23 09:57:54 +08:00
Melanie Hart Buehler
1864fac978 Fixes MultimodalQnA dataprep endpoint and port in the UI (#1457)
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-22 17:11:09 -08:00
Lianhao Lu
94f71f2322 Update top level readme (#1458)
Add helm support for SearchQnA and Text2Image to the top-level README.

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2025-01-23 09:07:33 +08:00
chen, suyue
6600c32a9b remove image build condition (#1456)
The compose CD test workflow depends on the image build, so to run both compose and helm chart deployments in the CD workflow, this condition has to be removed.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-23 00:17:04 +08:00
Liang Lv
d953332f43 Fix multimodal docker image issue for MultimodalQnA on Gaudi (#1455)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-23 00:12:06 +08:00
chyundunovDatamonsters
cbe5805f47 AgentQnA - add README file for deploy on ROCm (#1379)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2025-01-22 21:57:15 +08:00
Ervin Castelino
27fdbcab58 [chore/chatqna] Missing protocol in curl command (#1447)
This PR fixes the missing protocol in the curl command for tei-embedding-service in the ChatQnA README.
2025-01-22 21:41:47 +08:00
lkk
f07cf1dad2 Fix wrong vllm repo. (#1454)
Use vllm-fork for Gaudi.

Fixes issue #1451.
2025-01-22 21:22:56 +08:00
dolpher
ee0e5cc8d9 Sync value files from GenAIInfra (#1428)
All Gaudi values updated with extra flags.
Added helm support for two new examples, Text2Image and SearchQnA. Minor fix for llm-uservice.

Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-22 17:44:11 +08:00
chen, suyue
5c36443b11 Use local hub cache for AgentQnA test (#1450)
Use local hub cache for AgentQnA test to save workspace.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-22 13:23:00 +08:00
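
A sketch of the local-cache idea: mount the host's Hugging Face hub cache into the test container so models are reused instead of refetched (the helm workflow below uses /home/sdp/.cache/huggingface/hub for the same purpose; the image name and mount point here are placeholders):

```bash
# Reuse the host HF hub cache inside the container (image and mount point assumed)
docker run --rm \
  -v "${HOME}/.cache/huggingface/hub:/data" \
  -e HF_HUB_CACHE=/data \
  opea/agent:latest
```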
Lianhao Lu
62cea74a23 CI: improve helm CI (#1452)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2025-01-22 09:18:35 +08:00
WenjiaoYue
b721c256f9 Fix Domain Access Issue in Latest Vite Version (#1444)
Fix the restriction on using domain names with the latest version of Vite.

When users run the new version of Vite, the UI cannot be accessed via domain names due to Vite's new host-checking rules. This fix adds the corresponding parameters per those rules, ensuring that users can access the frontend via domain names when building the UI.

Fixes #1441

Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2025-01-21 23:28:37 +08:00
chen, suyue
927698e23e Simplify git clone code in CI test (#1434)
1. Simplify git clone code in CI test.
2. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-21 23:00:08 +08:00
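
The change boils down to shallow, single-branch clones; a sketch based on the workflow diff below (the branch variable stands in for the workflow's opea_branch input):

```bash
# Shallow clones keep CI checkouts small and fast
git clone --depth 1 https://github.com/vllm-project/vllm.git
git clone --depth 1 --branch "${OPEA_BRANCH}" https://github.com/opea-project/GenAIComps.git
```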
ZePan110
c3e84b5ffa Fix test matrix for helm charts (#1449)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 22:28:31 +08:00
ZePan110
6b2a041f25 Fix Helm-chart workflow issues. (#1448)
Fix matrix errors and the failure to obtain CD test files.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 21:48:57 +08:00
ZePan110
842f46326b Switch helm-chart test runs-on label. (#1446)
Switch helm-chart test runs-on label from ${{ inputs.hardware }} to k8s-${{ inputs.hardware }}.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 18:07:03 +08:00
Wang, Kai Lawrence
284db982be [ROCm] Fix the hf-token setting for TGI and TEI in ChatQnA (#1432)
This PR corrects the env variable names passed to the TGI and TEI docker containers in the ChatQnA example on the ROCm platform. TGI parses either HF_TOKEN or HUGGING_FACE_HUB_TOKEN, while TEI parses HF_API_TOKEN.

TGI: https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs#L1700C1-L1702C15
TEI: https://github.com/huggingface/text-embeddings-inference/blob/main/router/src/main.rs#L112

Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-21 14:22:39 +08:00
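
A hedged sketch of the token mapping described above (image tags and model variables are placeholders; only the env variable names come from the PR text):

```bash
# TGI accepts HF_TOKEN (or HUGGING_FACE_HUB_TOKEN); TEI expects HF_API_TOKEN
docker run -e HF_TOKEN="${HUGGINGFACEHUB_API_TOKEN}" \
  ghcr.io/huggingface/text-generation-inference:latest --model-id "${LLM_MODEL_ID}"
docker run -e HF_API_TOKEN="${HUGGINGFACEHUB_API_TOKEN}" \
  ghcr.io/huggingface/text-embeddings-inference:latest --model-id "${EMBEDDING_MODEL_ID}"
```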
ZePan110
fc96fe83e2 Fix CD workflow issue (#1443)
Fix the issue of CD workflow values_files errors.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-21 11:54:12 +08:00
Hoong Tee, Yeoh
0316114c4b ProductivitySuite: Fix FaqGen Microservice CI test fail (#1437)
A change in the FaqGen microservice's content-type header resulted in a CI failure.

#1431
Signed-off-by: Yeoh, Hoong Tee <hoong.tee.yeoh@intel.com>
2025-01-21 10:23:35 +08:00
chen, suyue
0408453fa2 Unify the yaml name to fix the CD workflow (#1435)
Fix the issue in #1372

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-21 01:10:41 +08:00
XinyaoWa
d0cd0aaf53 Update GraphRAG to be compatible with latest component changes (#1427)
- Updated ENV VARS to align with recent changes in neo4j dataprep and retriever.
- Upgraded tgi-gaudi image version.
Related to GenAIComps repo issue #1025 (opea-project/GenAIComps#1025)

Original PR #1384
Original contributor is @rbrugaro

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-01-21 00:18:01 +08:00
chen, suyue
0ba3decb6b Simplify git clone code in CI test (#1422)
1. Simplify git clone code in CI test. 
2. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 23:55:20 +08:00
Wang, Kai Lawrence
3d3ac59bfb [ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu (#1430)
Update the default LLM to llama3-8B on cpu/nvgpu/amdgpu/gaudi for docker-compose deployment to avoid the potential model-serving issue and the missing chat-template issue seen with neural-chat-7b.

Slow serving issue of neural-chat-7b on ICX: #1420
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-20 22:47:56 +08:00
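
A minimal sketch of the switch (the variable name is the one used across the examples' set_env scripts; the exact HF model id is an assumption based on the PR title):

```bash
# Point the LLM serving backend at llama3-8B instead of neural-chat-7b
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
```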
Melanie Hart Buehler
f11ab458d8 MultimodalQnA image query, pdf, dynamic ports, and UI updates (#1381)
Per the proposed changes in this [RFC](https://github.com/opea-project/docs/blob/main/community/rfcs/24-10-02-GenAIExamples-001-Image_and_Audio_Support_in_MultimodalQnA.md)'s Phase 2 plan, this PR adds support for image queries, PDF ingestion and display, and dynamic ports. There are also some bug fixes. This PR goes with [this one in GenAIComps](https://github.com/opea-project/GenAIComps/pull/1134).

Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2025-01-20 22:41:52 +08:00
ZePan110
f3562bef36 Add helm e2e test workflow (#1372)
Add both CI and CD workflows for helm chart values tests.

Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-20 21:04:11 +08:00
chen, suyue
7a54064d65 remove Dockerfile.wrapper (#1429)
Remove Dockerfile.wrapper; it is no longer used and no test covers it, so removing it avoids regressions.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 20:49:18 +08:00
Liang Lv
0f7e5a37ac Adapt code for dataprep microservice refactor (#1408)
https://github.com/opea-project/GenAIComps/pull/1153

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-20 20:37:03 +08:00
xiguiw
2d5898244c Enhance health check in GenAIExamples docker-compose (#1410)
Fix service launch issue

1. Update Gaudi TGI image from 2.0.6 to 2.3.1
2. Change the hpu-gaudi TGI health check condition.

Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-01-20 20:13:13 +08:00
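
A sketch of the kind of compose healthcheck involved, so dependent services wait for TGI readiness (interval/retries and the endpoint are illustrative assumptions, not the exact change; the 2.3.1 tag is from the commit text):

```bash
# Append a healthcheck to the TGI service definition (fragment assumes a services: block)
cat >> compose.yaml <<'EOF'
  tgi-service:
    image: ghcr.io/huggingface/tgi-gaudi:2.3.1
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
      interval: 10s
      retries: 100
EOF
```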
Neo Zhang Jianyu
59722d2bc9 [Bug] Enhance the template (#1396)
Enhance the bug & feature templates according to issue #1002.
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2025-01-20 17:56:14 +08:00
chen, suyue
6bfd156573 Clean up test scripts and enhance git clone (#1417)
1. Clean up test code in scripts.
2. Simplify git clone code.
3. Replace git clone branch in Dockerfile.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 16:34:28 +08:00
XinyuYe-Intel
528770a8d7 Add UT for Text2Image on Gaudi (#1424)
Add UT for Text2Image on Gaudi.

#1421
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-20 16:01:35 +08:00
chen, suyue
239995da16 Update DocIndexRetriever CI test scripts (#1416)
1. Add image build condition.
2. Update single branch clone.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 11:16:38 +08:00
chen, suyue
f65e8d8668 Add port 5000 checking and warning (#1414)
Port 5000 is used by the local docker registry; do NOT use it in docker compose deployments.

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-20 09:09:31 +08:00
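
A minimal pre-flight check in the spirit of the warning (the check itself is an assumed sketch, not the code from the PR):

```bash
# Warn if something (typically the local docker registry) already listens on port 5000
if ss -tln | grep -q ':5000 '; then
  echo "WARNING: port 5000 is in use; do not map compose services to it." >&2
fi
```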
chen, suyue
a49a36cebc Add secrets OPENAI_API_KEY (#1412)
Add secrets OPENAI_API_KEY for AMD GPU CI test. 

Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-19 19:39:45 +08:00
Wang, Kai Lawrence
742cb6ddd3 [ChatQnA] Switch to vLLM as default llm backend on Xeon (#1403)
Switch from TGI to vLLM as the default LLM serving backend on Xeon for the ChatQnA example to improve performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:48:19 +08:00
Wang, Kai Lawrence
00e9da9ced [ChatQnA] Switch to vLLM as default llm backend on Gaudi (#1404)
Switch from TGI to vLLM as the default LLM serving backend on Gaudi for the ChatQnA example to improve performance.

https://github.com/opea-project/GenAIExamples/issues/1213
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2025-01-17 20:46:38 +08:00
chyundunovDatamonsters
277222a922 General README.md - add deploy on AMD info (#1409)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-17 20:26:59 +08:00
lkk
5c68effc9f update agent example for the GenAIComps changes. (#1407)
Update build.yaml and compose_vllm.yaml following the GenAIComps refactoring.

Fixes an issue left by https://github.com/opea-project/GenAIExamples/pull/1353
2025-01-17 11:29:11 +08:00
XinyaoWa
39409d7f61 Align OpenAI API for FaqGen, DocSum (#1401)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-17 11:19:35 +08:00
XinyaoWa
71e3c57366 Standardize name for LLM comps (#1402)
Update all class and file names in the llm comps to follow the standard format; related GenAIComps PR: opea-project/GenAIComps#1162.

Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 23:10:27 +08:00
Letong Han
5ad24af2ee Fix Vectorstores Path Issue of Refactor (#1399)
Fix the vectorstores path issue caused by the refactor in PR opea-project/GenAIComps#1159.
Modify the docker image name and file path in docker_images_list.md.

Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 19:50:59 +08:00
WenjiaoYue
3a9a24a51a Agent ui (#1389)
Signed-off-by: WenjiaoYue <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>
Co-authored-by: WenjiaoYue <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>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-16 18:47:46 +08:00
XinyaoWa
301b5e9a69 Fix vllm hpu to a stable release (#1398)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 16:35:32 +08:00
Yao Qing
b4269d6c4f Modify the corresponding path based on the refactor of chathistory in GenAIComps. (#1397)
GenAIComps has refactored chathistory based on the E-RAG code structure. Related paths in GenAIExamples have been modified.

Fix GenAIComps Issue https://github.com/opea-project/GenAIComps/issues/989 
Signed-off-by: Yao, Qing <qing.yao@intel.com>
2025-01-16 14:26:17 +08:00
Letong Han
4cabd55778 Refactor Retrievers related Examples (#1387)
Delete the redundant retrievers docker image in docker_images_list.md.
Refactor the Retrievers-related Examples READMEs.
Change all comps/retrievers/xxx/xxx/Dockerfile paths to comps/retrievers/src/Dockerfile.

Fix the Examples CI issues of PR opea-project/GenAIComps#1138.
Signed-off-by: letonghan <letong.han@intel.com>
2025-01-16 14:21:48 +08:00
xiguiw
698a06edbf [DOC] Fix document issue (#1395)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2025-01-16 11:30:07 +08:00
Eero Tamminen
0eae391fda Use staged builds to minimize final image sizes (#1031)
Staged image builds so that final images do not have redundant things like:
- Git tool and its deps
- Git repo history
- Test directories

Fixes: #225
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
2025-01-16 11:14:47 +08:00
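
A minimal sketch of the staged-build pattern (base image and paths are assumptions, not the exact Dockerfiles from the PR):

```bash
cat > Dockerfile <<'EOF'
# Builder stage: git, its deps, and the cloned repo (with history) stay here
FROM python:3.11-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends git
RUN git clone --depth 1 https://github.com/opea-project/GenAIComps.git /home/user/GenAIComps

# Final stage: copy only the code needed at runtime; git and test dirs never reach it
FROM python:3.11-slim
COPY --from=builder /home/user/GenAIComps/comps /home/user/comps
EOF
```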
XinyaoWa
23d885bf60 Refactor vllm openvino to third parties (#1388)
vllm-openvino is a dependency of the text generation comps; GenAIComps PR opea-project/GenAIComps#1141 moved it to the third-parties folder, so update the path accordingly.

#998 
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-16 10:07:56 +08:00
minmin-intel
287f03a834 Add SQL agent to AgentQnA (#1370)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-01-15 09:31:13 -08:00
ZePan110
a65a1e5598 Fix CI filter issue (#1393)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-15 11:39:51 +08:00
Neo Zhang Jianyu
9812c2fb45 Update check-online-doc-build.yml (#1390) 2025-01-15 09:07:02 +08:00
XinyaoWa
7d218b9f36 Remove vllm hpu commit id limit (#1386)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-14 11:05:32 +08:00
Zhu Yongbo
ba9892f8ee minor bug fix for EC-RAG (#1378)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
2025-01-14 10:45:15 +08:00
XinyaoWa
ff1310b11a Refactor docsum (#1336)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-13 15:49:48 +08:00
Sihan Chen
ca15fe9bdb Refactor lvm related examples (#1333) 2025-01-13 13:42:06 +08:00
XinyaoWa
f48bd8e74f Refactor Faqgen (#1323)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-13 13:01:04 +08:00
Ying Hu
91ff520baa Update README.md for add K8S cluster link for Gaudi (#1380) 2025-01-13 09:33:58 +08:00
Liang Lv
3ca78867eb Update example code for embedding dependency moving to 3rd_party (#1368)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-10 15:36:58 +08:00
Yao Qing
7a3dfa90ca Fix for animation dockerfile path. (#1371)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2025-01-10 11:44:57 +08:00
dolpher
c795ef2203 Add helm deployment instructions for GenAIExamples (#1373)
Add helm deployment instructions for ChatQnA, AgentQnA, AudioQnA, CodeTrans, DocSum, FaqGen and VisualQnA

Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-10 09:55:31 +08:00
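
The instructions center on installing the published charts from the OPEA OCI registry; a sketch following the pattern in the _helm-e2e.yml workflow added below (release and namespace names are arbitrary):

```bash
helm install chatqna oci://ghcr.io/opea-project/charts/chatqna \
  --create-namespace --namespace chatqna \
  --set global.HUGGINGFACEHUB_API_TOKEN="${HFTOKEN}" \
  --wait --timeout 600s
```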
chen, suyue
99120f4cd2 Update action token for CI (#1374)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-09 17:19:07 +08:00
XinyuYe-Intel
9fe480b010 Update dockerfile path for text2image (#1307)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2025-01-09 12:03:27 +08:00
XinyuYe-Intel
113281d073 Update path for finetuning (#1306)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-09 12:01:59 +08:00
Liang Lv
370d6928c1 Update example code for prompt registry refactor (#1362)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2025-01-09 11:59:32 +08:00
Liang Lv
2b26450bb9 Update docker file path for feedback management refactor (#1364)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-09 11:21:25 +08:00
Louie Tsai
81022355a7 Enable OpenTelemetry Tracing for ChatQnA TGI serving on Gaudi (#1316)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2025-01-08 17:20:13 -08:00
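
For context, TGI ships an OTLP exporter, so enabling tracing is mostly a matter of pointing it at a collector; a hedged sketch (image tag and collector endpoint are placeholders):

```bash
# --otlp-endpoint makes TGI export traces to the given collector
docker run ghcr.io/huggingface/tgi-gaudi:2.0.6 \
  --model-id "${LLM_MODEL_ID}" \
  --otlp-endpoint "http://jaeger:4317"
```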
Jaswanth Karani
ddacb7e86d fixed build issue (#1367) 2025-01-08 22:19:23 +08:00
Sihan Chen
5128c2d650 Refactor web retrievers links (#1338) 2025-01-08 16:19:50 +08:00
Liang Lv
b3c405a5f6 Adapt example code for guardrails refactor (#1360)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-08 14:35:23 +08:00
dolpher
5638075d65 Add helm deployment instructions for codegen (#1351)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-08 13:20:32 +08:00
chen, suyue
23117871c2 remove chatqna-conversation-ui build in CI test (#1361)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-08 12:09:33 +08:00
WenjiaoYue
9970605460 Adapt refactor comps (#1340)
Signed-off-by: WenjiaoYue
2025-01-08 10:36:24 +08:00
dolpher
28206311fd Disable GMC CI temporarily (#1359)
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
2025-01-08 09:55:53 +08:00
ZePan110
589bfb2b7a Change license template from 2024 to 2025 (#1358)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-07 19:29:55 +08:00
Pranav Singh
d2b49bbc82 [ChatQNA] Fix K8s Deployment for CPU/HPU (#1274)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 13:45:09 +08:00
Ying Hu
41374d865b Update README.md for support matrix (#983)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2025-01-07 11:45:42 +08:00
pre-commit-ci[bot]
2c624e1f5f [pre-commit.ci] pre-commit autoupdate (#1356)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 11:13:07 +08:00
Ying Hu
00241d01d2 Update README.md for quick start guide (#1355)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 10:08:16 +08:00
ZePan110
ed2b8ed983 Exclude Dockerfiles under tests from the Dockerfile checks. (#1354)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-07 09:05:01 +08:00
lkk
a6e702e4d5 refine agent directories. (#1353) 2025-01-06 17:40:24 +08:00
ZePan110
aa5c91d7ee Check duplicated dockerfile (#1289)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2025-01-06 17:30:12 +08:00
chen, suyue
b88d09e23f Fix code owner list (#1352)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-06 14:00:13 +08:00
XinyaoWa
464e2d3125 Rename streaming to stream to align with OpenAI API (#1332)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2025-01-06 13:25:55 +08:00
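
After the rename, request bodies use the OpenAI-style key; a sketch against the usual ChatQnA mega-service endpoint (host and port are assumptions):

```bash
curl http://localhost:8888/v1/chatqna \
  -H 'Content-Type: application/json' \
  -d '{"messages": "What is OPEA?", "stream": true}'
```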
chen, suyue
1f29eca288 fix chatqna benchmark without rerank config issue (#1341)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-06 09:16:20 +08:00
chen, suyue
1d7ac82979 Fix changed file detect issue (#1339)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-03 11:24:02 +08:00
chen, suyue
5c7a5bd850 Update Code and README for GenAIComps Refactor (#1285)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
Signed-off-by: WenjiaoYue <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>
2025-01-02 20:03:26 +08:00
Yao Qing
72f8079289 Refactor text2sql. (#1304)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2025-01-02 10:52:21 +08:00
Zhu Yongbo
6169ea4921 add new feature and bug fix for EC-RAG (#1324)
Signed-off-by: Zhu, Yongbo <yongbo.zhu@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-01-02 09:25:20 +08:00
chyundunovDatamonsters
75b0961a48 Translation App - Adding files to deploy Translation application on AMD GPU (#1191)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2025-01-02 09:19:44 +08:00
Sihan Chen
cc1d97f816 Refactor AudioQnA/MultiModalQnA/AvatarChatbot (#1310)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chensuyue <suyue.chen@intel.com>
2024-12-31 12:47:30 +08:00
xiguiw
250ffb8b66 [DOC] Fix docker build command in document (#1287)
Signed-off-by: Wang, Xigui <xigui.wang@intel.com>
2024-12-31 00:02:22 +08:00
ZePan110
1e9d111982 Block the manifest test for now and restore it after the refactor work is completed. (#1321)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-12-30 16:24:19 +08:00
Ying Hu
597f17b979 Update set_env.sh to fix LOGFLAG warning (#1319) 2024-12-30 10:54:26 +08:00
Yao Qing
b9790d809b Refactoring animation. (#1301)
Signed-off-by: Yao, Qing <qing.yao@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-27 17:21:47 +08:00
Daniel De León
b27b48c488 Add microservice resources to no_proxy in the main ChatQnA README (#1269)
Signed-off-by: Daniel Deleon <daniel.de.leon@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-12-27 16:14:28 +08:00
Dina Suehiro Jones
0bf1d0be65 Bug fix to add missing BRIDGE_TOWER_EMBEDDING env var for MultimodalQnA (#1280)
Signed-off-by: dmsuehir <dina.s.jones@intel.com>
2024-12-26 23:30:57 -08:00
Sihan Chen
a01729a5c2 Refactor DocSum example (#1286)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-26 14:45:17 +08:00
chen, suyue
6b6a08df78 Add minimal containers and ports clean up before test (#1291)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-26 10:59:26 +08:00
chen, suyue
0b23cba505 add manual container clean-up action (#1296)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-26 09:26:53 +08:00
XinyaoWa
50dd959d60 Support Long context for DocSum (#1255)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <33276950+lkk12014402@users.noreply.github.com>
2024-12-20 19:17:10 +08:00
XinyaoWa
05365b6140 FaqGen param fix (#1277)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2024-12-20 11:30:36 +08:00
Sihan Chen
fd706d1a70 Align DocIndexRetriever Xeon tests with Gaudi (#1272) 2024-12-20 10:30:51 +08:00
Sihan Chen
3b9e55cb8e Minor fix DocIndexRetriever test (#1266) 2024-12-19 12:12:33 +08:00
bjzhjing
7d9b34cf5e Chatqna/benchmark: Remove the deprecated directory (#1261)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-12-19 10:51:01 +08:00
Mustafa
84a6a6e9bc Adding URL summary option to DocSum Gradio-UI (#1248)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: okhleif-IL <omar.khleif@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <33276950+lkk12014402@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: WenjiaoYue <wenjiao.yue@intel.com>
2024-12-19 10:49:03 +08:00
chen, suyue
89a7f9e001 Update CODEOWNERS list for PR review (#1262)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
2024-12-19 10:01:52 +08:00
Artem Astafev
236ea6bcce Added compose example for MultimodalQnA deployment on AMD ROCm systems (#1233)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2024-12-18 17:43:32 +08:00
chyundunovDatamonsters
67634dfd22 DocSum - Solving the problem of running DocSum on ROCm (#1268)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2024-12-18 17:38:38 +08:00
Artem Astafev
df7c192835 Added docker compose example for AgentQnA deployment on AMD ROCm (#1166)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2024-12-18 10:21:00 +08:00
Letong Han
f930638844 Update Multimodal Docker File Path (#1252)
Signed-off-by: letonghan <letong.han@intel.com>
2024-12-17 17:30:29 +08:00
Sun, Xuehao
5613add4dd Change to pull_request_target for dependency review workflow (#1256)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2024-12-17 12:05:02 +08:00
lkk
e18369ba0d remove examples gateway. (#1250)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-14 13:19:51 +08:00
lkk
2af1ea0f8e remove examples gateway. (#1243)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-13 15:16:11 +08:00
Melanie Hart Buehler
c760cac2f4 Adds audio querying to MultimodalQ&A Example (#1225)
Signed-off-by: Melanie Buehler <melanie.h.buehler@intel.com>
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
Signed-off-by: dmsuehir <dina.s.jones@intel.com>
Co-authored-by: Omar Khleif <omar.khleif@intel.com>
Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-12-12 16:05:14 +08:00
Li Gang
a50e4e6f9f [DocIndexRetriever] enable the without-rerank flavor (#1223)
Signed-off-by: Li Gang <gang.g.li@intel.com>
Co-authored-by: ligang <ligang@ligang-nuc9v.bj.intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-12 09:34:21 +08:00
Omar Khleif
00b526c8e5 Changed Default UI to Gradio (#1246)
Signed-off-by: okhleif-IL <omar.khleif@intel.com>
2024-12-11 11:04:10 -08:00
Wang, Kai Lawrence
4c01e14642 [ChatQnA] Remove enforce-eager to enable HPU graphs for better vLLM perf (#1210)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2024-12-10 13:19:15 +08:00
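
The flag in question is vLLM's --enforce-eager, which disables graph capture; dropping it lets vLLM use HPU graphs on Gaudi. A hedged launch sketch (model and port are placeholders):

```bash
# No --enforce-eager: vLLM may capture HPU graphs for better throughput
python -m vllm.entrypoints.openai.api_server \
  --model "${LLM_MODEL_ID}" --port 8000
```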
Lianhao Lu
6f9f6f0bad Remove deprecated docker compose files (#1238)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2024-12-10 09:43:19 +08:00
Pranav Singh
893f324d07 [ChatQNA] Fixes Embedding Endpoint (#1230)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
2024-12-09 10:12:16 +08:00
Artem Astafev
77e640e2f3 Added compose example for VisualQnA deployment on AMD ROCm systems (#1201)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-07 18:58:40 +08:00
Mustafa
07e47a1f38 Update tests for issue 1229 (#1231)
Signed-off-by: Mustafa <mustafa.cetin@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-07 09:07:52 +08:00
lkk
bde285dfce move examples gateway (#992)
Co-authored-by: root <root@idc708073.jf.intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
2024-12-06 14:40:25 +08:00
WenjiaoYue
f5c08d4fbb Update audioQnA compose (#1227)
Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
2024-12-05 16:23:47 +08:00
pallavijaini0525
3a371ac102 Updated the Pinecone readme to reflect the new structure (#1222)
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-05 10:04:09 +08:00
sgurunat
031cf6e1ff ChatQnA: Update kubernetes xeon chatqna remote inference and svelte UI (#1215)
Signed-off-by: sgurunat <gurunath.s@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-04 22:40:03 +08:00
sgurunat
3299e5c9f5 ChatQnA: Update chatqna-vllm-remote-inference (#1224)
Signed-off-by: sgurunat <gurunath.s@intel.com>
2024-12-04 22:33:27 +08:00
ZePan110
340796bbae Split ChatQnA manifest test (#1190)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-04 15:17:46 +08:00
Lianhao Lu
8182a83382 CI: Add check for conflict image build definition (#1184)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
2024-12-03 10:46:16 +08:00
WenjiaoYue
8192c3166f Update OPEA example package.json version (#1211)
Signed-off-by: Yue, Wenjiao <wenjiao.yue@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-02 21:33:30 +08:00
chen, suyue
240054ac52 CD workflow update (#1221)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-12-02 17:42:02 +08:00
Neo Zhang Jianyu
c9caf1c083 fix file name (#1219)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2024-12-02 14:29:20 +08:00
Neo Zhang Jianyu
a426a9a51d add label automatically when creating an issue (#1217)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
2024-12-02 13:41:22 +08:00
Zhu Yongbo
bb466b3791 EdgeCraft RAG UI bug fix (#1189)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-12-02 11:47:04 +08:00
chen, suyue
0f8344e4f5 Update test params (#1182)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-29 15:47:15 +08:00
ZePan110
ed8dbaac47 Revert "WA for the issue of vllm Dockerfile.cpu build failure (#1195)" (#1206) 2024-11-28 13:36:14 +08:00
ZePan110
e8cffc6146 Check image and service names and Dockerfile in build.yaml (#1209)
Signed-off-by: ZePan110 <ze.pan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-28 13:14:11 +08:00
Sihan Chen
907b30b7fe Refactor service names (#1199) 2024-11-28 10:01:31 +08:00
Letong Han
545aa571bf [ChatQnA] Update Benchmark E2E Parameters (#1200)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-27 17:11:11 +08:00
ZePan110
5422bcb970 WA for the issue of vllm Dockerfile.cpu build failure (#1195)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-11-27 14:51:19 +08:00
VincyZhang
736155ca95 Detect dangerous command (#1179)
Signed-off-by: Wenxin Zhang <wenxin.zhang@intel.com>
2024-11-27 11:43:56 +08:00
ZePan110
39fa25e03a Limit the version of vllm to avoid docker build failures. (#1183)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-11-25 10:33:33 +08:00
Wang, Kai Lawrence
ac470421d0 Update the llm backend ports (#1172)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
2024-11-22 09:20:09 +08:00
Mingyuan Qi
edcd7c9d6a Fix code scanning alert no. 21: Uncontrolled data used in path expression (#1171)
Signed-off-by: Mingyuan Qi <mingyuan.qi@intel.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2024-11-21 20:36:28 +08:00
bjzhjing
ef2047b070 Adjustments for helm release change (#1173)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-11-21 14:14:27 +08:00
Letong Han
94231584aa Fix Translation Manifest CI with MODEL_ID (#1169)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-21 10:48:52 +08:00
minmin-intel
c5177c5e2f Fix DocIndexRetriever CI error on Xeon (#1167)
Signed-off-by: minmin-intel <minmin.hou@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-21 09:30:11 +08:00
Artem Astafev
006c61bcbb Add example for AudioQnA deploy in AMD ROCm (#1147)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Liang Lv <liang1.lv@intel.com>
2024-11-20 20:46:27 +08:00
chen, suyue
cc108b5a18 Fix DBQnA image build (#1165)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-20 10:56:49 +08:00
chen, suyue
f70d9c3853 chatqna benchmark for v1.1 release (#1120)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-11-19 22:57:25 +08:00
ZePan110
8808b51e42 Rename image name XXX-hpu to XXX-gaudi (#1154)
Signed-off-by: ZePan110 <ze.pan@intel.com>
2024-11-19 22:18:41 +08:00
chen, suyue
17d4b0c97f freeze nodejs version in CI test (#1162)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-19 13:22:56 +08:00
Sun, Xuehao
3a03d31f8f Update manual-freeze-tag workflow (#1161)
Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
2024-11-19 11:00:36 +08:00
dependabot[bot]
179fd84362 Bump gradio from 4.44.0 to 5.5.0 in /DocSum/ui/gradio (#1157)
Signed-off-by: dependabot[bot] <support@github.com>
2024-11-18 23:50:56 +08:00
chen, suyue
9ba034b22d fix the docker image name for release image build (#1152)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-18 23:48:01 +08:00
jotpalch
c3e6f43ece Fix command in README for deploying ChatQnA application (#1156) 2024-11-18 22:59:22 +08:00
Theresa
1ac756a1c7 Rename the GraphRAG UI image (#1155)
Signed-off-by: ichbinblau <theresa.shan@intel.com>
2024-11-18 20:07:22 +08:00
sgurunat
56f770cb28 ChatQnA with Remote Inference Endpoints (Kubernetes) (#1149)
Signed-off-by: sgurunat <gurunath.s@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-18 20:06:17 +08:00
XinyaoWa
0cdeb946e4 DocSum Manifest support multimedia (#1158)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-18 18:46:01 +08:00
Artem Astafev
5648839411 Add compose example for FaqGen AMD ROCm (#1126)
Signed-off-by: artem-astafev <a.astafev@datamonsters.com>
2024-11-18 17:38:21 +08:00
Mustafa
eb91d1f054 Docsum (#1095)
Signed-off-by: Mustafa <mustafa.cetin@intel.com>
Signed-off-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: XinyaoWa <xinyao.wang@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-18 17:15:42 +08:00
Wang, Kai Lawrence
2587179224 Add instructions of modifying reranking docker image for NVGPU (#1133)
Signed-off-by: Wang, Kai Lawrence <kai.lawrence.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-18 15:37:32 +08:00
chyundunovDatamonsters
7e62175c2e Adding files to deploy CodeTrans application on AMD GPU (#1138)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
2024-11-18 14:58:38 +08:00
Louie Tsai
152adf8012 Maintain version info for docker_compose yaml files across releases (#1141)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2024-11-17 22:39:41 -08:00
chyundunovDatamonsters
83172e9a99 Adding files to deploy CodeGen application on AMD GPU (#1130)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-18 14:36:23 +08:00
Liang Lv
fb514bb8ba Add chatqna wrapper for multiple model selection (#1144)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: Ying Hu <ying.hu@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-18 10:48:09 +08:00
Artem Astafev
b1bb6db52d Add compose example for DocSum amd rocm deployment (#1125)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-18 09:09:12 +08:00
rui2zhang
7949045176 EdgeCraftRAG: Add E2E test cases for EdgeCraftRAG - local LLM and vllm (#1137)
Signed-off-by: Zhang, Rui <rui2.zhang@intel.com>
Signed-off-by: Mingyuan Qi <mingyuan.qi@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Mingyuan Qi <mingyuan.qi@intel.com>
2024-11-17 18:22:32 +08:00
Lianhao Lu
cbe952ec5e Fail CI manifest test if response content is not expected (#1145)
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-11-17 12:46:31 +08:00
chen, suyue
3b1a9fe9e1 optimize hardware list for test (#1151)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-15 22:46:02 +08:00
chen, suyue
e66d7fe381 fix typo in CI workflow (#1150)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-15 21:19:29 +08:00
Artem Astafev
6d3a017609 Add compose example for ChatQnA AMD ROCm deployment (#1122)
Signed-off-by: Artem Astafev <a.astafev@datamonsters.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-15 17:24:06 +08:00
Ying Hu
dbf4ba03fa Update AgentQnA README.md for refactor doc structure (#1146)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-15 16:30:13 +08:00
XinyaoWa
4f96d9e605 vllm hpu fix version for bug fix (#1142)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
2024-11-15 15:12:53 +08:00
Ying Hu
a8f4245384 Update README.md for usage experience (#1135)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2024-11-15 14:23:12 +08:00
Mingyuan Qi
096a37aacc EdgeCraftRAG: Fix multiple issues (#1143)
Signed-off-by: Mingyuan Qi <mingyuan.qi@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-15 14:01:27 +08:00
rbrugaro
6f8fa6a689 Grag ex1.1 (#1123)
Signed-off-by: Rita Brugarolas <rita.brugarolas.brufau@intel.com>
Signed-off-by: theresa <theresa.shan@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: theresa <theresa.shan@intel.com>
2024-11-15 13:17:06 +08:00
Letong Han
39f68d5d6b Fix SearchQnA CI Issue (#1134)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-15 10:01:27 +08:00
Louie Tsai
00d9bb6128 Enable vLLM Profiling for ChatQnA on Gaudi (#1128)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
2024-11-14 15:46:33 -08:00
Abolfazl Shahbazi
59b624c677 Fix minor documentation build issue (#1139)
Signed-off-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
2024-11-14 15:29:50 -08:00
chen, suyue
2b2c7ee2f5 upgrade setuptools version to fix CVE-2024-6345 (#999)
Signed-off-by: chensuyue <suyue.chen@intel.com>
2024-11-14 14:57:16 +08:00
Hoong Tee, Yeoh
6b9a27dd83 DBQnA: Include workflow in README (#956)
Signed-off-by: Yeoh, Hoong Tee <hoong.tee.yeoh@intel.com>
2024-11-14 14:05:28 +08:00
Yi Yao
5720cd45c0 Add benchmark launcher for AudioQnA (#981)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-14 13:58:51 +08:00
XinyaoWa
73879d3cec fix faq ui bug (#1118)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-14 10:00:30 +08:00
Lucas Melo
7c9ed04132 ChatQnA - Add Terraform and Ansible Modules information (#970)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: lucasmelogithub <lucas.melo@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Malini Bhandaru <malini.bhandaru@intel.com>
2024-11-13 11:42:12 -08:00
lvliang-intel
9ff7df9202 Use fixed version of TEI Gaudi for stability (#1101)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: Malini Bhandaru <malini.bhandaru@intel.com>
2024-11-13 10:45:50 -08:00
Abolfazl Shahbazi
b5f95f735e Fix missing end of file chars (#1106)
Signed-off-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-13 09:40:53 -08:00
chen, suyue
393367e9f1 Fix leftover issue of tgi version update (#1121)
Signed-off-by: chensuyue <suyue.chen@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-13 15:42:42 +08:00
Louie Tsai
7adbba6add Enable vLLM Profiling for ChatQnA (#1124) 2024-11-13 11:26:31 +08:00
pallavijaini0525
0d52c2f003 Pinecone update to Readme and docker compose for ChatQnA (#540)
Signed-off-by: pallavi jaini <pallavi.jaini@intel.com>
Signed-off-by: AI Workloads <aigoldrush1@g2-r3-2.iind.intel.com>
Signed-off-by: Pallavi Jaini <pallavi,jaini@intel.com>
Signed-off-by: Pallavi Jaini <pallavi.jaini@intel.com>
Signed-off-by: root <root@test-pjaini.535545281608.us-region-2.idcservice.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: AI Workloads <aigoldrush1@g2-r3-2.iind.intel.com>
Co-authored-by: Pallavi Jaini <pallavi,jaini@intel.com>
Co-authored-by: root <root@test-pjaini.535545281608.us-region-2.idcservice.net>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-13 09:32:37 +08:00
lvliang-intel
1ff85f6a85 Upgrade TGI Gaudi version to v2.0.6 (#1088)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
2024-11-12 14:38:22 +08:00
bjzhjing
f7a7f8aa3f Fix typo (#1117)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-11-12 09:54:05 +08:00
lvliang-intel
e3187be819 Update ChatQnA manifests using always pull image policy (#1100)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
2024-11-11 14:37:14 +08:00
Sihan Chen
abd9d12937 Fix non stream case (#1115)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-11 14:18:42 +08:00
bjzhjing
a7353bbaa4 Refine performance directory (#1017)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
2024-11-11 13:58:46 +08:00
Letong Han
aa314f6757 [Readme] Update ChatQnA Readme for LLM Endpoint (#1086)
Signed-off-by: letonghan <letong.han@intel.com>
2024-11-11 13:53:06 +08:00
797 changed files with 26019 additions and 31953 deletions

.github/CODEOWNERS

@@ -1,17 +1,23 @@
/AgentQnA/ kaokao.lv@intel.com
/AudioQnA/ sihan.chen@intel.com
/ChatQnA/ liang1.lv@intel.com
/CodeGen/ liang1.lv@intel.com
/CodeTrans/ sihan.chen@intel.com
/DocSum/ letong.han@intel.com
* liang1.lv@intel.com feng.tian@intel.com suyue.chen@intel.com
/.github/ suyue.chen@intel.com ze.pan@intel.com
/AgentQnA/ kaokao.lv@intel.com minmin.hou@intel.com
/AudioQnA/ sihan.chen@intel.com wenjiao.yue@intel.com
/AvatarChatbot/ chun.tao@intel.com kaokao.lv@intel.com
/ChatQnA/ liang1.lv@intel.com letong.han@intel.com
/CodeGen/ liang1.lv@intel.com xinyao.wang@intel.com
/CodeTrans/ sihan.chen@intel.com xinyao.wang@intel.com
/DBQnA/ supriya.krishnamurthi@intel.com liang1.lv@intel.com
/DocIndexRetriever/ kaokao.lv@intel.com chendi.xue@intel.com
/InstructionTuning xinyu.ye@intel.com
/RerankFinetuning xinyu.ye@intel.com
/MultimodalQnA tiep.le@intel.com
/FaqGen/ xinyao.wang@intel.com
/SearchQnA/ sihan.chen@intel.com
/Translation/ liang1.lv@intel.com
/VisualQnA/ liang1.lv@intel.com
/ProductivitySuite/ hoong.tee.yeoh@intel.com
/VideoQnA huiling.bao@intel.com
/*/ liang1.lv@intel.com
/DocSum/ letong.han@intel.com xinyao.wang@intel.com
/EdgeCraftRAG/ yongbo.zhu@intel.com mingyuan.qi@intel.com
/FaqGen/ yogesh.pandey@intel.com xinyao.wang@intel.com
/GraphRAG/ rita.brugarolas.brufau@intel.com abolfazl.shahbazi@intel.com
/InstructionTuning/ xinyu.ye@intel.com kaokao.lv@intel.com
/MultimodalQnA/ melanie.h.buehler@intel.com tiep.le@intel.com
/ProductivitySuite/ jaswanth.karani@intel.com hoong.tee.yeoh@intel.com
/RerankFinetuning/ xinyu.ye@intel.com kaokao.lv@intel.com
/SearchQnA/ sihan.chen@intel.com letong.han@intel.com
/Text2Image/ wenjiao.yue@intel.com xinyu.ye@intel.com
/Translation/ liang1.lv@intel.com sihan.chen@intel.com
/VideoQnA/ huiling.bao@intel.com xinyao.wang@intel.com
/VisualQnA/ liang1.lv@intel.com sihan.chen@intel.com


@@ -4,6 +4,7 @@
name: Report Bug
description: Used to report bug
title: "[Bug]"
labels: ["bug"]
body:
- type: dropdown
id: priority
@@ -65,6 +66,7 @@ body:
options:
- label: Pull docker images from hub.docker.com
- label: Build docker images from source
- label: Other
validations:
required: true
@@ -73,10 +75,11 @@ body:
attributes:
label: Deploy method
options:
- label: Docker compose
- label: Docker
- label: Kubernetes
- label: Helm
- label: Docker Compose
- label: Kubernetes Helm Charts
- label: Kubernetes GMC
- label: Other
validations:
required: true
@@ -87,6 +90,7 @@ body:
options:
- Single Node
- Multiple Nodes
- Other
default: 0
validations:
required: true
@@ -126,3 +130,12 @@ body:
render: shell
validations:
required: false
- type: textarea
id: attachments
attributes:
label: Attachments
description: Attach any relevant files or screenshots.
validations:
required: false


@@ -4,6 +4,7 @@
name: Report Feature
description: Used to report feature
title: "[Feature]"
labels: ["feature"]
body:
- type: dropdown
id: priority
@@ -65,6 +66,7 @@ body:
options:
- Single Node
- Multiple Nodes
- Other
default: 0
validations:
required: true


@@ -1,2 +1,2 @@
ModelIn
modelin
modelin


@@ -1,2 +1,2 @@
Copyright (C) 2024 Intel Corporation
SPDX-License-Identifier: Apache-2.0
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0


@@ -28,7 +28,7 @@ on:
default: false
required: false
type: boolean
test_k8s:
test_helmchart:
default: false
required: false
type: boolean
@@ -74,15 +74,15 @@ jobs:
cd ${{ github.workspace }}/${{ inputs.example }}/docker_image_build
docker_compose_path=${{ github.workspace }}/${{ inputs.example }}/docker_image_build/build.yaml
if [[ $(grep -c "vllm:" ${docker_compose_path}) != 0 ]]; then
git clone https://github.com/vllm-project/vllm.git
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm && git rev-parse HEAD && cd ../
fi
if [[ $(grep -c "vllm-hpu:" ${docker_compose_path}) != 0 ]]; then
git clone https://github.com/HabanaAI/vllm-fork.git
cd vllm-fork && git rev-parse HEAD && cd ../
if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
git clone --depth 1 --branch v0.6.4.post2+Gaudi-1.19.0 https://github.com/HabanaAI/vllm-fork.git
sed -i 's/triton/triton==3.1.0/g' vllm-fork/requirements-hpu.txt
fi
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git checkout ${{ inputs.opea_branch }} && git rev-parse HEAD && cd ../
git clone --depth 1 --branch ${{ inputs.opea_branch }} https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git rev-parse HEAD && cd ../
- name: Build Image
if: ${{ fromJSON(inputs.build) }}
@@ -110,16 +110,16 @@ jobs:
####################################################################################################
# K8S Test
# helmchart Test
####################################################################################################
test-k8s-manifest:
needs: [build-images]
if: ${{ fromJSON(inputs.test_k8s) }}
uses: ./.github/workflows/_manifest-e2e.yml
test-helmchart:
if: ${{ fromJSON(inputs.test_helmchart) }}
uses: ./.github/workflows/_helm-e2e.yml
with:
example: ${{ inputs.example }}
hardware: ${{ inputs.node }}
tag: ${{ inputs.tag }}
mode: "CD"
secrets: inherit
####################################################################################################


@@ -14,7 +14,7 @@ on:
test_mode:
required: false
type: string
default: 'docker_compose'
default: 'compose'
outputs:
run_matrix:
description: "The matrix string"
@@ -42,6 +42,12 @@ jobs:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Check Dangerous Command Injection
if: github.event_name == 'pull_request' || github.event_name == 'pull_request_target'
uses: opea-project/validation/actions/check-cmd@main
with:
work_dir: ${{ github.workspace }}
- name: Get test matrix
id: get-test-matrix
run: |
@@ -54,9 +60,11 @@ jobs:
base_commit=$(git rev-parse HEAD~1) # push event
fi
merged_commit=$(git log -1 --format='%H')
echo "print all changed files..."
git diff --name-only ${base_commit} ${merged_commit}
changed_files="$(git diff --name-only ${base_commit} ${merged_commit} | \
grep -vE '${{ inputs.diff_excluded_files }}')" || true
echo "changed_files=$changed_files"
echo "filtered changed_files=$changed_files"
export changed_files=$changed_files
export test_mode=${{ inputs.test_mode }}
export WORKSPACE=${{ github.workspace }}


@@ -67,36 +67,6 @@ jobs:
make docker.build
make docker.push
- name: Scan gmcmanager
if: ${{ inputs.node == 'gaudi' }}
uses: opea-project/validation/actions/trivy-scan@main
with:
image-ref: ${{ env.DOCKER_REGISTRY }}/gmcmanager:${{ env.VERSION }}
output: gmcmanager-scan.txt
- name: Upload gmcmanager scan result
if: ${{ inputs.node == 'gaudi' }}
uses: actions/upload-artifact@v4.3.4
with:
name: gmcmanager-scan
path: gmcmanager-scan.txt
overwrite: true
- name: Scan gmcrouter
if: ${{ inputs.node == 'gaudi' }}
uses: opea-project/validation/actions/trivy-scan@main
with:
image-ref: ${{ env.DOCKER_REGISTRY }}/gmcrouter:${{ env.VERSION }}
output: gmcrouter-scan.txt
- name: Upload gmcrouter scan result
if: ${{ inputs.node == 'gaudi' }}
uses: actions/upload-artifact@v4.3.4
with:
name: gmcrouter-scan
path: gmcrouter-scan.txt
overwrite: true
- name: Clean up images
if: always()
run: |

.github/workflows/_helm-e2e.yml (new file)

@@ -0,0 +1,233 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Helm Chart E2e Test For Call
permissions: read-all
on:
workflow_call:
inputs:
example:
default: "chatqna"
required: true
type: string
description: "example to test, chatqna or common/asr"
hardware:
default: "xeon"
required: true
type: string
dockerhub:
default: "false"
required: false
type: string
description: "Set to true if you want to use released docker images at dockerhub. By default using internal docker registry."
mode:
default: "CD"
description: "Whether the test range is CI, CD or CICD"
required: false
type: string
tag:
default: "latest"
required: false
type: string
version:
default: "0-latest"
required: false
type: string
jobs:
get-test-case:
runs-on: ubuntu-latest
outputs:
value_files: ${{ steps.get-test-files.outputs.value_files }}
CHECKOUT_REF: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
steps:
- name: Get checkout ref
id: get-checkout-ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge
else
CHECKOUT_REF=${{ github.ref }}
fi
echo "CHECKOUT_REF=${CHECKOUT_REF}" >> $GITHUB_OUTPUT
echo "checkout ref ${CHECKOUT_REF}"
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Get test Services
id: get-test-files
run: |
set -x
if [ "${{ inputs.mode }}" = "CI" ]; then
base_commit=${{ github.event.pull_request.base.sha }}
merged_commit=$(git log -1 --format='%H')
values_files=$(git diff --name-only ${base_commit} ${merged_commit} | \
grep "${{ inputs.example }}/kubernetes/helm" | \
grep "values.yaml" |\
sort -u)
echo $values_files
elif [ "${{ inputs.mode }}" = "CD" ]; then
values_files=$(ls ${{ inputs.example }}/kubernetes/helm/*values.yaml || true)
fi
value_files="["
for file in ${values_files}; do
if [ -f "$file" ]; then
filename=$(basename "$file")
if [[ "$filename" == *"gaudi"* ]]; then
if [[ "${{ inputs.hardware }}" == "gaudi" ]]; then
value_files="${value_files}\"${filename}\","
fi
elif [[ "$filename" == *"nv"* ]]; then
continue
else
if [[ "${{ inputs.hardware }}" == "xeon" ]]; then
value_files="${value_files}\"${filename}\","
fi
fi
fi
done
value_files="${value_files%,}]"
echo "value_files=${value_files}"
echo "value_files=${value_files}" >> $GITHUB_OUTPUT
helm-test:
needs: [get-test-case]
strategy:
matrix:
value_file: ${{ fromJSON(needs.get-test-case.outputs.value_files) }}
fail-fast: false
runs-on: k8s-${{ inputs.hardware }}
continue-on-error: true
steps:
- name: Clean Up Working Directory
run: |
echo "value_file=${{ matrix.value_file }}"
sudo rm -rf ${{github.workspace}}/*
- name: Get checkout ref
id: get-checkout-ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge
else
CHECKOUT_REF=${{ github.ref }}
fi
echo "CHECKOUT_REF=${CHECKOUT_REF}" >> $GITHUB_OUTPUT
echo "checkout ref ${CHECKOUT_REF}"
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: ${{ steps.get-checkout-ref.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Set variables
env:
example: ${{ inputs.example }}
run: |
CHART_NAME="${example,,}" # CodeGen
echo "CHART_NAME=$CHART_NAME" >> $GITHUB_ENV
echo "RELEASE_NAME=${CHART_NAME}$(date +%Y%m%d%H%M%S)" >> $GITHUB_ENV
echo "NAMESPACE=${CHART_NAME}-$(date +%Y%m%d%H%M%S)" >> $GITHUB_ENV
echo "ROLLOUT_TIMEOUT_SECONDS=600s" >> $GITHUB_ENV
echo "TEST_TIMEOUT_SECONDS=600s" >> $GITHUB_ENV
echo "KUBECTL_TIMEOUT_SECONDS=60s" >> $GITHUB_ENV
echo "should_cleanup=false" >> $GITHUB_ENV
echo "skip_validate=false" >> $GITHUB_ENV
echo "CHART_FOLDER=${example}/kubernetes/helm" >> $GITHUB_ENV
- name: Helm install
id: install
env:
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
HFTOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
value_file: ${{ matrix.value_file }}
run: |
set -xe
echo "should_cleanup=true" >> $GITHUB_ENV
if [[ ! -f ${{ github.workspace }}/${{ env.CHART_FOLDER }}/${value_file} ]]; then
echo "No value file found, exiting test!"
echo "skip_validate=true" >> $GITHUB_ENV
echo "should_cleanup=false" >> $GITHUB_ENV
exit 0
fi
for img in `helm template -n $NAMESPACE $RELEASE_NAME oci://ghcr.io/opea-project/charts/${CHART_NAME} -f ${{ inputs.example }}/kubernetes/helm/${value_file} --version ${{ inputs.version }} | grep 'image:' | grep 'opea/' | awk '{print $2}' | xargs`;
do
# increase helm install wait time for the vllm-gaudi case
if [[ $img == *"vllm-gaudi"* ]]; then
ROLLOUT_TIMEOUT_SECONDS=900s
fi
done
if ! helm install \
--create-namespace \
--namespace $NAMESPACE \
$RELEASE_NAME \
oci://ghcr.io/opea-project/charts/${CHART_NAME} \
--set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} \
--set global.modelUseHostPath=/home/sdp/.cache/huggingface/hub \
--set GOOGLE_API_KEY=${{ env.GOOGLE_API_KEY}} \
--set GOOGLE_CSE_ID=${{ env.GOOGLE_CSE_ID}} \
--set web-retriever.GOOGLE_API_KEY=${{ env.GOOGLE_API_KEY}} \
--set web-retriever.GOOGLE_CSE_ID=${{ env.GOOGLE_CSE_ID}} \
-f ${{ inputs.example }}/kubernetes/helm/${value_file} \
--version ${{ inputs.version }} \
--wait --timeout "$ROLLOUT_TIMEOUT_SECONDS"; then
echo "Failed to install chart ${{ inputs.example }}"
echo "skip_validate=true" >> $GITHUB_ENV
.github/workflows/scripts/k8s-utils.sh dump_pods_status $NAMESPACE
exit 1
fi
- name: Validate e2e test
if: always()
run: |
set -xe
if $skip_validate; then
echo "Skip validate"
else
LOG_PATH=/home/$(whoami)/helm-logs
chart=${{ env.CHART_NAME }}
helm test -n $NAMESPACE $RELEASE_NAME --logs --timeout "$TEST_TIMEOUT_SECONDS" | tee ${LOG_PATH}/charts-${chart}.log
exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "Chart ${chart} test failed, please check the logs in ${LOG_PATH}!"
exit 1
fi
echo "Checking response results, make sure the output is reasonable. "
teststatus=false
if [[ -f $LOG_PATH/charts-${chart}.log ]] && \
[[ $(grep -c "^Phase:.*Failed" $LOG_PATH/charts-${chart}.log) != 0 ]]; then
teststatus=false
${{ github.workspace }}/.github/workflows/scripts/k8s-utils.sh dump_all_pod_logs $NAMESPACE
else
teststatus=true
fi
if [ $teststatus == false ]; then
echo "Response check failed, please check the logs in artifacts!"
exit 1
else
echo "Response check succeeded!"
exit 0
fi
fi
- name: Helm uninstall
if: always()
run: |
if $should_cleanup; then
helm uninstall $RELEASE_NAME --namespace $NAMESPACE
if ! kubectl delete ns $NAMESPACE --timeout=$KUBECTL_TIMEOUT_SECONDS; then
kubectl delete pods --namespace $NAMESPACE --force --grace-period=0 --all
kubectl delete ns $NAMESPACE --force --grace-period=0 --timeout=$KUBECTL_TIMEOUT_SECONDS
fi
fi
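For local debugging, the install/test/uninstall sequence above can be approximated outside CI. A minimal sketch, assuming `helm` and `kubectl` point at a test cluster and that the chart name, values file, and token are filled in by hand (all values below are placeholders):

```bash
# Rough local equivalent of the CI steps
CHART_NAME=codegen
RELEASE_NAME=${CHART_NAME}$(date +%Y%m%d%H%M%S)
NAMESPACE=${CHART_NAME}-$(date +%Y%m%d%H%M%S)

helm install --create-namespace --namespace "$NAMESPACE" "$RELEASE_NAME" \
  oci://ghcr.io/opea-project/charts/${CHART_NAME} \
  --set global.HUGGINGFACEHUB_API_TOKEN="$HFTOKEN" \
  -f CodeGen/kubernetes/helm/cpu-values.yaml \
  --wait --timeout 600s

helm test -n "$NAMESPACE" "$RELEASE_NAME" --logs --timeout 600s

helm uninstall "$RELEASE_NAME" --namespace "$NAMESPACE"
kubectl delete ns "$NAMESPACE" --timeout=60s
```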


@@ -1,111 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Single Kubernetes Manifest E2e Test For Call
on:
workflow_call:
inputs:
example:
default: "ChatQnA"
description: "The example to test on K8s"
required: true
type: string
hardware:
default: "xeon"
description: "Nodes to run the test, xeon or gaudi"
required: true
type: string
tag:
default: "latest"
description: "Tag to apply to images, default is latest"
required: false
type: string
jobs:
manifest-test:
runs-on: "k8s-${{ inputs.hardware }}"
continue-on-error: true
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Get checkout ref
run: |
if [ "${{ github.event_name }}" == "pull_request" ] || [ "${{ github.event_name }}" == "pull_request_target" ]; then
echo "CHECKOUT_REF=refs/pull/${{ github.event.number }}/merge" >> $GITHUB_ENV
else
echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV
fi
echo "checkout ref ${{ env.CHECKOUT_REF }}"
- name: Checkout out Repo
uses: actions/checkout@v4
with:
ref: ${{ env.CHECKOUT_REF }}
fetch-depth: 0
- name: Set variables
run: |
echo "IMAGE_REPO=${OPEA_IMAGE_REPO}opea" >> $GITHUB_ENV
echo "IMAGE_TAG=${{ inputs.tag }}" >> $GITHUB_ENV
lower_example=$(echo "${{ inputs.example }}" | tr '[:upper:]' '[:lower:]')
echo "NAMESPACE=$lower_example-$(tr -dc a-z0-9 </dev/urandom | head -c 16)" >> $GITHUB_ENV
echo "ROLLOUT_TIMEOUT_SECONDS=1800s" >> $GITHUB_ENV
echo "KUBECTL_TIMEOUT_SECONDS=60s" >> $GITHUB_ENV
echo "continue_test=true" >> $GITHUB_ENV
echo "should_cleanup=false" >> $GITHUB_ENV
echo "skip_validate=true" >> $GITHUB_ENV
echo "NAMESPACE=$NAMESPACE"
- name: Kubectl install
id: install
run: |
if [[ ! -f ${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh ]]; then
echo "No test script found, exist test!"
exit 0
else
${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh init_${{ inputs.example }}
echo "should_cleanup=true" >> $GITHUB_ENV
kubectl create ns $NAMESPACE
${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh install_${{ inputs.example }} $NAMESPACE
echo "Testing ${{ inputs.example }}, waiting for pod ready..."
if kubectl rollout status deployment --namespace "$NAMESPACE" --timeout "$ROLLOUT_TIMEOUT_SECONDS"; then
echo "Testing manifests ${{ inputs.example }}, waiting for pod ready done!"
echo "skip_validate=false" >> $GITHUB_ENV
else
echo "Timeout waiting for pods in namespace $NAMESPACE to be ready!"
.github/workflows/scripts/k8s-utils.sh dump_pods_status $NAMESPACE
exit 1
fi
sleep 60
fi
- name: Validate e2e test
if: always()
run: |
if $skip_validate; then
echo "Skip validate"
else
if ${{ github.workspace }}/${{ inputs.example }}/tests/test_manifest_on_${{ inputs.hardware }}.sh validate_${{ inputs.example }} $NAMESPACE ; then
echo "Validate ${{ inputs.example }} successful!"
else
echo "Validate ${{ inputs.example }} failure!!!"
echo "Check the logs in 'Dump logs when e2e test failed' step!!!"
exit 1
fi
fi
- name: Dump logs when e2e test failed
if: failure()
run: |
.github/workflows/scripts/k8s-utils.sh dump_all_pod_logs $NAMESPACE
- name: Kubectl uninstall
if: always()
run: |
if $should_cleanup; then
if ! kubectl delete ns $NAMESPACE --timeout=$KUBECTL_TIMEOUT_SECONDS; then
kubectl delete pods --namespace $NAMESPACE --force --grace-period=0 --all
kubectl delete ns $NAMESPACE --force --grace-period=0 --timeout=$KUBECTL_TIMEOUT_SECONDS
fi
fi


@@ -89,7 +89,7 @@ jobs:
echo "test_cases=$test_cases"
echo "test_cases=$test_cases" >> $GITHUB_OUTPUT
run-test:
compose-test:
needs: [get-test-case]
strategy:
matrix:
@@ -111,6 +111,17 @@ jobs:
ref: ${{ needs.get-test-case.outputs.CHECKOUT_REF }}
fetch-depth: 0
- name: Clean up container before test
shell: bash
run: |
docker ps
cd ${{ github.workspace }}/${{ inputs.example }}
export test_case=${{ matrix.test_case }}
export hardware=${{ inputs.hardware }}
bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "containers"
bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "ports"
docker ps
- name: Run test
shell: bash
env:
@@ -121,6 +132,7 @@ jobs:
PINECONE_KEY_LANGCHAIN_TEST: ${{ secrets.PINECONE_KEY_LANGCHAIN_TEST }}
SDK_BASE_URL: ${{ secrets.SDK_BASE_URL }}
SERVING_TOKEN: ${{ secrets.SERVING_TOKEN }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
IMAGE_REPO: ${{ inputs.registry }}
IMAGE_TAG: ${{ inputs.tag }}
example: ${{ inputs.example }}
@@ -131,21 +143,14 @@ jobs:
if [[ "$IMAGE_REPO" == "" ]]; then export IMAGE_REPO="${OPEA_IMAGE_REPO}opea"; fi
if [ -f ${test_case} ]; then timeout 30m bash ${test_case}; else echo "Test script {${test_case}} not found, skip test!"; fi
- name: Clean up container
- name: Clean up container after test
shell: bash
if: cancelled() || failure()
run: |
cd ${{ github.workspace }}/${{ inputs.example }}/docker_compose
test_case=${{ matrix.test_case }}
flag=${test_case%_on_*}
flag=${flag#test_}
yaml_file=$(find . -type f -wholename "*${{ inputs.hardware }}/${flag}.yaml")
echo $yaml_file
container_list=$(cat $yaml_file | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
done
cd ${{ github.workspace }}/${{ inputs.example }}
export test_case=${{ matrix.test_case }}
export hardware=${{ inputs.hardware }}
bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "containers"
docker system prune -f
docker rmi $(docker images --filter reference="*:5000/*/*" -q) || true


@@ -13,7 +13,7 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Checkout


@@ -0,0 +1,31 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Clean up container on manual event
on:
workflow_dispatch:
inputs:
node:
default: "rocm"
description: "Hardware to clean"
required: true
type: string
clean_list:
default: ""
description: "docker command to clean"
required: false
type: string
jobs:
clean:
runs-on: "${{ inputs.node }}"
steps:
- name: Clean up container
run: |
docker ps
if [ "${{ inputs.clean_list }}" ]; then
echo "----------stop and remove containers----------"
docker stop ${{ inputs.clean_list }} && docker rm ${{ inputs.clean_list }}
echo "----------container removed----------"
docker ps
fi


@@ -12,7 +12,7 @@ on:
type: string
examples:
default: "ChatQnA"
description: 'List of examples to test [AudioQnA,ChatQnA,CodeGen,CodeTrans,DocSum,FaqGen,SearchQnA,Translation]'
description: 'List of examples to test [AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA,AvatarChatbot,Text2Image,WorkflowExecAgent,DBQnA,EdgeCraftRAG,GraphRAG]'
required: true
type: string
tag:
@@ -35,9 +35,9 @@ on:
description: 'Test examples with docker compose'
required: false
type: boolean
test_k8s:
default: false
description: 'Test examples with k8s'
test_helmchart:
default: true
description: 'Test examples with helm charts'
required: false
type: boolean
test_gmc:
@@ -51,7 +51,7 @@ on:
required: false
type: string
inject_commit:
default: true
default: false
description: "inject commit to docker images true or false"
required: false
type: string
@@ -103,7 +103,7 @@ jobs:
tag: ${{ inputs.tag }}
build: ${{ fromJSON(inputs.build) }}
test_compose: ${{ fromJSON(inputs.test_compose) }}
test_k8s: ${{ fromJSON(inputs.test_k8s) }}
test_helmchart: ${{ fromJSON(inputs.test_helmchart) }}
test_gmc: ${{ fromJSON(inputs.test_gmc) }}
opea_branch: ${{ inputs.opea_branch }}
inject_commit: ${{ inputs.inject_commit }}


@@ -1,13 +1,13 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Freeze OPEA images release tag in readme on manual event
name: Freeze OPEA images release tag
on:
workflow_dispatch:
inputs:
tag:
default: "latest"
default: "1.1.0"
description: "Tag to apply to images"
required: true
type: string
@@ -23,10 +23,6 @@ jobs:
fetch-depth: 0
ref: ${{ github.ref }}
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Set up Git
run: |
git config --global user.name "NeuralChatBot"
@@ -35,9 +31,10 @@ jobs:
- name: Run script
run: |
find . -name "*.md" | xargs sed -i "s|^docker\ compose|TAG=${{ github.event.inputs.tag }}\ docker\ compose|g"
find . -type f -name "*.yaml" \( -path "*/benchmark/*" -o -path "*/kubernetes/*" \) | xargs sed -i -E 's/(opea\/[A-Za-z0-9\-]*:)latest/\1${{ github.event.inputs.tag }}/g'
find . -type f -name "*.md" \( -path "*/benchmark/*" -o -path "*/kubernetes/*" \) | xargs sed -i -E 's/(opea\/[A-Za-z0-9\-]*:)latest/\1${{ github.event.inputs.tag }}/g'
IFS='.' read -r major minor patch <<< "${{ github.event.inputs.tag }}"
echo "VERSION_MAJOR ${major}" > version.txt
echo "VERSION_MINOR ${minor}" >> version.txt
echo "VERSION_PATCH ${patch}" >> version.txt
- name: Commit changes
run: |


@@ -12,7 +12,7 @@ on:
type: string
example:
default: "ChatQnA"
description: 'Build images belong to which example?'
description: 'Build images belong to which example? [AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA,AvatarChatbot,Text2Image,WorkflowExecAgent,DBQnA,EdgeCraftRAG,GraphRAG]'
required: true
type: string
services:
@@ -31,7 +31,7 @@ on:
required: false
type: string
inject_commit:
default: true
default: false
description: "inject commit to docker images true or false"
required: false
type: string


@@ -0,0 +1,59 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Clean up Local Registry on manual event
on:
workflow_dispatch:
inputs:
nodes:
default: "gaudi,xeon"
description: "Hardware to clean up"
required: true
type: string
env:
EXAMPLES: ${{ vars.NIGHTLY_RELEASE_EXAMPLES }}
jobs:
get-build-matrix:
runs-on: ubuntu-latest
outputs:
examples: ${{ steps.get-matrix.outputs.examples }}
nodes: ${{ steps.get-matrix.outputs.nodes }}
steps:
- name: Create Matrix
id: get-matrix
run: |
examples=($(echo ${EXAMPLES} | tr ',' ' '))
examples_json=$(printf '%s\n' "${examples[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "examples=$examples_json" >> $GITHUB_OUTPUT
nodes=($(echo ${{ inputs.nodes }} | tr ',' ' '))
nodes_json=$(printf '%s\n' "${nodes[@]}" | sort -u | jq -R '.' | jq -sc '.')
echo "nodes=$nodes_json" >> $GITHUB_OUTPUT
clean-up:
needs: get-build-matrix
strategy:
matrix:
node: ${{ fromJson(needs.get-build-matrix.outputs.nodes) }}
fail-fast: false
runs-on: "docker-build-${{ matrix.node }}"
steps:
- name: Clean Up Local Registry
run: |
echo "Cleaning up local registry on ${{ matrix.node }}"
bash /home/sdp/workspace/fully_registry_cleanup.sh
docker ps | grep registry
build:
needs: [get-build-matrix, clean-up]
strategy:
matrix:
example: ${{ fromJson(needs.get-build-matrix.outputs.examples) }}
node: ${{ fromJson(needs.get-build-matrix.outputs.nodes) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: ${{ matrix.node }}
example: ${{ matrix.example }}
secrets: inherit


@@ -5,11 +5,11 @@ name: Nightly build/publish latest docker images
on:
schedule:
- cron: "30 13 * * *" # UTC time
- cron: "30 14 * * *" # UTC time
workflow_dispatch:
env:
EXAMPLES: "AgentQnA,AudioQnA,ChatQnA,CodeGen,CodeTrans,DocIndexRetriever,DocSum,FaqGen,InstructionTuning,MultimodalQnA,ProductivitySuite,RerankFinetuning,SearchQnA,Translation,VideoQnA,VisualQnA"
EXAMPLES: ${{ vars.NIGHTLY_RELEASE_EXAMPLES }}
TAG: "latest"
PUBLISH_TAGS: "latest"
@@ -32,7 +32,7 @@ jobs:
echo "TAG=$TAG" >> $GITHUB_OUTPUT
echo "PUBLISH_TAGS=$PUBLISH_TAGS" >> $GITHUB_OUTPUT
build:
build-and-test:
needs: get-build-matrix
strategy:
matrix:
@@ -42,6 +42,7 @@ jobs:
with:
node: gaudi
example: ${{ matrix.example }}
test_compose: true
secrets: inherit
get-image-list:
@@ -51,7 +52,7 @@ jobs:
examples: ${{ needs.get-build-matrix.outputs.EXAMPLES }}
publish:
needs: [get-build-matrix, get-image-list, build]
needs: [get-build-matrix, get-image-list, build-and-test]
strategy:
matrix:
image: ${{ fromJSON(needs.get-image-list.outputs.matrix) }}

.github/workflows/pr-chart-e2e.yml vendored Normal file

@@ -0,0 +1,76 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: E2E Test with Helm Charts
on:
pull_request_target:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "!**.md"
- "**/helm/**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
job1:
name: Get-Test-Matrix
runs-on: ubuntu-latest
outputs:
run_matrix: ${{ steps.get-test-matrix.outputs.run_matrix }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
with:
ref: "refs/pull/${{ github.event.number }}/merge"
fetch-depth: 0
- name: Get Test Matrix
id: get-test-matrix
run: |
set -x
echo "base_commit=${{ github.event.pull_request.base.sha }}"
base_commit=${{ github.event.pull_request.base.sha }}
merged_commit=$(git log -1 --format='%H')
values_files=$(git diff --name-only ${base_commit} ${merged_commit} | \
grep "values.yaml" | \
sort -u ) #CodeGen/kubernetes/helm/cpu-values.yaml
run_matrix="{\"include\":["
for values_file in ${values_files}; do
if [ -f "$values_file" ]; then
valuefile=$(basename "$values_file") # cpu-values.yaml
example=$(echo "$values_file" | cut -d'/' -f1) # CodeGen
if [[ "$valuefile" == *"gaudi"* ]]; then
hardware="gaudi"
elif [[ "$valuefile" == *"nv"* ]]; then
continue
else
hardware="xeon"
fi
echo "example=${example}, hardware=${hardware}, valuefile=${valuefile}"
if [[ $(echo ${run_matrix} | grep -c "{\"example\":\"${example}\",\"hardware\":\"${hardware}\"},") == 0 ]]; then
run_matrix="${run_matrix}{\"example\":\"${example}\",\"hardware\":\"${hardware}\"},"
echo "------------------ add one values file ------------------"
fi
fi
done
run_matrix="${run_matrix%,}"
run_matrix=$run_matrix"]}"
echo "run_matrix="${run_matrix}""
echo "run_matrix="${run_matrix}"" >> $GITHUB_OUTPUT
helm-chart-test:
needs: [job1]
if: always() && ${{ needs.job1.outputs.run_matrix.example.length > 0 }}
uses: ./.github/workflows/_helm-e2e.yml
strategy:
matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
with:
example: ${{ matrix.example }}
hardware: ${{ matrix.hardware }}
mode: "CI"
secrets: inherit
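To trace what the matrix step emits, consider a PR that only touches `CodeGen/kubernetes/helm/cpu-values.yaml` (a hypothetical change). Walking the loop by hand:

```bash
values_file=CodeGen/kubernetes/helm/cpu-values.yaml
valuefile=$(basename "$values_file")            # cpu-values.yaml
example=$(echo "$values_file" | cut -d'/' -f1)  # CodeGen
# the file name contains neither "gaudi" nor "nv" -> hardware=xeon
# resulting output:
# run_matrix={"include":[{"example":"CodeGen","hardware":"xeon"}]}
```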


@@ -0,0 +1,40 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Check Duplicated Images
on:
pull_request:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize]
paths:
- "**/docker_image_build/*.yaml"
- ".github/workflows/pr-check-duplicated-image.yml"
- ".github/workflows/scripts/check_duplicated_image.py"
workflow_dispatch:
# If there is a new commit, the previous jobs will be canceled
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-duplicated-image:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo
uses: actions/checkout@v4
- name: Check all the docker image build files
run: |
pip install PyYAML
cd ${{github.workspace}}
build_files=""
for f in `find . -path "*/docker_image_build/build.yaml"`; do
build_files="$build_files $f"
done
python3 .github/workflows/scripts/check_duplicated_image.py $build_files
shell: bash


@@ -34,6 +34,11 @@ jobs:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Check Dangerous Command Injection
uses: opea-project/validation/actions/check-cmd@main
with:
work_dir: ${{ github.workspace }}
- name: Docker Build
run: |
docker build -f ${{ github.workspace }}/.github/workflows/docker/${{ env.DOCKER_FILE_NAME }}.dockerfile -t ${{ env.REPO_NAME }}:${{ env.REPO_TAG }} .


@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
name: "Dependency Review"
on: [pull_request]
on: [pull_request_target]
permissions:
contents: read


@@ -28,7 +28,7 @@ jobs:
if: ${{ !github.event.pull_request.draft }}
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|*.md|*.txt|kubernetes|manifest|gmc|assets|benchmark'
diff_excluded_files: '\.github|\.md|\.txt|kubernetes|gmc|assets|benchmark'
example-test:
needs: [get-test-matrix]
@@ -42,5 +42,5 @@ jobs:
tag: "ci"
example: ${{ matrix.example }}
hardware: ${{ matrix.hardware }}
diff_excluded_files: '.github|*.md|*.txt|kubernetes|manifest|gmc|assets|benchmark'
diff_excluded_files: '\.github|\.md|\.txt|kubernetes|gmc|assets|benchmark'
secrets: inherit


@@ -0,0 +1,109 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Compose file and dockerfile path checking
on:
pull_request:
branches: [main]
types: [opened, reopened, ready_for_review, synchronize]
jobs:
check-dockerfile-paths-in-README:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Clone Repo GenAIComps
run: |
cd ..
git clone --depth 1 https://github.com/opea-project/GenAIComps.git
- name: Check for Missing Dockerfile Paths in GenAIComps
run: |
cd ${{github.workspace}}
miss="FALSE"
while IFS=: read -r file line content; do
dockerfile_path=$(echo "$content" | awk -F '-f ' '{print $2}' | awk '{print $1}')
if [[ ! -f "../GenAIComps/${dockerfile_path}" ]]; then
miss="TRUE"
echo "Missing Dockerfile: GenAIComps/${dockerfile_path} (Referenced in GenAIExamples/${file}:${line})"
fi
done < <(grep -Ern 'docker build .* -f comps/.+/Dockerfile' --include='*.md' .)
if [[ "$miss" == "TRUE" ]]; then
exit 1
fi
shell: bash
check-Dockerfile-in-build-yamls:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check Dockerfile path included in image build yaml
if: always()
run: |
set -e
shopt -s globstar
no_add="FALSE"
cd ${{github.workspace}}
Dockerfiles=$(realpath $(find ./ -name '*Dockerfile*' ! -path '*/tests/*'))
if [ -n "$Dockerfiles" ]; then
for dockerfile in $Dockerfiles; do
service=$(echo "$dockerfile" | awk -F '/GenAIExamples/' '{print $2}' | awk -F '/' '{print $2}')
cd ${{github.workspace}}/$service/docker_image_build
all_paths=$(realpath $(awk ' /context:/ { context = $2 } /dockerfile:/ { dockerfile = $2; combined = context "/" dockerfile; gsub(/\/+/, "/", combined); if (index(context, ".") > 0) {print combined}}' build.yaml) 2> /dev/null || true )
if ! echo "$all_paths" | grep -q "$dockerfile"; then
echo "AR: Update $dockerfile to GenAIExamples/$service/docker_image_build/build.yaml. The yaml is used for release images build."
no_add="TRUE"
fi
done
fi
if [[ "$no_add" == "TRUE" ]]; then
exit 1
fi
check-image-and-service-names-in-build-yaml:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Check name agreement in build.yaml
run: |
pip install ruamel.yaml
cd ${{github.workspace}}
consistency="TRUE"
build_yamls=$(find . -name 'build.yaml')
for build_yaml in $build_yamls; do
message=$(python3 .github/workflows/scripts/check-name-agreement.py "$build_yaml")
if [[ "$message" != *"consistent"* ]]; then
consistency="FALSE"
echo "Inconsistent service name and image name found in file $build_yaml."
echo "$message"
fi
done
if [[ "$consistency" == "FALSE" ]]; then
echo "Please ensure that the service and image names are consistent in build.yaml, otherwise we cannot guarantee that your image will be published correctly."
exit 1
fi
shell: bash


@@ -8,11 +8,10 @@ on:
branches: ["main", "*rc"]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "**/kubernetes/**/gmc/**"
- "**/kubernetes/gmc/**"
- "**/tests/test_gmc**"
- "!**.md"
- "!**.txt"
- "!**/kubernetes/**/manifest/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -22,7 +21,7 @@ jobs:
job1:
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|docker_compose|manifest|assets|*.md|*.txt'
diff_excluded_files: '\.github|docker_compose|assets|\.md|\.txt'
test_mode: "gmc"
gmc-test:


@@ -1,7 +1,7 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: Check Paths and Hyperlinks
name: Check hyperlinks and relative path validity
on:
pull_request:
@@ -9,39 +9,6 @@ on:
types: [opened, reopened, ready_for_review, synchronize]
jobs:
check-dockerfile-paths:
runs-on: ubuntu-latest
steps:
- name: Clean Up Working Directory
run: sudo rm -rf ${{github.workspace}}/*
- name: Checkout Repo GenAIExamples
uses: actions/checkout@v4
- name: Clone Repo GenAIComps
run: |
cd ..
git clone https://github.com/opea-project/GenAIComps.git
- name: Check for Missing Dockerfile Paths in GenAIComps
run: |
cd ${{github.workspace}}
miss="FALSE"
while IFS=: read -r file line content; do
dockerfile_path=$(echo "$content" | awk -F '-f ' '{print $2}' | awk '{print $1}')
if [[ ! -f "../GenAIComps/${dockerfile_path}" ]]; then
miss="TRUE"
echo "Missing Dockerfile: GenAIComps/${dockerfile_path} (Referenced in GenAIExamples/${file}:${line})"
fi
done < <(grep -Ern 'docker build .* -f comps/.+/Dockerfile' --include='*.md' .)
if [[ "$miss" == "TRUE" ]]; then
exit 1
fi
shell: bash
check-the-validity-of-hyperlinks-in-README:
runs-on: ubuntu-latest
steps:


@@ -1,42 +0,0 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
name: E2E test with manifests
on:
pull_request_target:
branches: ["main", "*rc"]
types: [opened, reopened, ready_for_review, synchronize] # added `ready_for_review` since draft is skipped
paths:
- "**/Dockerfile**"
- "**.py"
- "**/kubernetes/**/manifest/**"
- "**/tests/test_manifest**"
- "!**.md"
- "!**.txt"
- "!**/kubernetes/**/gmc/**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
job1:
uses: ./.github/workflows/_get-test-matrix.yml
with:
diff_excluded_files: '.github|docker_compose|gmc|assets|*.md|*.txt|benchmark'
test_mode: "manifest"
run-example:
needs: job1
strategy:
matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
fail-fast: false
uses: ./.github/workflows/_example-workflow.yml
with:
node: ${{ matrix.hardware }}
example: ${{ matrix.example }}
tag: ${{ github.event.pull_request.head.sha }}
test_k8s: true
secrets: inherit


@@ -8,7 +8,9 @@ on:
branches: [ 'main' ]
paths:
- "**.py"
- "**Dockerfile"
- "**Dockerfile*"
- "**docker_image_build/build.yaml"
- "**/ui/**"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-on-push
@@ -18,7 +20,7 @@ jobs:
job1:
uses: ./.github/workflows/_get-test-matrix.yml
with:
test_mode: "docker_image_build/build.yaml"
test_mode: "docker_image_build"
image-build:
needs: job1


@@ -40,7 +40,7 @@ jobs:
- name: Create Issue
uses: daisy-ycguo/create-issue-action@stable
with:
token: ${{ secrets.Infra_Issue_Token }}
token: ${{ secrets.ACTION_TOKEN }}
owner: opea-project
repo: GenAIInfra
title: |
@@ -54,6 +54,6 @@ jobs:
${{ env.changed_files }}
Please verify if the helm charts and manifests need to be changed accordingly.
Please verify if the helm charts need to be changed accordingly.
> This issue was created automatically by CI.


@@ -0,0 +1,46 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
from ruamel.yaml import YAML
def parse_yaml_file(file_path):
yaml = YAML()
with open(file_path, "r") as file:
data = yaml.load(file)
return data
def check_service_image_consistency(data):
inconsistencies = []
for service_name, service_details in data.get("services", {}).items():
image_name = service_details.get("image", "")
# Extract the image name part after the last '/'
image_name_part = image_name.split("/")[-1].split(":")[0]
# Check if the service name is a substring of the image name part
if service_name not in image_name_part:
# Get the line number of the service name
line_number = service_details.lc.line + 1
inconsistencies.append((service_name, image_name, line_number))
return inconsistencies
def main():
parser = argparse.ArgumentParser(description="Check service name and image name consistency in a YAML file.")
parser.add_argument("file_path", type=str, help="The path to the YAML file.")
args = parser.parse_args()
data = parse_yaml_file(args.file_path)
inconsistencies = check_service_image_consistency(data)
if inconsistencies:
for service_name, image_name, line_number in inconsistencies:
print(f"Service name: {service_name}, Image name: {image_name}, Line number: {line_number}")
else:
print("All consistent")
if __name__ == "__main__":
main()
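A usage sketch for the script above (the build.yaml content below is hypothetical):

```bash
cat > /tmp/build.yaml <<'EOF'
services:
  chatqna:
    image: opea/chatqna:latest
  chatqna-ui:
    image: opea/chatqna-conversation-ui:latest
EOF
python3 .github/workflows/scripts/check-name-agreement.py /tmp/build.yaml
# "chatqna-ui" is not a substring of "chatqna-conversation-ui", so it is reported:
# -> Service name: chatqna-ui, Image name: opea/chatqna-conversation-ui:latest, Line number: 5
```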


@@ -0,0 +1,79 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os.path
import subprocess
import sys
import yaml
images = {}
dockerfiles = {}
errors = []
def check_docker_compose_build_definition(file_path):
with open(file_path, "r") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
for service in data["services"]:
if "build" in data["services"][service] and "image" in data["services"][service]:
bash_command = "echo " + data["services"][service]["image"]
image = (
subprocess.run(["bash", "-c", bash_command], check=True, capture_output=True)
.stdout.decode("utf-8")
.strip()
)
build = data["services"][service]["build"]
context = build.get("context", "")
dockerfile = os.path.normpath(
os.path.join(os.path.dirname(file_path), context, build.get("dockerfile", ""))
)
if not os.path.isfile(dockerfile):
# dockerfile not exists in the current repo context, assume it's in 3rd party context
dockerfile = os.path.normpath(os.path.join(context, build.get("dockerfile", "")))
item = {"file_path": file_path, "service": service, "dockerfile": dockerfile, "image": image}
if image in images and dockerfile != images[image]["dockerfile"]:
errors.append(
f"ERROR: !!! Found Conflicts !!!\n"
f"Image: {image}, Dockerfile: {dockerfile}, defined in Service: {service}, File: {file_path}\n"
f"Image: {image}, Dockerfile: {images[image]['dockerfile']}, defined in Service: {images[image]['service']}, File: {images[image]['file_path']}"
)
else:
# print(f"Add Image: {image} Dockerfile: {dockerfile}")
images[image] = item
if dockerfile in dockerfiles and image != dockerfiles[dockerfile]["image"]:
errors.append(
f"WARNING: Different images using the same Dockerfile\n"
f"Dockerfile: {dockerfile}, Image: {image}, defined in Service: {service}, File: {file_path}\n"
f"Dockerfile: {dockerfile}, Image: {dockerfiles[dockerfile]['image']}, defined in Service: {dockerfiles[dockerfile]['service']}, File: {dockerfiles[dockerfile]['file_path']}"
)
else:
dockerfiles[dockerfile] = item
def parse_arg():
parser = argparse.ArgumentParser(
description="Check for conflicts in image build definition in docker-compose.yml files"
)
parser.add_argument("files", nargs="+", help="list of files to be checked")
return parser.parse_args()
def main():
args = parse_arg()
for file_path in args.files:
check_docker_compose_build_definition(file_path)
print("SUCCESS: No Conlicts Found.")
if errors:
for error in errors:
print(error)
sys.exit(1)
else:
print("SUCCESS: No Conflicts Found.")
return 0
if __name__ == "__main__":
main()
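Invocation mirrors the workflow step shown earlier, e.g.:

```bash
# Check every release build definition for image/Dockerfile conflicts
python3 .github/workflows/scripts/check_duplicated_image.py \
  $(find . -path "*/docker_image_build/build.yaml")
# prints the conflicting image/Dockerfile/service/file combinations and exits 1 on failure
```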


@@ -0,0 +1,48 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# The test machines are shared by several OPEA projects, so the test scripts can't use `docker compose down` to
# clean up all the containers, ports and networks directly.
# Instead, this script minimizes the impact of the clean up.
test_case=${test_case:-"test_compose_on_gaudi.sh"}
hardware=${hardware:-"gaudi"}
flag=${test_case%_on_*}
flag=${flag#test_}
yaml_file=$(find . -type f -wholename "*${hardware}/${flag}.yaml")
echo $yaml_file
case "$1" in
containers)
echo "Stop and remove all containers used by the services in $yaml_file ..."
containers=$(cat $yaml_file | grep container_name | cut -d':' -f2)
for container_name in $containers; do
cid=$(docker ps -aq --filter "name=$container_name")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && sleep 1s; fi
done
;;
ports)
echo "Release all ports used by the services in $yaml_file ..."
pip install jq yq
ports=$(yq '.services[].ports[] | split(":")[0]' $yaml_file | grep -o '[0-9a-zA-Z_-]\+')
echo "All ports list..."
echo "$ports"
for port in $ports; do
if [[ $port =~ [a-zA-Z_-] ]]; then
port=$(grep -E "export $port=" tests/$test_case | cut -d'=' -f2)
fi
if [[ $port =~ [0-9] ]]; then
if [[ $port == 5000 ]]; then
echo "Error: Port 5000 is used by local docker registry, please DO NOT use it in docker compose deployment!!!"
exit 1
fi
cid=$(docker ps --filter "publish=${port}" --format "{{.ID}}")
if [[ ! -z "$cid" ]]; then docker stop $cid && docker rm $cid && echo "release $port"; fi
fi
done
;;
*)
echo "Unknown function: $1"
;;
esac
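The script reads `test_case` and `hardware` from the environment, as the compose workflows above do. A typical invocation from an example directory (the paths are illustrative):

```bash
cd ChatQnA
export test_case=test_compose_on_gaudi.sh
export hardware=gaudi
bash ../.github/workflows/scripts/docker_compose_clean_up.sh "containers"
bash ../.github/workflows/scripts/docker_compose_clean_up.sh "ports"
```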


@@ -16,11 +16,16 @@ for example in ${examples}; do
if [[ ! $(find . -type f | grep ${test_mode}) ]]; then continue; fi
cd tests
ls -l
hardware_list=$(find . -type f -name "test_compose*_on_*.sh" | cut -d/ -f2 | cut -d. -f1 | awk -F'_on_' '{print $2}'| sort -u)
echo "Test supported hardware list = ${hardware_list}"
if [[ "$test_mode" == "docker_image_build" ]]; then
hardware_list="gaudi xeon"
else
find_name="test_${test_mode}*_on_*.sh"
hardware_list=$(find . -type f -name "${find_name}" | cut -d/ -f2 | cut -d. -f1 | awk -F'_on_' '{print $2}'| sort -u)
fi
echo -e "Test supported hardware list: \n${hardware_list}"
run_hardware=""
if [[ $(printf '%s\n' "${changed_files[@]}" | grep ${example} | cut -d'/' -f2 | grep -E '*.py|Dockerfile*|ui|docker_image_build' ) ]]; then
if [[ $(printf '%s\n' "${changed_files[@]}" | grep ${example} | cut -d'/' -f2 | grep -E '\.py|Dockerfile*|ui|docker_image_build' ) ]]; then
# run test on all hardware if megaservice or ui code change
run_hardware=$hardware_list
else


@@ -2,7 +2,7 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#set -xe
set -e
function dump_pod_log() {
pod_name=$1
@@ -12,7 +12,7 @@ function dump_pod_log() {
kubectl describe pod $pod_name -n $namespace
echo "-----------------------------------"
echo "#kubectl logs $pod_name -n $namespace"
kubectl logs $pod_name -n $namespace
kubectl logs $pod_name -n $namespace --all-containers --prefix=true
echo "-----------------------------------"
}
@@ -44,8 +44,13 @@ function dump_pods_status() {
function dump_all_pod_logs() {
namespace=$1
echo "------SUMMARY of POD STATUS in NS $namespace------"
kubectl get pods -n $namespace -o wide
echo "------SUMMARY of SVC STATUS in NS $namespace------"
kubectl get services -n $namespace -o wide
echo "------SUMMARY of endpoint STATUS in NS $namespace------"
kubectl get endpoints -n $namespace -o wide
echo "-----DUMP POD STATUS AND LOG in NS $namespace------"
pods=$(kubectl get pods -n $namespace -o jsonpath='{.items[*].metadata.name}')
for pod_name in $pods
do

.gitignore vendored

@@ -5,4 +5,4 @@
**/playwright/.cache/
**/test-results/
__pycache__/
__pycache__/


@@ -7,7 +7,7 @@ ci:
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
rev: v5.0.0
hooks:
- id: end-of-file-fixer
files: (.*\.(py|md|rst|yaml|yml|json|ts|js|html|svelte|sh))$
@@ -100,18 +100,18 @@ repos:
- prettier@3.2.5
- repo: https://github.com/psf/black.git
rev: 24.4.2
rev: 24.10.0
hooks:
- id: black
files: (.*\.py)$
- repo: https://github.com/asottile/blacken-docs
rev: 1.18.0
rev: 1.19.1
hooks:
- id: blacken-docs
args: [--line-length=120, --skip-errors]
additional_dependencies:
- black==24.4.2
- black==24.10.0
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
@@ -122,7 +122,7 @@ repos:
- tomli
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.5.0
rev: v0.8.6
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix, --no-cache]


@@ -1 +1 @@
**/kubernetes/
**/kubernetes/

.set_env.sh Normal file

@@ -0,0 +1,16 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# To announce the version of the code, create a version.txt with the following format.
#VERSION_MAJOR 1
#VERSION_MINOR 0
#VERSION_PATCH 0
VERSION_FILE="version.txt"
if [ -f $VERSION_FILE ]; then
VER_OPEA_MAJOR=$(grep "VERSION_MAJOR" $VERSION_FILE | cut -d " " -f 2)
VER_OPEA_MINOR=$(grep "VERSION_MINOR" $VERSION_FILE | cut -d " " -f 2)
VER_OPEA_PATCH=$(grep "VERSION_PATCH" $VERSION_FILE | cut -d " " -f 2)
export TAG=$VER_OPEA_MAJOR.$VER_OPEA_MINOR
echo OPEA Version:$TAG
fi
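For example, with a `version.txt` like the one the freeze workflow above generates:

```bash
cat > version.txt <<'EOF'
VERSION_MAJOR 1
VERSION_MINOR 1
VERSION_PATCH 0
EOF
source .set_env.sh
# -> OPEA Version:1.1   (TAG is built from MAJOR.MINOR only)
```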


@@ -2,8 +2,8 @@
## Overview
This example showcases a hierarchical multi-agent system for question-answering applications. The architecture diagram is shown below. The supervisor agent interfaces with the user and dispatch tasks to the worker agent and other tools to gather information and come up with answers. The worker agent uses the retrieval tool to generate answers to the queries posted by the supervisor agent. Other tools used by the supervisor agent may include APIs to interface knowledge graphs, SQL databases, external knowledge bases, etc.
![Architecture Overview](assets/agent_qna_arch.png)
This example showcases a hierarchical multi-agent system for question-answering applications. The architecture diagram is shown below. The supervisor agent interfaces with the user and dispatches tasks to two worker agents to gather information and come up with answers. The worker RAG agent uses the retrieval tool to retrieve relevant documents from the knowledge base (a vector database). The worker SQL agent retrieves relevant data from the SQL database. Although not included in this example, other tools such as a web search tool or a knowledge graph query tool can be used by the supervisor agent to gather information from additional sources.
![Architecture Overview](assets/img/agent_qna_arch.png)
The AgentQnA example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The flow chart below shows the information flow between different microservices for this example.
@@ -38,6 +38,7 @@ flowchart LR
end
AG_REACT([Agent MicroService - react]):::blue
AG_RAG([Agent MicroService - rag]):::blue
AG_SQL([Agent MicroService - sql]):::blue
LLM_gen{{LLM Service <br>}}
DP([Data Preparation MicroService]):::blue
TEI_RER{{Reranking service<br>}}
@@ -51,6 +52,7 @@ flowchart LR
direction LR
a[User Input Query] --> AG_REACT
AG_REACT --> AG_RAG
AG_REACT --> AG_SQL
AG_RAG --> DocIndexRetriever-MegaService
EM ==> RET
RET ==> RER
@@ -59,6 +61,7 @@ flowchart LR
%% Embedding service flow
direction LR
AG_RAG <-.-> LLM_gen
AG_SQL <-.-> LLM_gen
AG_REACT <-.-> LLM_gen
EM <-.-> TEI_EM
RET <-.-> R_RET
@@ -75,37 +78,40 @@ flowchart LR
### Why Agent for question answering?
1. Improve relevancy of retrieved context.
Agent can rephrase user queries, decompose user queries, and iterate to get the most relevant context for answering user's questions. Compared to conventional RAG, RAG agent can significantly improve the correctness and relevancy of the answer.
2. Use tools to get additional knowledge.
For example, knowledge graphs and SQL databases can be exposed as APIs for Agents to gather knowledge that may be missing in the retrieval vector database.
3. Hierarchical agent can further improve performance.
Expert worker agents, such as retrieval agent, knowledge graph agent, SQL agent, etc., can provide high-quality output for different aspects of a complex query, and the supervisor agent can aggregate the information together to provide a comprehensive answer.
The RAG agent can rephrase user queries, decompose user queries, and iterate to get the most relevant context for answering user's questions. Compared to conventional RAG, the RAG agent can significantly improve the correctness and relevancy of the answer.
2. Expand scope of the agent.
The supervisor agent can interact with multiple worker agents that specialize in different domains with different skills (e.g., retrieve documents, write SQL queries, etc.), and thus can answer questions in multiple domains.
3. Hierarchical multi-agents can improve performance.
Expert worker agents, such as the RAG agent and the SQL agent, can provide high-quality output for different aspects of a complex query, and the supervisor agent can aggregate the information together to provide a comprehensive answer. If we only use one agent and provide all the tools to this single agent, it may get overwhelmed and fail to provide accurate answers.
## Deployment with docker
1. Build agent docker image
1. Build agent docker image [Optional]
Note: this is optional. The docker images will be automatically pulled when running the docker compose commands. This step is only needed if pulling images failed.
> [!NOTE]
> This step is optional. The docker images will be automatically pulled when running the docker compose commands. This step is only needed if pulling images fails.
First, clone the opea GenAIComps repo.
First, clone the opea GenAIComps repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIComps.git
```
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIComps.git
```
Then build the agent docker image. Both the supervisor agent and the worker agent will use the same docker image, but when we launch the two agents we will specify different strategies and register different tools.
Then build the agent docker image. Both the supervisor agent and the worker agent will use the same docker image, but when we launch the two agents we will specify different strategies and register different tools.
```
cd GenAIComps
docker build -t opea/agent-langchain:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/langchain/Dockerfile .
```
```
cd GenAIComps
docker build -t opea/agent:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/agent/src/Dockerfile .
```
2. Set up environment for this example </br>
First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
@@ -113,6 +119,14 @@ flowchart LR
Second, set up env vars.
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# for using open-source llms
export HUGGINGFACEHUB_API_TOKEN=<your-HF-token>
@@ -137,38 +151,86 @@ flowchart LR
bash run_ingest_data.sh
```
4. Launch other tools. </br>
4. Prepare SQL database
In this example, we will use the Chinook SQLite database. Run the commands below.
```
# Download data
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite $WORKDIR/GenAIExamples/AgentQnA/tests/
```
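Optionally, sanity-check the download (this assumes the `sqlite3` CLI is installed; Chinook ships an `Album` table):

```
sqlite3 $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite "SELECT COUNT(*) FROM Album;"
# expected output: 347
```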
5. Launch other tools. </br>
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
5. Launch agent services</br>
We provide two options for `llm_engine` of the agents: 1. open-source LLMs, 2. OpenAI models via API calls.
6. Launch multi-agent system. </br>
We provide two options for `llm_engine` of the agents: 1. open-source LLMs on Intel Gaudi2, 2. OpenAI models via API calls.
To use open-source LLMs on Gaudi2, run commands below.
::::{tab-set}
:::{tab-item} Gaudi
:sync: Gaudi
On Gaudi2 we will serve `meta-llama/Meta-Llama-3.1-70B-Instruct` using vLLM.
First build vllm-gaudi docker image.
```bash
cd $WORKDIR
git clone https://github.com/vllm-project/vllm.git
cd ./vllm
git checkout v0.6.6
docker build --no-cache -f Dockerfile.hpu -t opea/vllm-gaudi:latest --shm-size=128g . --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_tgi_gaudi.sh
bash launch_agent_service_tgi_gaudi.sh
Then launch vLLM on Gaudi2 with the command below.
```bash
vllm_port=8086
model="meta-llama/Meta-Llama-3.1-70B-Instruct"
docker run -d --runtime=habana --rm --name "vllm-gaudi-server" -e HABANA_VISIBLE_DEVICES=0,1,2,3 -p $vllm_port:8000 -v $vllm_volume:/data -e HF_TOKEN=$HF_TOKEN -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e HF_HOME=/data -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e VLLM_SKIP_WARMUP=true --cap-add=sys_nice --ipc=host opea/vllm-gaudi:latest --model ${model} --max-seq-len-to-capture 16384 --tensor-parallel-size 4
```
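Before launching the agents, you can confirm the endpoint is up; vLLM exposes an OpenAI-compatible API:

```bash
curl http://localhost:8086/v1/models
# should list meta-llama/Meta-Llama-3.1-70B-Instruct once the server finishes loading
```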
Then launch Agent microservices.
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/
bash launch_agent_service_gaudi.sh
```
:::
:::{tab-item} Xeon
:sync: Xeon
To use OpenAI models, run commands below.
```
export OPENAI_API_KEY=<your-openai-key>
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```
:::
::::
## Deploy using Helm Chart
Refer to the [AgentQnA helm chart](./kubernetes/helm/README.md) for instructions on deploying AgentQnA on Kubernetes.
## Validate services
First look at logs of the agent docker containers:
```
# worker agent
# worker RAG agent
docker logs rag-agent-endpoint
# worker SQL agent
docker logs sql-agent-endpoint
```
```
@@ -178,22 +240,36 @@ docker logs react-agent-endpoint
You should see something like "HTTP server setup successful" if the docker containers are started successfully.
Second, validate worker agent:
Second, validate worker RAG agent:
```
curl http://${ip_address}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "Michael Jackson song Thriller"
}'
```
Third, validate supervisor agent:
Third, validate worker SQL agent:
```
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
curl http://${host_ip}:9096/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many employees are in the company"
}'
```
Finally, validate supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many albums does Iron Maiden have?"
}'
```
## Deploy AgentQnA UI
The AgentQnA UI can be deployed locally or using Docker.
For detailed instructions on deploying AgentQnA UI, refer to the [AgentQnA UI Guide](./ui/svelte/README.md).
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md).
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/src/README.md).

[Binary image files changed: one removed (69 KiB), three added (207 KiB, 56 KiB, 57 KiB).]


@@ -0,0 +1,101 @@
# Single node on-prem deployment with Docker Compose on AMD GPU
This example showcases a hierarchical multi-agent system for question-answering applications. We deploy the example on a single node with AMD GPUs (ROCm), serving the LLM locally with TGI. For instructions on using OpenAI models or deploying on other hardware, please refer to the deployment guide [here](../../../../README.md).
## Deployment with docker
1. First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
2. Set up environment for this example </br>
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# OPENAI_API_KEY if you want to use OpenAI models
export OPENAI_API_KEY=<your-openai-key>
# Set AMD GPU settings
export AGENTQNA_CARD_ID="card1"
export AGENTQNA_RENDER_ID="renderD136"
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. Here we provide an example. You can ingest your own data.
```
bash run_ingest_data.sh
```
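A quick smoke test of the retrieval endpoint can be run before wiring up the agents (the `text` field below is an assumption; check the DocIndexRetriever README for the current request schema):

```
curl http://${host_ip}:8889/v1/retrievaltool -X POST -H "Content-Type: application/json" -d '{"text": "What is OPEA?"}'
```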
4. Launch Tool service
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
5. Launch `Agent` service
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_tgi_rocm.sh
```
6. [Optional] Build `Agent` docker image if pulling images failed.
```
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent:latest -f comps/agent/src/Dockerfile .
```
## Validate services
First look at logs of the agent docker containers:
```
# worker agent
docker logs rag-agent-endpoint
```
```
# supervisor agent
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers are started successfully.
Second, validate worker agent:
```
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
Third, validate supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Most recent album by Taylor Swift"
}'
```
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/src/README.md).


@@ -0,0 +1,97 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
services:
agent-tgi-server:
image: ${AGENTQNA_TGI_IMAGE}
container_name: agent-tgi-server
ports:
- "${AGENTQNA_TGI_SERVICE_PORT-8085}:80"
volumes:
- /var/opea/agent-service/:/data
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
TGI_LLM_ENDPOINT: "http://${HOST_IP}:${AGENTQNA_TGI_SERVICE_PORT}"
HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/${AGENTQNA_CARD_ID}:/dev/dri/${AGENTQNA_CARD_ID}
- /dev/dri/${AGENTQNA_RENDER_ID}:/dev/dri/${AGENTQNA_RENDER_ID}
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
ipc: host
command: --model-id ${LLM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
worker-rag-agent:
image: opea/agent:latest
container_name: rag-agent-endpoint
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9095:9095"
ipc: host
environment:
ip_address: ${ip_address}
strategy: rag_agent_llama
recursion_limit: ${recursion_limit_worker}
llm_engine: tgi
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
supervisor-react-agent:
image: opea/agent:latest
container_name: react-agent-endpoint
depends_on:
- agent-tgi-server
- worker-rag-agent
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "${AGENTQNA_FRONTEND_PORT}:9090"
ipc: host
environment:
ip_address: ${ip_address}
strategy: react_langgraph
recursion_limit: ${recursion_limit_supervisor}
llm_engine: tgi
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: $CRAG_SERVER
WORKER_AGENT_URL: $WORKER_AGENT_URL
port: 9090


@@ -0,0 +1,47 @@
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
WORKPATH=$(dirname "$PWD")/..
export ip_address=${host_ip}
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export AGENTQNA_TGI_IMAGE=ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
export AGENTQNA_TGI_SERVICE_PORT="8085"
# LLM related environment variables
export AGENTQNA_CARD_ID="card1"
export AGENTQNA_RENDER_ID="renderD136"
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
#export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:${AGENTQNA_TGI_SERVICE_PORT}"
export temperature=0.01
export max_new_tokens=512
# agent related environment variables
export AGENTQNA_WORKER_AGENT_SERVICE_PORT="9095"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
echo "TOOLSET_PATH=${TOOLSET_PATH}"
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:${AGENTQNA_WORKER_AGENT_SERVICE_PORT}/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:18881
export AGENTQNA_FRONTEND_PORT="9090"
#retrieval_tool
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:26379"
export INDEX_NAME="rag-redis"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete"
docker compose -f compose.yaml up -d
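After the script completes, a quick check that the stack came up (the TGI port comes from `AGENTQNA_TGI_SERVICE_PORT` above; TGI serves a `/health` probe):

```bash
docker compose -f compose.yaml ps
curl http://${host_ip}:8085/health   # returns 200 once the model is loaded
```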


@@ -0,0 +1,46 @@
#!/usr/bin/env bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
WORKPATH=$(dirname "$PWD")/..
export ip_address=${host_ip}
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export AGENTQNA_TGI_IMAGE=ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
export AGENTQNA_TGI_SERVICE_PORT="19001"
# LLM related environment variables
export AGENTQNA_CARD_ID="card1"
export AGENTQNA_RENDER_ID="renderD136"
export HF_CACHE_DIR=${HF_CACHE_DIR}
ls $HF_CACHE_DIR
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:${AGENTQNA_TGI_SERVICE_PORT}"
export temperature=0.01
export max_new_tokens=512
# agent related environment variables
export AGENTQNA_WORKER_AGENT_SERVICE_PORT="9095"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:${AGENTQNA_WORKER_AGENT_SERVICE_PORT}/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:18881
export AGENTQNA_FRONTEND_PORT="15557"
#retrieval_tool
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:26379"
export INDEX_NAME="rag-redis"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete"


@@ -1,3 +1,123 @@
# Deployment on Xeon
# Single node on-prem deployment with Docker Compose on Xeon Scalable processors
We deploy the retrieval tool on Xeon. For LLMs, we support OpenAI models via API calls. For instructions on using open-source LLMs, please refer to the deployment guide [here](../../../../README.md).
This example showcases a hierarchical multi-agent system for question-answering applications. We deploy the example on Xeon. For LLMs, we use OpenAI models via API calls. For instructions on using open-source LLMs, please refer to the deployment guide [here](../../../../README.md).
## Deployment with docker
1. First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
2. Set up environment for this example </br>
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# OPENAI_API_KEY if you want to use OpenAI models
export OPENAI_API_KEY=<your-openai-key>
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. Here we provide an example. You can ingest your own data.
```
bash run_ingest_data.sh
```
4. Prepare SQL database
In this example, we will use the SQLite database provided in the [TAG-Bench](https://github.com/TAG-Research/TAG-Bench/tree/main). Run the commands below.
```
# Download data
cd $WORKDIR
git clone https://github.com/TAG-Research/TAG-Bench.git
cd TAG-Bench/setup
chmod +x get_dbs.sh
./get_dbs.sh
```
5. Launch Tool service
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
6. Launch multi-agent system
The configurations of the supervisor agent and the worker agents are defined in the docker compose yaml file. We currently use OpenAI gpt-4o-mini as the LLM.
```
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```
7. [Optional] Build `Agent` docker image if pulling images failed.
```
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent:latest -f comps/agent/src/Dockerfile .
```
## Validate services
First, look at the logs of the agent docker containers:
```
# worker RAG agent
docker logs rag-agent-endpoint
# worker SQL agent
docker logs sql-agent-endpoint
```
```
# supervisor agent
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers are started successfully.
Second, validate the worker RAG agent:
```
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "Michael Jackson song Thriller"
}'
```
Third, validate the worker SQL agent (note that it listens on port 9096):
```
curl http://${host_ip}:9096/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many employees are in the company?"
}'
```
Finally, validate the supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many albums does Iron Maiden have?"
}'
```
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/src/README.md).
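As a quick illustration of the format used by the tools yaml files in this example, a new tool is a YAML entry pointing at a Python callable. A minimal sketch with a hypothetical `get_weather` tool (the tool name and its callable are made up for illustration; the schema mirrors the entries in `supervisor_agent_tools.yaml`):
```yaml
# Hypothetical tool entry; tools.py:get_weather would need to be implemented
# alongside it, taking a city string and returning a string result.
get_weather:
  description: Get the current weather for a given city.
  callable_api: tools.py:get_weather
  args_schema:
    city:
      type: str
      description: city name
  return_output: weather_info
```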

View File

@@ -3,7 +3,7 @@
services:
worker-rag-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: rag-agent-endpoint
volumes:
- ${TOOLSET_PATH}:/home/user/tools/
@@ -19,7 +19,7 @@ services:
model: ${model}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
@@ -31,9 +31,36 @@ services:
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: opea/agent:latest
container_name: sql-agent-endpoint
volumes:
- ${WORKDIR}/TAG-Bench/:/home/user/TAG-Bench # SQL database
ports:
- "9096:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
hints_file: /home/user/TAG-Bench/${db_name}_hints.csv
recursion_limit: ${recursion_limit_worker}
llm_engine: openai
OPENAI_API_KEY: ${OPENAI_API_KEY}
model: ${model}
temperature: 0
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
@@ -51,7 +78,7 @@ services:
model: ${model}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}

View File

@@ -1,6 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export ip_address=$(hostname -I | awk '{print $1}')
export recursion_limit_worker=12
@@ -10,7 +13,10 @@ export temperature=0
export max_new_tokens=4096
export OPENAI_API_KEY=${OPENAI_API_KEY}
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:9096/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
export db_name=california_schools
export db_path="sqlite:////home/user/TAG-Bench/dev_folder/dev_databases/${db_name}/${db_name}.sqlite"
docker compose -f compose_openai.yaml up -d

View File

@@ -0,0 +1,147 @@
# Single node on-prem deployment of AgentQnA on Gaudi
This example showcases a hierarchical multi-agent system for question-answering applications. We deploy the example on Gaudi using open-source LLMs.
For more details, please refer to the deployment guide [here](../../../../README.md).
## Deployment with docker
1. First, clone this repo.
```
export WORKDIR=<your-work-directory>
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
```
2. Set up the environment for this example
```
# Example: host_ip="192.168.1.1" or export host_ip="External_Public_IP"
export host_ip=$(hostname -I | awk '{print $1}')
# if you are in a proxy environment, also set the proxy-related environment variables
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# for using open-source llms
export HUGGINGFACEHUB_API_TOKEN=<your-HF-token>
# Example: export HF_CACHE_DIR=$WORKDIR so that models do not need to be re-downloaded every time
export HF_CACHE_DIR=<directory-where-llms-are-downloaded>
```
3. Deploy the retrieval tool (i.e., DocIndexRetriever mega-service)
First, launch the mega-service.
```
cd $WORKDIR/GenAIExamples/AgentQnA/retrieval_tool
bash launch_retrieval_tool.sh
```
Then, ingest data into the vector database. An example ingestion script is provided below; you can also ingest your own data.
```
bash run_ingest_data.sh
```
4. Prepare SQL database
In this example, we will use the Chinook SQLite database. Run the commands below.
```
# Download data
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite $WORKDIR/GenAIExamples/AgentQnA/tests/
```
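To confirm the database copied correctly, you can query it directly. A quick check, assuming the `sqlite3` CLI is installed (Chinook ships with 8 employees, which is also the answer the SQL agent test later expects):
```bash
sqlite3 $WORKDIR/GenAIExamples/AgentQnA/tests/Chinook_Sqlite.sqlite \
  "SELECT COUNT(*) FROM Employee;"
# Expected output: 8
```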
5. Launch Tool service
In this example, we will use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefits of gaining additional context from mock knowledge graphs.
```
docker run -d -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
6. Launch multi-agent system
On Gaudi2 we will serve `meta-llama/Meta-Llama-3.1-70B-Instruct` using vLLM.
First, build the vllm-gaudi docker image.
```bash
cd $WORKDIR
git clone https://github.com/vllm-project/vllm.git
cd ./vllm
git checkout v0.6.6
docker build --no-cache -f Dockerfile.hpu -t opea/vllm-gaudi:latest --shm-size=128g . --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy
```
Then, launch vLLM on Gaudi2 with the command below.
```bash
vllm_port=8086
vllm_volume=${HF_CACHE_DIR}          # model cache directory set up in step 2
HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN} # HF token set up in step 2
model="meta-llama/Meta-Llama-3.1-70B-Instruct"
docker run -d --runtime=habana --rm --name "vllm-gaudi-server" -e HABANA_VISIBLE_DEVICES=0,1,2,3 -p $vllm_port:8000 -v $vllm_volume:/data -e HF_TOKEN=$HF_TOKEN -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e HF_HOME=/data -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e VLLM_SKIP_WARMUP=true --cap-add=sys_nice --ipc=host opea/vllm-gaudi:latest --model ${model} --max-seq-len-to-capture 16384 --tensor-parallel-size 4
```
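Once the model finishes loading (which can take a while for a 70B model), you can sanity-check the endpoint. vLLM serves an OpenAI-compatible API, so a simple chat completion request should return a response:
```bash
curl http://localhost:8086/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-70B-Instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 32
  }'
```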
Then launch Agent microservices.
```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/
bash launch_agent_service_gaudi.sh
```
7. [Optional] Build `Agent` docker image if pulling images failed.
If pulling images failed in Step 6 above, build the agent docker image with the commands below. After the image is built, retry Step 6.
```
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent:latest -f comps/agent/src/Dockerfile .
```
## Validate services
First, look at the logs of the agent docker containers:
```
# worker RAG agent
docker logs rag-agent-endpoint
# worker SQL agent
docker logs sql-agent-endpoint
```
```
# supervisor agent
docker logs react-agent-endpoint
```
You should see something like "HTTP server setup successful" if the docker containers are started successfully.
Second, validate the worker RAG agent:
```
curl http://${host_ip}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "Michael Jackson song Thriller"
}'
```
Third, validate the worker SQL agent (note that it listens on port 9096):
```
curl http://${host_ip}:9096/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many employees are in the company?"
}'
```
Finally, validate the supervisor agent:
```
curl http://${host_ip}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "How many albums does Iron Maiden have?"
}'
```
## How to register your own tools with agent
You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/src/README.md).

View File

@@ -3,10 +3,9 @@
services:
worker-rag-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: rag-agent-endpoint
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9095:9095"
@@ -15,13 +14,13 @@ services:
ip_address: ${ip_address}
strategy: rag_agent_llama
recursion_limit: ${recursion_limit_worker}
llm_engine: tgi
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/worker_agent_tools.yaml
require_human_feedback: false
RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
@@ -33,14 +32,41 @@ services:
LANGCHAIN_PROJECT: "opea-worker-agent-service"
port: 9095
worker-sql-agent:
image: opea/agent:latest
container_name: sql-agent-endpoint
volumes:
- ${WORKDIR}/GenAIExamples/AgentQnA/tests:/home/user/chinook-db # test db
ports:
- "9096:9096"
ipc: host
environment:
ip_address: ${ip_address}
strategy: sql_agent_llama
db_name: ${db_name}
db_path: ${db_path}
use_hints: false
recursion_limit: ${recursion_limit_worker}
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
stream: false
require_human_feedback: false
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
port: 9096
supervisor-react-agent:
image: opea/agent-langchain:latest
image: opea/agent:latest
container_name: react-agent-endpoint
depends_on:
- worker-rag-agent
- worker-sql-agent
volumes:
# - ${WORKDIR}/GenAIExamples/AgentQnA/docker_image_build/GenAIComps/comps/agent/langchain/:/home/user/comps/agent/langchain/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9090:9090"
@@ -49,13 +75,13 @@ services:
ip_address: ${ip_address}
strategy: react_llama
recursion_limit: ${recursion_limit_supervisor}
llm_engine: tgi
llm_engine: vllm
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
llm_endpoint_url: ${LLM_ENDPOINT_URL}
model: ${LLM_MODEL_ID}
temperature: ${temperature}
max_new_tokens: ${max_new_tokens}
streaming: false
stream: false
tools: /home/user/tools/supervisor_agent_tools.yaml
require_human_feedback: false
no_proxy: ${no_proxy}
@@ -66,4 +92,5 @@ services:
LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
CRAG_SERVER: $CRAG_SERVER
WORKER_AGENT_URL: $WORKER_AGENT_URL
SQL_AGENT_URL: $SQL_AGENT_URL
port: 9090

View File

@@ -1,6 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
pushd "../../../../../" > /dev/null
source .set_env.sh
popd > /dev/null
WORKPATH=$(dirname "$PWD")/..
# export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
@@ -13,8 +16,8 @@ ls $HF_CACHE_DIR
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export LLM_MODEL_ID="meta-llama/Meta-Llama-3.1-70B-Instruct"
export NUM_SHARDS=4
export LLM_ENDPOINT_URL="http://${ip_address}:8085"
export temperature=0.01
export LLM_ENDPOINT_URL="http://${ip_address}:8086"
export temperature=0
export max_new_tokens=4096
# agent related environment variables
@@ -23,7 +26,11 @@ echo "TOOLSET_PATH=${TOOLSET_PATH}"
export recursion_limit_worker=12
export recursion_limit_supervisor=10
export WORKER_AGENT_URL="http://${ip_address}:9095/v1/chat/completions"
export SQL_AGENT_URL="http://${ip_address}:9096/v1/chat/completions"
export RETRIEVAL_TOOL_URL="http://${ip_address}:8889/v1/retrievaltool"
export CRAG_SERVER=http://${ip_address}:8080
export db_name=Chinook
export db_path="sqlite:////home/user/chinook-db/Chinook_Sqlite.sqlite"
docker compose -f compose.yaml up -d

View File

@@ -3,7 +3,7 @@
services:
tgi-server:
image: ghcr.io/huggingface/tgi-gaudi:2.0.5
image: ghcr.io/huggingface/tgi-gaudi:2.0.6
container_name: tgi-server
ports:
- "8085:80"

View File

@@ -2,12 +2,18 @@
# SPDX-License-Identifier: Apache-2.0
services:
agent-langchain:
agent:
build:
context: GenAIComps
dockerfile: comps/agent/langchain/Dockerfile
dockerfile: comps/agent/src/Dockerfile
args:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
no_proxy: ${no_proxy}
image: ${REGISTRY:-opea}/agent-langchain:${TAG:-latest}
image: ${REGISTRY:-opea}/agent:${TAG:-latest}
agent-ui:
build:
context: ../ui
dockerfile: ./docker/Dockerfile
extends: agent
image: ${REGISTRY:-opea}/agent-ui:${TAG:-latest}

View File

@@ -0,0 +1,11 @@
# Deploy AgentQnA on Kubernetes cluster
- You should have Helm (version >= 3.15) installed. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.
- For more deploy options, refer to [helm charts README](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts#readme).
## Deploy on Gaudi
```
export HFTOKEN="insert-your-huggingface-token-here"
helm install agentqna oci://ghcr.io/opea-project/charts/agentqna --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} -f gaudi-values.yaml
```
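After the install completes, standard Helm/kubectl commands (not specific to this chart) show when the release is up:
```
# Watch until all pods report Running/Ready.
kubectl get pods
helm status agentqna
```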

View File

@@ -0,0 +1,16 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Accelerate inference in the heaviest components to improve performance
# by overriding their subchart values
vllm:
enabled: true
image:
repository: opea/vllm-gaudi
supervisor:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
ragagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm
sqlagent:
llm_endpoint_url: http://{{ .Release.Name }}-vllm

View File

@@ -53,7 +53,7 @@ def main():
host_ip = args.host_ip
port = args.port
proxies = {"http": ""}
url = "http://{host_ip}:{port}/v1/dataprep".format(host_ip=host_ip, port=port)
url = "http://{host_ip}:{port}/v1/dataprep/ingest".format(host_ip=host_ip, port=port)
# Split jsonl file into json files
files = split_jsonl_into_txts(os.path.join(args.filedir, args.filename))

View File

@@ -13,13 +13,14 @@ export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export RERANK_TYPE="tei"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8889/v1/retrievaltool"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get_file"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete_file"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/ingest"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete"
docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml up -d

View File

@@ -15,7 +15,7 @@ function stop_agent_and_api_server() {
echo "Stopping CRAG server"
docker stop $(docker ps -q --filter ancestor=docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0)
echo "Stopping Agent services"
docker stop $(docker ps -q --filter ancestor=opea/agent-langchain:latest)
docker stop $(docker ps -q --filter ancestor=opea/agent:latest)
}
function stop_retrieval_tool() {

View File

@@ -0,0 +1,6 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
DATAPATH=$WORKDIR/TAG-Bench/tag_queries.csv
OUTFOLDER=$WORKDIR/TAG-Bench/query_by_db
python3 split_data.py --path $DATAPATH --output $OUTFOLDER

View File

@@ -0,0 +1,27 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os
import pandas as pd
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--path", type=str, required=True)
parser.add_argument("--output", type=str, required=True)
args = parser.parse_args()
# if output folder does not exist, create it
if not os.path.exists(args.output):
os.makedirs(args.output)
# Load the data
data = pd.read_csv(args.path)
# Split the data by domain
domains = data["DB used"].unique()
for domain in domains:
domain_data = data[data["DB used"] == domain]
out = os.path.join(args.output, f"query_{domain}.csv")
domain_data.to_csv(out, index=False)

View File

@@ -11,17 +11,16 @@ export ip_address=$(hostname -I | awk '{print $1}')
function get_genai_comps() {
if [ ! -d "GenAIComps" ] ; then
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
git clone --depth 1 --branch ${opea_branch:-"main"} https://github.com/opea-project/GenAIComps.git
fi
}
function build_docker_images_for_retrieval_tool(){
cd $WORKDIR/GenAIExamples/DocIndexRetriever/docker_image_build/
# git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
get_genai_comps
echo "Build all the images with --no-cache..."
service_list="doc-index-retriever dataprep-redis embedding-tei retriever-redis reranking-tei"
service_list="doc-index-retriever dataprep embedding retriever reranking"
docker compose -f build.yaml build ${service_list} --no-cache
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
@@ -35,6 +34,26 @@ function build_agent_docker_image() {
docker compose -f build.yaml build --no-cache
}
function build_vllm_docker_image() {
echo "Building the vllm docker image"
cd $WORKPATH
echo $WORKPATH
if [ ! -d "./vllm-fork" ]; then
git clone https://github.com/HabanaAI/vllm-fork.git
fi
cd ./vllm-fork
git checkout v0.6.4.post2+Gaudi-1.19.0
sed -i 's/triton/triton==3.1.0/g' requirements-hpu.txt
docker build --no-cache -f Dockerfile.hpu -t opea/vllm-gaudi:ci --shm-size=128g . --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy
if [ $? -ne 0 ]; then
echo "opea/vllm-gaudi:ci failed"
exit 1
else
echo "opea/vllm-gaudi:ci successful"
fi
}
function main() {
echo "==================== Build docker images for retrieval tool ===================="
build_docker_images_for_retrieval_tool
@@ -43,6 +62,12 @@ function main() {
echo "==================== Build agent docker image ===================="
build_agent_docker_image
echo "==================== Build agent docker image completed ===================="
echo "==================== Build vllm docker image ===================="
build_vllm_docker_image
echo "==================== Build vllm docker image completed ===================="
docker image ls | grep vllm
}
main

View File

@@ -7,6 +7,7 @@ WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export host_ip=${ip_address}
export HF_CACHE_DIR=$WORKDIR/hf_cache
if [ ! -d "$HF_CACHE_DIR" ]; then

View File

@@ -8,15 +8,22 @@ WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export TOOLSET_PATH=$WORKPATH/tools/
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
model="meta-llama/Meta-Llama-3.1-70B-Instruct"
export HF_CACHE_DIR=$WORKDIR/hf_cache
export HF_CACHE_DIR=/data2/huggingface
if [ ! -d "$HF_CACHE_DIR" ]; then
HF_CACHE_DIR=$WORKDIR/hf_cache
mkdir -p "$HF_CACHE_DIR"
fi
echo "HF_CACHE_DIR=$HF_CACHE_DIR"
ls $HF_CACHE_DIR
vllm_port=8086
vllm_volume=${HF_CACHE_DIR}
function start_tgi(){
echo "Starting tgi-gaudi server"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
@@ -24,14 +31,67 @@ function start_tgi(){
}
function start_vllm_service_70B() {
echo "token is ${HF_TOKEN}"
echo "start vllm gaudi service"
echo "**************model is $model**************"
vllm_image=opea/vllm-gaudi:ci
docker run -d --runtime=habana --rm --name "vllm-gaudi-server" -e HABANA_VISIBLE_DEVICES=0,1,2,3 -p $vllm_port:8000 -v $vllm_volume:/data -e HF_TOKEN=$HF_TOKEN -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e HF_HOME=/data -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e VLLM_SKIP_WARMUP=true --cap-add=sys_nice --ipc=host $vllm_image --model ${model} --max-seq-len-to-capture 16384 --tensor-parallel-size 4
sleep 5s
echo "Waiting vllm gaudi ready"
n=0
LOG_PATH=$PWD
until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
docker logs vllm-gaudi-server
docker logs vllm-gaudi-server &> ${LOG_PATH}/vllm-gaudi-service.log
n=$((n+1))
if grep -q "Uvicorn running on" ${LOG_PATH}/vllm-gaudi-service.log; then
break
fi
if grep -q "No such container" ${LOG_PATH}/vllm-gaudi-service.log; then
echo "container vllm-gaudi-server not found"
exit 1
fi
sleep 5s
done
sleep 5s
echo "Service started successfully"
}
function prepare_data() {
cd $WORKDIR
echo "Downloading data..."
git clone https://github.com/TAG-Research/TAG-Bench.git
cd TAG-Bench/setup
chmod +x get_dbs.sh
./get_dbs.sh
echo "Split data..."
cd $WORKPATH/tests/sql_agent_test
bash run_data_split.sh
echo "Data preparation done!"
}
function download_chinook_data(){
echo "Downloading chinook data..."
cd $WORKDIR
git clone https://github.com/lerocha/chinook-database.git
cp chinook-database/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite $WORKDIR/GenAIExamples/AgentQnA/tests/
}
function start_agent_and_api_server() {
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi
bash launch_agent_service_tgi_gaudi.sh
sleep 10
bash launch_agent_service_gaudi.sh
sleep 2m
}
function validate() {
@@ -49,35 +109,76 @@ function validate() {
}
function validate_agent_service() {
echo "----------------Test agent ----------------"
# local CONTENT=$(http_proxy="" curl http://${ip_address}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
# "query": "Tell me about Michael Jackson song thriller"
# }')
# # test worker rag agent
echo "======================Testing worker rag agent======================"
export agent_port="9095"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py)
prompt="Tell me about Michael Jackson song Thriller"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt")
# echo $CONTENT
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
docker logs rag-agent-endpoint
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs rag-agent-endpoint
exit 1
fi
# local CONTENT=$(http_proxy="" curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
# "query": "Tell me about Michael Jackson song thriller"
# }')
export agent_port="9090"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py)
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "react-agent-endpoint")
docker logs react-agent-endpoint
# # test worker sql agent
echo "======================Testing worker sql agent======================"
export agent_port="9096"
prompt="How many employees are there in the company?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt")
local EXIT_CODE=$(validate "$CONTENT" "8" "sql-agent-endpoint")
echo $CONTENT
# echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs sql-agent-endpoint
exit 1
fi
# test supervisor react agent
echo "======================Testing supervisor react agent======================"
export agent_port="9090"
prompt="How many albums does Iron Maiden have?"
local CONTENT=$(python3 $WORKDIR/GenAIExamples/AgentQnA/tests/test.py --prompt "$prompt")
local EXIT_CODE=$(validate "$CONTENT" "21" "react-agent-endpoint")
# echo $CONTENT
echo $EXIT_CODE
local EXIT_CODE="${EXIT_CODE:0-1}"
if [ "$EXIT_CODE" == "1" ]; then
docker logs react-agent-endpoint
exit 1
fi
}
function remove_data() {
echo "Removing data..."
cd $WORKDIR
if [ -d "TAG-Bench" ]; then
rm -rf TAG-Bench
fi
echo "Data removed!"
}
function remove_chinook_data(){
echo "Removing chinook data..."
cd $WORKDIR
if [ -d "chinook-database" ]; then
rm -rf chinook-database
fi
echo "Chinook data removed!"
}
function main() {
echo "==================== Start TGI ===================="
start_tgi
echo "==================== TGI started ===================="
echo "==================== Prepare data ===================="
download_chinook_data
echo "==================== Data prepare done ===================="
echo "==================== Start VLLM service ===================="
start_vllm_service_70B
echo "==================== VLLM service started ===================="
echo "==================== Start agent ===================="
start_agent_and_api_server
@@ -88,4 +189,8 @@ function main() {
echo "==================== Agent service validated ===================="
}
remove_data
remove_chinook_data
main
remove_data
remove_chinook_data

View File

@@ -0,0 +1,76 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -ex
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export HF_CACHE_DIR=$WORKDIR/hf_cache
if [ ! -d "$HF_CACHE_DIR" ]; then
mkdir -p "$HF_CACHE_DIR"
fi
ls $HF_CACHE_DIR
function start_agent_and_api_server() {
echo "Starting CRAG server"
docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/amd/gpu/rocm
bash launch_agent_service_tgi_rocm.sh
}
function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"
if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo 0
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo 1
fi
}
function validate_agent_service() {
echo "----------------Test agent ----------------"
local CONTENT=$(http_proxy="" curl http://${ip_address}:9095/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Tell me about Michael Jackson song thriller"
}')
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "rag-agent-endpoint")
docker logs rag-agent-endpoint
if [ "$EXIT_CODE" == "1" ]; then
exit 1
fi
local CONTENT=$(http_proxy="" curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "Tell me about Michael Jackson song thriller"
}')
local EXIT_CODE=$(validate "$CONTENT" "Thriller" "react-agent-endpoint")
docker logs react-agent-endpoint
if [ "$EXIT_CODE" == "1" ]; then
exit 1
fi
}
function main() {
echo "==================== Start agent ===================="
start_agent_and_api_server
echo "==================== Agent started ===================="
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}
main

View File

@@ -1,6 +1,7 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import os
import requests
@@ -9,17 +10,47 @@ import requests
def generate_answer_agent_api(url, prompt):
proxies = {"http": ""}
payload = {
"query": prompt,
"messages": prompt,
}
response = requests.post(url, json=payload, proxies=proxies)
answer = response.json()["text"]
return answer
def process_request(url, query, is_stream=False):
proxies = {"http": ""}
payload = {
"messages": query,
}
try:
resp = requests.post(url=url, json=payload, proxies=proxies, stream=is_stream)
if not is_stream:
ret = resp.json()["text"]
print(ret)
else:
for line in resp.iter_lines(decode_unicode=True):
print(line)
ret = None
resp.raise_for_status() # Raise an exception for unsuccessful HTTP status codes
return ret
except requests.exceptions.RequestException as e:
ret = f"An error occurred:{e}"
print(ret)
return False
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--prompt", type=str)
parser.add_argument("--stream", action="store_true")
args = parser.parse_args()
ip_address = os.getenv("ip_address", "localhost")
agent_port = os.getenv("agent_port", "9095")
agent_port = os.getenv("agent_port", "9090")
url = f"http://{ip_address}:{agent_port}/v1/chat/completions"
prompt = "Tell me about Michael Jackson song thriller"
answer = generate_answer_agent_api(url, prompt)
print(answer)
prompt = args.prompt
process_request(url, prompt, args.stream)
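For reference, the Gaudi test script earlier on this page drives this file by exporting `agent_port` and passing a prompt, e.g.:
```bash
export agent_port="9095"
python3 test.py --prompt "Tell me about Michael Jackson song Thriller"
```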

View File

@@ -1,8 +1,7 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
set -e
set -xe
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
@@ -27,7 +26,7 @@ function stop_agent_docker() {
done
}
function stop_tgi(){
function stop_llm(){
cd $WORKPATH/docker_compose/intel/hpu/gaudi/
container_list=$(cat tgi_gaudi.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
@@ -36,6 +35,14 @@ function stop_tgi(){
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
cid=$(docker ps -aq --filter "name=vllm-gaudi-server")
echo "Stopping container $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
cid=$(docker ps -aq --filter "name=test-comps-vllm-gaudi-service")
echo "Stopping container $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_retrieval_tool() {
@@ -52,7 +59,7 @@ function stop_retrieval_tool() {
echo "workpath: $WORKPATH"
echo "=================== Stop containers ===================="
stop_crag
stop_tgi
stop_llm
stop_agent_docker
stop_retrieval_tool
@@ -78,8 +85,9 @@ echo "=================== #5 Stop agent and API server===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
stop_llm
echo "=================== #5 Agent and API server stopped===================="
echo y | docker system prune
echo "ALL DONE!"
echo "ALL DONE!!"

View File

@@ -0,0 +1,75 @@
#!/bin/bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0
set -xe
WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
function stop_crag() {
cid=$(docker ps -aq --filter "name=kdd-cup-24-crag-service")
echo "Stopping container kdd-cup-24-crag-service with cid $cid"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
}
function stop_agent_docker() {
cd $WORKPATH/docker_compose/amd/gpu/rocm
# docker compose -f compose.yaml down
container_list=$(cat compose.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
echo "Stopping container $container_name"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
}
function stop_retrieval_tool() {
echo "Stopping Retrieval tool"
local RETRIEVAL_TOOL_PATH=$WORKPATH/../DocIndexRetriever
cd $RETRIEVAL_TOOL_PATH/docker_compose/intel/cpu/xeon/
# docker compose -f compose.yaml down
container_list=$(cat compose.yaml | grep container_name | cut -d':' -f2)
for container_name in $container_list; do
cid=$(docker ps -aq --filter "name=$container_name")
echo "Stopping container $container_name"
if [[ ! -z "$cid" ]]; then docker rm $cid -f && sleep 1s; fi
done
}
echo "workpath: $WORKPATH"
echo "=================== Stop containers ===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
cd $WORKPATH/tests
echo "=================== #1 Building docker images===================="
bash step1_build_images.sh
echo "=================== #1 Building docker images completed===================="
echo "=================== #2 Start retrieval tool===================="
bash step2_start_retrieval_tool.sh
echo "=================== #2 Retrieval tool started===================="
echo "=================== #3 Ingest data and validate retrieval===================="
bash step3_ingest_data_and_validate_retrieval.sh
echo "=================== #3 Data ingestion and validation completed===================="
echo "=================== #4 Start agent and API server===================="
bash step4a_launch_and_validate_agent_tgi_on_rocm.sh
echo "=================== #4 Agent test passed ===================="
echo "=================== #5 Stop agent and API server===================="
stop_crag
stop_agent_docker
stop_retrieval_tool
echo "=================== #5 Agent and API server stopped===================="
echo y | docker system prune
echo "ALL DONE!!"

View File

@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
search_knowledge_base:
description: Search knowledge base for a given query. Returns text related to the query.
description: Search a knowledge base for a given query. Returns text related to the query.
callable_api: tools.py:search_knowledge_base
args_schema:
query:
@@ -10,6 +10,15 @@ search_knowledge_base:
description: query
return_output: retrieved_data
search_artist_database:
description: Search a SQL database on artists and their music with a natural language query. Returns text related to the query.
callable_api: tools.py:search_sql_database
args_schema:
query:
type: str
description: natural language query
return_output: retrieved_data
get_artist_birth_place:
description: Get the birth place of an artist.
callable_api: tools.py:get_artist_birth_place

View File

@@ -8,13 +8,30 @@ from tools.pycragapi import CRAG
def search_knowledge_base(query: str) -> str:
"""Search the knowledge base for a specific query."""
# use worker agent (DocGrader) to search the knowledge base
"""Search a knowledge base about music and singers for a given query.
Returns text related to the query.
"""
url = os.environ.get("WORKER_AGENT_URL")
print(url)
proxies = {"http": ""}
payload = {
"query": query,
"messages": query,
}
response = requests.post(url, json=payload, proxies=proxies)
return response.json()["text"]
def search_sql_database(query: str) -> str:
"""Search a SQL database on artists and their music with a natural language query.
Returns text related to the query.
"""
url = os.environ.get("SQL_AGENT_URL")
print(url)
proxies = {"http": ""}
payload = {
"messages": query,
}
response = requests.post(url, json=payload, proxies=proxies)
return response.json()["text"]

View File

@@ -12,7 +12,7 @@ def search_knowledge_base(query: str) -> str:
print(url)
proxies = {"http": ""}
payload = {
"messages": query,
"text": query,
}
response = requests.post(url, json=payload, proxies=proxies)
print(response)

View File

@@ -0,0 +1,26 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Use node 20.11.1 as the base image
FROM node:20.11.1
# Update package manager and install Git
RUN apt-get update -y && apt-get install -y git
# Copy the front-end code repository
COPY svelte /home/user/svelte
# Set the working directory
WORKDIR /home/user/svelte
# Install front-end dependencies
RUN npm install
# Build the front-end application
RUN npm run build
# Expose the port of the front-end application
EXPOSE 5173
# Run the front-end application in preview mode
CMD ["npm", "run", "preview", "--", "--port", "5173", "--host", "0.0.0.0"]

View File

@@ -0,0 +1,10 @@
[*]
indent_style = tab
[package.json]
indent_style = space
indent_size = 2
[*.md]
indent_style = space
indent_size = 2

AgentQnA/ui/svelte/.env Normal file
View File

@@ -0,0 +1 @@
AGENT_URL = '/v1/chat/completions'

View File

@@ -0,0 +1,13 @@
.DS_Store
node_modules
/build
/.svelte-kit
/package
.env
.env.*
!.env.example
# Ignore files for PNPM, NPM and YARN
pnpm-lock.yaml
package-lock.json
yarn.lock

View File

@@ -0,0 +1,20 @@
module.exports = {
root: true,
parser: "@typescript-eslint/parser",
extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended", "prettier"],
plugins: ["svelte3", "@typescript-eslint", "neverthrow"],
ignorePatterns: ["*.cjs"],
overrides: [{ files: ["*.svelte"], processor: "svelte3/svelte3" }],
settings: {
"svelte3/typescript": () => require("typescript"),
},
parserOptions: {
sourceType: "module",
ecmaVersion: 2020,
},
env: {
browser: true,
es2017: true,
node: true,
},
};

View File

@@ -0,0 +1,13 @@
.DS_Store
node_modules
/build
/.svelte-kit
/package
.env
.env.*
!.env.example
# Ignore files for PNPM, NPM and YARN
pnpm-lock.yaml
package-lock.json
yarn.lock

View File

@@ -0,0 +1,13 @@
{
"pluginSearchDirs": [
"."
],
"overrides": [
{
"files": "*.svelte",
"options": {
"parser": "svelte"
}
}
]
}

View File

@@ -0,0 +1,60 @@
# AgentQnA
## 📸 Project Screenshots
![project-screenshot](../../assets/img/agent_ui.png)
![project-screenshot](../../assets/img/agent_ui_result.png)
## 🧐 Features
Here are some of the project's features:
- Create Agent: provide more precise answers based on user queries, showcase the high-quality output process for complex queries across different dimensions, and consolidate information to present comprehensive answers.
## 🛠️ Get it Running
1. Clone the repo.
2. Change directory to the UI folder.
```
cd AgentQnA/ui
```
3. Modify the required .env variables.
```
AGENT_URL = ''
```
4. **For Local Development:**
- Install the dependencies:
```
npm install
```
- Start the development server:
```
npm run dev
```
- The application will be available at `http://localhost:3000`.
5. **For Docker Setup:**
- Build the Docker image:
```
docker build -t opea:agent-ui .
```
- Run the Docker container:
```
docker run -d -p 5173:5173 --name agent-ui opea:agent-ui
```
- The application will be available at `http://localhost:5173`.

View File

@@ -0,0 +1,60 @@
{
"name": "agent-example",
"version": "0.0.1",
"private": true,
"scripts": {
"dev": "vite dev --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview",
"check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
"check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch",
"lint": "prettier --check . && eslint .",
"format": "prettier --write ."
},
"devDependencies": {
"@fortawesome/free-solid-svg-icons": "6.2.0",
"@sveltejs/adapter-auto": "1.0.0-next.75",
"@sveltejs/kit": "^1.20.1",
"@tailwindcss/typography": "0.5.7",
"@types/debug": "4.1.7",
"@typescript-eslint/eslint-plugin": "^5.27.0",
"@typescript-eslint/parser": "^5.27.0",
"autoprefixer": "^10.4.7",
"daisyui": "^2.52.0",
"debug": "4.3.4",
"eslint": "^8.16.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-neverthrow": "1.1.4",
"eslint-plugin-svelte3": "^4.0.0",
"neverthrow": "5.0.0",
"pocketbase": "0.7.0",
"postcss": "^8.4.23",
"postcss-load-config": "^4.0.1",
"postcss-preset-env": "^8.3.2",
"prettier": "^2.8.8",
"prettier-plugin-svelte": "^2.7.0",
"prettier-plugin-tailwindcss": "^0.3.0",
"svelte": "^3.59.1",
"svelte-check": "^2.7.1",
"svelte-fa": "3.0.3",
"svelte-preprocess": "^4.10.7",
"tailwindcss": "^3.1.5",
"ts-pattern": "4.0.5",
"tslib": "^2.3.1",
"typescript": "^4.7.4",
"vite": "^4.3.9"
},
"type": "module",
"dependencies": {
"@heroicons/vue": "^2.1.5",
"echarts": "^5.4.2",
"flowbite-svelte": "^0.38.5",
"flowbite-svelte-icons": "^0.3.6",
"fuse.js": "^6.6.2",
"marked": "^15.0.0",
"ramda": "^0.29.0",
"sjcl": "^1.0.8",
"sse.js": "^0.6.1",
"svelte-notifications": "^0.9.98"
}
}

View File

@@ -0,0 +1,13 @@
const tailwindcss = require("tailwindcss");
const autoprefixer = require("autoprefixer");
const config = {
plugins: [
//Some plugins, like tailwindcss/nesting, need to run before Tailwind,
tailwindcss(),
//But others, like autoprefixer, need to run after,
autoprefixer,
],
};
module.exports = config;

AgentQnA/ui/svelte/src/app.d.ts vendored Normal file
View File

@@ -0,0 +1,50 @@
// Copyright (C) 2025 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
// See: https://kit.svelte.dev/docs/types#app
// import { Result} from "neverthrow";
declare namespace App {
interface Locals {
user?: User;
}
// interface PageData { }
// interface PageError {}
// interface Platform {}
}
interface User {
id?: string;
email: string;
password?: string;
token?: string;
[key: string]: any;
}
type AuthResponse = Result<User>;
interface AuthAdapter {
login(props: { email: string; password: string }): Promise<AuthResponse>;
signup(props: { email: string; password: string; password_confirm: string }): Promise<AuthResponse>;
validate_session(props: { token: string }): Promise<AuthResponse>;
logout(props: { token: string; email: string }): Promise<Result<void>>;
forgotPassword(props: { email: string; password: string }): Promise<Result<void>>;
}
interface ChatAdapter {
modelList(props: {}): Promise<Result<void>>;
txt2img(props: {}): Promise<Result<void>>;
}
interface ChatMessage {
role: string;
content: string;
}
interface ChatMessageType {
model: string;
knowledge: string;
temperature: string;
max_new_tokens: string;
topk: string;
}

View File

@@ -0,0 +1,17 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%sveltekit.assets%/favicon.png" />
<meta name="viewport" content="width=device-width" />
%sveltekit.head%
</head>
<body>
<div>%sveltekit.body%</div>
</body>
</html>

View File

@@ -0,0 +1,82 @@
/* Write your global styles here, in PostCSS syntax */
@tailwind base;
@tailwind components;
@tailwind utilities;
.btn {
@apply flex-nowrap;
}
a.btn {
@apply no-underline;
}
.input {
@apply text-base;
}
.bg-dark-blue {
background-color: #004a86;
}
.bg-light-blue {
background-color: #0068b5;
}
.bg-turquoise {
background-color: #00a3f6;
}
.bg-header {
background-color: #ffffff;
}
.bg-button {
background-color: #0068b5;
}
.bg-title {
background-color: #f7f7f7;
}
.text-header {
color: #0068b5;
}
.text-button {
color: #0071c5;
}
.text-title-color {
color: rgb(38,38,38);
}
.font-intel {
font-family: "intel-clear","tahoma",Helvetica,"helvetica",Arial,sans-serif;
}
.font-title-intel {
font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif;
}
.bg-footer {
background-color: #e7e7e7;
}
.bg-light-green {
background-color: #d7f3a1;
}
.bg-purple {
background-color: #653171;
}
.bg-dark-blue {
background-color: #224678;
}
.border-input-color {
border-color: #605e5c;
}
.w-12\/12 {
width: 100%
}

View File

@@ -0,0 +1,25 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731984271860"
class="w-8 h-8"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="11418"
width="200"
height="200"
><path
d="M0 0m170.666667 0l682.666666 0q170.666667 0 170.666667 170.666667l0 682.666666q0 170.666667-170.666667 170.666667l-682.666666 0q-170.666667 0-170.666667-170.666667l0-682.666666q0-170.666667 170.666667-170.666667Z"
fill="#1890FF"
fill-opacity=".1"
p-id="11419"
/><path
d="M404.352 552.661333a63.018667 63.018667 0 1 0 0-125.994666 63.018667 63.018667 0 0 0 0 125.994666z m0 213.333334a63.018667 63.018667 0 1 0 0-125.994667 63.018667 63.018667 0 0 0 0 125.994667z m-213.333333-426.666667a63.018667 63.018667 0 1 0 0-125.994667 63.018667 63.018667 0 0 0 0 125.994667z m669.653333-10.88H376.362667a35.669333 35.669333 0 0 1-35.114667-36.096c0-19.882667 15.786667-36.096 35.114667-36.096h484.394666c19.370667 0 35.157333 16.213333 35.157334 36.096a35.669333 35.669333 0 0 1-35.242667 36.096z m16.384 213.034667h-260.821333c-10.410667 0-18.901333-16.213333-18.901334-36.096 0-19.925333 8.490667-36.138667 18.901334-36.138667h260.864c10.410667 0 18.901333 16.213333 18.901333 36.138667-0.042667 19.882667-8.490667 36.096-18.944 36.096z m0 212.992h-260.821333c-10.410667 0-18.901333-16.213333-18.901334-36.096 0-19.925333 8.490667-36.096 18.901334-36.096h260.864c10.410667 0 18.901333 16.213333 18.901333 36.096-0.042667 19.882667-8.490667 36.096-18.944 36.096z"
fill="#1890FF"
p-id="11420"
/></svg
>


View File

@@ -0,0 +1,9 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg class="w-3.5 h-3.5 me-2.5" aria-hidden="true" xmlns="http://www.w3.org/2000/svg" fill="currentColor" viewBox="0 0 20 20">
<path d="M14.707 7.793a1 1 0 0 0-1.414 0L11 10.086V1.5a1 1 0 0 0-2 0v8.586L6.707 7.793a1 1 0 1 0-1.414 1.414l4 4a1 1 0 0 0 1.416 0l4-4a1 1 0 0 0-.002-1.414Z"/>
<path d="M18 12h-2.55l-2.975 2.975a3.5 3.5 0 0 1-4.95 0L4.55 12H2a2 2 0 0 0-2 2v4a2 2 0 0 0 2 2h16a2 2 0 0 0 2-2v-4a2 2 0 0 0-2-2Zm-3 5a1 1 0 1 1 0-2 1 1 0 0 1 0 2Z"/>
</svg>


View File

@@ -0,0 +1,16 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
class="me-2 h-3 w-3"
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg"
fill="currentColor"
viewBox="0 0 20 14"
>
<path
d="M10 0C4.612 0 0 5.336 0 7c0 1.742 3.546 7 10 7 6.454 0 10-5.258 10-7 0-1.664-4.612-7-10-7Zm0 10a3 3 0 1 1 0-6 3 3 0 0 1 0 6Z"
/>
</svg>


View File

@@ -0,0 +1,97 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<!-- <svg class="h-11 w-11 flex-none overflow-visible" fill="none"
><defs
><filter
id="step-icon-2"
x="-3"
y="-1"
width="50"
height="50"
filterUnits="userSpaceOnUse"
color-interpolation-filters="sRGB"
><feFlood flood-opacity="0" result="BackgroundImageFix" /><feColorMatrix
in="SourceAlpha"
values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 127 0"
result="hardAlpha"
/><feOffset dy="2" /><feGaussianBlur stdDeviation="2.5" /><feComposite
in2="hardAlpha"
operator="out"
/><feColorMatrix
values="0 0 0 0 0.054902 0 0 0 0 0.647059 0 0 0 0 0.913725 0 0 0 0.12 0"
/><feBlend
in2="BackgroundImageFix"
result="effect1_dropShadow_804_95228"
/><feBlend
in="SourceGraphic"
in2="effect1_dropShadow_804_95228"
result="shape"
/></filter
></defs
><g filter="url(#step-icon-2)"
><path
d="M2.75 10A7.25 7.25 0 0 1 10 2.75h24A7.25 7.25 0 0 1 41.25 10v24A7.25 7.25 0 0 1 34 41.25H10A7.25 7.25 0 0 1 2.75 34V10Z"
fill="#EEF2FF"
/><path
d="M2.75 10A7.25 7.25 0 0 1 10 2.75h24A7.25 7.25 0 0 1 41.25 10v24A7.25 7.25 0 0 1 34 41.25H10A7.25 7.25 0 0 1 2.75 34V10Z"
stroke="#6366F1"
stroke-width="1.5"
stroke-linecap="round"
stroke-linejoin="round"
/></g
><path
fill-rule="evenodd"
clip-rule="evenodd"
d="M23 35.25c.69 0 1.25-.56 1.25-1.25A3.75 3.75 0 0 1 28 30.25a1.25 1.25 0 1 0 0-2.5A3.75 3.75 0 0 1 24.25 24a1.25 1.25 0 1 0-2.5 0A3.75 3.75 0 0 1 18 27.75a1.25 1.25 0 0 0 0 2.5A3.75 3.75 0 0 1 21.75 34c0 .69.56 1.25 1.25 1.25Z"
fill="#fff"
/><path
d="M28 27a.75.75 0 0 0 0 1.5V27Zm-4.5 7a.5.5 0 0 1-.5.5V36a2 2 0 0 0 2-2h-1.5Zm5-5a.5.5 0 0 1-.5.5V31a2 2 0 0 0 2-2h-1.5Zm-.5-.5a.5.5 0 0 1 .5.5H30a2 2 0 0 0-2-2v1.5Zm-5-5a.5.5 0 0 1 .5.5H25a2 2 0 0 0-2-2v1.5Zm-.5.5a.5.5 0 0 1 .5-.5V22a2 2 0 0 0-2 2h1.5Zm-5 5a.5.5 0 0 1 .5-.5V27a2 2 0 0 0-2 2h1.5Zm.5.5a.5.5 0 0 1-.5-.5H16a2 2 0 0 0 2 2v-1.5Zm5 5a.5.5 0 0 1-.5-.5H21a2 2 0 0 0 2 2v-1.5ZM18 31a3 3 0 0 1 3 3h1.5a4.5 4.5 0 0 0-4.5-4.5V31Zm3-7a3 3 0 0 1-3 3v1.5a4.5 4.5 0 0 0 4.5-4.5H21Zm7 3a3 3 0 0 1-3-3h-1.5a4.5 4.5 0 0 0 4.5 4.5V27Zm-3 7a3 3 0 0 1 3-3v-1.5a4.5 4.5 0 0 0-4.5 4.5H25Z"
fill="#6366F1"
/><path
fill-rule="evenodd"
clip-rule="evenodd"
d="M13 27.25c.69 0 1.25-.56 1.25-1.25 0-.966.784-1.75 1.75-1.75a1.25 1.25 0 1 0 0-2.5A1.75 1.75 0 0 1 14.25 20a1.25 1.25 0 1 0-2.5 0A1.75 1.75 0 0 1 10 21.75a1.25 1.25 0 0 0 0 2.5c.966 0 1.75.784 1.75 1.75 0 .69.56 1.25 1.25 1.25Z"
fill="#fff"
/><path
d="M16 21a.75.75 0 0 0 0 1.5V21Zm-2.5 5a.5.5 0 0 1-.5.5V28a2 2 0 0 0 2-2h-1.5Zm3-3a.5.5 0 0 1-.5.5V25a2 2 0 0 0 2-2h-1.5Zm-.5-.5a.5.5 0 0 1 .5.5H18a2 2 0 0 0-2-2v1.5Zm-3-3a.5.5 0 0 1 .5.5H15a2 2 0 0 0-2-2v1.5Zm-.5.5a.5.5 0 0 1 .5-.5V18a2 2 0 0 0-2 2h1.5Zm-3 3a.5.5 0 0 1 .5-.5V21a2 2 0 0 0-2 2h1.5Zm.5.5a.5.5 0 0 1-.5-.5H8a2 2 0 0 0 2 2v-1.5Zm3 3a.5.5 0 0 1-.5-.5H11a2 2 0 0 0 2 2v-1.5ZM10 25a1 1 0 0 1 1 1h1.5a2.5 2.5 0 0 0-2.5-2.5V25Zm1-5a1 1 0 0 1-1 1v1.5a2.5 2.5 0 0 0 2.5-2.5H11Zm5 1a1 1 0 0 1-1-1h-1.5a2.5 2.5 0 0 0 2.5 2.5V21Zm-1 5a1 1 0 0 1 1-1v-1.5a2.5 2.5 0 0 0-2.5 2.5H15Z"
fill="#6366F1"
/><path
opacity=".4"
d="M29.75 35.25h2.5a3 3 0 0 0 3-3v-20.5a3 3 0 0 0-3-3h-20.5a3 3 0 0 0-3 3v5.5M12.75 14.25h18.5"
stroke="#6366F1"
stroke-width="1.5"
stroke-linecap="round"
stroke-linejoin="round"
/></svg
> -->
<svg
t="1731984480564"
class="h-10 w-10"
viewBox="0 0 1114 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="29550"
width="200"
height="200"
><path
d="M1081.916235 788.781176H909.312v172.634353a24.696471 24.696471 0 0 1-49.332706 0V788.781176H687.314824a24.696471 24.696471 0 0 1 0-49.362823H859.949176V566.814118a24.696471 24.696471 0 0 1 49.332706 0v172.634353h172.664471a24.696471 24.696471 0 0 1 0 49.362823z"
fill="#0972E7"
p-id="29551"
/><path
d="M174.772706 143.028706h509.831529c43.550118 0 78.516706 35.689412 78.516706 80.173176v280.576c0 44.453647-34.966588 80.173176-78.516706 80.173177H174.772706c-43.550118 0-78.516706-35.719529-78.516706-80.173177V223.171765c0-43.851294 34.966588-80.173176 78.516706-80.173177z"
fill="#CAE4FF"
p-id="29552"
/><path
d="M335.600941 910.637176H104.899765c-24.545882 0-43.550118-20.028235-43.550118-45.086117V107.098353c0-25.057882 19.636706-45.086118 44.182588-45.086118h742.912c23.913412 0 44.182588 20.028235 44.182589 44.453647V282.503529c0 16.896 13.492706 31.322353 30.659764 31.322353a30.72 30.72 0 0 0 30.689883-31.322353V106.465882C953.976471 47.585882 906.721882 0 849.046588 0H104.899765C47.224471 0 0 48.218353 0 107.098353v758.452706c0 58.88 46.622118 107.098353 104.297412 107.098353h230.671059c16.564706 0 30.659765-13.793882 30.659764-31.322353a30.027294 30.027294 0 0 0-30.057411-30.689883z"
fill="#0972E7"
p-id="29553"
/><path
d="M709.180235 219.196235c0-16.896-13.492706-31.322353-30.659764-31.322353H171.760941c-16.564706 0-30.659765 13.793882-30.659765 31.322353 0 16.926118 13.492706 31.322353 30.659765 31.322353h506.75953a30.72 30.72 0 0 0 30.659764-31.322353zM171.760941 436.525176c-16.564706 0-30.659765 13.793882-30.659765 31.322353 0 16.896 13.492706 31.322353 30.659765 31.322353h344.786824c16.564706 0 30.689882-13.793882 30.689882-31.322353 0-16.926118-13.522824-31.322353-30.689882-31.322353H171.760941z"
fill="#0972E7"
p-id="29554"
/></svg
>

View File

@@ -0,0 +1,8 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg class="h-5 w-5 flex-shrink-0 text-[#1d4dd5]" viewBox="0 0 20 20" fill="currentColor" aria-hidden="true" data-slot="icon">
<path fill-rule="evenodd" d="M15.621 4.379a3 3 0 0 0-4.242 0l-7 7a3 3 0 0 0 4.241 4.243h.001l.497-.5a.75.75 0 0 1 1.064 1.057l-.498.501-.002.002a4.5 4.5 0 0 1-6.364-6.364l7-7a4.5 4.5 0 0 1 6.368 6.36l-3.455 3.553A2.625 2.625 0 1 1 9.52 9.52l3.45-3.451a.75.75 0 1 1 1.061 1.06l-3.45 3.451a1.125 1.125 0 0 0 1.587 1.595l3.454-3.553a3 3 0 0 0 0-4.242Z" clip-rule="evenodd"></path>
</svg>


View File

@@ -0,0 +1,13 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
class="pointer-events-none absolute left-0 ml-4 hidden h-4 w-4 fill-current text-gray-500 group-hover:text-gray-400 sm:block"
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
><path
d="M12.9 14.32a8 8 0 1 1 1.41-1.41l5.35 5.33-1.42 1.42-5.33-5.34zM8 14A6 6 0 1 0 8 2a6 6 0 0 0 0 12z"
/></svg
>


View File

@@ -0,0 +1,17 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
fill="none"
class="relative h-5 w-5"
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
stroke="currentColor"
viewBox="0 0 24 24"
><path
d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z"
/></svg
>


View File

@@ -0,0 +1,20 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731987484014"
class="w-5 h-5"
viewBox="0 0 1267 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="49311"
width="200"
height="200"
><path
d="M56.880762 910.214095H1194.666667a57.051429 57.051429 0 0 1 56.880762 56.905143A57.051429 57.051429 0 0 1 1194.666667 1024H56.880762A57.051429 57.051429 0 0 1 0 967.119238a57.051429 57.051429 0 0 1 56.880762-56.905143z m1024-56.880762H170.666667a114.102857 114.102857 0 0 1-113.785905-113.785904V113.785905A114.102857 114.102857 0 0 1 170.666667 0h910.214095A114.102857 114.102857 0 0 1 1194.666667 113.785905l-0.560762 625.761524C1194.105905 802.133333 1143.466667 853.333333 1080.880762 853.333333zM495.006476 227.328a198.948571 198.948571 0 0 0-63.219809 59.977143c-43.227429 63.707429-45.519238 150.747429-3.974096 215.600762 63.146667 99.547429 187.733333 120.027429 277.040762 63.146666l88.185905 88.161524a42.910476 42.910476 0 0 0 60.294095 0 42.910476 42.910476 0 0 0 0-60.294095l-88.746666-88.185905c49.493333-77.360762 40.399238-180.906667-26.745905-248.027428a198.92419 198.92419 0 0 0-242.834286-30.378667z m216.112762 170.910476a113.785905 113.785905 0 1 1-227.571809 0 113.785905 113.785905 0 0 1 227.571809 0z"
fill="#0377FF"
p-id="49312"
/></svg
>


View File

@@ -0,0 +1,22 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1730766012593"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="11065"
class="w-4 h-4"
><path
d="M996.693333 494.933333l-341.333333-126.293333-126.293333-341.333333c-3.413333-13.653333-27.306667-13.653333-30.72 0l-126.293334 341.333333-341.333333 126.293333c-6.826667 3.413333-10.24 10.24-10.24 17.066667s3.413333 13.653333 10.24 17.066667l341.333333 126.293333 126.293334 341.333333c3.413333 6.826667 10.24 10.24 17.066666 10.24s13.653333-3.413333 17.066667-10.24l126.293333-341.333333 341.333334-126.293333c6.826667-3.413333 10.24-10.24 10.24-17.066667s-6.826667-13.653333-13.653334-17.066667z m-314.026666 34.133334h-153.6V682.666667c0 10.24-6.826667 17.066667-17.066667 17.066666s-17.066667-6.826667-17.066667-17.066666v-153.6H341.333333c-10.24 0-17.066667-6.826667-17.066666-17.066667s6.826667-17.066667 17.066666-17.066667h153.6V341.333333c0-10.24 6.826667-17.066667 17.066667-17.066666s17.066667 6.826667 17.066667 17.066666v153.6H682.666667c10.24 0 17.066667 6.826667 17.066666 17.066667s-6.826667 17.066667-17.066666 17.066667z"
fill="#ffffff"
p-id="11066"
/><path
d="M293.546667 703.146667l-136.533334 136.533333c-6.826667 6.826667-6.826667 17.066667 0 23.893333 3.413333 3.413333 6.826667 3.413333 13.653334 3.413334s10.24 0 13.653333-3.413334l136.533333-136.533333c6.826667-6.826667 6.826667-17.066667 0-23.893333s-20.48-6.826667-27.306666 0zM716.8 324.266667c3.413333 0 10.24 0 13.653333-3.413334l136.533334-136.533333c6.826667-6.826667 6.826667-17.066667 0-23.893333s-17.066667-6.826667-23.893334 0l-136.533333 136.533333c-6.826667 6.826667-6.826667 17.066667 0 23.893333 0 0 6.826667 3.413333 10.24 3.413334zM293.546667 317.44c3.413333 3.413333 10.24 6.826667 13.653333 6.826667s10.24 0 13.653333-3.413334c6.826667-6.826667 6.826667-17.066667 0-23.893333l-136.533333-136.533333c-6.826667-6.826667-17.066667-6.826667-23.893333 0s-6.826667 17.066667 0 23.893333l133.12 133.12zM730.453333 703.146667c-6.826667-6.826667-17.066667-6.826667-23.893333 0s-6.826667 17.066667 0 23.893333l136.533333 136.533333c3.413333 3.413333 6.826667 3.413333 13.653334 3.413334s10.24 0 13.653333-3.413334c6.826667-6.826667 6.826667-17.066667 0-23.893333l-139.946667-136.533333z"
fill="#ffffff"
p-id="11067"
/></svg
>


View File

@@ -0,0 +1,44 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731984744752"
class="w-12 h-12"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="31753"
width="200"
height="200"
><path
d="M244.224 370.78016h526.336c48.64 0 87.552 39.424 87.552 87.552v292.352c0 48.64-39.424 87.552-87.552 87.552H244.224c-48.64 0-87.552-39.424-87.552-87.552v-292.352c-0.512-48.128 38.912-87.552 87.552-87.552z"
fill="#CAE4FF"
p-id="31754"
/><path
d="M760.832 983.30624H245.76c-114.176 0-206.848-92.672-206.848-206.848v-357.888c0-114.176 92.672-206.336 206.848-206.848h515.072c114.176 0 206.336 92.672 206.848 206.848v357.888c0 114.176-92.672 206.848-206.848 206.848zM245.76 270.09024c-81.92 0-148.48 66.56-148.48 148.48v357.888c0 81.92 66.56 148.48 148.48 148.48h515.072c81.92 0 148.48-66.56 148.48-148.48v-357.888c0-81.92-66.56-148.48-148.48-148.48H245.76z"
fill="#0972E7"
p-id="31755"
/><path
d="M303.616 748.29824c0.512 14.848-11.264 27.648-26.112 28.16-14.848 0.512-27.648-11.264-28.16-26.112v-291.328c0.512-14.848 13.312-26.624 28.16-26.112 14.336 0.512 25.6 11.776 26.112 26.112v289.28z"
fill="#0972E7"
p-id="31756"
/><path
d="M742.912 758.53824c0 13.824-11.264 25.088-25.088 25.088H274.432c-13.824 0.512-25.6-9.728-26.112-23.552-0.512-13.824 9.728-25.6 23.552-26.112h446.464c13.312 0 24.576 11.264 24.576 24.576z m-261.12-224.768c-9.728-10.24-26.112-10.24-36.352-0.512l-78.848 79.36c-10.24 10.24-10.24 26.624 0 36.864 9.728 10.24 26.112 10.24 36.352 0.512l79.36-78.848c9.728-10.752 9.728-27.136-0.512-37.376z"
fill="#0972E7"
p-id="31757"
/><path
d="M564.736 648.97024c10.24-9.728 10.24-26.112 0-36.352l-79.36-78.848c-10.24-10.24-26.624-10.24-36.864 0-10.24 9.728-10.24 26.112 0 36.352l78.848 78.848c10.752 10.24 27.136 10.24 37.376 0z"
fill="#0972E7"
p-id="31758"
/><path
d="M649.216 533.77024c-9.728-10.24-26.112-10.24-36.352-0.512l-79.36 78.848c-10.24 10.24-10.24 26.624 0 36.864 9.728 10.24 26.112 10.24 36.352 0.512l79.36-78.848c9.728-10.24 9.728-26.624 0-36.864z"
fill="#0972E7"
p-id="31759"
/><path
d="M714.24 468.74624c-9.728-10.24-26.112-10.24-36.352-0.512l-79.36 78.848c-10.24 10.24-10.24 26.624 0 36.864 9.728 10.24 26.112 10.24 36.352 0.512l79.36-78.848c10.24-10.24 10.24-27.136 0-36.864zM97.792 404.74624H39.936c0-51.2-0.512-120.832-0.512-120.832 0-112.128 91.136-203.264 203.264-203.264h136.704c123.392 0 194.56 66.56 194.56 182.784h-57.856c0-83.968-44.544-124.928-136.192-124.928H242.688c-80.384 0-145.408 65.024-145.408 145.408 0 0 0.512 69.632 0.512 120.832z"
fill="#0972E7"
p-id="31760"
/></svg
>


@@ -0,0 +1,24 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731987065328"
class="w-5 h-5"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="35111"
width="200"
height="200"
><path
d="M740.565333 112c63.146667 0 114.304 51.2 114.304 114.304v457.130667H169.130667V226.304c0-63.146667 51.2-114.304 114.304-114.304h457.130666z m-219.434666 326.826667H331.434667c-32 0-48 16.042667-48 48.042666l0.213333 6.186667c2.005333 27.861333 17.92 41.813333 47.786667 41.813333h189.696c32 0 48-16 48-48l-0.213334-6.186666c-1.962667-27.904-17.92-41.813333-47.786666-41.813334z m171.434666-212.522667H331.434667c-32 0-48 16-48 48l0.213333 6.186667c2.005333 27.861333 17.92 41.813333 47.786667 41.813333h361.130666c32 0 48-16 48-48l-0.213333-6.186667c-2.005333-27.904-17.92-41.813333-47.786667-41.813333z"
fill="#93C0FB"
p-id="35112"
/><path
d="M154.752 422.101333l343.68 196.096a28.586667 28.586667 0 0 0 28.330667 0l342.485333-196.010666a28.586667 28.586667 0 0 1 42.752 24.789333v350.72c0 63.146667-51.2 114.304-114.304 114.304H226.261333c-63.104 0-114.261333-51.2-114.261333-114.304v-350.805333a28.586667 28.586667 0 0 1 42.752-24.789334z"
fill="#4B96F9"
p-id="35113"
/></svg
>


@@ -0,0 +1,60 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731987759041"
class="w-7 h-7"
viewBox="0 0 1230 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="50480"
width="200"
height="200"
><path
d="M455.756297 168.055915a69.477515 69.477515 0 0 1-1.687986 16.069629c-2.768298 18.837927-9.385204 26.062508-9.385204 31.058948s5.536595 6.076751 10.465515 6.07675c30.991428 0 98.51088-32.679414 196.414084-32.679414 28.695767 0 112.554925 3.848609 112.554926 32.139259C764.117632 276.154556 533.403666 351.101147 533.403666 351.101147a0.540156 0.540156 0 0 0 0.540156 0.540156h15.461954c40.511671-2.228142 140.643017-39.36384 321.662667-39.36384 118.024001 0 291.819069 21.606224 291.819069 62.590531 0 23.901886-56.783859 73.731241-116.94369 121.535013-51.7199 40.849268-122.885402 77.51233-122.885402 118.024001 0 27.007781 30.856389 31.599103 59.012001 31.599103 55.703547 0 139.02255-19.378083 193.645786-19.378083 23.766847 0 54.623236 3.848609 54.623236 27.007781 0 37.743373-262.042991 370.344191-652.710536 370.344191-44.090202 0-102.021891-6.751945-102.021891-49.896875s55.163392-108.031122 103.169721-145.301859c38.080971-29.303442 67.519451-38.216009 67.519452-43.752605 0-3.308453-11.07319-3.916128-14.921799-3.916128-32.00422 0-110.866939 16.137149-183.720427 60.767507-55.703547 34.367401-71.165502 48.208888-121.535013 73.73124-31.396545 15.529474-63.94092 37.135698-110.326783 37.135699-62.320454 0-93.784518-37.743373-93.784518-93.716999 0-63.738362 60.767506-81.495978 60.767506-143.006198C183.180272 657.571937 47.263616 698.556244 47.263616 629.821443c0-29.911117 27.007781-65.966504 33.759726-74.811552 114.242912-152.458921 226.190162-143.613873 253.805618-157.995517 31.464064-16.069629-2.768298-25.522353-25.387314-25.522352-20.998549 0-45.778188 8.304893-67.519452 17.21746-9.385204 3.848609-22.078861 16.069629-40.51167 16.069629-14.921799 0-19.378083-14.989318-19.378083-29.978636C180.95213 282.231307 374.057761 143.681393 434.757748 143.681393c15.461954 0 20.998549 8.845048 20.998549 24.374522z"
fill="#E1EBFF"
p-id="50481"
/><path
d="M898.751418 191.350125H341.513385A24.712119 24.712119 0 0 0 316.666227 216.062244v671.886061a24.712119 24.712119 0 0 0 24.847158 24.374522h557.238033a25.049716 25.049716 0 0 0 24.847158-24.374522V216.062244a24.712119 24.712119 0 0 0-24.847158-24.374521"
fill="#335DFF"
p-id="50482"
/><path
d="M847.436635 853.783463h-458.457075a25.049716 25.049716 0 0 1-24.847159-24.374522V279.463009a24.6446 24.6446 0 0 1 24.847159-24.374522h458.457075a25.049716 25.049716 0 0 1 24.847158 24.374522v549.945932a24.712119 24.712119 0 0 1-24.847158 24.374522z"
fill="#FFFFFF"
p-id="50483"
/><path
d="M58.471845 792.273243H41.389424v-17.21746a8.507451 8.507451 0 0 0-8.304893-8.304893 8.439931 8.439931 0 0 0-8.237373 8.304893v17.21746H8.304893a8.304893 8.304893 0 1 0 0 16.609785h17.082421v17.21746A8.507451 8.507451 0 0 0 33.759726 834.40538a8.57497 8.57497 0 0 0 8.237373-8.304892v-17.21746h17.082421a8.304893 8.304893 0 1 0-0.607675-16.609785z"
fill="#D2DFFF"
p-id="50484"
/><path
d="M809.355664 225.717526h-371.356983a16.204668 16.204668 0 0 1-16.542265-16.069629v-39.903996a16.204668 16.204668 0 0 1 16.542265-16.137149h371.356983a16.204668 16.204668 0 0 1 16.609785 16.069629v40.511671a16.137149 16.137149 0 0 1-16.609785 15.529474z"
fill="#8FAFFF"
p-id="50485"
/><path
d="M677.490175 181.357246H570.471845A16.542266 16.542266 0 0 1 553.659502 165.287617v-55.973625a16.137149 16.137149 0 0 1 16.542265-16.06963h107.018331a16.542266 16.542266 0 0 1 16.609785 16.06963v56.513781a16.474746 16.474746 0 0 1-16.339708 15.529473z"
fill="#8FAFFF"
p-id="50486"
/><path
d="M459.13227 688.02321h280.205723a16.272188 16.272188 0 0 1 16.542265 16.609785v8.912567a16.272188 16.272188 0 0 1-16.542265 16.609785H459.13227a16.272188 16.272188 0 0 1-16.542266-16.002109v-9.520243A17.014902 17.014902 0 0 1 459.13227 688.02321z m0-111.947251h224.569695a16.272188 16.272188 0 0 1 16.542266 16.609785v8.845049a16.272188 16.272188 0 0 1-16.542266 16.677304H459.13227a16.272188 16.272188 0 0 1-16.542266-16.069629 1.890545 1.890545 0 0 1 0-0.607675v-8.845049A16.609785 16.609785 0 0 1 459.13227 576.075959z m0-112.014769h224.569695a16.272188 16.272188 0 0 1 16.542266 16.609785v8.912567a16.272188 16.272188 0 0 1-16.677305 16.812344H459.13227a16.204668 16.204668 0 0 1-16.542266-16.00211 1.890545 1.890545 0 0 1 0-0.607676v-9.115125A16.677304 16.677304 0 0 1 459.13227 464.06119z m0-111.947251h280.205723a16.272188 16.272188 0 0 1 16.879863 16.609785v8.912568a16.272188 16.272188 0 0 1-16.542266 16.609785H459.13227A16.339707 16.339707 0 0 1 442.454965 378.108928v-9.452723A16.609785 16.609785 0 0 1 459.13227 352.113939zM247.526309 0.810233l-10.465515 18.905447a39.093762 39.093762 0 0 1-14.921799 14.921799l-18.230252 10.533034a2.160622 2.160622 0 0 0 0 3.375973l18.230252 10.465515A39.296321 39.296321 0 0 1 237.060794 74.271397l10.465515 18.837926a2.025584 2.025584 0 0 0 3.308453 0L261.300277 74.271397a38.823685 38.823685 0 0 1 14.921799-14.989319l18.230252-10.465515a2.160622 2.160622 0 0 0 0-3.375972l-18.230252-10.533035a38.621126 38.621126 0 0 1-14.921799-15.191876L250.834762 0.810233c-0.540156-0.810233-2.228142-0.810233-3.308453 0zM1057.624687 183.585388a22.754055 22.754055 0 1 1-22.011341 22.686536 21.606224 21.606224 0 0 1 22.011341-22.686536z m0-11.07319a33.759726 33.759726 0 0 0-33.084531 33.759726 33.152051 33.152051 0 1 0 66.236581 0 33.354609 33.354609 0 0 0-33.15205-33.759726z"
fill="#D2DFFF"
p-id="50487"
/><path
d="M642.785177 138.144798a22.821575 22.821575 0 0 1-22.686535 22.686535 22.281419 22.281419 0 0 1-22.551497-22.213899 1.147831 1.147831 0 0 1 0-0.472636 22.821575 22.821575 0 0 1 22.619016-22.754056 22.416458 22.416458 0 0 1 22.686536 22.2139z m68.059607 445.628379A178.791507 178.791507 0 1 0 762.969801 456.971647a179.66926 179.66926 0 0 0-52.125017 126.869049z"
fill="#FFFFFF"
p-id="50488"
/><path
d="M889.298694 436.91837a145.706976 145.706976 0 0 0-145.504417 145.909535v1.012791a145.571937 145.571937 0 1 0 291.076355 0.742714v-0.742714a146.787287 146.787287 0 0 0-145.571938-146.989845z"
fill="#2ED073"
p-id="50489"
/><path
d="M856.230925 638.373959m5.681472-5.681472l95.534667-95.534667q5.681472-5.681472 11.362944 0l0 0q5.681472 5.681472 0 11.362943l-95.534667 95.534668q-5.681472 5.681472-11.362944 0l0 0q-5.681472-5.681472 0-11.362944Z"
fill="#FFFFFF"
p-id="50490"
/><path
d="M804.217647 586.365756m5.681472-5.681472l0 0q5.681472-5.681472 11.362944 0l51.944886 51.944887q5.681472 5.681472 0 11.362944l0 0q-5.681472 5.681472-11.362944 0l-51.944886-51.944887q-5.681472-5.681472 0-11.362944Z"
fill="#FFFFFF"
p-id="50491"
/></svg
>


@@ -0,0 +1,8 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg class="w-2.5 h-2.5 text-blue-800 dark:text-blue-300" aria-hidden="true" xmlns="http://www.w3.org/2000/svg" fill="currentColor" viewBox="0 0 20 20">
<path d="M20 4a2 2 0 0 0-2-2h-2V1a1 1 0 0 0-2 0v1h-3V1a1 1 0 0 0-2 0v1H6V1a1 1 0 0 0-2 0v1H2a2 2 0 0 0-2 2v2h20V4ZM0 18a2 2 0 0 0 2 2h16a2 2 0 0 0 2-2V8H0v10Zm5-8h10a1 1 0 0 1 0 2H5a1 1 0 0 1 0-2Z"/>
</svg>


@@ -0,0 +1,36 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1731987374334"
class="w-4 h-4"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="47097"
width="200"
height="200"
><path
d="M210.488889 246.670222m35.043555 0l349.923556 0q35.043556 0 35.043556 35.043556l0 0.056889q0 35.043556-35.043556 35.043555l-349.923556 0q-35.043556 0-35.043555-35.043555l0-0.056889q0-35.043556 35.043555-35.043556Z"
fill="#89BAF7"
p-id="47098"
/><path
d="M210.488889 471.210667m35.043555 0l349.923556 0q35.043556 0 35.043556 35.043555l0 0.056889q0 35.043556-35.043556 35.043556l-349.923556 0q-35.043556 0-35.043555-35.043556l0-0.056889q0-35.043556 35.043555-35.043555Z"
fill="#89BAF7"
p-id="47099"
/><path
d="M210.488889 695.296m35.043555 0l140.344889 0q35.043556 0 35.043556 35.043556l0 0.056888q0 35.043556-35.043556 35.043556l-140.344889 0q-35.043556 0-35.043555-35.043556l0-0.056888q0-35.043556 35.043555-35.043556Z"
fill="#89BAF7"
p-id="47100"
/><path
d="M436.565333 982.186667h-261.176889a175.559111 175.559111 0 0 1-175.331555-175.388445v-631.466666a175.559111 175.559111 0 0 1 175.331555-175.388445h490.951112a175.559111 175.559111 0 0 1 175.331555 175.388445v278.016a35.100444 35.100444 0 1 1-70.144 0v-278.016a105.358222 105.358222 0 0 0-105.187555-105.244445h-490.951112a105.358222 105.358222 0 0 0-105.187555 105.244445v631.466666a105.358222 105.358222 0 0 0 105.187555 105.244445h261.176889a35.100444 35.100444 0 0 1 0 70.144z"
fill="#0A71EF"
p-id="47101"
/><path
d="M1008.184889 628.167111l-5.688889-11.889778-2.104889-2.616889a19.683556 19.683556 0 0 0-24.519111-2.616888h-0.910222l-97.28 97.336888-49.265778-49.265777 101.489778-101.717334-1.080889-1.422222a18.090667 18.090667 0 0 0-4.039111-18.944 16.668444 16.668444 0 0 0-5.688889-3.868444l-10.695111-4.721778a192.056889 192.056889 0 0 0-258.958222 235.292444l-105.927112 105.927111a87.608889 87.608889 0 0 0 0 123.619556 87.608889 87.608889 0 0 0 123.448889 0l105.927111-106.097778a188.757333 188.757333 0 0 0 59.278223 9.500445 192.056889 192.056889 0 0 0 176.355555-268.288z m-176.355556 215.836445a137.728 137.728 0 0 1-55.409777-11.377778l-16.327112-6.997334-130.446222 130.446223a35.669333 35.669333 0 0 1-49.265778 0 34.702222 34.702222 0 0 1 0-49.265778l130.446223-130.446222-6.997334-16.497778a136.192 136.192 0 0 1-11.377777-55.239111 139.719111 139.719111 0 0 1 139.548444-139.548445 111.502222 111.502222 0 0 1 15.303111 0.853334l-79.985778 79.985777a20.650667 20.650667 0 0 0-3.356444 21.219556l-0.512 1.251556 101.489778 101.546666a19.569778 19.569778 0 0 0 24.746666 0.341334l81.009778-81.009778a151.210667 151.210667 0 0 1 0.853333 15.416889 139.719111 139.719111 0 0 1-139.605333 139.320889z"
fill="#FD7733"
p-id="47102"
/></svg
>


@@ -0,0 +1,28 @@
<!--
Copyright (C) 2025 Intel Corporation
SPDX-License-Identifier: Apache-2.0
-->
<svg
t="1699532005309"
class="icon"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="31791"
width="1rem"
height="1rem"
><path
d="M505.088 513.1264m-450.816 0a450.816 450.816 0 1 0 901.632 0 450.816 450.816 0 1 0-901.632 0Z"
fill="#e02424"
p-id="31792"
data-spm-anchor-id="a313x.search_index.0.i28.33343a81AAN1qI"
class="selected"
/><path
d="M356.6592 575.0784c0-54.5792 0.3584-109.1584-0.2048-163.6864-0.1536-15.872 5.5296-24.2176 20.992-29.5424 58.88-20.2752 93.7472-63.1296 110.848-121.9072 5.9392-20.4288 11.4176-41.216 19.7632-60.672 13.4656-31.5904 38.2464-42.7008 72.6528-35.328 26.5216 5.6832 43.3152 28.3648 43.5712 60.16 0.3584 40.4992 0.0512 80.9984 0.1536 121.4976 0.0512 22.2208 3.9424 26.7264 26.5728 26.9824 45.568 0.512 91.1872 1.536 136.704-0.256 40.5504-1.5872 69.9392 24.832 59.7504 69.9904-12.2368 54.0672-27.648 107.4688-42.7008 160.8704-9.2672 32.9216-20.1728 65.4336-30.8736 97.9456-14.1312 43.008-40.448 62.0544-84.8896 62.0544H390.2976c-32.1024 0-33.6384-1.536-33.6384-32.8704v-155.2384zM307.8656 573.9008c0 52.8896 0.1024 105.7792-0.0512 158.6688-0.1024 26.0096-4.9152 30.6176-30.3616 30.6688-7.3216 0-14.6432 0.0512-21.9648 0-29.8496-0.1536-44.032-14.08-44.2368-44.6976-0.3072-55.1424-0.1024-110.2848-0.1024-165.4272 0-40.4992-0.1536-81.0496 0.0512-121.5488 0.2048-32.2048 15.7696-47.616 47.5136-47.7184 49.1008-0.2048 49.152-0.2048 49.152 48.2304 0.0512 47.2576 0.0512 94.5152 0 141.824z"
fill="#ffffff"
p-id="31793"
data-spm-anchor-id="a313x.search_index.0.i26.33343a81AAN1qI"
class=""
/></svg
>


Some files were not shown because too many files have changed in this diff.