Compare commits

...

135 Commits
0.6.7 ... 0.6.9

Author SHA1 Message Date
takatost
e60350d95d version to 0.6.9 (#4692) 2024-05-27 22:48:34 +08:00
Patryk Garstecki
f40743183e 🔧 Add env variable for time signature (#4650) 2024-05-27 22:20:49 +08:00
Yeuoly
e852a21634 Feat/workflow phase2 (#4687) 2024-05-27 22:01:11 +08:00
zxhlyh
45deaee762 feat: workflow new nodes (#4683)
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: Patryk Garstecki <patryk20120@yahoo.pl>
Co-authored-by: Sebastian.W <thiner@gmail.com>
Co-authored-by: 呆萌闷油瓶 <253605712@qq.com>
Co-authored-by: takatost <takatost@users.noreply.github.com>
Co-authored-by: rechardwang <wh_goodjob@163.com>
Co-authored-by: Nite Knite <nkCoding@gmail.com>
Co-authored-by: Chenhe Gu <guchenhe@gmail.com>
Co-authored-by: Joshua <138381132+joshua20231026@users.noreply.github.com>
Co-authored-by: Weaxs <459312872@qq.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: leejoo0 <81673835+leejoo0@users.noreply.github.com>
Co-authored-by: JzoNg <jzongcode@gmail.com>
Co-authored-by: sino <sino2322@gmail.com>
Co-authored-by: Vikey Chen <vikeytk@gmail.com>
Co-authored-by: wanghl <Wang-HL@users.noreply.github.com>
Co-authored-by: Haolin Wang-汪皓临 <haolin.wang@atlaslovestravel.com>
Co-authored-by: Zixuan Cheng <61724187+Theysua@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Bowen Liang <bowenliang@apache.org>
Co-authored-by: Bowen Liang <liangbowen@gf.com.cn>
Co-authored-by: fanghongtai <42790567+fanghongtai@users.noreply.github.com>
Co-authored-by: wxfanghongtai <wxfanghongtai@gf.com.cn>
Co-authored-by: Matri <qjp@bithuman.io>
Co-authored-by: Benjamin <benjaminx@gmail.com>
2024-05-27 21:57:08 +08:00
Leheng Lu
444fdb79dc fix typo: stopParameerRule -> stopParameterRule (#4681) 2024-05-27 20:42:07 +08:00
Bowen Liang
140dd873f1 fix: show exception message when sandbox execution fails (#4663) 2024-05-27 18:06:15 +08:00
crazywoola
27dae156db fix: colon in file mistral.mistral-small-2402-v1:0 (#4673) 2024-05-27 13:15:20 +08:00
Giovanny Gutiérrez
2deb23e00e fix: Show rerank in system for localai (#4652) 2024-05-27 12:09:51 +08:00
crazywoola
8152bc6fbf Feat/upgrade check i18n scripts (#4671) 2024-05-27 10:36:34 +08:00
Valerio Narcisi
af026c5953 fix: node.js sdk if request is a get data must not exist (#4618) 2024-05-27 08:48:07 +08:00
Nam Vu
11275cbaaf fix: z-index (#4065) 2024-05-27 08:47:27 +08:00
longzhihun
fe9bf5fc4a [seanguo] add support of amazon titan v2 and modify the price of amazon titan v1 (#4643)
Co-authored-by: Chenhe Gu <guchenhe@gmail.com>
2024-05-26 23:30:22 +08:00
Charlie.Wei
cd4924d472 Fix tts audition (#4656)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-05-26 21:00:36 +08:00
kronus-ho
f56b984d97 Fix Unnecessary Newline Characters in Extracted Tool Response Text (#4646)
Co-authored-by: kronus <kronus@istarshine.com>
2024-05-25 15:24:59 +08:00
miendinh
f804adbff3 feat: Support for Vertex AI - load Default Application Configuration (#4641)
Co-authored-by: miendinh <miendinh@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
2024-05-25 13:40:25 +08:00
10YearsDiary
109aabc6f2 fix: incorrect handling when http header value contain multiple colons. (#4574) 2024-05-25 13:20:06 +08:00
liuzhenghua
ad620f02c7 fix: the date is incorrect if the db field is timestamp and the TZ is not the UTC (#4624)
Co-authored-by: liuzhenghua-jk <liuzhenghua-jk@360shuke.com>
2024-05-25 13:11:18 +08:00
Charlie.Wei
5bd432a85f Fix tts audition (#4637)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-05-25 12:55:04 +08:00
yalei
026175c8f7 feat: update notion extractor (#3898)
Co-authored-by: duyalei <>
2024-05-24 20:30:48 +08:00
Krasus.Chen
f156014daa update lite8k/speed8k/128k max_token to newest (#4636)
Co-authored-by: Your Name <chen@krasus.red>
2024-05-24 19:33:42 +08:00
takatost
2cd55e456a fix: WORKFLOW_MAX_EXECUTION_STEPS spell error in config.py (#4642) 2024-05-24 16:32:26 +08:00
majian
8c2ca60c8b feat: Add WORKFLOW_MAX_EXECUTION_TIME env var (#4632) 2024-05-24 15:27:12 +08:00
Bowen Liang
3fda2245a4 improve: extract method for safe loading yaml file and avoid using PyYaml's FullLoader (#4031) 2024-05-24 12:08:12 +08:00
Patryk Garstecki
296887754f Support for Vertex AI (#4586) 2024-05-24 12:01:40 +08:00
QuietRocket
9ae72cdcf4 feat: Add Gemini Flash (#4616) 2024-05-24 11:43:06 +08:00
crazywoola
16fec084f5 Fix/4630 bug api suggested (#4633) 2024-05-24 11:11:12 +08:00
crazywoola
10c61da686 feat: add confirm ui (#4625) 2024-05-23 20:15:51 +08:00
Jyong
24624491cd add qdrant metadata.doc_id index when create qdrant collection (#4570) 2024-05-23 18:11:01 +08:00
Jyong
233c4150d1 support images and tables extract from docx (#4619) 2024-05-23 18:05:23 +08:00
Yeuoly
5893ebec55 fix: code node garbled in Javascript (#4615) 2024-05-23 17:18:57 +08:00
takatost
11642192d1 chore: add https://api.openai.com placeholder in OpenAI api base (#4604) 2024-05-23 12:56:05 +08:00
呆萌闷油瓶
e57bdd4e58 chore:update gpt-3.5-turbo and gpt-4-turbo parameter for azure (#4596) 2024-05-23 11:51:38 +08:00
somethingwentwell
461488e9bf Add Azure OpenAI API version for GPT4o support (#4569)
Co-authored-by: wwwc <wwwc@outlook.com>
2024-05-22 17:43:16 +08:00
sino
2988b67c24 fix: hide automatic button on automatic result page (#4494) 2024-05-22 16:44:20 +08:00
zxhlyh
4f62541bfb chore: remove model provider free token link (#4579) 2024-05-22 16:42:49 +08:00
非法操作
24576a39e5 fix: some google search result raise exception (#4567) 2024-05-22 14:28:52 +08:00
Justin Wu
3ab19be9ea Fix bedrock claude wrong pricing (#4572)
Co-authored-by: Justin Wu <justin.wu@ringcentral.com>
2024-05-22 14:28:28 +08:00
naporitan
5b009a5afb chore(api): Use channel from UI as API query parameter (#4562) 2024-05-22 14:28:03 +08:00
非法操作
3efb5fe7e2 Refactor part of the ProviderManager code to improve readability (#4524) 2024-05-22 11:18:03 +08:00
非法操作
ee53f98d8c Hide the copy button when there is no content to copy (#4546) 2024-05-22 11:15:13 +08:00
呆萌闷油瓶
d5a33a0323 feat:add gpt-4o for azure (#4568) 2024-05-22 11:02:43 +08:00
Jyong
f25927855e add qdrant metadata.doc_id index (#4559) 2024-05-22 01:42:08 +08:00
Garfield Dai
c873035084 oauth2 supports. (#4551) 2024-05-21 17:52:41 +08:00
Li Yi
f32bba6531 Update requirements.txt with latest OSS package with AuthV4 support (#4425) 2024-05-20 17:26:07 +08:00
Benjamin
40bc936739 chore: update yfinance dependency to version 0.2.40 (#4517) 2024-05-20 16:47:05 +08:00
rennokki
6b5685ef0c feat: Jina Search & Jina Reader CSS selectors (#4523) 2024-05-20 16:40:46 +08:00
Bowen Liang
e8e213ad1e chore: apply and fix flake8-bugbear lint rules (#4496) 2024-05-20 16:34:13 +08:00
Bowen Liang
5f4df34829 improve: generalize transformations and scripts of runner and preloads into TemplateTransformer (#4487) 2024-05-20 15:56:26 +08:00
Rain Chen
c255a20d7c allow to config max segmentation tokens length for RAG document using environment variable (#4375) 2024-05-20 13:20:27 +08:00
majian
b5204111da Add UNSTRUCTURED_API_KEY env support (#4369) 2024-05-20 13:14:17 +08:00
Bowen Liang
3a51f2a778 fix: workaround db migration error when adding custom_disclaimer column to recommended_apps (#4518)
Co-authored-by: takatost <takatost@gmail.com>
2024-05-20 12:33:21 +08:00
Ever
4086f5051c feat:Provide parameter config for mask_sensitive_info of MiniMax mode… (#4294)
Co-authored-by: 老潮 <zhangyongsheng@3vjia.com>
Co-authored-by: takatost <takatost@users.noreply.github.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-05-20 10:15:27 +08:00
sino
5440108431 fix: read llm node's first prompt role by optional chaining (#4510) 2024-05-20 08:13:37 +08:00
Benjamin
46bd53a929 chore: sort categories in recommended app service response (#4498) 2024-05-19 22:44:29 +08:00
Matri
a10c2ccd41 fix: files data missed for message (#4512) 2024-05-19 22:42:52 +08:00
fanghongtai
1cca100a48 fix:modify spelling errors: lanuage ->language in schema.md (#4499)
Co-authored-by: wxfanghongtai <wxfanghongtai@gf.com.cn>
2024-05-19 18:31:05 +08:00
Bowen Liang
04ad46dd31 chore: skip unnecessary key checks prior to accessing a dictionary (#4497) 2024-05-19 18:30:45 +08:00
Patryk Garstecki
aa13d14019 Feat/chat custom disclaimer (#4306) 2024-05-18 10:52:48 +08:00
Zixuan Cheng
b1f003646b Update docker-compose.yaml- New DEBUG variable (#4476)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Bowen Liang <bowenliang@apache.org>
2024-05-18 10:35:28 +08:00
wanghl
e90eccdf92 Fix: HTTP request node PARAMS parameters, if ':' appears in the value… (#4403)
Co-authored-by: Haolin Wang-汪皓临 <haolin.wang@atlaslovestravel.com>
2024-05-18 10:35:01 +08:00
Ikko Eltociear Ashimine
ba06447cd5 chore: update docker-compose.yaml (#4492) 2024-05-18 10:27:12 +08:00
Vikey Chen
9808520992 fix: copy button is always displayed on the chat logs page (#4488) 2024-05-17 22:26:19 +08:00
zxhlyh
8b931b085c fix: app logo (#4483) 2024-05-17 18:02:00 +08:00
sino
528faceb35 fix: cot agent token usage is empty (#4474) 2024-05-17 14:45:20 +08:00
Yeuoly
c2a8fa91b1 fix: cot gent duplicate messages (#4470) 2024-05-17 13:32:02 +08:00
Yeuoly
091fba74cb enhance: claude stream tool call (#4469) 2024-05-17 12:43:58 +08:00
Bowen Liang
083ef2e6fc improve: exract Code Node provider for each supported scripting language (#4164) 2024-05-17 11:58:12 +08:00
zxhlyh
de3a7603ac fix: workflow add next node from knowledge retrieval node (#4467) 2024-05-17 11:42:03 +08:00
jiaqianjing
0ac5d621b6 add llm: ernie-character-8k of wenxin (#4448) 2024-05-16 18:31:07 +08:00
Jyong
bdd409970f fix the wrong env variable AZURE_BLOB_CONTAINER_NAME (#4455) 2024-05-16 18:30:52 +08:00
takatost
d8f38f79f2 feat: add pre ping for sqlalchemy configuration (#4454) 2024-05-16 17:07:21 +08:00
Pika
0f1172f55b fix: self node type shouldn't show in the picker (#4445) 2024-05-16 13:25:09 +08:00
Han Fangyuan
3df47b7b59 fix: wrong category name in examples of question classifier completion prompt (#4421) 2024-05-16 13:04:57 +08:00
sino
6e9066ebf4 feat: support doubao llm and embeding models (#4431) 2024-05-16 11:41:24 +08:00
Garfield Dai
dd94931116 Remove useless code (#4416) 2024-05-15 16:14:49 +08:00
huchengyi
da81233d61 Custom sqlalchemy database uri scheme is supported (#4367) 2024-05-15 15:27:15 +08:00
VoidIsVoid
a76ae2d756 chore: remove useless code in knowledge_retrieval_node (#4412) 2024-05-15 15:24:40 +08:00
Charlie.Wei
97b65f9b4b Optimize webscraper (#4392)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-05-15 15:23:16 +08:00
zxhlyh
c0fe414e0a fix: workflow delete edge when node is selected (#4414) 2024-05-15 15:12:36 +08:00
Yeuoly
182dadd433 chore: remove model as tool (#4409) 2024-05-15 12:25:04 +08:00
非法操作
1d0f88264f Fix HTTP REQUEST NODE is always waiting but endpoint have responsed (#4395) 2024-05-15 11:05:46 +08:00
Yash Parmar
332baca538 FIX: fix the temperature value of ollama model (#4027) 2024-05-15 08:05:54 +08:00
Kota Matsumoto
e2a78888b9 Fix: setup google-storage client (#4296)
Co-authored-by: kotamat <kota1681@gmail.com>
Co-authored-by: crazywoola <427733928@qq.com>
2024-05-15 08:05:41 +08:00
Shuto Otaki
5d6d0e63c5 docs: Add CONTRIBUTING_JA.md (#4383) 2024-05-15 07:48:19 +08:00
Charles Zhou
2eb468f885 fix: add timeout to SMTPClient to prevent worker blocking (#4352) 2024-05-14 23:44:53 +08:00
kerlion
98140ae5d9 fix the issue of MILVUS_DATABASE has no effect. (#4353) 2024-05-14 19:54:31 +08:00
rennokki
d1ccb22d8a feat: Use Romanian & other langs in QA (#4205)
Co-authored-by: crazywoola <427733928@qq.com>
2024-05-14 17:48:24 +08:00
Joel
66c8070da8 fix: Jinja switch not aligned in vertical direction (#4374) 2024-05-14 16:29:41 +08:00
非法操作
3271e3e803 improve the code readability of http_executor node (#4360) 2024-05-14 16:11:12 +08:00
Yeuoly
16d47923c3 fix: requests timeout (#4370) 2024-05-14 16:01:23 +08:00
zxhlyh
6f1633fa75 fix: delete end node (#4372) 2024-05-14 15:51:08 +08:00
leejoo0
08e4103fa1 Create README_KR.md (#4364) 2024-05-14 15:36:03 +08:00
takatost
eee95190cc version to 0.6.8 (#4347) 2024-05-14 03:18:26 +08:00
Yeuoly
e8311357ff feat: gpt-4o (#4346) 2024-05-14 02:52:41 +08:00
sino
0f14fdd4c9 fix: handleUpdateWorkflowCanvas is not a function (#4343) 2024-05-13 20:36:23 +08:00
orangeclk
ece0f08a2b add yi models (#4335)
Co-authored-by: 陈力坤 <likunchen@caixin.com>
2024-05-13 17:40:53 +08:00
paragonnov
5edb3d55e5 feat: i18n: add korean language (ko-KR) (#4333) 2024-05-13 15:20:44 +08:00
非法操作
63382f758e fix typo (#4329) 2024-05-13 15:20:16 +08:00
Yeuoly
bbef964eb5 improve: code upgrade (#4231) 2024-05-13 14:39:14 +08:00
Ikko Eltociear Ashimine
e6db7ad1d5 chore: update gmpy2_pkcs10aep_cipher.py (#4314) 2024-05-13 10:45:29 +08:00
Weaxs
8cc492721b fix: minimax streaming function_call message (#4271) 2024-05-11 21:07:22 +08:00
Joshua
a80fe20456 add-some-new-models-hosted-on-nvidia (#4303) 2024-05-11 21:05:31 +08:00
Chenhe Gu
f7986805c6 Update README.md to remove outdated badge (#4302) 2024-05-11 20:48:15 +08:00
Joel
aa5ca90f00 fix: text generation app not show copy button (#4304) 2024-05-11 20:39:17 +08:00
Joel
4af00e4a45 feat: support copy run text result in debug panel in workflow (#4300) 2024-05-11 16:59:17 +08:00
Joel
c01c95d77f fix: chatflow run progress problem (#4298) 2024-05-11 16:23:31 +08:00
Nite Knite
20a9037d5b fix: align versions of react typing package (#4297) 2024-05-11 15:39:56 +08:00
Joel
34d3998566 fix: webapps not show number type input field (#4292) 2024-05-11 14:42:04 +08:00
rechardwang
198d6c00d6 Update docker-compose.yaml (#4288) 2024-05-11 13:41:12 +08:00
Joel
1663df8a05 feat: hide run detail in webapps and installed apps (#4289) 2024-05-11 13:40:27 +08:00
takatost
d8926a2571 feat: hide node detail outputs in webapp & installed app in explore (#3954) 2024-05-11 13:40:11 +08:00
呆萌闷油瓶
4796f9d914 feat:add gpt-4-turbo for azure (#4287) 2024-05-11 13:02:56 +08:00
Sebastian.W
a588df4371 Add rerank model type for LocalAI provider (#3952) 2024-05-11 11:29:28 +08:00
Patryk Garstecki
2c1c660c6e fix(Backend:http_executor): 🔧 prevent splitting JSON data as v… (#4276) 2024-05-11 11:23:35 +08:00
zxhlyh
13f4ed6e0e fix: workflow zoomin/out shortcuts (#4283) 2024-05-11 10:38:12 +08:00
YidaHu
1e451991db fix: deutsch edit app (#4270) 2024-05-11 10:07:54 +08:00
Louie Long
749b236d3d fix: do nothing if switch to current app (#4249)
Co-authored-by: langyong <langyong@lixiang.com>
2024-05-11 08:50:46 +08:00
TinsFox
00ce372b71 fix: hook dependency (#4242) 2024-05-11 08:43:37 +08:00
Patryk Garstecki
370e1c1a17 fix(frontend): 🔧 add privacy policy spaces (#4277) 2024-05-11 08:42:03 +08:00
KiyotakaMatsushita
28495273b4 feat: Add storage type and Google Storage settings to worker (#4266) 2024-05-10 18:54:08 +08:00
Whitewater
36a9c5cc6b fix: remove unexpected zip and add FlipForward arrow icon (#4263) 2024-05-10 18:52:41 +08:00
Bowen Liang
228de1f12a fix: miss usage of os.path.join for URL assembly and add tests on yarl (#4224) 2024-05-10 18:14:48 +08:00
Joel
01555463d2 feat: llm support jinja fe (#4260) 2024-05-10 18:14:05 +08:00
GalvinYang
6b99075dc8 fix: system default model name length (#4245) (#4246)
Co-authored-by: takatost <takatost@gmail.com>
2024-05-10 18:12:18 +08:00
Yeuoly
8578ee0864 feat: support LLM jinja2 template prompt (#3968)
Co-authored-by: Joel <iamjoel007@gmail.com>
2024-05-10 18:08:32 +08:00
Jyong
897e07f639 question classifier prompt optimize (#4262) 2024-05-10 17:22:46 +08:00
LiuVaayne
875249eb00 Feat/vector db pgvector (#3879) 2024-05-10 17:20:30 +08:00
Chenhe Gu
4d5a4e4cef correct comparison chart (#4254) 2024-05-10 14:54:38 +08:00
takatost
86a6e6bd04 feat: increase max steps to 50 in workflow (#4252) 2024-05-10 14:50:00 +08:00
takatost
8f3042e5b3 feat: Add draft hash check in workflow (#4251) 2024-05-10 14:48:29 +08:00
zxhlyh
a1ab87107b chore: workflow sync with hash (#4250) 2024-05-10 14:48:20 +08:00
zxhlyh
f49c99937c fix: workflow end node deletion (#4240) 2024-05-10 10:38:05 +08:00
zxhlyh
9b24f12bf5 feat: workflow interaction (#4214) 2024-05-09 17:18:51 +08:00
crazywoola
487ce7c82a fix: add missing translations (#4212) 2024-05-09 15:38:51 +08:00
TinsFox
cc835d523c refactor: install form (#4154) 2024-05-09 15:38:09 +08:00
798 changed files with 31573 additions and 6132 deletions

View File

@@ -1,4 +1,4 @@
# Devlopment with devcontainer
# Development with devcontainer
This project includes a devcontainer configuration that allows you to open the project in a container with a fully configured development environment.
Both frontend and backend environments are initialized when the container is started.
## GitHub Codespaces
@@ -33,5 +33,5 @@ Performance Impact: While usually minimal, programs running inside a devcontaine
if you see such error message when you open this project in codespaces:
![Alt text](troubleshooting.png)
a simple workaround is change `/signin` endpoint into another one, then login with github account and close the tab, then change it back to `/signin` endpoint. Then all things will be fine.
a simple workaround is change `/signin` endpoint into another one, then login with GitHub account and close the tab, then change it back to `/signin` endpoint. Then all things will be fine.
The reason is `signin` endpoint is not allowed in codespaces, details can be found [here](https://github.com/orgs/community/discussions/5204)

View File

@@ -8,13 +8,13 @@ body:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: This is only for bug report, if you would like to ask a quesion, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- label: This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
required: true
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "Pleas do not modify this template :) and fill in all the required fields."
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: input

View File

@@ -1,7 +1,7 @@
name: "📚 Documentation Issue"
description: Report issues in our documentation
labels:
- ducumentation
- documentation
body:
- type: checkboxes
attributes:
@@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "Pleas do not modify this template :) and fill in all the required fields."
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:

View File

@@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "Pleas do not modify this template :) and fill in all the required fields."
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:

View File

@@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "Pleas do not modify this template :) and fill in all the required fields."
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: input
attributes:

View File

@@ -46,11 +46,12 @@ jobs:
docker/docker-compose.middleware.yaml
services: |
sandbox
ssrf_proxy
- name: Run Workflow
run: dev/pytest/pytest_workflow.sh
- name: Set up Vector Stores (Weaviate, Qdrant, Milvus, PgVecto-RS)
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS)
uses: hoverkraft-tech/compose-action@v2.0.0
with:
compose-file: |
@@ -58,6 +59,7 @@ jobs:
docker/docker-compose.qdrant.yaml
docker/docker-compose.milvus.yaml
docker/docker-compose.pgvecto-rs.yaml
docker/docker-compose.pgvector.yaml
services: |
weaviate
qdrant
@@ -65,6 +67,7 @@ jobs:
minio
milvus-standalone
pgvecto-rs
pgvector
- name: Test Vector Stores
run: dev/pytest/pytest_vdb.sh

CONTRIBUTING_JA.md (new file, 160 lines added)
View File

@@ -0,0 +1,160 @@
So you're thinking about contributing to Dify - that's wonderful.
We have grand ambitions to design the most intuitive workflow for building and managing LLM applications. As a startup with limited headcount and funding, support from the community truly matters to us.
Given where we stand, we need to develop nimbly and ship fast, but we also want to make sure contributors like you get as smooth an experience at contributing as possible. We've assembled this contribution guide for that purpose.
It aims to get you familiarized with the codebase and how we work with contributors, so you can quickly jump to the fun part.
This guide, like Dify itself, is continuously improving. We appreciate your understanding if at times it lags behind the actual project.
In terms of licensing, please take a minute to read our short [License and Contributor Agreement](./LICENSE). The community also adheres to the [code of conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Before You Jump In
Look for an [existing issue](https://github.com/langgenius/dify/issues?q=is:issue+is:closed) or [create a new one](https://github.com/langgenius/dify/issues/new/choose). We categorize issues into 2 types:
### Feature Requests
* If you're opening a new feature request, explain what the proposed feature achieves and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has made a [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try.
* If you want to pick one up from the existing issues, simply drop a comment below it.
A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes.
Depending on which area the proposed feature falls under, you might talk to different team members. Here's an overview of the areas each team member is currently working on:
| Member | Scope |
| ------ | ----- |
| [@yeuoly](https://github.com/Yeuoly) | Agents architecture |
| [@jyong](https://github.com/JohnJyong) | RAG pipeline design |
| [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations |
| [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend easy-to-use |
| [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, point of contact for anything |
| [@takatost](https://github.com/takatost) | Overall product direction and architecture |
How we prioritize:
| Feature Type | Priority |
| ------------ | -------- |
| High-priority features as labeled by a team member | High Priority |
| Popular feature requests from our [community feedback board](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
### Anything Else (e.g. Bug Reports, Performance Optimizations, Typo Corrections)
* Start coding right away.
How we prioritize:
| Issue Type | Priority |
| ---------- | -------- |
| Bugs in core functions (cannot log in, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, UI that works but is confusing) | Low Priority |
## Installing
Here are the steps to set up Dify for development:
### 1. Fork this repository
### 2. Clone the repo
Clone the forked repository from your terminal:
```
git clone git@github.com:<github_username>/dify.git
```
### 3. Verify dependencies
Dify requires the following dependencies to build. Make sure they're installed on your system:
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [Python](https://www.python.org/) version 3.10.x
### 4. Installation
Dify is composed of a backend and a frontend.
First, navigate to the backend directory with `cd api/`, then follow the [Backend README](api/README.md) to install it.
In a separate terminal, navigate to the frontend directory with `cd web/`, then follow the [Frontend README](web/README.md) to install it.
Check the [installation FAQ](https://docs.dify.ai/getting-started/faq/install-faq) for a list of common issues and steps to troubleshoot.
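For orientation, here is a condensed sketch of what those two READMEs typically boil down to (illustrative only; the linked READMEs are authoritative and the exact commands may differ):
```bash
# Backend (from the repo root)
cd api
cp .env.example .env              # then fill in the required secrets
pip install -r requirements.txt
flask db upgrade                  # apply database migrations
flask run --host 0.0.0.0 --port 5001 --debug

# Frontend (in a separate terminal)
cd web
npm install
npm run dev
```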
### 5. Visit dify in your browser
To validate your setup, head over to [http://localhost:3000](http://localhost:3000) (the default, or your self-configured URL and port) in your browser. You should now see Dify up and running.
## Developing
If you are adding a model provider, [this guide](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md) is for you.
If you are adding a tool provider to Agent or Workflow, [this guide](./api/core/tools/README.md) is for you.
Here is a brief overview of Dify's backend and frontend:
### Backend
Dify's backend is written in Python using [Flask](https://flask.palletsprojects.com/en/3.0.x/). It uses [SQLAlchemy](https://www.sqlalchemy.org/) for ORM and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic goes via Flask-login.
```
[api/]
├── constants // Constant settings used throughout the codebase
├── controllers // API route definitions and request handling logic
├── core // Core application management, model integrations, and tools
├── docker // Docker and container related configurations
├── events // Event handling and processing
├── extensions // Extensions with 3rd party frameworks/platforms
├── fields // Field definitions for serialization/marshalling
├── libs // Reusable libraries and helpers
├── migrations // Database migration scripts
├── models // Database models and schema definitions
├── services // Business logic definitions
├── storage // Private key storage
├── tasks // Handling of async tasks and background jobs
└── tests // Test related files
```
### Frontend
The website is bootstrapped on a [Next.js](https://nextjs.org/) boilerplate in TypeScript and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization.
```
[web/]
├── app // layouts, pages, and components
│ ├── (commonLayout) // common layout shared throughout the app
│ ├── (shareLayout) // layouts shared across token-specific sessions
│ ├── activate // activate page
│ ├── components // components shared by pages and layouts
│ ├── install // install page
│ ├── signin // signin page
│ └── styles // globally shared styles
├── assets // static assets
├── bin // scripts run at the build step
├── config // adjustable settings and options
├── context // shared contexts used by different parts of the app
├── dictionaries // language-specific translation files
├── docker // container configurations
├── hooks // reusable hooks
├── i18n // internationalization configuration
├── models // describes data models and shapes of API responses
├── public // meta assets like favicons
├── service // describes shapes of API actions
├── test
├── types // descriptions of function parameters and return values
└── utils // shared utility functions
```
## Submitting your PR
At last, time to open a pull request (PR) to our repository. For major features, we first merge them into the `deploy/dev` branch for testing before they go into the `main` branch.
If you run into issues like merge conflicts, or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests).
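For a typical flow (a sketch; the branch name is illustrative):
```bash
git checkout -b feat/my-feature      # branch off your fork
git commit -am "feat: add my feature"
git push origin feat/my-feature
# then open a PR from your fork against langgenius/dify on GitHub
```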
And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md).
## Getting Help
If you ever get stuck or have a burning question while contributing, simply ask in the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat.

View File

@@ -35,13 +35,10 @@
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
#
<p align="center">
<a href="https://trendshift.io/repositories/2152" target="_blank"><img src="https://trendshift.io/api/badge/repositories/2152" alt="langgenius%2Fdify | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. Here's a list of the core features:
</br> </br>
@@ -109,7 +106,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
<td align="center">Agent</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>
@@ -127,7 +124,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Enterprise Feature (SSO/Access control)</td>
<td align="center">Enterprise Features (SSO/Access control)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>

View File

@@ -35,6 +35,7 @@
<a href="./README_ES.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/西班牙语-d9d9d9"></a>
<a href="./README_KL.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/法语-d9d9d9"></a>
<a href="./README_FR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/克林贡语-d9d9d9"></a>
<a href="./README_KR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/韓國語-d9d9d9"></a>
</div>
@@ -111,7 +112,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
<td align="center">Agent</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>

View File

@@ -35,6 +35,7 @@
<a href="./README_ES.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_KL.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
#
@@ -111,7 +112,7 @@ es basados en LLM Function Calling o ReAct, y agregar herramientas preconstruida
<td align="center">Agente</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>

View File

@@ -35,6 +35,7 @@
<a href="./README_ES.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_KL.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
#
@@ -111,7 +112,7 @@ ités d'agent**:
<td align="center">Agent</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>

View File

@@ -35,6 +35,7 @@
<a href="./README_ES.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_KL.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
#
@@ -110,7 +111,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
<td align="center">エージェント</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>

View File

@@ -35,6 +35,7 @@
<a href="./README_ES.md"><img alt="Commits last month" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_KL.md"><img alt="Commits last month" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits last month" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits last month" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
#
@@ -111,7 +112,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
<td align="center">Agent</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">✅</td>
</tr>
<tr>

README_KR.md (new file, 243 lines added)
View File

@@ -0,0 +1,243 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
<a href="https://cloud.dify.ai">Dify 클라우드</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">셀프-호스팅</a> ·
<a href="https://docs.dify.ai">문서</a> ·
<a href="https://cal.com/guchenhe/60-min-meeting">기업 문의</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="한국어 README" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. Here's a list of the core features:</br> </br>
**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging the following features and beyond.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary and open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Integrated development environment**:
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Feature comparison
<table style="width: 100%;">
<tr>
<th align="center">기능</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">프로그래밍 접근 방식</td>
<td align="center">API + 앱 중심</td>
<td align="center">Python 코드</td>
<td align="center">앱 중심</td>
<td align="center">API 중심</td>
</tr>
<tr>
<td align="center">지원되는 LLMs</td>
<td align="center">다양한 종류</td>
<td align="center">다양한 종류</td>
<td align="center">다양한 종류</td>
<td align="center">OpenAI 전용</td>
</tr>
<tr>
<td align="center">RAG 엔진</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">에이전트</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">워크플로우</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">가시성</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">기업용 기능 (SSO/접근 제어)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">로컬 배포</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
</table>
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprises / organizations</br>**
We provide additional enterprise-centric features. [Schedule a meeting](https://cal.com/guchenhe/30min) with us or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Quick start
>Before installing Dify, make sure your machine meets the following minimum system requirements:
>- CPU >= 2 Core
>- RAM >= 4GB
</br>
The easiest way to start the Dify server is to run our [docker-compose.yml](docker/docker-compose.yaml) file. Before running the installation command, make sure [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd docker
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code).
## Next steps
If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
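For example, a minimal sketch of that loop (UPLOAD_FILE_SIZE_LIMIT is just one variable from the environment list; substitute whichever you need to change):
```bash
cd docker
# edit docker-compose.yaml, e.g. change an environment entry such as
#   UPLOAD_FILE_SIZE_LIMIT: 15  ->  UPLOAD_FILE_SIZE_LIMIT: 100
docker compose up -d    # recreate the containers with the new value
```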
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) that allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Community & contact
* [GitHub Discussions](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
Or, schedule a meeting directly with a team member:
<table>
<tr>
<th>Point of contact</th>
<th>Purpose</th>
</tr>
<tr>
<td><a href='https://cal.com/guchenhe/15min' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/9ebcd111-1205-4d71-83d5-948d70b809f5' alt='Git-Hub-README-Button-3x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>Business inquiries & product feedback</td>
</tr>
<tr>
<td><a href='https://cal.com/pinkbanana' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/d1edd00a-d7e4-4513-be6c-e57038e143fd' alt='Git-Hub-README-Button-2x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>Contributions, issues & feature requests</td>
</tr>
</table>
## Star history
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

View File

@@ -17,6 +17,9 @@ APP_WEB_URL=http://127.0.0.1:3000
# Files URL
FILES_URL=http://127.0.0.1:5001
# The time in seconds after the signature is rejected
FILES_ACCESS_TIMEOUT=300
# celery configuration
CELERY_BROKER_URL=redis://:difyai123456@localhost:6379/1
@@ -65,7 +68,7 @@ GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON=your-google-service-account-json-base64-stri
WEB_API_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
CONSOLE_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
# Vector database configuration, support: weaviate, qdrant, milvus, relyt, pgvecto_rs
# Vector database configuration, support: weaviate, qdrant, milvus, relyt, pgvecto_rs, pgvector
VECTOR_STORE=weaviate
# Weaviate configuration
@@ -102,6 +105,13 @@ PGVECTO_RS_USER=postgres
PGVECTO_RS_PASSWORD=difyai123456
PGVECTO_RS_DATABASE=postgres
# PGVector configuration
PGVECTOR_HOST=127.0.0.1
PGVECTOR_PORT=5433
PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=postgres
PGVECTOR_DATABASE=postgres
# Upload configuration
UPLOAD_FILE_SIZE_LIMIT=15
UPLOAD_FILE_BATCH_LIMIT=5
@@ -137,6 +147,7 @@ NOTION_INTERNAL_SECRET=you-internal-secret
ETL_TYPE=dify
UNSTRUCTURED_API_URL=
UNSTRUCTURED_API_KEY=
SSRF_PROXY_HTTP_URL=
SSRF_PROXY_HTTPS_URL=
@@ -168,3 +179,6 @@ HTTP_REQUEST_NODE_MAX_TEXT_SIZE=1048576 # 1MB
# Log file path
LOG_FILE=
# Indexing configuration
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=1000
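Taken together, the new options in this diff would let a pgvector-backed deployment be configured roughly like this (a sketch using the defaults shown above; adjust host, port, and credentials for your environment):
```bash
# .env: select pgvector as the vector store
VECTOR_STORE=pgvector
PGVECTOR_HOST=127.0.0.1
PGVECTOR_PORT=5433
PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=postgres
PGVECTOR_DATABASE=postgres
```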

View File

@@ -305,6 +305,14 @@ def migrate_knowledge_vector_database():
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == "pgvector":
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": 'pgvector',
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
else:
raise ValueError(f"Vector store {config.get('VECTOR_STORE')} is not supported.")
@@ -440,9 +448,64 @@ def convert_to_agent_apps():
click.echo(click.style('Congratulations! Converted {} agent apps.'.format(len(proceeded_app_ids)), fg='green'))
@click.command('add-qdrant-doc-id-index', help='add qdrant doc_id index.')
@click.option('--field', default='metadata.doc_id', prompt=False, help='index field , default is metadata.doc_id.')
def add_qdrant_doc_id_index(field: str):
click.echo(click.style('Start add qdrant doc_id index.', fg='green'))
config = current_app.config
vector_type = config.get('VECTOR_STORE')
if vector_type != "qdrant":
click.echo(click.style('Sorry, only support qdrant vector store.', fg='red'))
return
create_count = 0
try:
bindings = db.session.query(DatasetCollectionBinding).all()
if not bindings:
click.echo(click.style('Sorry, no dataset collection bindings found.', fg='red'))
return
import qdrant_client
from qdrant_client.http.exceptions import UnexpectedResponse
from qdrant_client.http.models import PayloadSchemaType
from core.rag.datasource.vdb.qdrant.qdrant_vector import QdrantConfig
for binding in bindings:
qdrant_config = QdrantConfig(
endpoint=config.get('QDRANT_URL'),
api_key=config.get('QDRANT_API_KEY'),
root_path=current_app.root_path,
timeout=config.get('QDRANT_CLIENT_TIMEOUT'),
grpc_port=config.get('QDRANT_GRPC_PORT'),
prefer_grpc=config.get('QDRANT_GRPC_ENABLED')
)
try:
client = qdrant_client.QdrantClient(**qdrant_config.to_qdrant_params())
# create payload index
client.create_payload_index(binding.collection_name, field,
field_schema=PayloadSchemaType.KEYWORD)
create_count += 1
except UnexpectedResponse as e:
# Collection does not exist, so return
if e.status_code == 404:
click.echo(click.style(f'Collection not found, collection_name:{binding.collection_name}.', fg='red'))
continue
# Some other error occurred, so re-raise the exception
else:
click.echo(click.style(f'Failed to create qdrant index, collection_name:{binding.collection_name}.', fg='red'))
except Exception as e:
click.echo(click.style('Failed to create qdrant client.', fg='red'))
click.echo(
click.style(f'Congratulations! Create {create_count} collection indexes.',
fg='green'))
def register_commands(app):
app.cli.add_command(reset_password)
app.cli.add_command(reset_email)
app.cli.add_command(reset_encrypt_key_pair)
app.cli.add_command(vdb_migrate)
app.cli.add_command(convert_to_agent_apps)
app.cli.add_command(add_qdrant_doc_id_index)
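For reference, once registered this command is invoked through the Flask CLI, roughly as below (a sketch; it assumes a configured api deployment with VECTOR_STORE set to qdrant, and --field defaults to metadata.doc_id per the diff above):
```bash
# from the api/ directory of a configured deployment
flask add-qdrant-doc-id-index --field metadata.doc_id
```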

View File

@@ -23,14 +23,17 @@ DEFAULTS = {
'SERVICE_API_URL': 'https://api.dify.ai',
'APP_WEB_URL': 'https://udify.app',
'FILES_URL': '',
'FILES_ACCESS_TIMEOUT': 300,
'S3_ADDRESS_STYLE': 'auto',
'STORAGE_TYPE': 'local',
'STORAGE_LOCAL_PATH': 'storage',
'CHECK_UPDATE_URL': 'https://updates.dify.ai',
'DEPLOY_ENV': 'PRODUCTION',
'SQLALCHEMY_DATABASE_URI_SCHEME': 'postgresql',
'SQLALCHEMY_POOL_SIZE': 30,
'SQLALCHEMY_MAX_OVERFLOW': 10,
'SQLALCHEMY_POOL_RECYCLE': 3600,
'SQLALCHEMY_POOL_PRE_PING': 'False',
'SQLALCHEMY_ECHO': 'False',
'SENTRY_TRACES_SAMPLE_RATE': 1.0,
'SENTRY_PROFILES_SAMPLE_RATE': 1.0,
@@ -77,6 +80,9 @@ DEFAULTS = {
'KEYWORD_DATA_SOURCE_TYPE': 'database',
'INNER_API': 'False',
'ENTERPRISE_ENABLED': 'False',
'INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH': 1000,
'WORKFLOW_MAX_EXECUTION_STEPS': 50,
'WORKFLOW_MAX_EXECUTION_TIME': 600,
}
@@ -107,7 +113,7 @@ class Config:
# ------------------------
# General Configurations.
# ------------------------
self.CURRENT_VERSION = "0.6.7"
self.CURRENT_VERSION = "0.6.9"
self.COMMIT_SHA = get_env('COMMIT_SHA')
self.EDITION = get_env('EDITION')
self.DEPLOY_ENV = get_env('DEPLOY_ENV')
@@ -138,6 +144,10 @@ class Config:
# Url is signed and has expiration time.
self.FILES_URL = get_env('FILES_URL') if get_env('FILES_URL') else self.CONSOLE_API_URL
# File Access Time specifies a time interval in seconds for the file to be accessed.
# The default value is 300 seconds.
self.FILES_ACCESS_TIMEOUT = int(get_env('FILES_ACCESS_TIMEOUT'))
# Your App secret key will be used for securely signing the session cookie
# Make sure you are changing this key for your deployment with a strong key.
# You can generate a strong key using `openssl rand -base64 42`.
@@ -165,14 +175,17 @@ class Config:
key: get_env(key) for key in
['DB_USERNAME', 'DB_PASSWORD', 'DB_HOST', 'DB_PORT', 'DB_DATABASE', 'DB_CHARSET']
}
self.SQLALCHEMY_DATABASE_URI_SCHEME = get_env('SQLALCHEMY_DATABASE_URI_SCHEME')
db_extras = f"?client_encoding={db_credentials['DB_CHARSET']}" if db_credentials['DB_CHARSET'] else ""
self.SQLALCHEMY_DATABASE_URI = f"postgresql://{db_credentials['DB_USERNAME']}:{db_credentials['DB_PASSWORD']}@{db_credentials['DB_HOST']}:{db_credentials['DB_PORT']}/{db_credentials['DB_DATABASE']}{db_extras}"
self.SQLALCHEMY_DATABASE_URI = f"{self.SQLALCHEMY_DATABASE_URI_SCHEME}://{db_credentials['DB_USERNAME']}:{db_credentials['DB_PASSWORD']}@{db_credentials['DB_HOST']}:{db_credentials['DB_PORT']}/{db_credentials['DB_DATABASE']}{db_extras}"
self.SQLALCHEMY_ENGINE_OPTIONS = {
'pool_size': int(get_env('SQLALCHEMY_POOL_SIZE')),
'max_overflow': int(get_env('SQLALCHEMY_MAX_OVERFLOW')),
'pool_recycle': int(get_env('SQLALCHEMY_POOL_RECYCLE'))
'pool_recycle': int(get_env('SQLALCHEMY_POOL_RECYCLE')),
'pool_pre_ping': get_bool_env('SQLALCHEMY_POOL_PRE_PING'),
'connect_args': {'options': '-c timezone=UTC'},
}
self.SQLALCHEMY_ECHO = get_bool_env('SQLALCHEMY_ECHO')
@@ -211,18 +224,18 @@ class Config:
self.AZURE_BLOB_ACCOUNT_KEY = get_env('AZURE_BLOB_ACCOUNT_KEY')
self.AZURE_BLOB_CONTAINER_NAME = get_env('AZURE_BLOB_CONTAINER_NAME')
self.AZURE_BLOB_ACCOUNT_URL = get_env('AZURE_BLOB_ACCOUNT_URL')
self.ALIYUN_OSS_BUCKET_NAME=get_env('ALIYUN_OSS_BUCKET_NAME')
self.ALIYUN_OSS_ACCESS_KEY=get_env('ALIYUN_OSS_ACCESS_KEY')
self.ALIYUN_OSS_SECRET_KEY=get_env('ALIYUN_OSS_SECRET_KEY')
self.ALIYUN_OSS_ENDPOINT=get_env('ALIYUN_OSS_ENDPOINT')
self.ALIYUN_OSS_REGION=get_env('ALIYUN_OSS_REGION')
self.ALIYUN_OSS_AUTH_VERSION=get_env('ALIYUN_OSS_AUTH_VERSION')
self.ALIYUN_OSS_BUCKET_NAME = get_env('ALIYUN_OSS_BUCKET_NAME')
self.ALIYUN_OSS_ACCESS_KEY = get_env('ALIYUN_OSS_ACCESS_KEY')
self.ALIYUN_OSS_SECRET_KEY = get_env('ALIYUN_OSS_SECRET_KEY')
self.ALIYUN_OSS_ENDPOINT = get_env('ALIYUN_OSS_ENDPOINT')
self.ALIYUN_OSS_REGION = get_env('ALIYUN_OSS_REGION')
self.ALIYUN_OSS_AUTH_VERSION = get_env('ALIYUN_OSS_AUTH_VERSION')
self.GOOGLE_STORAGE_BUCKET_NAME = get_env('GOOGLE_STORAGE_BUCKET_NAME')
self.GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 = get_env('GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64')
# ------------------------
# Vector Store Configurations.
# Currently, only support: qdrant, milvus, zilliz, weaviate, relyt
# Currently, only support: qdrant, milvus, zilliz, weaviate, relyt, pgvector
# ------------------------
self.VECTOR_STORE = get_env('VECTOR_STORE')
self.KEYWORD_STORE = get_env('KEYWORD_STORE')
@@ -261,6 +274,13 @@ class Config:
self.PGVECTO_RS_PASSWORD = get_env('PGVECTO_RS_PASSWORD')
self.PGVECTO_RS_DATABASE = get_env('PGVECTO_RS_DATABASE')
# pgvector settings
self.PGVECTOR_HOST = get_env('PGVECTOR_HOST')
self.PGVECTOR_PORT = get_env('PGVECTOR_PORT')
self.PGVECTOR_USER = get_env('PGVECTOR_USER')
self.PGVECTOR_PASSWORD = get_env('PGVECTOR_PASSWORD')
self.PGVECTOR_DATABASE = get_env('PGVECTOR_DATABASE')
# ------------------------
# Mail Configurations.
# ------------------------
@@ -274,7 +294,7 @@ class Config:
self.SMTP_USERNAME = get_env('SMTP_USERNAME')
self.SMTP_PASSWORD = get_env('SMTP_PASSWORD')
self.SMTP_USE_TLS = get_bool_env('SMTP_USE_TLS')
# ------------------------
# Workspace Configurations.
# ------------------------
@@ -302,6 +322,9 @@ class Config:
self.UPLOAD_FILE_BATCH_LIMIT = int(get_env('UPLOAD_FILE_BATCH_LIMIT'))
self.UPLOAD_IMAGE_FILE_SIZE_LIMIT = int(get_env('UPLOAD_IMAGE_FILE_SIZE_LIMIT'))
self.WORKFLOW_MAX_EXECUTION_STEPS = int(get_env('WORKFLOW_MAX_EXECUTION_STEPS'))
self.WORKFLOW_MAX_EXECUTION_TIME = int(get_env('WORKFLOW_MAX_EXECUTION_TIME'))
# Moderation in app Configurations.
self.OUTPUT_MODERATION_BUFFER_SIZE = int(get_env('OUTPUT_MODERATION_BUFFER_SIZE'))
@@ -354,6 +377,7 @@ class Config:
self.ETL_TYPE = get_env('ETL_TYPE')
self.UNSTRUCTURED_API_URL = get_env('UNSTRUCTURED_API_URL')
self.UNSTRUCTURED_API_KEY = get_env('UNSTRUCTURED_API_KEY')
self.BILLING_ENABLED = get_bool_env('BILLING_ENABLED')
self.CAN_REPLACE_LOGO = get_bool_env('CAN_REPLACE_LOGO')
@@ -367,3 +391,8 @@ class Config:
self.KEYWORD_DATA_SOURCE_TYPE = get_env('KEYWORD_DATA_SOURCE_TYPE')
self.ENTERPRISE_ENABLED = get_bool_env('ENTERPRISE_ENABLED')
# ------------------------
# Indexing Configurations.
# ------------------------
self.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH = get_env('INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH')

View File

@@ -16,6 +16,7 @@ language_timezone_mapping = {
'it-IT': 'Europe/Rome',
'uk-UA': 'Europe/Kyiv',
'vi-VN': 'Asia/Ho_Chi_Minh',
'ro-RO': 'Europe/Bucharest',
'pl-PL': 'Europe/Warsaw',
}

View File

@@ -24,7 +24,8 @@
"description": "Welcome to your personalized Investment Analysis Copilot service, where we delve into the depths of stock analysis to provide you with comprehensive insights. \n",
"is_listed": true,
"position": 0,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -40,7 +41,8 @@
"description": "Code interpreter, clarifying the syntax and semantics of the code.",
"is_listed": true,
"position": 13,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -56,7 +58,8 @@
"description": "Hello, I am your creative partner in bringing ideas to vivid life! I can assist you in creating stunning designs by leveraging abilities of DALL\u00b7E 3. ",
"is_listed": true,
"position": 4,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -72,7 +75,8 @@
"description": "Fully SEO Optimized Article including FAQs",
"is_listed": true,
"position": 1,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -88,7 +92,8 @@
"description": "Generate Flat Style Image",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -104,7 +109,8 @@
"description": "A multilingual translator that provides translation capabilities in multiple languages. Input the text you need to translate and select the target language.",
"is_listed": true,
"position": 10,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -120,7 +126,8 @@
"description": "I am a YouTube Channel Data Analysis Copilot, I am here to provide expert data analysis tailored to your needs. ",
"is_listed": true,
"position": 2,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -136,7 +143,8 @@
"description": "Meeting minutes generator",
"is_listed": true,
"position": 0,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -152,7 +160,8 @@
"description": "Tell me the main elements, I will generate a cyberpunk style image for you. ",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -168,7 +177,8 @@
"description": "Write SQL from natural language by pasting in your schema with the request.Please describe your query requirements in natural language and select the target database type.",
"is_listed": true,
"position": 13,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -184,7 +194,8 @@
"description": "Welcome to your personalized travel service with Consultant! \ud83c\udf0d\u2708\ufe0f Ready to embark on a journey filled with adventure and relaxation? Let's dive into creating your unforgettable travel experience. ",
"is_listed": true,
"position": 3,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -200,7 +211,8 @@
"description": "I can answer your questions related to strategic marketing.",
"is_listed": true,
"position": 10,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -216,7 +228,8 @@
"description": "A simulated front-end interviewer that tests the skill level of front-end development through questioning.",
"is_listed": true,
"position": 19,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -232,7 +245,8 @@
"description": "I'm here to hear about your feature request about Dify and help you flesh it out further. What's on your mind?",
"is_listed": true,
"position": 6,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
}
]
},
@@ -261,7 +275,8 @@
"description": "\u4e00\u4e2a\u6a21\u62df\u7684\u524d\u7aef\u9762\u8bd5\u5b98\uff0c\u901a\u8fc7\u63d0\u95ee\u7684\u65b9\u5f0f\u5bf9\u524d\u7aef\u5f00\u53d1\u7684\u6280\u80fd\u6c34\u5e73\u8fdb\u884c\u68c0\u9a8c\u3002",
"is_listed": true,
"position": 20,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -277,7 +292,8 @@
"description": "\u8f93\u5165\u76f8\u5173\u5143\u7d20\uff0c\u4e3a\u4f60\u751f\u6210\u6241\u5e73\u63d2\u753b\u98ce\u683c\u7684\u5c01\u9762\u56fe\u7247",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -293,7 +309,8 @@
"description": "\u4e00\u4e2a\u591a\u8bed\u8a00\u7ffb\u8bd1\u5668\uff0c\u63d0\u4f9b\u591a\u79cd\u8bed\u8a00\u7ffb\u8bd1\u80fd\u529b\uff0c\u8f93\u5165\u4f60\u9700\u8981\u7ffb\u8bd1\u7684\u6587\u672c\uff0c\u9009\u62e9\u76ee\u6807\u8bed\u8a00\u5373\u53ef\u3002\u63d0\u793a\u8bcd\u6765\u81ea\u5b9d\u7389\u3002",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -309,7 +326,8 @@
"description": "\u6211\u5c06\u5e2e\u52a9\u4f60\u628a\u81ea\u7136\u8bed\u8a00\u8f6c\u5316\u6210\u6307\u5b9a\u7684\u6570\u636e\u5e93\u67e5\u8be2 SQL \u8bed\u53e5\uff0c\u8bf7\u5728\u4e0b\u65b9\u8f93\u5165\u4f60\u9700\u8981\u67e5\u8be2\u7684\u6761\u4ef6\uff0c\u5e76\u9009\u62e9\u76ee\u6807\u6570\u636e\u5e93\u7c7b\u578b\u3002",
"is_listed": true,
"position": 12,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -325,7 +343,8 @@
"description": "\u9610\u660e\u4ee3\u7801\u7684\u8bed\u6cd5\u548c\u8bed\u4e49\u3002",
"is_listed": true,
"position": 2,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -341,7 +360,8 @@
"description": "\u8f93\u5165\u76f8\u5173\u5143\u7d20\uff0c\u4e3a\u4f60\u751f\u6210\u8d5b\u535a\u670b\u514b\u98ce\u683c\u7684\u63d2\u753b",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -357,7 +377,8 @@
"description": "\u6211\u662f\u4e00\u540dSEO\u4e13\u5bb6\uff0c\u53ef\u4ee5\u6839\u636e\u60a8\u63d0\u4f9b\u7684\u6807\u9898\u3001\u5173\u952e\u8bcd\u3001\u76f8\u5173\u4fe1\u606f\u6765\u6279\u91cf\u751f\u6210SEO\u6587\u7ae0\u3002",
"is_listed": true,
"position": 10,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -373,7 +394,8 @@
"description": "\u5e2e\u4f60\u91cd\u65b0\u7ec4\u7ec7\u548c\u8f93\u51fa\u6df7\u4e71\u590d\u6742\u7684\u4f1a\u8bae\u7eaa\u8981\u3002",
"is_listed": true,
"position": 6,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -389,7 +411,8 @@
"description": "\u6b22\u8fce\u4f7f\u7528\u60a8\u7684\u4e2a\u6027\u5316\u7f8e\u80a1\u6295\u8d44\u5206\u6790\u52a9\u624b\uff0c\u5728\u8fd9\u91cc\u6211\u4eec\u6df1\u5165\u7684\u8fdb\u884c\u80a1\u7968\u5206\u6790\uff0c\u4e3a\u60a8\u63d0\u4f9b\u5168\u9762\u7684\u6d1e\u5bdf\u3002",
"is_listed": true,
"position": 0,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -405,7 +428,8 @@
"description": "\u60a8\u597d\uff0c\u6211\u662f\u60a8\u7684\u521b\u610f\u4f19\u4f34\uff0c\u5c06\u5e2e\u52a9\u60a8\u5c06\u60f3\u6cd5\u751f\u52a8\u5730\u5b9e\u73b0\uff01\u6211\u53ef\u4ee5\u534f\u52a9\u60a8\u5229\u7528DALL\u00b7E 3\u7684\u80fd\u529b\u521b\u9020\u51fa\u4ee4\u4eba\u60ca\u53f9\u7684\u8bbe\u8ba1\u3002",
"is_listed": true,
"position": 4,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -421,7 +445,8 @@
"description": "\u7ffb\u8bd1\u4e13\u5bb6\uff1a\u63d0\u4f9b\u4e2d\u82f1\u6587\u4e92\u8bd1",
"is_listed": true,
"position": 4,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -437,7 +462,8 @@
"description": "\u60a8\u7684\u79c1\u4eba\u5b66\u4e60\u5bfc\u5e08\uff0c\u5e2e\u60a8\u5236\u5b9a\u5b66\u4e60\u8ba1\u5212\u5e76\u8f85\u5bfc",
"is_listed": true,
"position": 26,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -453,7 +479,8 @@
"description": "\u5e2e\u4f60\u64b0\u5199\u8bba\u6587\u6587\u732e\u7efc\u8ff0",
"is_listed": true,
"position": 7,
"privacy_policy": "https://dify.ai"
"privacy_policy": "https://dify.ai",
"custom_disclaimer": null
},
{
"app": {
@@ -469,7 +496,8 @@
"description": "\u4f60\u597d\uff0c\u544a\u8bc9\u6211\u60a8\u60f3\u5206\u6790\u7684 YouTube \u9891\u9053\uff0c\u6211\u5c06\u4e3a\u60a8\u6574\u7406\u4e00\u4efd\u5b8c\u6574\u7684\u6570\u636e\u5206\u6790\u62a5\u544a\u3002",
"is_listed": true,
"position": 0,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
},
{
"app": {
@@ -485,7 +513,8 @@
"description": "\u6b22\u8fce\u4f7f\u7528\u60a8\u7684\u4e2a\u6027\u5316\u65c5\u884c\u670d\u52a1\u987e\u95ee\uff01\ud83c\udf0d\u2708\ufe0f \u51c6\u5907\u597d\u8e0f\u4e0a\u4e00\u6bb5\u5145\u6ee1\u5192\u9669\u4e0e\u653e\u677e\u7684\u65c5\u7a0b\u4e86\u5417\uff1f\u8ba9\u6211\u4eec\u4e00\u8d77\u6df1\u5165\u6253\u9020\u60a8\u96be\u5fd8\u7684\u65c5\u884c\u4f53\u9a8c\u5427\u3002",
"is_listed": true,
"position": 0,
"privacy_policy": null
"privacy_policy": null,
"custom_disclaimer": null
}
]
},

View File

@@ -37,9 +37,6 @@ from .billing import billing
# Import datasets controllers
from .datasets import data_source, datasets, datasets_document, datasets_segments, file, hit_testing
# Import enterprise controllers
from .enterprise import enterprise_sso
# Import explore controllers
from .explore import (
audio,

View File

@@ -48,6 +48,7 @@ class InsertExploreAppListApi(Resource):
parser.add_argument('desc', type=str, location='json')
parser.add_argument('copyright', type=str, location='json')
parser.add_argument('privacy_policy', type=str, location='json')
parser.add_argument('custom_disclaimer', type=str, location='json')
parser.add_argument('language', type=supported_language, required=True, nullable=False, location='json')
parser.add_argument('category', type=str, required=True, nullable=False, location='json')
parser.add_argument('position', type=int, required=True, nullable=False, location='json')
@@ -62,6 +63,7 @@ class InsertExploreAppListApi(Resource):
desc = args['desc'] if args['desc'] else ''
copy_right = args['copyright'] if args['copyright'] else ''
privacy_policy = args['privacy_policy'] if args['privacy_policy'] else ''
custom_disclaimer = args['custom_disclaimer'] if args['custom_disclaimer'] else ''
else:
desc = site.description if site.description else \
args['desc'] if args['desc'] else ''
@@ -69,6 +71,8 @@ class InsertExploreAppListApi(Resource):
args['copyright'] if args['copyright'] else ''
privacy_policy = site.privacy_policy if site.privacy_policy else \
args['privacy_policy'] if args['privacy_policy'] else ''
custom_disclaimer = site.custom_disclaimer if site.custom_disclaimer else \
args['custom_disclaimer'] if args['custom_disclaimer'] else ''
recommended_app = RecommendedApp.query.filter(RecommendedApp.app_id == args['app_id']).first()
@@ -78,6 +82,7 @@ class InsertExploreAppListApi(Resource):
description=desc,
copyright=copy_right,
privacy_policy=privacy_policy,
custom_disclaimer=custom_disclaimer,
language=args['language'],
category=args['category'],
position=args['position']
@@ -93,6 +98,7 @@ class InsertExploreAppListApi(Resource):
recommended_app.description = desc
recommended_app.copyright = copy_right
recommended_app.privacy_policy = privacy_policy
recommended_app.custom_disclaimer = custom_disclaimer
recommended_app.language = args['language']
recommended_app.category = args['category']
recommended_app.position = args['position']
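
The custom_disclaimer plumbing above reuses the three-way fallback already applied to desc, copyright, and privacy_policy: prefer the stored site value, then the request argument, then an empty string. A minimal sketch of that precedence, with a hypothetical helper name that is not part of the diff:

def first_truthy(*candidates, default=''):
    # Mirrors the nested `x if x else y if y else ''` chains in
    # InsertExploreAppListApi.
    for value in candidates:
        if value:
            return value
    return default

# e.g. custom_disclaimer = first_truthy(site.custom_disclaimer, args['custom_disclaimer'])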

View File

@@ -85,7 +85,7 @@ class ChatMessageTextApi(Resource):
response = AudioService.transcript_tts(
app_model=app_model,
text=request.form['text'],
voice=request.form.get('voice'),
voice=request.form['voice'] if request.form.get('voice') else app_model.app_model_config.text_to_speech_dict.get('voice'),
streaming=False
)
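
The text-to-speech endpoint no longer sends a missing `voice` straight through; it falls back to the voice configured on the app, and the same one-line change is applied to the explore and web audio controllers further down. A condensed sketch, assuming the attribute path shown in the diff:

def resolve_voice(form, app_model):
    # Prefer an explicitly submitted voice, else the app's configured
    # text_to_speech voice (may still be None if neither is set).
    return form.get('voice') or app_model.app_model_config.text_to_speech_dict.get('voice')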

View File

@@ -91,3 +91,9 @@ class DraftWorkflowNotExist(BaseHTTPException):
error_code = 'draft_workflow_not_exist'
description = "Draft workflow need to be initialized."
code = 400
class DraftWorkflowNotSync(BaseHTTPException):
error_code = 'draft_workflow_not_sync'
description = "Workflow graph might have been modified, please refresh and resubmit."
code = 400

View File

@@ -23,6 +23,7 @@ def parse_app_site_args():
parser.add_argument('customize_domain', type=str, required=False, location='json')
parser.add_argument('copyright', type=str, required=False, location='json')
parser.add_argument('privacy_policy', type=str, required=False, location='json')
parser.add_argument('custom_disclaimer', type=str, required=False, location='json')
parser.add_argument('customize_token_strategy', type=str, choices=['must', 'allow', 'not_allow'],
required=False,
location='json')
@@ -56,6 +57,7 @@ class AppSite(Resource):
'customize_domain',
'copyright',
'privacy_policy',
'custom_disclaimer',
'customize_token_strategy',
'prompt_public'
]:

View File

@@ -7,7 +7,7 @@ from werkzeug.exceptions import InternalServerError, NotFound
import services
from controllers.console import api
from controllers.console.app.error import ConversationCompletedError, DraftWorkflowNotExist
from controllers.console.app.error import ConversationCompletedError, DraftWorkflowNotExist, DraftWorkflowNotSync
from controllers.console.app.wraps import get_app_model
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
@@ -20,6 +20,7 @@ from libs.helper import TimestampField, uuid_value
from libs.login import current_user, login_required
from models.model import App, AppMode
from services.app_generate_service import AppGenerateService
from services.errors.app import WorkflowHashNotEqualError
from services.workflow_service import WorkflowService
logger = logging.getLogger(__name__)
@@ -59,6 +60,7 @@ class DraftWorkflowApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument('graph', type=dict, required=True, nullable=False, location='json')
parser.add_argument('features', type=dict, required=True, nullable=False, location='json')
parser.add_argument('hash', type=str, required=False, location='json')
args = parser.parse_args()
elif 'text/plain' in content_type:
try:
@@ -71,7 +73,8 @@ class DraftWorkflowApi(Resource):
args = {
'graph': data.get('graph'),
'features': data.get('features')
'features': data.get('features'),
'hash': data.get('hash')
}
except json.JSONDecodeError:
return {'message': 'Invalid JSON data'}, 400
@@ -79,15 +82,21 @@ class DraftWorkflowApi(Resource):
abort(415)
workflow_service = WorkflowService()
workflow = workflow_service.sync_draft_workflow(
app_model=app_model,
graph=args.get('graph'),
features=args.get('features'),
account=current_user
)
try:
workflow = workflow_service.sync_draft_workflow(
app_model=app_model,
graph=args.get('graph'),
features=args.get('features'),
unique_hash=args.get('hash'),
account=current_user
)
except WorkflowHashNotEqualError:
raise DraftWorkflowNotSync()
return {
"result": "success",
"hash": workflow.unique_hash,
"updated_at": TimestampField().format(workflow.updated_at or workflow.created_at)
}
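
Draft saves are now protected by an optimistic lock: the client echoes back the `hash` returned by its previous save, `sync_draft_workflow` compares it with the stored `unique_hash`, and a mismatch surfaces as the `DraftWorkflowNotSync` 400 defined earlier, prompting the editor to refresh and resubmit. A minimal sketch of the check, assuming the service only enforces it when the client actually sent a hash:

class WorkflowHashNotEqualError(Exception):
    """Stand-in for services.errors.app.WorkflowHashNotEqualError."""

def check_unique_hash(stored_hash, client_hash):
    # Sketch only; the real sync_draft_workflow also persists the graph
    # and features when the hashes agree.
    if client_hash and stored_hash and client_hash != stored_hash:
        raise WorkflowHashNotEqualError()  # surfaced as DraftWorkflowNotSync (400)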
@@ -128,6 +137,71 @@ class AdvancedChatDraftWorkflowRunApi(Resource):
logging.exception("internal server error.")
raise InternalServerError()
class AdvancedChatDraftRunIterationNodeApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
def post(self, app_model: App, node_id: str):
"""
Run draft workflow iteration node
"""
parser = reqparse.RequestParser()
parser.add_argument('inputs', type=dict, location='json')
args = parser.parse_args()
try:
response = AppGenerateService.generate_single_iteration(
app_model=app_model,
user=current_user,
node_id=node_id,
args=args,
streaming=True
)
return helper.compact_generate_response(response)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
except services.errors.conversation.ConversationCompletedError:
raise ConversationCompletedError()
except ValueError as e:
raise e
except Exception as e:
logging.exception("internal server error.")
raise InternalServerError()
class WorkflowDraftRunIterationNodeApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
"""
Run draft workflow iteration node
"""
parser = reqparse.RequestParser()
parser.add_argument('inputs', type=dict, location='json')
args = parser.parse_args()
try:
response = AppGenerateService.generate_single_iteration(
app_model=app_model,
user=current_user,
node_id=node_id,
args=args,
streaming=True
)
return helper.compact_generate_response(response)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
except services.errors.conversation.ConversationCompletedError:
raise ConversationCompletedError()
except ValueError as e:
raise e
except Exception as e:
logging.exception("internal server error.")
raise InternalServerError()
class DraftWorkflowRunApi(Resource):
@setup_required
@@ -317,6 +391,8 @@ api.add_resource(AdvancedChatDraftWorkflowRunApi, '/apps/<uuid:app_id>/advanced-
api.add_resource(DraftWorkflowRunApi, '/apps/<uuid:app_id>/workflows/draft/run')
api.add_resource(WorkflowTaskStopApi, '/apps/<uuid:app_id>/workflow-runs/tasks/<string:task_id>/stop')
api.add_resource(DraftWorkflowNodeRunApi, '/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/run')
api.add_resource(AdvancedChatDraftRunIterationNodeApi, '/apps/<uuid:app_id>/advanced-chat/workflows/draft/iteration/nodes/<string:node_id>/run')
api.add_resource(WorkflowDraftRunIterationNodeApi, '/apps/<uuid:app_id>/workflows/draft/iteration/nodes/<string:node_id>/run')
api.add_resource(PublishedWorkflowApi, '/apps/<uuid:app_id>/workflows/publish')
api.add_resource(DefaultBlockConfigsApi, '/apps/<uuid:app_id>/workflows/default-workflow-block-configs')
api.add_resource(DefaultBlockConfigApi, '/apps/<uuid:app_id>/workflows/default-workflow-block-configs'
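
The two iteration routes registered above expose single-node debugging for iteration/loop nodes. A hypothetical client call (host, ids, and token are made up; only the route shape and the `inputs` payload come from the code above):

import requests

console_base = 'http://localhost:5001/console/api'  # assumed base URL
app_id = '00000000-0000-0000-0000-000000000000'     # made-up app id
node_id = 'iteration-node-1'                        # made-up node id
resp = requests.post(
    f'{console_base}/apps/{app_id}/workflows/draft/iteration/nodes/{node_id}/run',
    json={'inputs': {'item': 'first value'}},
    headers={'Authorization': 'Bearer <console-token>'},
    stream=True,  # the handler returns a streaming response
)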

View File

@@ -476,13 +476,13 @@ class DatasetRetrievalSettingApi(Resource):
@account_initialization_required
def get(self):
vector_type = current_app.config['VECTOR_STORE']
if vector_type == 'milvus' or vector_type == 'pgvecto_rs' or vector_type == 'relyt':
if vector_type in {"milvus", "relyt", "pgvector", "pgvecto_rs"}:
return {
'retrieval_method': [
'semantic_search'
]
}
elif vector_type == 'qdrant' or vector_type == 'weaviate':
elif vector_type in {"qdrant", "weaviate"}:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
@@ -497,14 +497,13 @@ class DatasetRetrievalSettingMockApi(Resource):
@login_required
@account_initialization_required
def get(self, vector_type):
if vector_type == 'milvus' or vector_type == 'relyt':
if vector_type in {'milvus', 'relyt', 'pgvector'}:
return {
'retrieval_method': [
'semantic_search'
]
}
elif vector_type == 'qdrant' or vector_type == 'weaviate':
elif vector_type in {'qdrant', 'weaviate'}:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
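
Besides folding the chained `or` comparisons into set-membership tests, the rewrite adds `pgvector` to the semantic-search-only group in both endpoints. A table-driven restatement (an alternative formulation merging the two endpoints' sets, not the code in the diff):

RETRIEVAL_METHODS_BY_STORE = {
    # semantic search only
    'milvus': ['semantic_search'],
    'relyt': ['semantic_search'],
    'pgvector': ['semantic_search'],
    'pgvecto_rs': ['semantic_search'],
    # full method set
    'qdrant': ['semantic_search', 'full_text_search', 'hybrid_search'],
    'weaviate': ['semantic_search', 'full_text_search', 'hybrid_search'],
}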

View File

@@ -1,59 +0,0 @@
from flask import current_app, redirect
from flask_restful import Resource, reqparse
from controllers.console import api
from controllers.console.setup import setup_required
from services.enterprise.enterprise_sso_service import EnterpriseSSOService
class EnterpriseSSOSamlLogin(Resource):
@setup_required
def get(self):
return EnterpriseSSOService.get_sso_saml_login()
class EnterpriseSSOSamlAcs(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument('SAMLResponse', type=str, required=True, location='form')
args = parser.parse_args()
saml_response = args['SAMLResponse']
try:
token = EnterpriseSSOService.post_sso_saml_acs(saml_response)
return redirect(f'{current_app.config.get("CONSOLE_WEB_URL")}/signin?console_token={token}')
except Exception as e:
return redirect(f'{current_app.config.get("CONSOLE_WEB_URL")}/signin?message={str(e)}')
class EnterpriseSSOOidcLogin(Resource):
@setup_required
def get(self):
return EnterpriseSSOService.get_sso_oidc_login()
class EnterpriseSSOOidcCallback(Resource):
@setup_required
def get(self):
parser = reqparse.RequestParser()
parser.add_argument('state', type=str, required=True, location='args')
parser.add_argument('code', type=str, required=True, location='args')
parser.add_argument('oidc-state', type=str, required=True, location='cookies')
args = parser.parse_args()
try:
token = EnterpriseSSOService.get_sso_oidc_callback(args)
return redirect(f'{current_app.config.get("CONSOLE_WEB_URL")}/signin?console_token={token}')
except Exception as e:
return redirect(f'{current_app.config.get("CONSOLE_WEB_URL")}/signin?message={str(e)}')
api.add_resource(EnterpriseSSOSamlLogin, '/enterprise/sso/saml/login')
api.add_resource(EnterpriseSSOSamlAcs, '/enterprise/sso/saml/acs')
api.add_resource(EnterpriseSSOOidcLogin, '/enterprise/sso/oidc/login')
api.add_resource(EnterpriseSSOOidcCallback, '/enterprise/sso/oidc/callback')

View File

@@ -76,7 +76,7 @@ class ChatTextApi(InstalledAppResource):
response = AudioService.transcript_tts(
app_model=app_model,
text=request.form['text'],
voice=request.form.get('voice'),
voice=request.form['voice'] if request.form.get('voice') else app_model.app_model_config.text_to_speech_dict.get('voice'),
streaming=False
)
return {'data': response.data.decode('latin1')}

View File

@@ -21,6 +21,7 @@ recommended_app_fields = {
'description': fields.String(attribute='description'),
'copyright': fields.String,
'privacy_policy': fields.String,
'custom_disclaimer': fields.String,
'category': fields.String,
'position': fields.Integer,
'is_listed': fields.Boolean

View File

@@ -1,7 +1,6 @@
from flask_login import current_user
from flask_restful import Resource
from services.enterprise.enterprise_feature_service import EnterpriseFeatureService
from services.feature_service import FeatureService
from . import api
@@ -15,10 +14,10 @@ class FeatureApi(Resource):
return FeatureService.get_features(current_user.current_tenant_id).dict()
class EnterpriseFeatureApi(Resource):
class SystemFeatureApi(Resource):
def get(self):
return EnterpriseFeatureService.get_enterprise_features().dict()
return FeatureService.get_system_features().dict()
api.add_resource(FeatureApi, '/features')
api.add_resource(EnterpriseFeatureApi, '/enterprise-features')
api.add_resource(SystemFeatureApi, '/system-features')
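
`EnterpriseFeatureApi` at `/enterprise-features` becomes `SystemFeatureApi` at `/system-features`, backed by the general `FeatureService` rather than the enterprise-only service. A guess at the minimal shape of the returned model, inferring the one field this diff actually exercises (the SSO checks in the web controllers below); the real model presumably carries more fields:

from pydantic import BaseModel

class SystemFeatureModel(BaseModel):
    # Only field inferable from this diff; the default is an assumption.
    sso_enforced_for_web: bool = False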

View File

@@ -9,8 +9,13 @@ from controllers.console import api
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.helper import alphanumeric, uuid_value
from libs.login import login_required
from services.tools_manage_service import ToolManageService
from services.tools.api_tools_manage_service import ApiToolManageService
from services.tools.builtin_tools_manage_service import BuiltinToolManageService
from services.tools.tool_labels_service import ToolLabelsService
from services.tools.tools_manage_service import ToolCommonService
from services.tools.workflow_tools_manage_service import WorkflowToolManageService
class ToolProviderListApi(Resource):
@@ -21,7 +26,11 @@ class ToolProviderListApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return ToolManageService.list_tool_providers(user_id, tenant_id)
req = reqparse.RequestParser()
req.add_argument('type', type=str, choices=['builtin', 'model', 'api', 'workflow'], required=False, nullable=True, location='args')
args = req.parse_args()
return ToolCommonService.list_tool_providers(user_id, tenant_id, args.get('type', None))
class ToolBuiltinProviderListToolsApi(Resource):
@setup_required
@@ -31,7 +40,7 @@ class ToolBuiltinProviderListToolsApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return jsonable_encoder(ToolManageService.list_builtin_tool_provider_tools(
return jsonable_encoder(BuiltinToolManageService.list_builtin_tool_provider_tools(
user_id,
tenant_id,
provider,
@@ -48,7 +57,7 @@ class ToolBuiltinProviderDeleteApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return ToolManageService.delete_builtin_tool_provider(
return BuiltinToolManageService.delete_builtin_tool_provider(
user_id,
tenant_id,
provider,
@@ -70,7 +79,7 @@ class ToolBuiltinProviderUpdateApi(Resource):
args = parser.parse_args()
return ToolManageService.update_builtin_tool_provider(
return BuiltinToolManageService.update_builtin_tool_provider(
user_id,
tenant_id,
provider,
@@ -85,7 +94,7 @@ class ToolBuiltinProviderGetCredentialsApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return ToolManageService.get_builtin_tool_provider_credentials(
return BuiltinToolManageService.get_builtin_tool_provider_credentials(
user_id,
tenant_id,
provider,
@@ -94,35 +103,10 @@ class ToolBuiltinProviderGetCredentialsApi(Resource):
class ToolBuiltinProviderIconApi(Resource):
@setup_required
def get(self, provider):
icon_bytes, mimetype = ToolManageService.get_builtin_tool_provider_icon(provider)
icon_bytes, mimetype = BuiltinToolManageService.get_builtin_tool_provider_icon(provider)
icon_cache_max_age = int(current_app.config.get('TOOL_ICON_CACHE_MAX_AGE'))
return send_file(io.BytesIO(icon_bytes), mimetype=mimetype, max_age=icon_cache_max_age)
class ToolModelProviderIconApi(Resource):
@setup_required
def get(self, provider):
icon_bytes, mimetype = ToolManageService.get_model_tool_provider_icon(provider)
return send_file(io.BytesIO(icon_bytes), mimetype=mimetype)
class ToolModelProviderListToolsApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('provider', type=str, required=True, nullable=False, location='args')
args = parser.parse_args()
return jsonable_encoder(ToolManageService.list_model_tool_provider_tools(
user_id,
tenant_id,
args['provider'],
))
class ToolApiProviderAddApi(Resource):
@setup_required
@login_required
@@ -141,10 +125,12 @@ class ToolApiProviderAddApi(Resource):
parser.add_argument('provider', type=str, required=True, nullable=False, location='json')
parser.add_argument('icon', type=dict, required=True, nullable=False, location='json')
parser.add_argument('privacy_policy', type=str, required=False, nullable=True, location='json')
parser.add_argument('labels', type=list[str], required=False, nullable=True, location='json', default=[])
parser.add_argument('custom_disclaimer', type=str, required=False, nullable=True, location='json')
args = parser.parse_args()
return ToolManageService.create_api_tool_provider(
return ApiToolManageService.create_api_tool_provider(
user_id,
tenant_id,
args['provider'],
@@ -153,6 +139,8 @@ class ToolApiProviderAddApi(Resource):
args['schema_type'],
args['schema'],
args.get('privacy_policy', ''),
args.get('custom_disclaimer', ''),
args.get('labels', []),
)
class ToolApiProviderGetRemoteSchemaApi(Resource):
@@ -166,7 +154,7 @@ class ToolApiProviderGetRemoteSchemaApi(Resource):
args = parser.parse_args()
return ToolManageService.get_api_tool_provider_remote_schema(
return ApiToolManageService.get_api_tool_provider_remote_schema(
current_user.id,
current_user.current_tenant_id,
args['url'],
@@ -186,7 +174,7 @@ class ToolApiProviderListToolsApi(Resource):
args = parser.parse_args()
return jsonable_encoder(ToolManageService.list_api_tool_provider_tools(
return jsonable_encoder(ApiToolManageService.list_api_tool_provider_tools(
user_id,
tenant_id,
args['provider'],
@@ -211,10 +199,12 @@ class ToolApiProviderUpdateApi(Resource):
parser.add_argument('original_provider', type=str, required=True, nullable=False, location='json')
parser.add_argument('icon', type=dict, required=True, nullable=False, location='json')
parser.add_argument('privacy_policy', type=str, required=True, nullable=True, location='json')
parser.add_argument('labels', type=list[str], required=False, nullable=True, location='json')
parser.add_argument('custom_disclaimer', type=str, required=True, nullable=True, location='json')
args = parser.parse_args()
return ToolManageService.update_api_tool_provider(
return ApiToolManageService.update_api_tool_provider(
user_id,
tenant_id,
args['provider'],
@@ -224,6 +214,8 @@ class ToolApiProviderUpdateApi(Resource):
args['schema_type'],
args['schema'],
args['privacy_policy'],
args['custom_disclaimer'],
args.get('labels', []),
)
class ToolApiProviderDeleteApi(Resource):
@@ -243,7 +235,7 @@ class ToolApiProviderDeleteApi(Resource):
args = parser.parse_args()
return ToolManageService.delete_api_tool_provider(
return ApiToolManageService.delete_api_tool_provider(
user_id,
tenant_id,
args['provider'],
@@ -263,7 +255,7 @@ class ToolApiProviderGetApi(Resource):
args = parser.parse_args()
return ToolManageService.get_api_tool_provider(
return ApiToolManageService.get_api_tool_provider(
user_id,
tenant_id,
args['provider'],
@@ -274,7 +266,7 @@ class ToolBuiltinProviderCredentialsSchemaApi(Resource):
@login_required
@account_initialization_required
def get(self, provider):
return ToolManageService.list_builtin_provider_credentials_schema(provider)
return BuiltinToolManageService.list_builtin_provider_credentials_schema(provider)
class ToolApiProviderSchemaApi(Resource):
@setup_required
@@ -287,7 +279,7 @@ class ToolApiProviderSchemaApi(Resource):
args = parser.parse_args()
return ToolManageService.parser_api_schema(
return ApiToolManageService.parser_api_schema(
schema=args['schema'],
)
@@ -307,7 +299,7 @@ class ToolApiProviderPreviousTestApi(Resource):
args = parser.parse_args()
return ToolManageService.test_api_tool_preview(
return ApiToolManageService.test_api_tool_preview(
current_user.current_tenant_id,
args['provider_name'] if args['provider_name'] else '',
args['tool_name'],
@@ -317,6 +309,153 @@ class ToolApiProviderPreviousTestApi(Resource):
args['schema'],
)
class ToolWorkflowProviderCreateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
if not current_user.is_admin_or_owner:
raise Forbidden()
user_id = current_user.id
tenant_id = current_user.current_tenant_id
reqparser = reqparse.RequestParser()
reqparser.add_argument('workflow_app_id', type=uuid_value, required=True, nullable=False, location='json')
reqparser.add_argument('name', type=alphanumeric, required=True, nullable=False, location='json')
reqparser.add_argument('label', type=str, required=True, nullable=False, location='json')
reqparser.add_argument('description', type=str, required=True, nullable=False, location='json')
reqparser.add_argument('icon', type=dict, required=True, nullable=False, location='json')
reqparser.add_argument('parameters', type=list[dict], required=True, nullable=False, location='json')
reqparser.add_argument('privacy_policy', type=str, required=False, nullable=True, location='json', default='')
reqparser.add_argument('labels', type=list[str], required=False, nullable=True, location='json')
args = reqparser.parse_args()
return WorkflowToolManageService.create_workflow_tool(
user_id,
tenant_id,
args['workflow_app_id'],
args['name'],
args['label'],
args['icon'],
args['description'],
args['parameters'],
args['privacy_policy'],
args.get('labels', []),
)
class ToolWorkflowProviderUpdateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
if not current_user.is_admin_or_owner:
raise Forbidden()
user_id = current_user.id
tenant_id = current_user.current_tenant_id
reqparser = reqparse.RequestParser()
reqparser.add_argument('workflow_tool_id', type=uuid_value, required=True, nullable=False, location='json')
reqparser.add_argument('name', type=alphanumeric, required=True, nullable=False, location='json')
reqparser.add_argument('label', type=str, required=True, nullable=False, location='json')
reqparser.add_argument('description', type=str, required=True, nullable=False, location='json')
reqparser.add_argument('icon', type=dict, required=True, nullable=False, location='json')
reqparser.add_argument('parameters', type=list[dict], required=True, nullable=False, location='json')
reqparser.add_argument('privacy_policy', type=str, required=False, nullable=True, location='json', default='')
reqparser.add_argument('labels', type=list[str], required=False, nullable=True, location='json')
args = reqparser.parse_args()
if not args['workflow_tool_id']:
raise ValueError('incorrect workflow_tool_id')
return WorkflowToolManageService.update_workflow_tool(
user_id,
tenant_id,
args['workflow_tool_id'],
args['name'],
args['label'],
args['icon'],
args['description'],
args['parameters'],
args['privacy_policy'],
args.get('labels', []),
)
class ToolWorkflowProviderDeleteApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
if not current_user.is_admin_or_owner:
raise Forbidden()
user_id = current_user.id
tenant_id = current_user.current_tenant_id
reqparser = reqparse.RequestParser()
reqparser.add_argument('workflow_tool_id', type=uuid_value, required=True, nullable=False, location='json')
args = reqparser.parse_args()
return WorkflowToolManageService.delete_workflow_tool(
user_id,
tenant_id,
args['workflow_tool_id'],
)
class ToolWorkflowProviderGetApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('workflow_tool_id', type=uuid_value, required=False, nullable=True, location='args')
parser.add_argument('workflow_app_id', type=uuid_value, required=False, nullable=True, location='args')
args = parser.parse_args()
if args.get('workflow_tool_id'):
tool = WorkflowToolManageService.get_workflow_tool_by_tool_id(
user_id,
tenant_id,
args['workflow_tool_id'],
)
elif args.get('workflow_app_id'):
tool = WorkflowToolManageService.get_workflow_tool_by_app_id(
user_id,
tenant_id,
args['workflow_app_id'],
)
else:
raise ValueError('incorrect workflow_tool_id or workflow_app_id')
return jsonable_encoder(tool)
class ToolWorkflowProviderListToolApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('workflow_tool_id', type=uuid_value, required=True, nullable=False, location='args')
args = parser.parse_args()
return jsonable_encoder(WorkflowToolManageService.list_single_workflow_tools(
user_id,
tenant_id,
args['workflow_tool_id'],
))
class ToolBuiltinListApi(Resource):
@setup_required
@login_required
@@ -325,7 +464,7 @@ class ToolBuiltinListApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return jsonable_encoder([provider.to_dict() for provider in ToolManageService.list_builtin_tools(
return jsonable_encoder([provider.to_dict() for provider in BuiltinToolManageService.list_builtin_tools(
user_id,
tenant_id,
)])
@@ -338,20 +477,43 @@ class ToolApiListApi(Resource):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return jsonable_encoder([provider.to_dict() for provider in ToolManageService.list_api_tools(
return jsonable_encoder([provider.to_dict() for provider in ApiToolManageService.list_api_tools(
user_id,
tenant_id,
)])
class ToolWorkflowListApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
user_id = current_user.id
tenant_id = current_user.current_tenant_id
return jsonable_encoder([provider.to_dict() for provider in WorkflowToolManageService.list_tenant_workflow_tools(
user_id,
tenant_id,
)])
class ToolLabelsApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
return jsonable_encoder(ToolLabelsService.list_tool_labels())
# tool provider
api.add_resource(ToolProviderListApi, '/workspaces/current/tool-providers')
# builtin tool provider
api.add_resource(ToolBuiltinProviderListToolsApi, '/workspaces/current/tool-provider/builtin/<provider>/tools')
api.add_resource(ToolBuiltinProviderDeleteApi, '/workspaces/current/tool-provider/builtin/<provider>/delete')
api.add_resource(ToolBuiltinProviderUpdateApi, '/workspaces/current/tool-provider/builtin/<provider>/update')
api.add_resource(ToolBuiltinProviderGetCredentialsApi, '/workspaces/current/tool-provider/builtin/<provider>/credentials')
api.add_resource(ToolBuiltinProviderCredentialsSchemaApi, '/workspaces/current/tool-provider/builtin/<provider>/credentials_schema')
api.add_resource(ToolBuiltinProviderIconApi, '/workspaces/current/tool-provider/builtin/<provider>/icon')
api.add_resource(ToolModelProviderIconApi, '/workspaces/current/tool-provider/model/<provider>/icon')
api.add_resource(ToolModelProviderListToolsApi, '/workspaces/current/tool-provider/model/tools')
# api tool provider
api.add_resource(ToolApiProviderAddApi, '/workspaces/current/tool-provider/api/add')
api.add_resource(ToolApiProviderGetRemoteSchemaApi, '/workspaces/current/tool-provider/api/remote')
api.add_resource(ToolApiProviderListToolsApi, '/workspaces/current/tool-provider/api/tools')
@@ -361,5 +523,15 @@ api.add_resource(ToolApiProviderGetApi, '/workspaces/current/tool-provider/api/g
api.add_resource(ToolApiProviderSchemaApi, '/workspaces/current/tool-provider/api/schema')
api.add_resource(ToolApiProviderPreviousTestApi, '/workspaces/current/tool-provider/api/test/pre')
# workflow tool provider
api.add_resource(ToolWorkflowProviderCreateApi, '/workspaces/current/tool-provider/workflow/create')
api.add_resource(ToolWorkflowProviderUpdateApi, '/workspaces/current/tool-provider/workflow/update')
api.add_resource(ToolWorkflowProviderDeleteApi, '/workspaces/current/tool-provider/workflow/delete')
api.add_resource(ToolWorkflowProviderGetApi, '/workspaces/current/tool-provider/workflow/get')
api.add_resource(ToolWorkflowProviderListToolApi, '/workspaces/current/tool-provider/workflow/tools')
api.add_resource(ToolBuiltinListApi, '/workspaces/current/tools/builtin')
api.add_resource(ToolApiListApi, '/workspaces/current/tools/api')
api.add_resource(ToolWorkflowListApi, '/workspaces/current/tools/workflow')
api.add_resource(ToolLabelsApi, '/workspaces/current/tool-labels')
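
The monolithic ToolManageService is split by provider kind (BuiltinToolManageService, ApiToolManageService, the new WorkflowToolManageService, plus ToolCommonService and ToolLabelsService), and the model-tool provider endpoints are removed. The provider list also grows an optional `type` filter; a hypothetical request against the route registered above (base URL and token are made up):

import requests

console_base = 'http://localhost:5001/console/api'  # assumed base URL
providers = requests.get(
    f'{console_base}/workspaces/current/tool-providers',
    params={'type': 'workflow'},  # one of: builtin, model, api, workflow
    headers={'Authorization': 'Bearer <console-token>'},
).json()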

View File

@@ -161,13 +161,13 @@ class CustomConfigWorkspaceApi(Resource):
parser.add_argument('replace_webapp_logo', type=str, location='json')
args = parser.parse_args()
tenant = db.session.query(Tenant).filter(Tenant.id == current_user.current_tenant_id).one_or_404()
custom_config_dict = {
'remove_webapp_brand': args['remove_webapp_brand'],
'replace_webapp_logo': args['replace_webapp_logo'],
'replace_webapp_logo': args['replace_webapp_logo'] if args['replace_webapp_logo'] is not None else tenant.custom_config_dict.get('replace_webapp_logo'),
}
tenant = db.session.query(Tenant).filter(Tenant.id == current_user.current_tenant_id).one_or_404()
tenant.custom_config_dict = custom_config_dict
db.session.commit()

View File

@@ -97,7 +97,7 @@ class MessageListApi(Resource):
class MessageFeedbackApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON))
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON, required=True))
def post(self, app_model: App, end_user: EndUser, message_id):
message_id = str(message_id)
@@ -114,7 +114,7 @@ class MessageFeedbackApi(Resource):
class MessageSuggestedApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY))
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY, required=True))
def get(self, app_model: App, end_user: EndUser, message_id):
message_id = str(message_id)
app_mode = AppMode.value_of(app_model.mode)

View File

@@ -6,4 +6,4 @@ bp = Blueprint('web', __name__, url_prefix='/api')
api = ExternalApi(bp)
from . import app, audio, completion, conversation, file, message, passport, saved_message, site, workflow
from . import app, audio, completion, conversation, feature, file, message, passport, saved_message, site, workflow

View File

@@ -1,14 +1,10 @@
import json
from flask import current_app
from flask_restful import fields, marshal_with
from controllers.web import api
from controllers.web.error import AppUnavailableError
from controllers.web.wraps import WebApiResource
from extensions.ext_database import db
from models.model import App, AppMode, AppModelConfig
from models.tools import ApiToolProvider
from models.model import App, AppMode
from services.app_service import AppService

View File

@@ -74,7 +74,7 @@ class TextApi(WebApiResource):
app_model=app_model,
text=request.form['text'],
end_user=end_user.external_user_id,
voice=request.form.get('voice'),
voice=request.form['voice'] if request.form.get('voice') else app_model.app_model_config.text_to_speech_dict.get('voice'),
streaming=False
)

View File

@@ -115,3 +115,9 @@ class UnsupportedFileTypeError(BaseHTTPException):
error_code = 'unsupported_file_type'
description = "File type not allowed."
code = 415
class WebSSOAuthRequiredError(BaseHTTPException):
error_code = 'web_sso_auth_required'
description = "Web SSO authentication required."
code = 401

View File

@@ -0,0 +1,12 @@
from flask_restful import Resource
from controllers.web import api
from services.feature_service import FeatureService
class SystemFeatureApi(Resource):
def get(self):
return FeatureService.get_system_features().dict()
api.add_resource(SystemFeatureApi, '/system-features')
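
The web API gains the same `/system-features` route (under the blueprint's `/api` prefix shown earlier), so an unauthenticated web client can learn whether SSO is enforced before attempting the passport flow. A hypothetical probe (host is made up):

import requests

features = requests.get('http://localhost:5001/api/system-features').json()
if features.get('sso_enforced_for_web'):
    ...  # send the user to SSO login instead of the anonymous /passport flow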

View File

@@ -5,14 +5,21 @@ from flask_restful import Resource
from werkzeug.exceptions import NotFound, Unauthorized
from controllers.web import api
from controllers.web.error import WebSSOAuthRequiredError
from extensions.ext_database import db
from libs.passport import PassportService
from models.model import App, EndUser, Site
from services.feature_service import FeatureService
class PassportResource(Resource):
"""Base resource for passport."""
def get(self):
system_features = FeatureService.get_system_features()
if system_features.sso_enforced_for_web:
raise WebSSOAuthRequiredError()
app_code = request.headers.get('X-App-Code')
if app_code is None:
raise Unauthorized('X-App-Code header is missing.')
@@ -28,7 +35,7 @@ class PassportResource(Resource):
app_model = db.session.query(App).filter(App.id == site.app_id).first()
if not app_model or app_model.status != 'normal' or not app_model.enable_site:
raise NotFound()
end_user = EndUser(
tenant_id=app_model.tenant_id,
app_id=app_model.id,
@@ -36,6 +43,7 @@ class PassportResource(Resource):
is_anonymous=True,
session_id=generate_session_id(),
)
db.session.add(end_user)
db.session.commit()
@@ -53,8 +61,10 @@ class PassportResource(Resource):
'access_token': tk,
}
api.add_resource(PassportResource, '/passport')
def generate_session_id():
"""
Generate a unique session ID.

View File

@@ -31,6 +31,7 @@ class AppSiteApi(WebApiResource):
'description': fields.String,
'copyright': fields.String,
'privacy_policy': fields.String,
'custom_disclaimer': fields.String,
'default_language': fields.String,
'prompt_public': fields.Boolean
}

View File

@@ -2,11 +2,13 @@ from functools import wraps
from flask import request
from flask_restful import Resource
from werkzeug.exceptions import NotFound, Unauthorized
from werkzeug.exceptions import BadRequest, NotFound, Unauthorized
from controllers.web.error import WebSSOAuthRequiredError
from extensions.ext_database import db
from libs.passport import PassportService
from models.model import App, EndUser, Site
from services.feature_service import FeatureService
def validate_jwt_token(view=None):
@@ -21,34 +23,60 @@ def validate_jwt_token(view=None):
return decorator(view)
return decorator
def decode_jwt_token():
auth_header = request.headers.get('Authorization')
if auth_header is None:
raise Unauthorized('Authorization header is missing.')
system_features = FeatureService.get_system_features()
if ' ' not in auth_header:
raise Unauthorized('Invalid Authorization header format. Expected \'Bearer <api-key>\' format.')
auth_scheme, tk = auth_header.split(None, 1)
auth_scheme = auth_scheme.lower()
try:
auth_header = request.headers.get('Authorization')
if auth_header is None:
raise Unauthorized('Authorization header is missing.')
if auth_scheme != 'bearer':
raise Unauthorized('Invalid Authorization header format. Expected \'Bearer <api-key>\' format.')
decoded = PassportService().verify(tk)
app_code = decoded.get('app_code')
app_model = db.session.query(App).filter(App.id == decoded['app_id']).first()
site = db.session.query(Site).filter(Site.code == app_code).first()
if not app_model:
raise NotFound()
if not app_code or not site:
raise Unauthorized('Site URL is no longer valid.')
if app_model.enable_site is False:
raise Unauthorized('Site is disabled.')
end_user = db.session.query(EndUser).filter(EndUser.id == decoded['end_user_id']).first()
if not end_user:
raise NotFound()
if ' ' not in auth_header:
raise Unauthorized('Invalid Authorization header format. Expected \'Bearer <api-key>\' format.')
auth_scheme, tk = auth_header.split(None, 1)
auth_scheme = auth_scheme.lower()
if auth_scheme != 'bearer':
raise Unauthorized('Invalid Authorization header format. Expected \'Bearer <api-key>\' format.')
decoded = PassportService().verify(tk)
app_code = decoded.get('app_code')
app_model = db.session.query(App).filter(App.id == decoded['app_id']).first()
site = db.session.query(Site).filter(Site.code == app_code).first()
if not app_model:
raise NotFound()
if not app_code or not site:
raise BadRequest('Site URL is no longer valid.')
if app_model.enable_site is False:
raise BadRequest('Site is disabled.')
end_user = db.session.query(EndUser).filter(EndUser.id == decoded['end_user_id']).first()
if not end_user:
raise NotFound()
_validate_web_sso_token(decoded, system_features)
return app_model, end_user
except Unauthorized as e:
if system_features.sso_enforced_for_web:
raise WebSSOAuthRequiredError()
raise Unauthorized(e.description)
def _validate_web_sso_token(decoded, system_features):
# Check if SSO is enforced for web, and if the token source is not SSO, raise an error and redirect to SSO login
if system_features.sso_enforced_for_web:
source = decoded.get('token_source')
if not source or source != 'sso':
raise WebSSOAuthRequiredError()
# Check if SSO is not enforced for web, and if the token source is SSO, raise an error and redirect to normal passport login
if not system_features.sso_enforced_for_web:
source = decoded.get('token_source')
if source and source == 'sso':
raise Unauthorized('sso token expired.')
return app_model, end_user
class WebApiResource(Resource):
method_decorators = [validate_jwt_token]
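
The decode path is now wrapped in a try/except so that, when SSO is enforced, any Unauthorized error is converted into WebSSOAuthRequiredError (a 401 the front end can turn into an SSO redirect), while `_validate_web_sso_token` enforces token provenance in both directions. The two provenance checks reduce to a small truth table; a condensed, self-contained restatement:

from werkzeug.exceptions import Unauthorized

class WebSSOAuthRequiredError(Exception):
    """Stand-in for controllers.web.error.WebSSOAuthRequiredError."""

def check_token_source(decoded: dict, sso_enforced_for_web: bool) -> None:
    # Condensed restatement of _validate_web_sso_token above.
    source = decoded.get('token_source')
    if sso_enforced_for_web and source != 'sso':
        raise WebSSOAuthRequiredError()           # must re-login through SSO
    if not sso_enforced_for_web and source == 'sso':
        raise Unauthorized('sso token expired.')  # SSO was switched off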

View File

@@ -165,6 +165,7 @@ class BaseAgentRunner(AppRunner):
tenant_id=self.tenant_id,
app_id=self.app_config.app_id,
agent_tool=tool,
invoke_from=self.application_generate_entity.invoke_from
)
tool_entity.load_variables(self.variables_pool)

View File

@@ -121,7 +121,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
raise ValueError("failed to invoke llm")
usage_dict = {}
react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks)
react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks, usage_dict)
scratchpad = AgentScratchpadUnit(
agent_response='',
thought='',
@@ -189,7 +189,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
if not scratchpad.action:
# failed to extract action, return final answer directly
final_answer = scratchpad.agent_response or ''
final_answer = ''
else:
if scratchpad.action.action_name.lower() == "final answer":
# action is final answer, return final answer directly

View File

@@ -8,7 +8,7 @@ class AgentToolEntity(BaseModel):
"""
Agent Tool Entity.
"""
provider_type: Literal["builtin", "api"]
provider_type: Literal["builtin", "api", "workflow"]
provider_id: str
tool_name: str
tool_parameters: dict[str, Any] = {}

View File

@@ -9,7 +9,7 @@ from core.model_runtime.entities.llm_entities import LLMResultChunk
class CotAgentOutputParser:
@classmethod
def handle_react_stream_output(cls, llm_response: Generator[LLMResultChunk, None, None]) -> \
def handle_react_stream_output(cls, llm_response: Generator[LLMResultChunk, None, None], usage_dict: dict) -> \
Generator[Union[str, AgentScratchpadUnit.Action], None, None]:
def parse_action(json_str):
try:
@@ -58,6 +58,8 @@ class CotAgentOutputParser:
thought_idx = 0
for response in llm_response:
if response.delta.usage:
usage_dict['usage'] = response.delta.usage
response = response.delta.message.content
if not isinstance(response, str):
continue
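
Because a Python generator cannot easily hand a value back to its caller before it is exhausted, the parser threads token usage out through a mutable dict: the runner passes `usage_dict`, the generator stores `delta.usage` in it when a chunk carries one, and the runner reads it afterwards. A minimal self-contained sketch of the pattern:

def stream_chunks(chunks, usage_dict):
    # Mirror of handle_react_stream_output's side channel: yield text,
    # stash usage into the caller-owned dict when it appears.
    for chunk in chunks:
        if chunk.get('usage'):
            usage_dict['usage'] = chunk['usage']
        yield chunk.get('text', '')

usage = {}
for text in stream_chunks([{'text': 'Thought: ...'}, {'usage': {'total_tokens': 42}}], usage):
    pass
print(usage['usage'])  # {'total_tokens': 42} once the stream is drained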

View File

@@ -11,7 +11,7 @@ class SensitiveWordAvoidanceConfigManager:
if not sensitive_word_avoidance_dict:
return None
if 'enabled' in sensitive_word_avoidance_dict and sensitive_word_avoidance_dict['enabled']:
if sensitive_word_avoidance_dict.get('enabled'):
return SensitiveWordAvoidanceEntity(
type=sensitive_word_avoidance_dict.get('type'),
config=sensitive_word_avoidance_dict.get('config'),
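
This manager and the config managers below all replace `'enabled' in d and d['enabled']` with `d.get('enabled')`; the two forms agree under truthiness, as a quick check demonstrates:

for cfg in ({}, {'enabled': False}, {'enabled': True}):
    assert bool('enabled' in cfg and cfg['enabled']) == bool(cfg.get('enabled'))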

View File

@@ -1,7 +1,7 @@
from typing import Optional
from core.agent.entities import AgentEntity, AgentPromptEntity, AgentToolEntity
from core.tools.prompt.template import REACT_PROMPT_TEMPLATES
from core.agent.prompt.template import REACT_PROMPT_TEMPLATES
class AgentConfigManager:

View File

@@ -239,4 +239,4 @@ class WorkflowUIBasedAppConfig(AppConfig):
"""
Workflow UI Based App Config Entity.
"""
workflow_id: str

View File

@@ -14,7 +14,7 @@ class FileUploadConfigManager:
"""
file_upload_dict = config.get('file_upload')
if file_upload_dict:
if 'image' in file_upload_dict and file_upload_dict['image']:
if file_upload_dict.get('image'):
if 'enabled' in file_upload_dict['image'] and file_upload_dict['image']['enabled']:
image_config = {
'number_limits': file_upload_dict['image']['number_limits'],

View File

@@ -9,7 +9,7 @@ class MoreLikeThisConfigManager:
more_like_this = False
more_like_this_dict = config.get('more_like_this')
if more_like_this_dict:
if 'enabled' in more_like_this_dict and more_like_this_dict['enabled']:
if more_like_this_dict.get('enabled'):
more_like_this = True
return more_like_this

View File

@@ -4,7 +4,7 @@ class RetrievalResourceConfigManager:
show_retrieve_source = False
retriever_resource_dict = config.get('retriever_resource')
if retriever_resource_dict:
if 'enabled' in retriever_resource_dict and retriever_resource_dict['enabled']:
if retriever_resource_dict.get('enabled'):
show_retrieve_source = True
return show_retrieve_source

View File

@@ -9,7 +9,7 @@ class SpeechToTextConfigManager:
speech_to_text = False
speech_to_text_dict = config.get('speech_to_text')
if speech_to_text_dict:
if 'enabled' in speech_to_text_dict and speech_to_text_dict['enabled']:
if speech_to_text_dict.get('enabled'):
speech_to_text = True
return speech_to_text

View File

@@ -9,7 +9,7 @@ class SuggestedQuestionsAfterAnswerConfigManager:
suggested_questions_after_answer = False
suggested_questions_after_answer_dict = config.get('suggested_questions_after_answer')
if suggested_questions_after_answer_dict:
if 'enabled' in suggested_questions_after_answer_dict and suggested_questions_after_answer_dict['enabled']:
if suggested_questions_after_answer_dict.get('enabled'):
suggested_questions_after_answer = True
return suggested_questions_after_answer

View File

@@ -12,7 +12,7 @@ class TextToSpeechConfigManager:
text_to_speech = False
text_to_speech_dict = config.get('text_to_speech')
if text_to_speech_dict:
if 'enabled' in text_to_speech_dict and text_to_speech_dict['enabled']:
if text_to_speech_dict.get('enabled'):
text_to_speech = TextToSpeechEntity(
enabled=text_to_speech_dict.get('enabled'),
voice=text_to_speech_dict.get('voice'),

View File

@@ -66,7 +66,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
conversation = self._get_conversation_by_user(app_model, args.get('conversation_id'), user)
# parse files
files = args['files'] if 'files' in args and args['files'] else []
files = args['files'] if args.get('files') else []
message_file_parser = MessageFileParser(tenant_id=app_model.tenant_id, app_id=app_model.id)
file_extra_config = FileUploadConfigManager.convert(workflow.features_dict, is_vision=False)
if file_extra_config:
@@ -98,6 +98,90 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
extras=extras
)
return self._generate(
app_model=app_model,
workflow=workflow,
user=user,
invoke_from=invoke_from,
application_generate_entity=application_generate_entity,
conversation=conversation,
stream=stream
)
def single_iteration_generate(self, app_model: App,
workflow: Workflow,
node_id: str,
user: Account,
args: dict,
stream: bool = True) \
-> Union[dict, Generator[dict, None, None]]:
"""
Generate App response.
:param app_model: App
:param workflow: Workflow
:param user: account or end user
:param args: request args
:param node_id: iteration node id to run
:param stream: is stream
"""
if not node_id:
raise ValueError('node_id is required')
if args.get('inputs') is None:
raise ValueError('inputs is required')
extras = {
"auto_generate_conversation_name": False
}
# get conversation
conversation = None
if args.get('conversation_id'):
conversation = self._get_conversation_by_user(app_model, args.get('conversation_id'), user)
# convert to app config
app_config = AdvancedChatAppConfigManager.get_app_config(
app_model=app_model,
workflow=workflow
)
# init application generate entity
application_generate_entity = AdvancedChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
conversation_id=conversation.id if conversation else None,
inputs={},
query='',
files=[],
user_id=user.id,
stream=stream,
invoke_from=InvokeFrom.DEBUGGER,
extras=extras,
single_iteration_run=AdvancedChatAppGenerateEntity.SingleIterationRunEntity(
node_id=node_id,
inputs=args['inputs']
)
)
return self._generate(
app_model=app_model,
workflow=workflow,
user=user,
invoke_from=InvokeFrom.DEBUGGER,
application_generate_entity=application_generate_entity,
conversation=conversation,
stream=stream
)
def _generate(self, app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
invoke_from: InvokeFrom,
application_generate_entity: AdvancedChatAppGenerateEntity,
conversation: Conversation = None,
stream: bool = True) \
-> Union[dict, Generator[dict, None, None]]:
is_first_conversation = False
if not conversation:
is_first_conversation = True
@@ -167,18 +251,30 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
"""
with flask_app.app_context():
try:
# get conversation and message
conversation = self._get_conversation(conversation_id)
message = self._get_message(message_id)
# chatbot app
runner = AdvancedChatAppRunner()
runner.run(
application_generate_entity=application_generate_entity,
queue_manager=queue_manager,
conversation=conversation,
message=message
)
if application_generate_entity.single_iteration_run:
single_iteration_run = application_generate_entity.single_iteration_run
runner.single_iteration_run(
app_id=application_generate_entity.app_config.app_id,
workflow_id=application_generate_entity.app_config.workflow_id,
queue_manager=queue_manager,
inputs=single_iteration_run.inputs,
node_id=single_iteration_run.node_id,
user_id=application_generate_entity.user_id
)
else:
# get conversation and message
conversation = self._get_conversation(conversation_id)
message = self._get_message(message_id)
# chatbot app
runner = AdvancedChatAppRunner()
runner.run(
application_generate_entity=application_generate_entity,
queue_manager=queue_manager,
conversation=conversation,
message=message
)
except GenerateTaskStoppedException:
pass
except InvokeAuthorizationError:
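
single_iteration_generate builds an AdvancedChatAppGenerateEntity with empty inputs/query, forces InvokeFrom.DEBUGGER, and lets the worker branch into single_iteration_run instead of the normal conversation flow. A hypothetical driver (names follow this file; the ids and payload are made up):

def debug_iteration_node(generator, app_model, workflow, user, node_id):
    # Mirrors what AdvancedChatDraftRunIterationNodeApi does through
    # AppGenerateService; sketch only.
    return generator.single_iteration_generate(
        app_model=app_model,
        workflow=workflow,
        node_id=node_id,
        user=user,
        args={'inputs': {'item': 'first value'}},  # made-up inputs
        stream=True,
    )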

View File

@@ -102,6 +102,7 @@ class AdvancedChatAppRunner(AppRunner):
user_from=UserFrom.ACCOUNT
if application_generate_entity.invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER]
else UserFrom.END_USER,
invoke_from=application_generate_entity.invoke_from,
user_inputs=inputs,
system_inputs={
SystemVariable.QUERY: query,
@@ -109,6 +110,35 @@ class AdvancedChatAppRunner(AppRunner):
SystemVariable.CONVERSATION_ID: conversation.id,
SystemVariable.USER_ID: user_id
},
callbacks=workflow_callbacks,
call_depth=application_generate_entity.call_depth
)
def single_iteration_run(self, app_id: str, workflow_id: str,
queue_manager: AppQueueManager,
inputs: dict, node_id: str, user_id: str) -> None:
"""
Single iteration run
"""
app_record: App = db.session.query(App).filter(App.id == app_id).first()
if not app_record:
raise ValueError("App not found")
workflow = self.get_workflow(app_model=app_record, workflow_id=workflow_id)
if not workflow:
raise ValueError("Workflow not initialized")
workflow_callbacks = [WorkflowEventTriggerCallback(
queue_manager=queue_manager,
workflow=workflow
)]
workflow_engine_manager = WorkflowEngineManager()
workflow_engine_manager.single_step_run_iteration_workflow_node(
workflow=workflow,
node_id=node_id,
user_id=user_id,
user_inputs=inputs,
callbacks=workflow_callbacks
)

View File

@@ -8,6 +8,8 @@ from core.app.entities.task_entities import (
ChatbotAppStreamResponse,
ErrorStreamResponse,
MessageEndStreamResponse,
NodeFinishStreamResponse,
NodeStartStreamResponse,
PingStreamResponse,
)
@@ -111,6 +113,8 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
if isinstance(sub_stream_response, ErrorStreamResponse):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
elif isinstance(sub_stream_response, NodeStartStreamResponse | NodeFinishStreamResponse):
response_chunk.update(sub_stream_response.to_ignore_detail_dict())
else:
response_chunk.update(sub_stream_response.to_dict())

View File

@@ -12,6 +12,9 @@ from core.app.entities.queue_entities import (
QueueAdvancedChatMessageEndEvent,
QueueAnnotationReplyEvent,
QueueErrorEvent,
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueMessageReplaceEvent,
QueueNodeFailedEvent,
QueueNodeStartedEvent,
@@ -64,6 +67,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
_workflow: Workflow
_user: Union[Account, EndUser]
_workflow_system_variables: dict[SystemVariable, Any]
_iteration_nested_relations: dict[str, list[str]]
def __init__(self, application_generate_entity: AdvancedChatAppGenerateEntity,
workflow: Workflow,
@@ -104,6 +108,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
)
self._stream_generate_routes = self._get_stream_generate_routes()
self._iteration_nested_relations = self._get_iteration_nested_relations(self._workflow.graph_dict)
self._conversation_name_generate_thread = None
def process(self) -> Union[ChatbotAppBlockingResponse, Generator[ChatbotAppStreamResponse, None, None]]:
@@ -204,6 +209,8 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
# search stream_generate_routes if node id is answer start at node
if not self._task_state.current_stream_generate_state and event.node_id in self._stream_generate_routes:
self._task_state.current_stream_generate_state = self._stream_generate_routes[event.node_id]
# reset current route position to 0
self._task_state.current_stream_generate_state.current_route_position = 0
# generate stream outputs when node started
yield from self._generate_stream_outputs_when_node_started()
@@ -225,6 +232,22 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
task_id=self._application_generate_entity.task_id,
workflow_node_execution=workflow_node_execution
)
if isinstance(event, QueueNodeFailedEvent):
yield from self._handle_iteration_exception(
task_id=self._application_generate_entity.task_id,
error=f'Child node failed: {event.error}'
)
elif isinstance(event, QueueIterationStartEvent | QueueIterationNextEvent | QueueIterationCompletedEvent):
if isinstance(event, QueueIterationNextEvent):
# clear ran node execution infos of current iteration
iteration_relations = self._iteration_nested_relations.get(event.node_id)
if iteration_relations:
for node_id in iteration_relations:
self._task_state.ran_node_execution_infos.pop(node_id, None)
yield self._handle_iteration_to_stream_response(self._application_generate_entity.task_id, event)
self._handle_iteration_operation(event)
elif isinstance(event, QueueStopEvent | QueueWorkflowSucceededEvent | QueueWorkflowFailedEvent):
workflow_run = self._handle_workflow_finished(event)
if workflow_run:
@@ -263,10 +286,6 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._handle_retriever_resources(event)
elif isinstance(event, QueueAnnotationReplyEvent):
self._handle_annotation_reply(event)
# elif isinstance(event, QueueMessageFileEvent):
# response = self._message_file_to_stream_response(event)
# if response:
# yield response
elif isinstance(event, QueueTextChunkEvent):
delta_text = event.text
if delta_text is None:
@@ -342,7 +361,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
id=self._message.id,
**extras
)
def _get_stream_generate_routes(self) -> dict[str, ChatflowStreamGenerateRoute]:
"""
Get stream generate routes.
@@ -372,7 +391,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
)
return stream_generate_routes
def _get_answer_start_at_node_ids(self, graph: dict, target_node_id: str) \
-> list[str]:
"""
@@ -401,14 +420,23 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
continue
node_type = source_node.get('data', {}).get('type')
node_iteration_id = source_node.get('data', {}).get('iteration_id')
iteration_start_node_id = None
if node_iteration_id:
iteration_node = next((node for node in nodes if node.get('id') == node_iteration_id), None)
iteration_start_node_id = iteration_node.get('data', {}).get('start_node_id') if iteration_node else None
if node_type in [
NodeType.ANSWER.value,
NodeType.IF_ELSE.value,
NodeType.QUESTION_CLASSIFIER.value
NodeType.QUESTION_CLASSIFIER.value,
NodeType.ITERATION.value,
NodeType.LOOP.value
]:
start_node_id = target_node_id
start_node_ids.append(start_node_id)
elif node_type == NodeType.START.value:
elif node_type == NodeType.START.value or \
(node_iteration_id is not None and iteration_start_node_id == source_node.get('id')):
start_node_id = source_node_id
start_node_ids.append(start_node_id)
else:
@@ -417,7 +445,27 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
start_node_ids.extend(sub_start_node_ids)
return start_node_ids
def _get_iteration_nested_relations(self, graph: dict) -> dict[str, list[str]]:
"""
Get iteration nested relations.
:param graph: graph
:return:
"""
nodes = graph.get('nodes')
iteration_ids = [node.get('id') for node in nodes
if node.get('data', {}).get('type') in [
NodeType.ITERATION.value,
NodeType.LOOP.value,
]]
return {
iteration_id: [
node.get('id') for node in nodes if node.get('data', {}).get('iteration_id') == iteration_id
] for iteration_id in iteration_ids
}
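For clarity, here is a minimal, self-contained sketch of the mapping this helper builds, using a hypothetical two-node graph; the 'iteration'/'loop' type strings are assumed to match NodeType.ITERATION.value and NodeType.LOOP.value:
# hypothetical graph: one iteration node wrapping one LLM node
graph = {
    'nodes': [
        {'id': 'iter-1', 'data': {'type': 'iteration'}},
        {'id': 'llm-1', 'data': {'type': 'llm', 'iteration_id': 'iter-1'}},
        {'id': 'answer-1', 'data': {'type': 'answer'}},
    ]
}
nodes = graph.get('nodes', [])
iteration_ids = [n['id'] for n in nodes
                 if n.get('data', {}).get('type') in ('iteration', 'loop')]
relations = {iteration_id: [n['id'] for n in nodes
                            if n.get('data', {}).get('iteration_id') == iteration_id]
             for iteration_id in iteration_ids}
print(relations)  # {'iter-1': ['llm-1']}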
def _generate_stream_outputs_when_node_started(self) -> Generator:
"""
Generate stream outputs.
@@ -425,7 +473,8 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
"""
if self._task_state.current_stream_generate_state:
route_chunks = self._task_state.current_stream_generate_state.generate_route[
self._task_state.current_stream_generate_state.current_route_position:]
self._task_state.current_stream_generate_state.current_route_position:
]
for route_chunk in route_chunks:
if route_chunk.type == 'text':
@@ -458,7 +507,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
route_chunks = self._task_state.current_stream_generate_state.generate_route[
self._task_state.current_stream_generate_state.current_route_position:]
for route_chunk in route_chunks:
if route_chunk.type == 'text':
route_chunk = cast(TextGenerateRouteChunk, route_chunk)

View File

@@ -1,8 +1,11 @@
from typing import Optional
from typing import Any, Optional
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.queue_entities import (
AppQueueEvent,
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueNodeFailedEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@@ -130,6 +133,66 @@ class WorkflowEventTriggerCallback(BaseWorkflowCallback):
), PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_started(self,
node_id: str,
node_type: NodeType,
node_run_index: int = 1,
node_data: Optional[BaseNodeData] = None,
inputs: Optional[dict] = None,
predecessor_node_id: Optional[str] = None,
metadata: Optional[dict] = None) -> None:
"""
Publish iteration started
"""
self._queue_manager.publish(
QueueIterationStartEvent(
node_id=node_id,
node_type=node_type,
node_run_index=node_run_index,
node_data=node_data,
inputs=inputs,
predecessor_node_id=predecessor_node_id,
metadata=metadata
),
PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_next(self, node_id: str,
node_type: NodeType,
index: int,
node_run_index: int,
output: Optional[Any]) -> None:
"""
Publish iteration next
"""
self._queue_manager.publish(
QueueIterationNextEvent(
node_id=node_id,
node_type=node_type,
index=index,
node_run_index=node_run_index,
output=output
),
PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_completed(self, node_id: str,
node_type: NodeType,
node_run_index: int,
outputs: dict) -> None:
"""
Publish iteration completed
"""
self._queue_manager.publish(
QueueIterationCompletedEvent(
node_id=node_id,
node_type=node_type,
node_run_index=node_run_index,
outputs=outputs
),
PublishFrom.APPLICATION_MANAGER
)
def on_event(self, event: AppQueueEvent) -> None:
"""
Publish event

View File

@@ -83,7 +83,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
)
# parse files
files = args['files'] if 'files' in args and args['files'] else []
files = args['files'] if args.get('files') else []
message_file_parser = MessageFileParser(tenant_id=app_model.tenant_id, app_id=app_model.id)
file_extra_config = FileUploadConfigManager.convert(override_model_config_dict or app_model_config.to_dict())
if file_extra_config:
@@ -115,7 +115,8 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
user_id=user.id,
stream=stream,
invoke_from=invoke_from,
extras=extras
extras=extras,
call_depth=0
)
# init generate records

View File

@@ -80,7 +80,7 @@ class ChatAppGenerator(MessageBasedAppGenerator):
)
# parse files
files = args['files'] if 'files' in args and args['files'] else []
files = args['files'] if args.get('files') else []
message_file_parser = MessageFileParser(tenant_id=app_model.tenant_id, app_id=app_model.id)
file_extra_config = FileUploadConfigManager.convert(override_model_config_dict or app_model_config.to_dict())
if file_extra_config:

View File

@@ -75,7 +75,7 @@ class CompletionAppGenerator(MessageBasedAppGenerator):
)
# parse files
files = args['files'] if 'files' in args and args['files'] else []
files = args['files'] if args.get('files') else []
message_file_parser = MessageFileParser(tenant_id=app_model.tenant_id, app_id=app_model.id)
file_extra_config = FileUploadConfigManager.convert(override_model_config_dict or app_model_config.to_dict())
if file_extra_config:

View File

@@ -34,7 +34,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
user: Union[Account, EndUser],
args: dict,
invoke_from: InvokeFrom,
stream: bool = True) \
stream: bool = True,
call_depth: int = 0) \
-> Union[dict, Generator[dict, None, None]]:
"""
Generate App response.
@@ -49,7 +50,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
inputs = args['inputs']
# parse files
files = args['files'] if 'files' in args and args['files'] else []
files = args['files'] if args.get('files') else []
message_file_parser = MessageFileParser(tenant_id=app_model.tenant_id, app_id=app_model.id)
file_extra_config = FileUploadConfigManager.convert(workflow.features_dict, is_vision=False)
if file_extra_config:
@@ -75,9 +76,38 @@ class WorkflowAppGenerator(BaseAppGenerator):
files=file_objs,
user_id=user.id,
stream=stream,
invoke_from=invoke_from
invoke_from=invoke_from,
call_depth=call_depth
)
return self._generate(
app_model=app_model,
workflow=workflow,
user=user,
application_generate_entity=application_generate_entity,
invoke_from=invoke_from,
stream=stream,
call_depth=call_depth
)
def _generate(self, app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
application_generate_entity: WorkflowAppGenerateEntity,
invoke_from: InvokeFrom,
stream: bool = True,
call_depth: int = 0) \
-> Union[dict, Generator[dict, None, None]]:
"""
Generate App response.
:param app_model: App
:param workflow: Workflow
:param user: account or end user
:param application_generate_entity: application generate entity
:param invoke_from: invoke from source
:param stream: is stream
:param call_depth: invoke call depth
"""
# init queue manager
queue_manager = WorkflowAppQueueManager(
task_id=application_generate_entity.task_id,
@@ -109,6 +139,64 @@ class WorkflowAppGenerator(BaseAppGenerator):
invoke_from=invoke_from
)
def single_iteration_generate(self, app_model: App,
workflow: Workflow,
node_id: str,
user: Account,
args: dict,
stream: bool = True) \
-> Union[dict, Generator[dict, None, None]]:
"""
Generate App response for a single iteration node run.
:param app_model: App
:param workflow: Workflow
:param node_id: iteration node id to run
:param user: account
:param args: request args
:param stream: is stream
"""
if not node_id:
raise ValueError('node_id is required')
if args.get('inputs') is None:
raise ValueError('inputs is required')
extras = {
"auto_generate_conversation_name": False
}
# convert to app config
app_config = WorkflowAppConfigManager.get_app_config(
app_model=app_model,
workflow=workflow
)
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
inputs={},
files=[],
user_id=user.id,
stream=stream,
invoke_from=InvokeFrom.DEBUGGER,
extras=extras,
single_iteration_run=WorkflowAppGenerateEntity.SingleIterationRunEntity(
node_id=node_id,
inputs=args['inputs']
)
)
return self._generate(
app_model=app_model,
workflow=workflow,
user=user,
invoke_from=InvokeFrom.DEBUGGER,
application_generate_entity=application_generate_entity,
stream=stream
)
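The two guard clauses above are the contract of this entry point; a runnable distillation of them (function name illustrative):
# the entry point's precondition checks, in isolation
def validate_single_iteration_args(node_id: str, args: dict) -> None:
    if not node_id:
        raise ValueError('node_id is required')
    if args.get('inputs') is None:
        raise ValueError('inputs is required')

validate_single_iteration_args('iteration-node-1', {'inputs': {'items': [1, 2, 3]}})  # ok
try:
    validate_single_iteration_args('iteration-node-1', {})
except ValueError as e:
    print(e)  # inputs is required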
def _generate_worker(self, flask_app: Flask,
application_generate_entity: WorkflowAppGenerateEntity,
queue_manager: AppQueueManager) -> None:
@@ -123,10 +211,21 @@ class WorkflowAppGenerator(BaseAppGenerator):
try:
# workflow app
runner = WorkflowAppRunner()
runner.run(
application_generate_entity=application_generate_entity,
queue_manager=queue_manager
)
if application_generate_entity.single_iteration_run:
single_iteration_run = application_generate_entity.single_iteration_run
runner.single_iteration_run(
app_id=application_generate_entity.app_config.app_id,
workflow_id=application_generate_entity.app_config.workflow_id,
queue_manager=queue_manager,
inputs=single_iteration_run.inputs,
node_id=single_iteration_run.node_id,
user_id=application_generate_entity.user_id
)
else:
runner.run(
application_generate_entity=application_generate_entity,
queue_manager=queue_manager
)
except GenerateTaskStoppedException:
pass
except InvokeAuthorizationError:

View File

@@ -73,11 +73,44 @@ class WorkflowAppRunner:
user_from=UserFrom.ACCOUNT
if application_generate_entity.invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER]
else UserFrom.END_USER,
invoke_from=application_generate_entity.invoke_from,
user_inputs=inputs,
system_inputs={
SystemVariable.FILES: files,
SystemVariable.USER_ID: user_id
},
callbacks=workflow_callbacks,
call_depth=application_generate_entity.call_depth
)
def single_iteration_run(self, app_id: str, workflow_id: str,
queue_manager: AppQueueManager,
inputs: dict, node_id: str, user_id: str) -> None:
"""
Single iteration run
"""
app_record: App = db.session.query(App).filter(App.id == app_id).first()
if not app_record:
raise ValueError("App not found")
if not app_record.workflow_id:
raise ValueError("Workflow not initialized")
workflow = self.get_workflow(app_model=app_record, workflow_id=workflow_id)
if not workflow:
raise ValueError("Workflow not initialized")
workflow_callbacks = [WorkflowEventTriggerCallback(
queue_manager=queue_manager,
workflow=workflow
)]
workflow_engine_manager = WorkflowEngineManager()
workflow_engine_manager.single_step_run_iteration_workflow_node(
workflow=workflow,
node_id=node_id,
user_id=user_id,
user_inputs=inputs,
callbacks=workflow_callbacks
)

View File

@@ -5,6 +5,8 @@ from typing import cast
from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter
from core.app.entities.task_entities import (
ErrorStreamResponse,
NodeFinishStreamResponse,
NodeStartStreamResponse,
PingStreamResponse,
WorkflowAppBlockingResponse,
WorkflowAppStreamResponse,
@@ -68,4 +70,24 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter):
:param stream_response: stream response
:return:
"""
return cls.convert_stream_full_response(stream_response)
for chunk in stream_response:
chunk = cast(WorkflowAppStreamResponse, chunk)
sub_stream_response = chunk.stream_response
if isinstance(sub_stream_response, PingStreamResponse):
yield 'ping'
continue
response_chunk = {
'event': sub_stream_response.event.value,
'workflow_run_id': chunk.workflow_run_id,
}
if isinstance(sub_stream_response, ErrorStreamResponse):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
elif isinstance(sub_stream_response, NodeStartStreamResponse | NodeFinishStreamResponse):
response_chunk.update(sub_stream_response.to_ignore_detail_dict())
else:
response_chunk.update(sub_stream_response.to_dict())
yield json.dumps(response_chunk)
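The simplified converter above emits 'ping' keep-alives verbatim and serializes everything else as one JSON object per chunk, keyed by event name; a minimal sketch of the chunk assembly (event and run id values are illustrative):
import json

def to_simple_chunk(event_name: str, workflow_run_id: str, payload: dict) -> str:
    # mirror of the response_chunk assembly above
    chunk = {'event': event_name, 'workflow_run_id': workflow_run_id}
    chunk.update(payload)
    return json.dumps(chunk)

print(to_simple_chunk('node_started', 'run-1', {'data': {'node_id': 'llm-1', 'inputs': None}}))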

View File

@@ -9,6 +9,9 @@ from core.app.entities.app_invoke_entities import (
)
from core.app.entities.queue_entities import (
QueueErrorEvent,
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueMessageReplaceEvent,
QueueNodeFailedEvent,
QueueNodeStartedEvent,
@@ -58,6 +61,7 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
_task_state: WorkflowTaskState
_application_generate_entity: WorkflowAppGenerateEntity
_workflow_system_variables: dict[SystemVariable, Any]
_iteration_nested_relations: dict[str, list[str]]
def __init__(self, application_generate_entity: WorkflowAppGenerateEntity,
workflow: Workflow,
@@ -85,8 +89,11 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
SystemVariable.USER_ID: user_id
}
self._task_state = WorkflowTaskState()
self._task_state = WorkflowTaskState(
iteration_nested_node_ids=[]
)
self._stream_generate_nodes = self._get_stream_generate_nodes()
self._iteration_nested_relations = self._get_iteration_nested_relations(self._workflow.graph_dict)
def process(self) -> Union[WorkflowAppBlockingResponse, Generator[WorkflowAppStreamResponse, None, None]]:
"""
@@ -191,6 +198,22 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
task_id=self._application_generate_entity.task_id,
workflow_node_execution=workflow_node_execution
)
if isinstance(event, QueueNodeFailedEvent):
yield from self._handle_iteration_exception(
task_id=self._application_generate_entity.task_id,
error=f'Child node failed: {event.error}'
)
elif isinstance(event, QueueIterationStartEvent | QueueIterationNextEvent | QueueIterationCompletedEvent):
if isinstance(event, QueueIterationNextEvent):
# clear ran node execution infos of current iteration
iteration_relations = self._iteration_nested_relations.get(event.node_id)
if iteration_relations:
for node_id in iteration_relations:
self._task_state.ran_node_execution_infos.pop(node_id, None)
yield self._handle_iteration_to_stream_response(self._application_generate_entity.task_id, event)
self._handle_iteration_operation(event)
elif isinstance(event, QueueStopEvent | QueueWorkflowSucceededEvent | QueueWorkflowFailedEvent):
workflow_run = self._handle_workflow_finished(event)
@@ -331,13 +354,20 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
continue
node_type = source_node.get('data', {}).get('type')
node_iteration_id = source_node.get('data', {}).get('iteration_id')
iteration_start_node_id = None
if node_iteration_id:
iteration_node = next((node for node in nodes if node.get('id') == node_iteration_id), None)
iteration_start_node_id = iteration_node.get('data', {}).get('start_node_id') if iteration_node else None
if node_type in [
NodeType.IF_ELSE.value,
NodeType.QUESTION_CLASSIFIER.value
]:
start_node_id = target_node_id
start_node_ids.append(start_node_id)
elif node_type == NodeType.START.value:
elif node_type == NodeType.START.value or \
(node_iteration_id is not None and iteration_start_node_id == source_node.get('id')):
start_node_id = source_node_id
start_node_ids.append(start_node_id)
else:
@@ -411,3 +441,24 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
return False
return True
def _get_iteration_nested_relations(self, graph: dict) -> dict[str, list[str]]:
"""
Get iteration nested relations.
:param graph: graph
:return:
"""
nodes = graph.get('nodes')
iteration_ids = [node.get('id') for node in nodes
if node.get('data', {}).get('type') in [
NodeType.ITERATION.value,
NodeType.LOOP.value,
]]
return {
iteration_id: [
node.get('id') for node in nodes if node.get('data', {}).get('iteration_id') == iteration_id
] for iteration_id in iteration_ids
}

View File

@@ -1,8 +1,11 @@
from typing import Optional
from typing import Any, Optional
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.queue_entities import (
AppQueueEvent,
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueNodeFailedEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@@ -130,6 +133,66 @@ class WorkflowEventTriggerCallback(BaseWorkflowCallback):
), PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_started(self,
node_id: str,
node_type: NodeType,
node_run_index: int = 1,
node_data: Optional[BaseNodeData] = None,
inputs: Optional[dict] = None,
predecessor_node_id: Optional[str] = None,
metadata: Optional[dict] = None) -> None:
"""
Publish iteration started
"""
self._queue_manager.publish(
QueueIterationStartEvent(
node_id=node_id,
node_type=node_type,
node_run_index=node_run_index,
node_data=node_data,
inputs=inputs,
predecessor_node_id=predecessor_node_id,
metadata=metadata
),
PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_next(self, node_id: str,
node_type: NodeType,
index: int,
node_run_index: int,
output: Optional[Any]) -> None:
"""
Publish iteration next
"""
self._queue_manager.publish(
QueueIterationNextEvent(
node_id=node_id,
node_type=node_type,
index=index,
node_run_index=node_run_index,
output=output
),
PublishFrom.APPLICATION_MANAGER
)
def on_workflow_iteration_completed(self, node_id: str,
node_type: NodeType,
node_run_index: int,
outputs: dict) -> None:
"""
Publish iteration completed
"""
self._queue_manager.publish(
QueueIterationCompletedEvent(
node_id=node_id,
node_type=node_type,
node_run_index=node_run_index,
outputs=outputs
),
PublishFrom.APPLICATION_MANAGER
)
def on_event(self, event: AppQueueEvent) -> None:
"""
Publish event

View File

@@ -102,6 +102,39 @@ class WorkflowLoggingCallback(BaseWorkflowCallback):
self.print_text(text, color="pink", end="")
def on_workflow_iteration_started(self,
node_id: str,
node_type: NodeType,
node_run_index: int = 1,
node_data: Optional[BaseNodeData] = None,
inputs: Optional[dict] = None,
predecessor_node_id: Optional[str] = None,
metadata: Optional[dict] = None) -> None:
"""
Print iteration started
"""
self.print_text("\n[on_workflow_iteration_started]", color='blue')
self.print_text(f"Node ID: {node_id}", color='blue')
def on_workflow_iteration_next(self, node_id: str,
node_type: NodeType,
index: int,
node_run_index: int,
output: Optional[dict]) -> None:
"""
Print iteration next
"""
self.print_text("\n[on_workflow_iteration_next]", color='blue')
def on_workflow_iteration_completed(self, node_id: str,
node_type: NodeType,
node_run_index: int,
outputs: dict) -> None:
"""
Print iteration completed
"""
self.print_text("\n[on_workflow_iteration_completed]", color='blue')
def on_event(self, event: AppQueueEvent) -> None:
"""
Publish event

View File

@@ -80,6 +80,9 @@ class AppGenerateEntity(BaseModel):
stream: bool
invoke_from: InvokeFrom
# invoke call depth
call_depth: int = 0
# extra parameters, like: auto_generate_conversation_name
extras: dict[str, Any] = {}
@@ -126,6 +129,14 @@ class AdvancedChatAppGenerateEntity(AppGenerateEntity):
conversation_id: Optional[str] = None
query: Optional[str] = None
class SingleIterationRunEntity(BaseModel):
"""
Single Iteration Run Entity.
"""
node_id: str
inputs: dict
single_iteration_run: Optional[SingleIterationRunEntity] = None
class WorkflowAppGenerateEntity(AppGenerateEntity):
"""
@@ -133,3 +144,12 @@ class WorkflowAppGenerateEntity(AppGenerateEntity):
"""
# app config
app_config: WorkflowUIBasedAppConfig
class SingleIterationRunEntity(BaseModel):
"""
Single Iteration Run Entity.
"""
node_id: str
inputs: dict
single_iteration_run: Optional[SingleIterationRunEntity] = None

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import BaseModel, validator
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
from core.workflow.entities.base_node_data_entities import BaseNodeData
@@ -21,6 +21,9 @@ class QueueEvent(Enum):
WORKFLOW_STARTED = "workflow_started"
WORKFLOW_SUCCEEDED = "workflow_succeeded"
WORKFLOW_FAILED = "workflow_failed"
ITERATION_START = "iteration_start"
ITERATION_NEXT = "iteration_next"
ITERATION_COMPLETED = "iteration_completed"
NODE_STARTED = "node_started"
NODE_SUCCEEDED = "node_succeeded"
NODE_FAILED = "node_failed"
@@ -47,6 +50,55 @@ class QueueLLMChunkEvent(AppQueueEvent):
event = QueueEvent.LLM_CHUNK
chunk: LLMResultChunk
class QueueIterationStartEvent(AppQueueEvent):
"""
QueueIterationStartEvent entity
"""
event = QueueEvent.ITERATION_START
node_id: str
node_type: NodeType
node_data: BaseNodeData
node_run_index: int
inputs: Optional[dict] = None
predecessor_node_id: Optional[str] = None
metadata: Optional[dict] = None
class QueueIterationNextEvent(AppQueueEvent):
"""
QueueIterationNextEvent entity
"""
event = QueueEvent.ITERATION_NEXT
index: int
node_id: str
node_type: NodeType
node_run_index: int
output: Optional[Any] # output for the current iteration
@validator('output', pre=True, always=True)
def set_output(cls, v):
"""
Set output
"""
if v is None:
return None
if isinstance(v, int | float | str | bool | dict | list):
return v
raise ValueError('output must be a valid type')
class QueueIterationCompletedEvent(AppQueueEvent):
"""
QueueIterationCompletedEvent entity
"""
event = QueueEvent.ITERATION_COMPLETED
node_id: str
node_type: NodeType
node_run_index: int
outputs: dict
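The output validator above only lets JSON-serializable primitives and containers through; a standalone sketch of that behavior (pydantic v1 API and Python 3.10+ union syntax, as in the snippet; class name illustrative):
from typing import Any, Optional
from pydantic import BaseModel, ValidationError, validator

class IterationOutput(BaseModel):
    # standalone stand-in for QueueIterationNextEvent.output
    output: Optional[Any]

    @validator('output', pre=True, always=True)
    def set_output(cls, v):
        if v is None:
            return None
        if isinstance(v, int | float | str | bool | dict | list):
            return v
        raise ValueError('output must be a valid type')

print(IterationOutput(output={'answer': 42}).output)  # containers pass through
try:
    IterationOutput(output=object())                  # arbitrary objects are rejected
except ValidationError:
    print('rejected')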
class QueueTextChunkEvent(AppQueueEvent):
"""

View File

@@ -1,12 +1,14 @@
from enum import Enum
from typing import Optional
from typing import Any, Optional
from pydantic import BaseModel
from core.model_runtime.entities.llm_entities import LLMResult, LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeType
from core.workflow.nodes.answer.entities import GenerateRouteChunk
from models.workflow import WorkflowNodeExecutionStatus
class WorkflowStreamGenerateNodes(BaseModel):
@@ -65,6 +67,7 @@ class WorkflowTaskState(TaskState):
current_stream_generate_state: Optional[WorkflowStreamGenerateNodes] = None
iteration_nested_node_ids: Optional[list[str]] = None
class AdvancedChatTaskState(WorkflowTaskState):
"""
@@ -91,6 +94,9 @@ class StreamEvent(Enum):
WORKFLOW_FINISHED = "workflow_finished"
NODE_STARTED = "node_started"
NODE_FINISHED = "node_finished"
ITERATION_STARTED = "iteration_started"
ITERATION_NEXT = "iteration_next"
ITERATION_COMPLETED = "iteration_completed"
TEXT_CHUNK = "text_chunk"
TEXT_REPLACE = "text_replace"
@@ -246,6 +252,24 @@ class NodeStartStreamResponse(StreamResponse):
workflow_run_id: str
data: Data
def to_ignore_detail_dict(self):
return {
"event": self.event.value,
"task_id": self.task_id,
"workflow_run_id": self.workflow_run_id,
"data": {
"id": self.data.id,
"node_id": self.data.node_id,
"node_type": self.data.node_type,
"title": self.data.title,
"index": self.data.index,
"predecessor_node_id": self.data.predecessor_node_id,
"inputs": None,
"created_at": self.data.created_at,
"extras": {}
}
}
class NodeFinishStreamResponse(StreamResponse):
"""
@@ -276,6 +300,99 @@ class NodeFinishStreamResponse(StreamResponse):
workflow_run_id: str
data: Data
def to_ignore_detail_dict(self):
return {
"event": self.event.value,
"task_id": self.task_id,
"workflow_run_id": self.workflow_run_id,
"data": {
"id": self.data.id,
"node_id": self.data.node_id,
"node_type": self.data.node_type,
"title": self.data.title,
"index": self.data.index,
"predecessor_node_id": self.data.predecessor_node_id,
"inputs": None,
"process_data": None,
"outputs": None,
"status": self.data.status,
"error": None,
"elapsed_time": self.data.elapsed_time,
"execution_metadata": None,
"created_at": self.data.created_at,
"finished_at": self.data.finished_at,
"files": []
}
}
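Both to_ignore_detail_dict variants keep routing metadata but null out the payload fields (inputs, process_data, outputs, execution_metadata), so the simplified stream never leaks node internals; the redaction idea in miniature (field values illustrative):
# illustrative node payload; only the detail fields are blanked
full = {'node_id': 'llm-1', 'status': 'succeeded',
        'inputs': {'query': 'secret'}, 'outputs': {'text': '...'}}
redacted = {**full, 'inputs': None, 'outputs': None}
print(redacted)  # routing fields survive, payload fields are None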
class IterationNodeStartStreamResponse(StreamResponse):
"""
IterationNodeStartStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
created_at: int
extras: dict = {}
metadata: dict = {}
inputs: dict = {}
event: StreamEvent = StreamEvent.ITERATION_STARTED
workflow_run_id: str
data: Data
class IterationNodeNextStreamResponse(StreamResponse):
"""
IterationNodeNextStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
index: int
created_at: int
pre_iteration_output: Optional[Any]
extras: dict = {}
event: StreamEvent = StreamEvent.ITERATION_NEXT
workflow_run_id: str
data: Data
class IterationNodeCompletedStreamResponse(StreamResponse):
"""
IterationNodeCompletedStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
outputs: Optional[dict]
created_at: int
extras: dict = {}
inputs: Optional[dict] = None
status: WorkflowNodeExecutionStatus
error: Optional[str]
elapsed_time: float
total_tokens: int
finished_at: int
steps: int
event: StreamEvent = StreamEvent.ITERATION_COMPLETED
workflow_run_id: str
data: Data
class TextChunkStreamResponse(StreamResponse):
"""
@@ -411,3 +528,23 @@ class WorkflowAppBlockingResponse(AppBlockingResponse):
workflow_run_id: str
data: Data
class WorkflowIterationState(BaseModel):
"""
WorkflowIterationState entity
"""
class Data(BaseModel):
"""
Data entity
"""
parent_iteration_id: Optional[str] = None
iteration_id: str
current_index: int
iteration_steps_boundary: list[int] = []
node_execution_id: str
started_at: float
inputs: Optional[dict] = None
total_tokens: int = 0
node_data: BaseNodeData
current_iterations: dict[str, Data] = {}

View File

@@ -1,9 +1,9 @@
import json
import time
from datetime import datetime, timezone
from typing import Any, Optional, Union, cast
from typing import Optional, Union, cast
from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.queue_entities import (
QueueNodeFailedEvent,
QueueNodeStartedEvent,
@@ -13,18 +13,17 @@ from core.app.entities.queue_entities import (
QueueWorkflowSucceededEvent,
)
from core.app.entities.task_entities import (
AdvancedChatTaskState,
NodeExecutionInfo,
NodeFinishStreamResponse,
NodeStartStreamResponse,
WorkflowFinishStreamResponse,
WorkflowStartStreamResponse,
WorkflowTaskState,
)
from core.app.task_pipeline.workflow_iteration_cycle_manage import WorkflowIterationCycleManage
from core.file.file_obj import FileVar
from core.model_runtime.utils.encoders import jsonable_encoder
from core.tools.tool_manager import ToolManager
from core.workflow.entities.node_entities import NodeRunMetadataKey, NodeType, SystemVariable
from core.workflow.entities.node_entities import NodeRunMetadataKey, NodeType
from core.workflow.nodes.tool.entities import ToolNodeData
from core.workflow.workflow_engine_manager import WorkflowEngineManager
from extensions.ext_database import db
@@ -42,13 +41,7 @@ from models.workflow import (
)
class WorkflowCycleManage:
_application_generate_entity: Union[AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity]
_workflow: Workflow
_user: Union[Account, EndUser]
_task_state: Union[AdvancedChatTaskState, WorkflowTaskState]
_workflow_system_variables: dict[SystemVariable, Any]
class WorkflowCycleManage(WorkflowIterationCycleManage):
def _init_workflow_run(self, workflow: Workflow,
triggered_from: WorkflowRunTriggeredFrom,
user: Union[Account, EndUser],
@@ -237,6 +230,7 @@ class WorkflowCycleManage:
inputs: Optional[dict] = None,
process_data: Optional[dict] = None,
outputs: Optional[dict] = None,
execution_metadata: Optional[dict] = None
) -> WorkflowNodeExecution:
"""
Workflow node execution failed
@@ -255,6 +249,8 @@ class WorkflowCycleManage:
workflow_node_execution.inputs = json.dumps(inputs) if inputs else None
workflow_node_execution.process_data = json.dumps(process_data) if process_data else None
workflow_node_execution.outputs = json.dumps(outputs) if outputs else None
workflow_node_execution.execution_metadata = json.dumps(jsonable_encoder(execution_metadata)) \
if execution_metadata else None
db.session.commit()
db.session.refresh(workflow_node_execution)
@@ -444,6 +440,23 @@ class WorkflowCycleManage:
current_node_execution = self._task_state.ran_node_execution_infos[event.node_id]
workflow_node_execution = db.session.query(WorkflowNodeExecution).filter(
WorkflowNodeExecution.id == current_node_execution.workflow_node_execution_id).first()
execution_metadata = event.execution_metadata if isinstance(event, QueueNodeSucceededEvent) else None
if self._iteration_state and self._iteration_state.current_iterations:
if not execution_metadata:
execution_metadata = {}
current_iteration_data = None
for iteration_node_id in self._iteration_state.current_iterations:
data = self._iteration_state.current_iterations[iteration_node_id]
if data.parent_iteration_id is None:
current_iteration_data = data
break
if current_iteration_data:
execution_metadata[NodeRunMetadataKey.ITERATION_ID] = current_iteration_data.iteration_id
execution_metadata[NodeRunMetadataKey.ITERATION_INDEX] = current_iteration_data.current_index
if isinstance(event, QueueNodeSucceededEvent):
workflow_node_execution = self._workflow_node_execution_success(
workflow_node_execution=workflow_node_execution,
@@ -451,12 +464,18 @@ class WorkflowCycleManage:
inputs=event.inputs,
process_data=event.process_data,
outputs=event.outputs,
execution_metadata=event.execution_metadata
execution_metadata=execution_metadata
)
if event.execution_metadata and event.execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS):
if execution_metadata and execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS):
self._task_state.total_tokens += (
int(event.execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS)))
int(execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS)))
if self._iteration_state:
for iteration_node_id in self._iteration_state.current_iterations:
data = self._iteration_state.current_iterations[iteration_node_id]
if execution_metadata and execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS):
data.total_tokens += int(execution_metadata.get(NodeRunMetadataKey.TOTAL_TOKENS))
if workflow_node_execution.node_type == NodeType.LLM.value:
outputs = workflow_node_execution.outputs_dict
@@ -469,7 +488,8 @@ class WorkflowCycleManage:
error=event.error,
inputs=event.inputs,
process_data=event.process_data,
outputs=event.outputs
outputs=event.outputs,
execution_metadata=execution_metadata
)
db.session.close()

View File

@@ -0,0 +1,16 @@
from typing import Any, Union
from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity
from core.app.entities.task_entities import AdvancedChatTaskState, WorkflowTaskState
from core.workflow.entities.node_entities import SystemVariable
from models.account import Account
from models.model import EndUser
from models.workflow import Workflow
class WorkflowCycleStateManager:
_application_generate_entity: Union[AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity]
_workflow: Workflow
_user: Union[Account, EndUser]
_task_state: Union[AdvancedChatTaskState, WorkflowTaskState]
_workflow_system_variables: dict[SystemVariable, Any]

View File

@@ -0,0 +1,281 @@
import json
import time
from collections.abc import Generator
from typing import Optional, Union
from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
)
from core.app.entities.task_entities import (
IterationNodeCompletedStreamResponse,
IterationNodeNextStreamResponse,
IterationNodeStartStreamResponse,
NodeExecutionInfo,
WorkflowIterationState,
)
from core.app.task_pipeline.workflow_cycle_state_manager import WorkflowCycleStateManager
from core.workflow.entities.node_entities import NodeType
from extensions.ext_database import db
from models.workflow import (
WorkflowNodeExecution,
WorkflowNodeExecutionStatus,
WorkflowNodeExecutionTriggeredFrom,
WorkflowRun,
)
class WorkflowIterationCycleManage(WorkflowCycleStateManager):
_iteration_state: WorkflowIterationState = None
def _init_iteration_state(self) -> WorkflowIterationState:
if not self._iteration_state:
self._iteration_state = WorkflowIterationState(
current_iterations={}
)
return self._iteration_state
def _handle_iteration_to_stream_response(self, task_id: str, event: QueueIterationStartEvent | QueueIterationNextEvent | QueueIterationCompletedEvent) \
-> Union[IterationNodeStartStreamResponse, IterationNodeNextStreamResponse, IterationNodeCompletedStreamResponse]:
"""
Handle iteration to stream response
:param task_id: task id
:param event: iteration event
:return:
"""
if isinstance(event, QueueIterationStartEvent):
return IterationNodeStartStreamResponse(
task_id=task_id,
workflow_run_id=self._task_state.workflow_run_id,
data=IterationNodeStartStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=event.node_data.title,
created_at=int(time.time()),
extras={},
inputs=event.inputs,
metadata=event.metadata
)
)
elif isinstance(event, QueueIterationNextEvent):
current_iteration = self._iteration_state.current_iterations[event.node_id]
return IterationNodeNextStreamResponse(
task_id=task_id,
workflow_run_id=self._task_state.workflow_run_id,
data=IterationNodeNextStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=current_iteration.node_data.title,
index=event.index,
pre_iteration_output=event.output,
created_at=int(time.time()),
extras={}
)
)
elif isinstance(event, QueueIterationCompletedEvent):
current_iteration = self._iteration_state.current_iterations[event.node_id]
return IterationNodeCompletedStreamResponse(
task_id=task_id,
workflow_run_id=self._task_state.workflow_run_id,
data=IterationNodeCompletedStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=current_iteration.node_data.title,
outputs=event.outputs,
created_at=int(time.time()),
extras={},
inputs=current_iteration.inputs,
status=WorkflowNodeExecutionStatus.SUCCEEDED,
error=None,
elapsed_time=time.perf_counter() - current_iteration.started_at,
total_tokens=current_iteration.total_tokens,
finished_at=int(time.time()),
steps=current_iteration.current_index
)
)
def _init_iteration_execution_from_workflow_run(self,
workflow_run: WorkflowRun,
node_id: str,
node_type: NodeType,
node_title: str,
node_run_index: int = 1,
inputs: Optional[dict] = None,
predecessor_node_id: Optional[str] = None
) -> WorkflowNodeExecution:
workflow_node_execution = WorkflowNodeExecution(
tenant_id=workflow_run.tenant_id,
app_id=workflow_run.app_id,
workflow_id=workflow_run.workflow_id,
triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN.value,
workflow_run_id=workflow_run.id,
predecessor_node_id=predecessor_node_id,
index=node_run_index,
node_id=node_id,
node_type=node_type.value,
inputs=json.dumps(inputs) if inputs else None,
title=node_title,
status=WorkflowNodeExecutionStatus.RUNNING.value,
created_by_role=workflow_run.created_by_role,
created_by=workflow_run.created_by,
execution_metadata=json.dumps({
'started_run_index': node_run_index + 1,
'current_index': 0,
'steps_boundary': [],
})
)
db.session.add(workflow_node_execution)
db.session.commit()
db.session.refresh(workflow_node_execution)
db.session.close()
return workflow_node_execution
def _handle_iteration_operation(self, event: QueueIterationStartEvent | QueueIterationNextEvent | QueueIterationCompletedEvent) -> Optional[WorkflowNodeExecution]:
if isinstance(event, QueueIterationStartEvent):
return self._handle_iteration_started(event)
elif isinstance(event, QueueIterationNextEvent):
return self._handle_iteration_next(event)
elif isinstance(event, QueueIterationCompletedEvent):
return self._handle_iteration_completed(event)
def _handle_iteration_started(self, event: QueueIterationStartEvent) -> WorkflowNodeExecution:
self._init_iteration_state()
workflow_run = db.session.query(WorkflowRun).filter(WorkflowRun.id == self._task_state.workflow_run_id).first()
workflow_node_execution = self._init_iteration_execution_from_workflow_run(
workflow_run=workflow_run,
node_id=event.node_id,
node_type=NodeType.ITERATION,
node_title=event.node_data.title,
node_run_index=event.node_run_index,
inputs=event.inputs,
predecessor_node_id=event.predecessor_node_id
)
latest_node_execution_info = NodeExecutionInfo(
workflow_node_execution_id=workflow_node_execution.id,
node_type=NodeType.ITERATION,
start_at=time.perf_counter()
)
self._task_state.ran_node_execution_infos[event.node_id] = latest_node_execution_info
self._task_state.latest_node_execution_info = latest_node_execution_info
self._iteration_state.current_iterations[event.node_id] = WorkflowIterationState.Data(
parent_iteration_id=None,
iteration_id=event.node_id,
current_index=0,
iteration_steps_boundary=[],
node_execution_id=workflow_node_execution.id,
started_at=time.perf_counter(),
inputs=event.inputs,
total_tokens=0,
node_data=event.node_data
)
db.session.close()
return workflow_node_execution
def _handle_iteration_next(self, event: QueueIterationNextEvent) -> Optional[WorkflowNodeExecution]:
if event.node_id not in self._iteration_state.current_iterations:
return
current_iteration = self._iteration_state.current_iterations[event.node_id]
current_iteration.current_index = event.index
current_iteration.iteration_steps_boundary.append(event.node_run_index)
workflow_node_execution: WorkflowNodeExecution = db.session.query(WorkflowNodeExecution).filter(
WorkflowNodeExecution.id == current_iteration.node_execution_id
).first()
original_node_execution_metadata = workflow_node_execution.execution_metadata_dict
if original_node_execution_metadata:
original_node_execution_metadata['current_index'] = event.index
original_node_execution_metadata['steps_boundary'] = current_iteration.iteration_steps_boundary
original_node_execution_metadata['total_tokens'] = current_iteration.total_tokens
workflow_node_execution.execution_metadata = json.dumps(original_node_execution_metadata)
db.session.commit()
db.session.close()
def _handle_iteration_completed(self, event: QueueIterationCompletedEvent) -> Optional[WorkflowNodeExecution]:
if event.node_id not in self._iteration_state.current_iterations:
return
current_iteration = self._iteration_state.current_iterations[event.node_id]
workflow_node_execution: WorkflowNodeExecution = db.session.query(WorkflowNodeExecution).filter(
WorkflowNodeExecution.id == current_iteration.node_execution_id
).first()
workflow_node_execution.status = WorkflowNodeExecutionStatus.SUCCEEDED.value
workflow_node_execution.outputs = json.dumps(event.outputs) if event.outputs else None
workflow_node_execution.elapsed_time = time.perf_counter() - current_iteration.started_at
original_node_execution_metadata = workflow_node_execution.execution_metadata_dict
if original_node_execution_metadata:
original_node_execution_metadata['steps_boundary'] = current_iteration.iteration_steps_boundary
original_node_execution_metadata['total_tokens'] = current_iteration.total_tokens
workflow_node_execution.execution_metadata = json.dumps(original_node_execution_metadata)
db.session.commit()
# remove current iteration
self._iteration_state.current_iterations.pop(event.node_id, None)
# set latest node execution info
latest_node_execution_info = NodeExecutionInfo(
workflow_node_execution_id=workflow_node_execution.id,
node_type=NodeType.ITERATION,
start_at=time.perf_counter()
)
self._task_state.latest_node_execution_info = latest_node_execution_info
db.session.close()
def _handle_iteration_exception(self, task_id: str, error: str) -> Generator[IterationNodeCompletedStreamResponse, None, None]:
"""
Handle iteration exception
"""
if not self._iteration_state or not self._iteration_state.current_iterations:
return
for node_id, current_iteration in self._iteration_state.current_iterations.items():
workflow_node_execution: WorkflowNodeExecution = db.session.query(WorkflowNodeExecution).filter(
WorkflowNodeExecution.id == current_iteration.node_execution_id
).first()
workflow_node_execution.status = WorkflowNodeExecutionStatus.FAILED.value
workflow_node_execution.error = error
workflow_node_execution.elapsed_time = time.perf_counter() - current_iteration.started_at
db.session.commit()
db.session.close()
yield IterationNodeCompletedStreamResponse(
task_id=task_id,
workflow_run_id=self._task_state.workflow_run_id,
data=IterationNodeCompletedStreamResponse.Data(
id=node_id,
node_id=node_id,
node_type=NodeType.ITERATION.value,
title=current_iteration.node_data.title,
outputs={},
created_at=int(time.time()),
extras={},
inputs=current_iteration.inputs,
status=WorkflowNodeExecutionStatus.FAILED,
error=error,
elapsed_time=time.perf_counter() - current_iteration.started_at,
total_tokens=current_iteration.total_tokens,
finished_at=int(time.time()),
steps=current_iteration.current_index
)
)
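Taken together, the manager mirrors a start, next*, completed event sequence into a single WorkflowNodeExecution row; a toy state trace of that lifecycle (no database, run indexes illustrative):
# toy trace: index advances on each 'next', boundaries record run indexes,
# and 'completed' seals the row
state = {'current_index': 0, 'steps_boundary': [], 'status': 'running'}
for kind, run_index in [('next', 3), ('next', 5), ('completed', None)]:
    if kind == 'next':
        state['current_index'] += 1
        state['steps_boundary'].append(run_index)
    else:
        state['status'] = 'succeeded'
print(state)  # {'current_index': 2, 'steps_boundary': [3, 5], 'status': 'succeeded'}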

View File

@@ -121,13 +121,13 @@ class DatasetDocumentStore:
enabled=False,
created_by=self._user_id,
)
if 'answer' in doc.metadata and doc.metadata['answer']:
if doc.metadata.get('answer'):
segment_document.answer = doc.metadata.pop('answer', '')
db.session.add(segment_document)
else:
segment_document.content = doc.page_content
if 'answer' in doc.metadata and doc.metadata['answer']:
if doc.metadata.get('answer'):
segment_document.answer = doc.metadata.pop('answer', '')
segment_document.index_node_hash = doc.metadata['doc_hash']
segment_document.word_count = len(doc.page_content)

View File

@@ -42,6 +42,8 @@ class MessageFileParser:
raise ValueError('Invalid file url')
if file.get('transfer_method') == FileTransferMethod.LOCAL_FILE.value and not file.get('upload_file_id'):
raise ValueError('Missing file upload_file_id')
if file.get('transfer_method') == FileTransferMethod.TOOL_FILE.value and not file.get('tool_file_id'):
raise ValueError('Missing file tool_file_id')
# transform files to file objs
type_file_objs = self._to_file_objs(files, file_extra_config)
@@ -149,12 +151,21 @@ class MessageFileParser:
"""
if isinstance(file, dict):
transfer_method = FileTransferMethod.value_of(file.get('transfer_method'))
if transfer_method != FileTransferMethod.TOOL_FILE:
return FileVar(
tenant_id=self.tenant_id,
type=FileType.value_of(file.get('type')),
transfer_method=transfer_method,
url=file.get('url') if transfer_method == FileTransferMethod.REMOTE_URL else None,
related_id=file.get('upload_file_id') if transfer_method == FileTransferMethod.LOCAL_FILE else None,
extra_config=file_extra_config
)
return FileVar(
tenant_id=self.tenant_id,
type=FileType.value_of(file.get('type')),
transfer_method=transfer_method,
url=file.get('url') if transfer_method == FileTransferMethod.REMOTE_URL else None,
related_id=file.get('upload_file_id') if transfer_method == FileTransferMethod.LOCAL_FILE else None,
url=None,
related_id=file.get('tool_file_id'),
extra_config=file_extra_config
)
else:

View File

@@ -77,4 +77,4 @@ class UploadFileParser:
return False
current_time = int(time.time())
return current_time - int(timestamp) <= 300 # expired after 5 minutes
return current_time - int(timestamp) <= current_app.config.get('FILES_ACCESS_TIMEOUT')
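The hard-coded 5-minute window becomes configurable here; a runnable sketch of the check, assuming FILES_ACCESS_TIMEOUT holds an integer number of seconds:
import time

def is_signature_fresh(timestamp: int, files_access_timeout: int = 300) -> bool:
    # same comparison as above, with the window injected instead of hard-coded
    return int(time.time()) - int(timestamp) <= files_access_timeout

print(is_signature_fresh(int(time.time()) - 60))   # True: inside the window
print(is_signature_fresh(int(time.time()) - 600))  # False: past the default 5 minutes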

View File

@@ -1,14 +1,21 @@
import logging
import time
from enum import Enum
from threading import Lock
from typing import Literal, Optional
from httpx import post
from httpx import get, post
from pydantic import BaseModel
from yarl import URL
from config import get_env
from core.helper.code_executor.javascript_transformer import NodeJsTemplateTransformer
from core.helper.code_executor.jinja2_transformer import Jinja2TemplateTransformer
from core.helper.code_executor.python_transformer import PythonTemplateTransformer
from core.helper.code_executor.entities import CodeDependency
from core.helper.code_executor.javascript.javascript_transformer import NodeJsTemplateTransformer
from core.helper.code_executor.jinja2.jinja2_transformer import Jinja2TemplateTransformer
from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer
from core.helper.code_executor.template_transformer import TemplateTransformer
logger = logging.getLogger(__name__)
# Code Executor
CODE_EXECUTION_ENDPOINT = get_env('CODE_EXECUTION_ENDPOINT')
@@ -36,8 +43,11 @@ class CodeLanguage(str, Enum):
class CodeExecutor:
code_template_transformers = {
CodeLanguage.PYTHON3: PythonTemplateTransformer,
dependencies_cache = {}
dependencies_cache_lock = Lock()
code_template_transformers: dict[CodeLanguage, type[TemplateTransformer]] = {
CodeLanguage.PYTHON3: Python3TemplateTransformer,
CodeLanguage.JINJA2: Jinja2TemplateTransformer,
CodeLanguage.JAVASCRIPT: NodeJsTemplateTransformer,
}
@@ -48,8 +58,16 @@ class CodeExecutor:
CodeLanguage.PYTHON3: CodeLanguage.PYTHON3,
}
supported_dependencies_languages: set[CodeLanguage] = {
CodeLanguage.PYTHON3
}
@classmethod
def execute_code(cls, language: Literal['python3', 'javascript', 'jinja2'], preload: str, code: str) -> str:
def execute_code(cls,
language: Literal['python3', 'javascript', 'jinja2'],
preload: str,
code: str,
dependencies: Optional[list[CodeDependency]] = None) -> str:
"""
Execute code
:param language: code language
@@ -65,9 +83,13 @@ class CodeExecutor:
data = {
'language': cls.code_language_to_running_language.get(language),
'code': code,
'preload': preload
'preload': preload,
'enable_network': True
}
if dependencies:
data['dependencies'] = [dependency.dict() for dependency in dependencies]
try:
response = post(str(url), json=data, headers=headers, timeout=CODE_EXECUTION_TIMEOUT)
if response.status_code == 503:
@@ -77,7 +99,9 @@ class CodeExecutor:
except CodeExecutionException as e:
raise e
except Exception as e:
raise CodeExecutionException('Failed to execute code, this is likely a network issue, please check if the sandbox service is running')
raise CodeExecutionException('Failed to execute code, which is likely a network issue;'
' please check if the sandbox service is running.'
f' (Error: {str(e)})')
try:
response = response.json()
@@ -95,7 +119,7 @@ class CodeExecutor:
return response.data.stdout
@classmethod
def execute_workflow_code_template(cls, language: Literal['python3', 'javascript', 'jinja2'], code: str, inputs: dict) -> dict:
def execute_workflow_code_template(cls, language: Literal['python3', 'javascript', 'jinja2'], code: str, inputs: dict, dependencies: Optional[list[CodeDependency]] = None) -> dict:
"""
Execute code
:param language: code language
@@ -107,11 +131,67 @@ class CodeExecutor:
if not template_transformer:
raise CodeExecutionException(f'Unsupported language {language}')
runner, preload = template_transformer.transform_caller(code, inputs)
runner, preload, dependencies = template_transformer.transform_caller(code, inputs, dependencies)
try:
response = cls.execute_code(language, preload, runner)
response = cls.execute_code(language, preload, runner, dependencies)
except CodeExecutionException as e:
raise e
return template_transformer.transform_response(response)
return template_transformer.transform_response(response)
@classmethod
def list_dependencies(cls, language: str) -> list[CodeDependency]:
if language not in cls.supported_dependencies_languages:
return []
with cls.dependencies_cache_lock:
if language in cls.dependencies_cache:
# check expiration
dependencies = cls.dependencies_cache[language]
if dependencies['expiration'] > time.time():
return dependencies['data']
# remove expired cache
del cls.dependencies_cache[language]
dependencies = cls._get_dependencies(language)
with cls.dependencies_cache_lock:
cls.dependencies_cache[language] = {
'data': dependencies,
'expiration': time.time() + 60
}
return dependencies
@classmethod
def _get_dependencies(cls, language: Literal['python3']) -> list[CodeDependency]:
"""
List dependencies
"""
url = URL(CODE_EXECUTION_ENDPOINT) / 'v1' / 'sandbox' / 'dependencies'
headers = {
'X-Api-Key': CODE_EXECUTION_API_KEY
}
running_language = cls.code_language_to_running_language.get(language)
if isinstance(running_language, Enum):
running_language = running_language.value
data = {
'language': running_language,
}
try:
response = get(str(url), params=data, headers=headers, timeout=CODE_EXECUTION_TIMEOUT)
if response.status_code != 200:
raise Exception(f'Failed to list dependencies, got status code {response.status_code}, please check if the sandbox service is running')
response = response.json()
dependencies = response.get('data', {}).get('dependencies', [])
return [
CodeDependency(**dependency) for dependency in dependencies
if dependency.get('name') not in Python3TemplateTransformer.get_standard_packages()
]
except Exception as e:
logger.exception(f'Failed to list dependencies: {e}')
return []
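list_dependencies wraps the sandbox call in a lock-guarded, 60-second TTL cache; the pattern in isolation (the loader stands in for the HTTP call, names illustrative):
import time
from threading import Lock

_cache: dict = {}
_lock = Lock()

def cached(key: str, loader, ttl: int = 60) -> list:
    with _lock:
        entry = _cache.get(key)
        if entry and entry['expiration'] > time.time():
            return entry['data']
        _cache.pop(key, None)   # drop the expired entry
    data = loader()             # e.g. the GET /v1/sandbox/dependencies call above
    with _lock:
        _cache[key] = {'data': data, 'expiration': time.time() + ttl}
    return data

print(cached('python3', lambda: ['requests', 'numpy']))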

View File

@@ -0,0 +1,55 @@
from abc import abstractmethod
from pydantic import BaseModel
from core.helper.code_executor.code_executor import CodeExecutor
class CodeNodeProvider(BaseModel):
@staticmethod
@abstractmethod
def get_language() -> str:
pass
@classmethod
def is_accept_language(cls, language: str) -> bool:
return language == cls.get_language()
@classmethod
@abstractmethod
def get_default_code(cls) -> str:
"""
get default code in specific programming language for the code node
"""
pass
@classmethod
def get_default_available_packages(cls) -> list[dict]:
return [p.dict() for p in CodeExecutor.list_dependencies(cls.get_language())]
@classmethod
def get_default_config(cls) -> dict:
return {
"type": "code",
"config": {
"variables": [
{
"variable": "arg1",
"value_selector": []
},
{
"variable": "arg2",
"value_selector": []
}
],
"code_language": cls.get_language(),
"code": cls.get_default_code(),
"outputs": {
"result": {
"type": "string",
"children": None
}
}
},
"available_dependencies": cls.get_default_available_packages(),
}

View File

@@ -0,0 +1,6 @@
from pydantic import BaseModel
class CodeDependency(BaseModel):
name: str
version: str

View File

@@ -0,0 +1,21 @@
from textwrap import dedent
from core.helper.code_executor.code_executor import CodeLanguage
from core.helper.code_executor.code_node_provider import CodeNodeProvider
class JavascriptCodeProvider(CodeNodeProvider):
@staticmethod
def get_language() -> str:
return CodeLanguage.JAVASCRIPT
@classmethod
def get_default_code(cls) -> str:
return dedent(
"""
function main({arg1, arg2}) {
return {
result: arg1 + arg2
}
}
""")

View File

@@ -0,0 +1,25 @@
from textwrap import dedent
from core.helper.code_executor.template_transformer import TemplateTransformer
class NodeJsTemplateTransformer(TemplateTransformer):
@classmethod
def get_runner_script(cls) -> str:
runner_script = dedent(
f"""
// declare main function
{cls._code_placeholder}
// decode and prepare input object
var inputs_obj = JSON.parse(Buffer.from('{cls._inputs_placeholder}', 'base64').toString('utf-8'))
// execute main function
var output_obj = main(inputs_obj)
// convert output to json and print
var output_json = JSON.stringify(output_obj)
var result = `<<RESULT>>${{output_json}}<<RESULT>>`
console.log(result)
""")
return runner_script
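The runner embeds inputs as base64-encoded JSON and the sandbox echoes the result between <<RESULT>> markers; a sketch of the substitution step, assuming the {{code}} and {{inputs}} placeholders declared on TemplateTransformer below:
import json
from base64 import b64encode

# abbreviated stand-in for the runner template above
template = ("// declare main function\n{{code}}\n"
            "var inputs_obj = JSON.parse(Buffer.from('{{inputs}}', 'base64').toString('utf-8'))")
code = "function main({arg1}) { return { result: arg1 } }"
inputs_b64 = b64encode(json.dumps({'arg1': 1}, ensure_ascii=False).encode()).decode()
runner = template.replace('{{code}}', code).replace('{{inputs}}', inputs_b64)
print(runner)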

View File

@@ -1,54 +0,0 @@
import json
import re
from core.helper.code_executor.template_transformer import TemplateTransformer
NODEJS_RUNNER = """// declare main function here
{{code}}
// execute main function, and return the result
// inputs is a dict, unstructured inputs
output = main({{inputs}})
// convert output to json and print
output = JSON.stringify(output)
result = `<<RESULT>>${output}<<RESULT>>`
console.log(result)
"""
NODEJS_PRELOAD = """"""
class NodeJsTemplateTransformer(TemplateTransformer):
@classmethod
def transform_caller(cls, code: str, inputs: dict) -> tuple[str, str]:
"""
Transform code to python runner
:param code: code
:param inputs: inputs
:return:
"""
# transform inputs to json string
inputs_str = json.dumps(inputs, indent=4, ensure_ascii=False)
# replace code and inputs
runner = NODEJS_RUNNER.replace('{{code}}', code)
runner = runner.replace('{{inputs}}', inputs_str)
return runner, NODEJS_PRELOAD
@classmethod
def transform_response(cls, response: str) -> dict:
"""
Transform response to dict
:param response: response
:return:
"""
# extract result
result = re.search(r'<<RESULT>>(.*)<<RESULT>>', response, re.DOTALL)
if not result:
raise ValueError('Failed to parse result')
result = result.group(1)
return json.loads(result)

View File

@@ -0,0 +1,17 @@
from core.helper.code_executor.code_executor import CodeExecutor, CodeLanguage
class Jinja2Formatter:
@classmethod
def format(cls, template: str, inputs: dict) -> str:
"""
Format template
:param template: jinja2 template string
:param inputs: template variables
:return: rendered string
"""
result = CodeExecutor.execute_workflow_code_template(
language=CodeLanguage.JINJA2, code=template, inputs=inputs
)
return result['result']

View File

@@ -0,0 +1,64 @@
from textwrap import dedent
from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer
from core.helper.code_executor.template_transformer import TemplateTransformer
class Jinja2TemplateTransformer(TemplateTransformer):
@classmethod
def get_standard_packages(cls) -> set[str]:
return {'jinja2'} | Python3TemplateTransformer.get_standard_packages()
@classmethod
def transform_response(cls, response: str) -> dict:
"""
Transform response to dict
:param response: response
:return:
"""
return {
'result': cls.extract_result_str_from_response(response)
}
@classmethod
def get_runner_script(cls) -> str:
runner_script = dedent(f"""
# declare main function
def main(**inputs):
import jinja2
template = jinja2.Template('''{cls._code_placeholder}''')
return template.render(**inputs)
import json
from base64 import b64decode
# decode and prepare input dict
inputs_obj = json.loads(b64decode('{cls._inputs_placeholder}').decode('utf-8'))
# execute main function
output = main(**inputs_obj)
# convert output and print
result = f'''<<RESULT>>{{output}}<<RESULT>>'''
print(result)
""")
return runner_script
@classmethod
def get_preload_script(cls) -> str:
preload_script = dedent("""
import jinja2
from base64 import b64decode
def _jinja2_preload_():
# prepare jinja2 environment, load template and render before to avoid sandbox issue
template = jinja2.Template('{{s}}')
template.render(s='a')
if __name__ == '__main__':
_jinja2_preload_()
""")
return preload_script
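The preload renders a trivial template once so jinja2's compilation work happens before the sandboxed run; the warm-up in isolation (requires jinja2 installed):
import jinja2

# same warm-up the preload script performs
template = jinja2.Template('{{ s }}')
print(template.render(s='a'))  # prints: a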

View File

@@ -1,92 +0,0 @@
import json
import re
from base64 import b64encode
from core.helper.code_executor.template_transformer import TemplateTransformer
PYTHON_RUNNER = """
import jinja2
from json import loads
from base64 import b64decode
template = jinja2.Template('''{{code}}''')
def main(**inputs):
return template.render(**inputs)
# execute main function, and return the result
inputs = b64decode('{{inputs}}').decode('utf-8')
output = main(**loads(inputs))
result = f'''<<RESULT>>{output}<<RESULT>>'''
print(result)
"""
JINJA2_PRELOAD_TEMPLATE = """{% set fruits = ['Apple'] %}
{{ 'a' }}
{% for fruit in fruits %}
<li>{{ fruit }}</li>
{% endfor %}
{% if fruits|length > 1 %}
1
{% endif %}
{% for i in range(5) %}
{% if i == 3 %}{{ i }}{% else %}{% endif %}
{% endfor %}
{% for i in range(3) %}
{{ i + 1 }}
{% endfor %}
{% macro say_hello() %}a{{ 'b' }}{% endmacro %}
{{ s }}{{ say_hello() }}"""
JINJA2_PRELOAD = f"""
import jinja2
from base64 import b64decode
def _jinja2_preload_():
# prepare jinja2 environment, load template and render before to avoid sandbox issue
template = jinja2.Template('''{JINJA2_PRELOAD_TEMPLATE}''')
template.render(s='a')
if __name__ == '__main__':
_jinja2_preload_()
"""
class Jinja2TemplateTransformer(TemplateTransformer):
@classmethod
def transform_caller(cls, code: str, inputs: dict) -> tuple[str, str]:
"""
Transform code to python runner
:param code: code
:param inputs: inputs
:return:
"""
inputs_str = b64encode(json.dumps(inputs, ensure_ascii=False).encode()).decode('utf-8')
# transform jinja2 template to python code
runner = PYTHON_RUNNER.replace('{{code}}', code)
runner = runner.replace('{{inputs}}', inputs_str)
return runner, JINJA2_PRELOAD
@classmethod
def transform_response(cls, response: str) -> dict:
"""
Transform response to dict
:param response: response
:return:
"""
# extract result
result = re.search(r'<<RESULT>>(.*)<<RESULT>>', response, re.DOTALL)
if not result:
raise ValueError('Failed to parse result')
result = result.group(1)
return {
'result': result
}

View File

@@ -0,0 +1,20 @@
from textwrap import dedent
from core.helper.code_executor.code_executor import CodeLanguage
from core.helper.code_executor.code_node_provider import CodeNodeProvider
class Python3CodeProvider(CodeNodeProvider):
@staticmethod
def get_language() -> str:
return CodeLanguage.PYTHON3
@classmethod
def get_default_code(cls) -> str:
return dedent(
"""
def main(arg1: int, arg2: int) -> dict:
return {
"result": arg1 + arg2,
}
""")

View File

@@ -0,0 +1,51 @@
from textwrap import dedent

from core.helper.code_executor.template_transformer import TemplateTransformer

class Python3TemplateTransformer(TemplateTransformer):
    @classmethod
    def get_standard_packages(cls) -> set[str]:
        return {
            'base64',
            'binascii',
            'collections',
            'datetime',
            'functools',
            'hashlib',
            'hmac',
            'itertools',
            'json',
            'math',
            'operator',
            'os',
            'random',
            're',
            'string',
            'sys',
            'time',
            'traceback',
            'uuid',
        }

    @classmethod
    def get_runner_script(cls) -> str:
        runner_script = dedent(f"""
            # declare main function
            {cls._code_placeholder}

            import json
            from base64 import b64decode

            # decode and prepare input dict
            inputs_obj = json.loads(b64decode('{cls._inputs_placeholder}').decode('utf-8'))

            # execute main function
            output_obj = main(**inputs_obj)

            # convert output to json and print
            output_json = json.dumps(output_obj, indent=4)
            result = f'''<<RESULT>>{{output_json}}<<RESULT>>'''
            print(result)
            """)
        return runner_script
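
As with the Jinja2 transformer, a stand-alone rehearsal of the assembled Python 3 runner (`user_code` and the inputs are illustrative, not part of the diff):

import json
from base64 import b64encode

user_code = '''
def main(arg1: int, arg2: int) -> dict:
    return {"result": arg1 + arg2}
'''
inputs_b64 = b64encode(json.dumps({'arg1': 1, 'arg2': 2}).encode()).decode('utf-8')

# substitute the code and inputs placeholders by hand
script = f"""
{user_code}
import json
from base64 import b64decode

inputs_obj = json.loads(b64decode('{inputs_b64}').decode('utf-8'))
output_obj = main(**inputs_obj)
output_json = json.dumps(output_obj, indent=4)
print(f'''<<RESULT>>{{output_json}}<<RESULT>>''')
"""
exec(script)  # prints the pretty-printed JSON {"result": 3} wrapped in <<RESULT>> tags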

View File

@@ -1,82 +0,0 @@
import json
import re
from base64 import b64encode

from core.helper.code_executor.template_transformer import TemplateTransformer

PYTHON_RUNNER = """# declare main function here
{{code}}

from json import loads, dumps
from base64 import b64decode

# execute main function, and return the result
# inputs is a base64-encoded JSON dict
inputs = b64decode('{{inputs}}').decode('utf-8')
output = main(**loads(inputs))

# convert output to json and print
output = dumps(output, indent=4)

result = f'''<<RESULT>>
{output}
<<RESULT>>'''

print(result)
"""

PYTHON_PRELOAD = """
# prepare general imports
import json
import datetime
import math
import random
import re
import string
import sys
import time
import traceback
import uuid
import os
import base64
import hashlib
import hmac
import binascii
import collections
import functools
import operator
import itertools
"""

class PythonTemplateTransformer(TemplateTransformer):
    @classmethod
    def transform_caller(cls, code: str, inputs: dict) -> tuple[str, str]:
        """
        Transform user code into a Python runner script
        :param code: user code declaring main()
        :param inputs: inputs passed to main()
        :return: runner script, preload script
        """
        # transform inputs to json string
        inputs_str = b64encode(json.dumps(inputs, ensure_ascii=False).encode()).decode('utf-8')

        # replace code and inputs
        runner = PYTHON_RUNNER.replace('{{code}}', code)
        runner = runner.replace('{{inputs}}', inputs_str)

        return runner, PYTHON_PRELOAD

    @classmethod
    def transform_response(cls, response: str) -> dict:
        """
        Transform the sandbox response into a dict
        :param response: raw sandbox output
        :return: parsed JSON result
        """
        # extract result
        result = re.search(r'<<RESULT>>(.*?)<<RESULT>>', response, re.DOTALL)
        if not result:
            raise ValueError('Failed to parse result')
        result = result.group(1)
        return json.loads(result)
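
Note the non-greedy `(.*?)` here versus the greedy `(.*)` in the Jinja2 transformer: when a sandboxed script happens to print the result tag more than once, the two patterns disagree (illustrative check):

import re

noisy = '<<RESULT>>{"a": 1}<<RESULT>>\nlog line\n<<RESULT>>{"b": 2}<<RESULT>>'
greedy = re.search(r'<<RESULT>>(.*)<<RESULT>>', noisy, re.DOTALL).group(1)
lazy = re.search(r'<<RESULT>>(.*?)<<RESULT>>', noisy, re.DOTALL).group(1)
assert lazy == '{"a": 1}'
assert greedy == '{"a": 1}<<RESULT>>\nlog line\n<<RESULT>>{"b": 2}'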

View File

@@ -1,24 +1,87 @@
+import json
+import re
 from abc import ABC, abstractmethod
+from base64 import b64encode
+from typing import Optional
+
+from pydantic import BaseModel
+
+from core.helper.code_executor.entities import CodeDependency

-class TemplateTransformer(ABC):
+class TemplateTransformer(ABC, BaseModel):
+    _code_placeholder: str = '{{code}}'
+    _inputs_placeholder: str = '{{inputs}}'
+    _result_tag: str = '<<RESULT>>'
+
     @classmethod
-    @abstractmethod
-    def transform_caller(cls, code: str, inputs: dict) -> tuple[str, str]:
+    def get_standard_packages(cls) -> set[str]:
+        return set()
+
+    @classmethod
+    def transform_caller(cls, code: str, inputs: dict,
+                         dependencies: Optional[list[CodeDependency]] = None) -> tuple[str, str, list[CodeDependency]]:
         """
         Transform code to python runner
         :param code: code
         :param inputs: inputs
-        :return: runner, preload
+        :return: runner, preload, dependencies
         """
-        pass
+        runner_script = cls.assemble_runner_script(code, inputs)
+        preload_script = cls.get_preload_script()
+
+        packages = dependencies or []
+        standard_packages = cls.get_standard_packages()
+        for package in standard_packages:
+            if package not in packages:
+                packages.append(CodeDependency(name=package, version=''))
+        packages = list({dep.name: dep for dep in packages if dep.name}.values())
+
+        return runner_script, preload_script, packages
+
+    @classmethod
+    def extract_result_str_from_response(cls, response: str) -> str:
+        result = re.search(rf'{cls._result_tag}(.*){cls._result_tag}', response, re.DOTALL)
+        if not result:
+            raise ValueError('Failed to parse result')
+        result = result.group(1)
+        return result

     @classmethod
-    @abstractmethod
     def transform_response(cls, response: str) -> dict:
         """
         Transform response to dict
         :param response: response
         :return:
         """
-        pass
+        return json.loads(cls.extract_result_str_from_response(response))
+
+    @classmethod
+    @abstractmethod
+    def get_runner_script(cls) -> str:
+        """
+        Get runner script
+        """
+        pass
+
+    @classmethod
+    def serialize_inputs(cls, inputs: dict) -> str:
+        inputs_json_str = json.dumps(inputs, ensure_ascii=False).encode()
+        input_base64_encoded = b64encode(inputs_json_str).decode('utf-8')
+        return input_base64_encoded
+
+    @classmethod
+    def assemble_runner_script(cls, code: str, inputs: dict) -> str:
+        # assemble runner script
+        script = cls.get_runner_script()
+        script = script.replace(cls._code_placeholder, code)
+        inputs_str = cls.serialize_inputs(inputs)
+        script = script.replace(cls._inputs_placeholder, inputs_str)
+        return script
+
+    @classmethod
+    def get_preload_script(cls) -> str:
+        """
+        Get preload script
+        """
+        return ''
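
One detail worth noting in transform_caller: `package not in packages` compares a package name (a str) against CodeDependency objects, so it never matches and every standard package gets appended; the dict comprehension that follows does the real deduplication, keyed by name and keeping the last entry seen. A stand-alone check, with CodeDependency re-declared here as a stand-in for core.helper.code_executor.entities:

from pydantic import BaseModel

class CodeDependency(BaseModel):  # stand-in for the real entity
    name: str
    version: str = ''

packages = [CodeDependency(name='json', version='2.0.9'), CodeDependency(name='re')]
packages.append(CodeDependency(name='json', version=''))  # what the loop above ends up doing
packages = list({dep.name: dep for dep in packages if dep.name}.values())
assert [(p.name, p.version) for p in packages] == [('json', ''), ('re', '')]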

View File

@@ -42,6 +42,20 @@ def delete(url, *args, **kwargs):
         if kwargs['follow_redirects']:
             kwargs['allow_redirects'] = kwargs['follow_redirects']
         kwargs.pop('follow_redirects')
+    if 'timeout' in kwargs:
+        timeout = kwargs['timeout']
+        if timeout is None:
+            kwargs.pop('timeout')
+        elif isinstance(timeout, tuple):
+            # check length of tuple
+            if len(timeout) == 2:
+                kwargs['timeout'] = timeout
+            elif len(timeout) == 1:
+                kwargs['timeout'] = timeout[0]
+            elif len(timeout) > 2:
+                kwargs['timeout'] = (timeout[0], timeout[1])
+        else:
+            kwargs['timeout'] = (timeout, timeout)
     return _delete(url=url, *args, proxies=requests_proxies, **kwargs)

 def head(url, *args, **kwargs):
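
The added block accepts whatever httpx-style `timeout` the caller passed and coerces it into a requests-compatible value. A stand-alone rehearsal of the same branching (the helper name is illustrative, not part of the diff):

def normalize_timeout(timeout):
    if timeout is None:
        return None                        # omit the kwarg entirely
    if isinstance(timeout, tuple):
        if len(timeout) == 1:
            return timeout[0]              # single-element tuple collapses to a scalar
        if len(timeout) > 2:
            return (timeout[0], timeout[1])  # keep only (connect, read)
        return timeout                     # 2-tuple passes through unchanged
    return (timeout, timeout)              # scalar applies to both connect and read

assert normalize_timeout(5) == (5, 5)
assert normalize_timeout((3, 10)) == (3, 10)
assert normalize_timeout((7,)) == 7
assert normalize_timeout((1, 2, 3)) == (1, 2)
assert normalize_timeout(None) is None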

View File

@@ -411,14 +411,15 @@ class IndexingRunner:
             # The user-defined segmentation rule
             rules = json.loads(processing_rule.rules)
             segmentation = rules["segmentation"]
-            if segmentation["max_tokens"] < 50 or segmentation["max_tokens"] > 1000:
-                raise ValueError("Custom segment length should be between 50 and 1000.")
+            max_segmentation_tokens_length = int(current_app.config['INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH'])
+            if segmentation["max_tokens"] < 50 or segmentation["max_tokens"] > max_segmentation_tokens_length:
+                raise ValueError(f"Custom segment length should be between 50 and {max_segmentation_tokens_length}.")

             separator = segmentation["separator"]
             if separator:
                 separator = separator.replace('\\n', '\n')

-            if 'chunk_overlap' in segmentation and segmentation['chunk_overlap']:
+            if segmentation.get('chunk_overlap'):
                 chunk_overlap = segmentation['chunk_overlap']
             else:
                 chunk_overlap = 0
@@ -427,7 +428,7 @@
                 chunk_size=segmentation["max_tokens"],
                 chunk_overlap=chunk_overlap,
                 fixed_separator=separator,
-                separators=["\n\n", "。", ".", " ", ""],
+                separators=["\n\n", "。", ". ", " ", ""],
                 embedding_model_instance=embedding_model_instance
             )
         else:
@@ -435,7 +436,7 @@
             character_splitter = EnhanceRecursiveCharacterTextSplitter.from_encoder(
                 chunk_size=DatasetProcessRule.AUTOMATIC_RULES['segmentation']['max_tokens'],
                 chunk_overlap=DatasetProcessRule.AUTOMATIC_RULES['segmentation']['chunk_overlap'],
-                separators=["\n\n", "。", ".", " ", ""],
+                separators=["\n\n", "。", ". ", " ", ""],
                 embedding_model_instance=embedding_model_instance
             )
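
The separator change from '.' to '. ' keeps sentence splitting from firing inside decimals and version numbers. A plain-Python illustration of the difference (not the project's splitter):

text = "Pi is roughly 3.14. It is irrational."
assert text.split('.') == ['Pi is roughly 3', '14', ' It is irrational', '']
assert text.split('. ') == ['Pi is roughly 3.14', 'It is irrational.']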

View File

@@ -1,4 +1,3 @@
-from abc import ABC
 from typing import Optional

 from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
@@ -14,7 +13,7 @@
 }

-class Callback(ABC):
+class Callback:
     """
     Base class for callbacks.
     Only for LLM.

View File

@@ -51,7 +51,7 @@
 - `voices` (list) List of available voices. (available for model type `tts`)
 - `mode` (string) Voice model. (available for model type `tts`)
 - `name` (string) Voice model display name. (available for model type `tts`)
-- `lanuage` (string) Languages supported by the voice model. (available for model type `tts`)
+- `language` (string) Languages supported by the voice model. (available for model type `tts`)
 - `word_limit` (int) Single-conversion word limit, paragraph-wise by default. (available for model type `tts`)
 - `audio_type` (string) Supported audio file extensions, e.g. mp3, wav. (available for model type `tts`)
 - `max_workers` (int) Number of concurrent workers for text-to-audio conversion. (available for model type `tts`)

View File

@@ -52,7 +52,7 @@
 - `voices` (list) List of selectable voices. (available for model type `tts`)
 - `mode` (string) Voice model. (available for model type `tts`)
 - `name` (string) Voice model display name. (available for model type `tts`)
-- `lanuage` (string) Languages supported by the voice model. (available for model type `tts`)
+- `language` (string) Languages supported by the voice model. (available for model type `tts`)
 - `word_limit` (int) Single-conversion word limit; splits by paragraph by default. (available for model type `tts`)
 - `audio_type` (string) Supported audio file extensions, e.g. mp3, wav. (available for model type `tts`)
 - `max_workers` (int) Number of concurrent text-to-audio conversion tasks. (available for model type `tts`)

Some files were not shown because too many files have changed in this diff.