Compare commits


294 Commits

Author SHA1 Message Date
-LAN-
ce774c6d15 release(update): bump version to 0.10.0-beta2
- Update CURRENT_VERSION in configuration settings
- Update Docker images to use version 0.10.0-beta2
- Change web package version in package.json
2024-10-01 13:58:37 +08:00
-LAN-
6bb06dda90 feat(ci): add release branch to build-push workflow
- Include `release/0.10.0-beta` in branches triggering build and push jobs
- Ensure automatic builds for the new beta release branch
2024-10-01 13:30:36 +08:00
-LAN-
26c8beb804 refactor(identity): update model identity key
- Change "model_identity" to "dify_model_identity" for consistency
- Adjust file validation logic to use updated key in various components and tests
2024-10-01 13:21:58 +08:00
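The renamed key is the marker the file-validation logic uses to recognize serialized file objects. A minimal sketch of the idea, with everything except the key string `dify_model_identity` assumed rather than taken from the repository:

```python
FILE_IDENTITY_KEY = "dify_model_identity"  # was "model_identity"


def looks_like_dify_file(value: object) -> bool:
    # Namespacing the key with "dify_" makes a collision with ordinary
    # user dictionaries that happen to carry a generic "model_identity"
    # field far less likely.
    return isinstance(value, dict) and FILE_IDENTITY_KEY in value
```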
-LAN-
5c64792793 refactor(models, helper): update identifiers and proxy transport code
- Renamed `model_identity` to `dify_model_identity` for clearer specificity
- Added `proxy` parameter to `httpx.HTTPTransport` for SSRF protection adjustments
2024-10-01 13:08:06 +08:00
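`httpx.HTTPTransport` accepts a `proxy` argument, which is the hook for routing all outbound requests through a sanitizing forward proxy, a common SSRF mitigation. A minimal sketch, assuming a hypothetical proxy endpoint (the actual proxy configuration is not shown here):

```python
import httpx

SSRF_PROXY_URL = "http://ssrf-proxy:3128"  # hypothetical proxy endpoint

# Every request goes through the proxy, so the application never opens a
# direct connection to a user-supplied host.
transport = httpx.HTTPTransport(proxy=SSRF_PROXY_URL)

with httpx.Client(transport=transport) as client:
    response = client.get("https://example.com")
    print(response.status_code)
```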
-LAN-
afb3e317d3 Merge branch 'feat/attachments' into release/0.10.0-beta1 2024-09-30 23:34:54 +08:00
-LAN-
1e62ad23e6 chore(version): bump to 0.10.0-beta1
- Update version in packaging configuration
- Change docker images to new version in docker-compose files
- Update web package.json with new version
2024-09-30 23:34:04 +08:00
-LAN-
f9baabc9f9 fix(tests): handle invalid variable type for document_extractor_node
- Imported StringVariable to properly mock an invalid variable type.
- Enhanced test case `test_run_invalid_variable_type` to use StringVariable instead of a plain string.
2024-09-30 23:32:51 +08:00
-LAN-
94b946c715 fix(document_extractor): handle empty variable values properly
- Added check for variable.value existence before type validation.
- Prevents erroneous type checks on empty variables, improving robustness.
2024-09-30 23:32:51 +08:00
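The guard itself is simple; a self-contained sketch of the pattern (the class and error names are illustrative, not the repository's actual ones):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Variable:
    name: str
    value: Any


class InvalidVariableError(ValueError):
    pass


def extract_text(variable: Variable) -> str:
    # Check for presence first: an empty value used to fall straight
    # through to the type check below and surface as a confusing
    # type error.
    if not variable.value:
        raise InvalidVariableError(f"variable '{variable.name}' has no value")
    if not isinstance(variable.value, (str, bytes)):
        raise InvalidVariableError(
            f"variable '{variable.name}' has unsupported type "
            f"{type(variable.value).__name__}"
        )
    return variable.value.decode() if isinstance(variable.value, bytes) else variable.value
```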
-LAN-
3ff88f4f3c refactor(api): consolidate allowed extensions handling
- Unified allowed extensions into a single `DOCUMENT_EXTENSIONS` reference
- Adjusted checks and imports in controllers and services to use the new constant
- Enhanced text extraction to support additional file types (EPUB, EML, MSG)
2024-09-30 23:32:51 +08:00
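A sketch of the consolidation pattern; the layout and the exact extension list are assumptions (the commit only confirms the `DOCUMENT_EXTENSIONS` name and the newly supported EPUB, EML, and MSG types):

```python
# Single source of truth for permitted document types.
DOCUMENT_EXTENSIONS = [
    "txt", "markdown", "md", "pdf", "html",
    "xlsx", "docx", "csv",
    "epub", "eml", "msg",  # added by the commit above
]


def is_allowed_document(filename: str) -> bool:
    # Controllers and services call this one helper instead of keeping
    # their own copies of the allowed-extension list.
    extension = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return extension in DOCUMENT_EXTENSIONS


print(is_allowed_document("report.epub"))  # True
print(is_allowed_document("archive.zip"))  # False
```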
-LAN-
e7f425be91 fix(node): correct conditional checks in list filter node
- Added check for variable value before type checking in ListFilterNode to prevent unnecessary errors.
- Fixed incorrect workflow feature path in AppDslService to ensure correct data fetching.
2024-09-30 23:32:51 +08:00
-LAN-
584053bc01 feat(api): Enhance multi modal support. 2024-09-30 23:32:51 +08:00
JzoNg
44e81dbbc8 Merge branch 'main' into jzh 2024-09-30 17:19:35 +08:00
Joel
944cfd2b68 chore: merge main 2024-09-30 16:43:06 +08:00
Joel
6d2682c751 fix: help link url 2024-09-30 16:36:16 +08:00
StyleZhang
d2971e84bb fix: node handle tooltip 2024-09-30 15:23:04 +08:00
StyleZhang
b67b81bf8f fix: file upload tip 2024-09-30 14:52:16 +08:00
Joel
c05902404d chore: add help link 2024-09-30 14:00:51 +08:00
Joel
1e6d5f2c48 feat: add support doc extract file types 2024-09-30 13:56:24 +08:00
JzoNg
e2b1464db2 fix features update by DSL import 2024-09-29 17:15:39 +08:00
JzoNg
f0285a53d2 fix style of textarea in retrieval test 2024-09-29 15:53:10 +08:00
Joel
00f91b5dc4 chore: list filter i18n 2024-09-29 14:57:38 +08:00
StyleZhang
dc3e86b82a fix: embedded chat file form 2024-09-29 14:56:13 +08:00
Joel
d239c5b54d fix: list filter first name 2024-09-29 14:47:02 +08:00
StyleZhang
23abccd3a6 fix: lint 2024-09-29 14:43:17 +08:00
JzoNg
2520e40059 fix: feature panel content height in safari 2024-09-29 14:17:27 +08:00
JzoNg
1d4ed3d9e7 fix: feature panel content height in safari 2024-09-29 14:06:07 +08:00
JzoNg
eed8ab9348 fix style of conversation variable 2024-09-29 11:58:23 +08:00
JzoNg
112aaf6e1b fix completion 2024-09-29 11:26:35 +08:00
StyleZhang
094a1a1458 remove chat thumb 2024-09-29 10:49:00 +08:00
JzoNg
955fa4345a fix output of run 2024-09-27 17:37:26 +08:00
Joel
ac5e381a1a chore: change list filter writing 2024-09-27 17:09:12 +08:00
Joel
ae9b9f867a chore: add new node description 2024-09-27 16:37:14 +08:00
JzoNg
8fd04e5313 merge main 2024-09-27 16:27:00 +08:00
Joel
3904782647 fix: add upload i18n 2024-09-27 11:16:02 +08:00
StyleZhang
288be3fbd8 fix: chat message end 2024-09-27 10:30:36 +08:00
JzoNg
f7f836d6f1 fix workflow output 2024-09-26 17:16:08 +08:00
JzoNg
5dedcb74a5 fix form of chat in webapp 2024-09-26 17:01:17 +08:00
StyleZhang
b95d0fa9a9 fix: file upload limits in web app 2024-09-26 16:41:44 +08:00
StyleZhang
543503c398 fix: file progress 2024-09-26 16:33:37 +08:00
JzoNg
3f16caf244 show file list in generation result 2024-09-26 16:10:28 +08:00
JzoNg
54133dfbde files in log 2024-09-26 15:50:19 +08:00
StyleZhang
b491c93b1c image download 2024-09-26 15:49:53 +08:00
StyleZhang
2a6d9c3211 fix: chat send 2024-09-26 15:33:01 +08:00
StyleZhang
c6691bd297 fix: webapp chat embedded chat 2024-09-26 15:26:53 +08:00
StyleZhang
2a0b30de5c fix: image.enable 2024-09-26 14:39:56 +08:00
StyleZhang
a7d53abba9 webapp chat embedded chat 2024-09-26 13:47:36 +08:00
StyleZhang
296253a365 debug chat 2024-09-26 11:54:42 +08:00
Joel
c89cefe526 chore: remove log 2024-09-26 11:50:52 +08:00
StyleZhang
1d027fa065 fix: chat check inputs form 2024-09-26 11:07:08 +08:00
Joel
9ce9a52a86 fix: text not string 2024-09-26 10:15:20 +08:00
JzoNg
c74424ed85 fix text-generation files 2024-09-26 08:45:50 +08:00
JzoNg
719ef9cef9 text generation run 2024-09-25 20:34:48 +08:00
StyleZhang
0ab525a691 fix: file size 2024-09-25 16:38:20 +08:00
StyleZhang
6fdcf6ee21 file-uploader 2024-09-25 16:24:02 +08:00
Joel
d01e97c1fc fix: tiny select ui 2024-09-25 15:44:02 +08:00
Joel
87e560de8a chore: type value change to array 2024-09-25 15:19:34 +08:00
JzoNg
f8d26e46ac text-generation support file type 2024-09-25 15:01:56 +08:00
Joel
195ac19774 chore: list filter transfer value to array 2024-09-25 13:41:56 +08:00
Joel
0281eb796d chore: transfer value to string array 2024-09-25 12:59:19 +08:00
StyleZhang
9fe2f321ae fix: chatflow check start node form 2024-09-25 12:21:53 +08:00
StyleZhang
5f76e665a1 fix: file extension 2024-09-25 11:24:14 +08:00
StyleZhang
81568752c0 fix: file from link 2024-09-24 17:47:01 +08:00
JzoNg
ceb1dde714 Merge branch 'main' into jzh 2024-09-24 17:30:56 +08:00
JzoNg
3209fdca53 legacy for sys.files 2024-09-24 17:08:23 +08:00
JzoNg
dc5010d833 fix step run of file type 2024-09-24 16:24:46 +08:00
Joel
8b26ae6532 fix: http file node not added 2024-09-24 15:18:39 +08:00
Joel
66953d57a2 chore: use new add sub variables 2024-09-24 13:56:06 +08:00
StyleZhang
afc9630cd0 merge main 2024-09-24 11:44:38 +08:00
Joel
7e8bafe186 feat: support file size unit 2024-09-24 11:12:41 +08:00
Joel
6c5fcd1ffc fix: all of not show the right place 2024-09-24 10:55:37 +08:00
Joel
7602d22133 chore: files name 2024-09-23 18:33:14 +08:00
Joel
5ec91e8507 feat: if node render files sub vars 2024-09-23 18:25:11 +08:00
StyleZhang
466966f027 file input error tip 2024-09-23 17:27:06 +08:00
JzoNg
212d04ea27 form inputs hide handle 2024-09-23 10:25:35 +08:00
StyleZhang
0cb50dd4a5 fix: workflow inputs panel 2024-09-20 18:00:35 +08:00
StyleZhang
ab19fccf3d single file 2024-09-20 17:20:28 +08:00
StyleZhang
4ed46e3fed fix: uploader 2024-09-20 15:44:49 +08:00
Joel
9fd2f798ff feat: add type placeholder in picker 2024-09-20 15:19:38 +08:00
Joel
146be41b1d fix: if show var types 2024-09-20 14:59:46 +08:00
Joel
ce6ae5732a fix: not show the right output type 2024-09-20 14:42:53 +08:00
StyleZhang
edf462c640 fix: uploader 2024-09-20 12:08:03 +08:00
JzoNg
d580fc1e9d fix features initialization 2024-09-20 11:03:45 +08:00
Joel
5544791031 feat: doc extract support both file and file array 2024-09-19 17:55:58 +08:00
JzoNg
099746dd59 fix style of tracing 2024-09-19 17:40:27 +08:00
StyleZhang
c6f53c9030 Merge branch 'main' into feat/attachments 2024-09-19 17:11:28 +08:00
StyleZhang
8236f8fed8 fix: file uploader 2024-09-19 17:08:02 +08:00
StyleZhang
2b0c39ed3f file in chat question 2024-09-19 16:55:14 +08:00
JzoNg
396a240e68 test run 2024-09-19 16:49:30 +08:00
JzoNg
8bd9d8f6ba remove chat input 2024-09-19 15:45:30 +08:00
JzoNg
aa7ae4c5f1 chore: remove console 2024-09-19 15:45:30 +08:00
StyleZhang
49b7acf52e fix: file uploader 2024-09-19 14:52:22 +08:00
StyleZhang
466ac987f5 fix: file type icon 2024-09-19 11:12:18 +08:00
StyleZhang
49972939a9 file icon 2024-09-19 11:06:38 +08:00
StyleZhang
80f167ca02 file upload limit 2024-09-18 18:12:03 +08:00
JzoNg
f652ae0d98 step run 2024-09-18 18:07:03 +08:00
Joel
4dbf56675a fix: change to backend doc extractor 2024-09-18 17:35:14 +08:00
StyleZhang
f5d1f5a20a file uploader 2024-09-18 16:50:53 +08:00
StyleZhang
fd9b71c4d7 file uploader 2024-09-18 16:36:55 +08:00
Joel
1df41cef4c fix: doc extract not export text 2024-09-18 15:23:53 +08:00
Joel
602d2486bd fix: iteration file array type 2024-09-18 15:03:02 +08:00
JzoNg
403fede432 fix basic app publish 2024-09-18 14:01:20 +08:00
JzoNg
9f66e6e357 fix feature bar in basic chatbot 2024-09-18 13:36:11 +08:00
JzoNg
affb2e38a1 fix typo of icon path 2024-09-18 13:14:18 +08:00
Joel
31d87f85b8 merge 2024-09-18 11:57:48 +08:00
JzoNg
54105e85ff fix icon 2024-09-18 11:21:38 +08:00
Joel
5ec604500c chore: change field to backend 2024-09-18 11:15:36 +08:00
JzoNg
96d2582d89 file var in form 2024-09-18 10:44:59 +08:00
JzoNg
a10b0db102 vision setting 2024-09-18 10:44:59 +08:00
StyleZhang
5dd556b4c8 file uploader 2024-09-13 17:31:10 +08:00
StyleZhang
a4c6d0b94b file uploader 2024-09-13 16:46:16 +08:00
StyleZhang
323a835de9 Merge branch 'main' into feat/attachments 2024-09-12 13:53:41 +08:00
StyleZhang
0076577764 file uploader 2024-09-11 18:25:49 +08:00
Joel
9a3b7345c4 fix: split line too long 2024-09-11 14:40:38 +08:00
StyleZhang
2ebf5f5ffa merge main 2024-09-11 13:40:36 +08:00
StyleZhang
02f494c0de merge main 2024-09-10 16:38:32 +08:00
Joel
f0e81e3918 fix: doc extract and node isconversation item 2024-09-10 15:22:44 +08:00
Joel
aa8499efac fix: doc extract var type 2024-09-10 15:16:27 +08:00
Joel
ea40b1dcb2 fix: two scrollbar 2024-09-10 15:05:54 +08:00
Joel
a689cd6fd4 chore: var reference support theme 2024-09-10 14:49:27 +08:00
StyleZhang
32b6c7063a file uploader 2024-09-10 14:17:56 +08:00
Joel
97056dad30 fix: file type var match page crash 2024-09-10 11:54:34 +08:00
Joel
264f7c2139 fix: file show type error 2024-09-10 10:59:13 +08:00
Joel
007a6fd14a chore: other file types placeholder add + 2024-09-09 15:24:50 +08:00
Joel
c159b7a781 chore: transform field type css to tailwind and multi theme 2024-09-09 15:15:08 +08:00
Joel
6c9c3faf78 fix: allow file extensions remove . 2024-09-09 14:36:56 +08:00
StyleZhang
d933ebb845 file input 2024-09-09 11:27:35 +08:00
JzoNg
b60c7a5826 vision config 2024-09-04 15:45:33 +08:00
JzoNg
0b94218378 remove unused components 2024-09-04 11:47:29 +08:00
Joel
97cc9a5615 feat: check file item key not set 2024-09-04 11:28:36 +08:00
Joel
f6d0fd9848 feat: add check list filter value 2024-09-04 11:19:09 +08:00
Joel
b863dd7de2 fix: list filter init value 2024-09-04 10:47:26 +08:00
JzoNg
b0e7a22a27 annotation reply 2024-09-03 17:19:11 +08:00
JzoNg
565a835947 conversation opener 2024-09-01 15:00:23 +08:00
JzoNg
fe94c876fb multiple model message sending 2024-09-01 13:37:21 +08:00
JzoNg
67a34bdd7a app publish with new features 2024-09-01 13:10:40 +08:00
JzoNg
8c785e268b completion debug & preview 2024-09-01 11:24:54 +08:00
JzoNg
65a6265ff6 new features in chat app configuration 2024-08-30 18:58:08 +08:00
Joel
08d3cb1912 fix: filter file and file sub variable 2024-08-30 17:25:32 +08:00
Joel
48d8b01d81 fix: http node value var rename 2024-08-30 15:30:12 +08:00
Joel
38edb06897 feat: list filter output 2024-08-30 15:21:37 +08:00
Joel
dc919c2a6c feat: output item var type and filter condition trigger 2024-08-30 15:06:57 +08:00
Joel
e7a6a0ab01 chore: list filter operate ui 2024-08-30 14:44:16 +08:00
Joel
61d989f413 feat: support order change 2024-08-30 14:34:46 +08:00
Joel
976efd93a1 feat: support filter variable var data sync 2024-08-30 14:14:42 +08:00
JzoNg
0e2f78b3a6 features in workflow 2024-08-29 22:54:36 +08:00
JzoNg
b3529d3ccc file upload 2024-08-29 20:20:28 +08:00
JzoNg
d69b453729 conversation opener 2024-08-29 20:20:28 +08:00
JzoNg
2f658de155 moderation 2024-08-29 20:20:28 +08:00
JzoNg
a691700b48 text2speech 2024-08-29 20:20:28 +08:00
JzoNg
c5317d8f58 feature card 2024-08-29 20:20:28 +08:00
JzoNg
822f03f3cd text to speech card 2024-08-29 20:20:28 +08:00
JzoNg
101e56baaa follow up & citations & speech-to-text 2024-08-29 20:20:28 +08:00
JzoNg
3a8f516dfc more like this 2024-08-29 20:20:28 +08:00
JzoNg
912030c9a1 update style 2024-08-29 20:20:28 +08:00
JzoNg
687661eef7 new style of feature panel 2024-08-29 20:20:28 +08:00
Joel
8efc63a705 feat: handle value picker in body file selector 2024-08-29 16:36:18 +08:00
Joel
dca4f9fe9c feat: support file values in body 2024-08-29 16:18:16 +08:00
Joel
51597629b1 fix: http binary node not valid 2024-08-29 14:32:36 +08:00
Joel
76a07513ba fix: prompt editor not update data 2024-08-29 14:25:04 +08:00
Joel
dae62bef78 fix: change to key value type not show init key values 2024-08-29 11:55:18 +08:00
Joel
2a6629d435 feat: binary files 2024-08-29 11:50:42 +08:00
Joel
41f0ce1012 feat: support http body to new data struct 2024-08-28 16:56:31 +08:00
Joel
e90b055c47 fix: ts problems 2024-08-28 14:40:13 +08:00
Joel
94e40d4ed9 feat: default set vision var value 2024-08-28 14:35:49 +08:00
Joel
c34fc071e0 feat: vision file to api define 2024-08-28 10:57:26 +08:00
Joel
c014ae43e1 feat: if support file exist 2024-08-27 17:37:39 +08:00
Joel
9851153d38 chore: file type checkbox 2024-08-27 17:15:02 +08:00
Joel
cfbabb8383 feat: file valid 2024-08-27 17:09:34 +08:00
Joel
b78e90679d fix: choose file type problems 2024-08-27 16:55:24 +08:00
Joel
ec1bfdc723 feat: change to new start file struct 2024-08-27 16:37:15 +08:00
Joel
e20019f6e9 chore: merge main 2024-08-27 14:23:35 +08:00
Joel
2122cfb152 chore: list filter field 2024-08-27 10:43:03 +08:00
Joel
c2b8beffac feat: global variables 2024-08-26 17:32:46 +08:00
StyleZhang
985651454a progress circle 2024-08-26 10:30:26 +08:00
Joel
f9c1d06e91 chore: tools node 2024-08-23 18:08:54 +08:00
Joel
657f1d2de8 chore: http request 2024-08-23 17:56:34 +08:00
Joel
6e2192c1e0 chore: variable aggregator 2024-08-23 16:40:39 +08:00
Joel
e05b20eb91 chore: parameter extractor ui 2024-08-23 16:03:01 +08:00
Joel
5117e08def chore: question classify 2024-08-22 17:00:20 +08:00
Joel
34691ca6c9 chore: knowledge node ui 2024-08-22 16:51:44 +08:00
Joel
aa40047b08 chore: knowledge 2024-08-22 15:10:49 +08:00
StyleZhang
eca17767fe chat style 2024-08-22 15:10:09 +08:00
Joel
51cec1b9ba chore: llm upgrade 2024-08-22 14:34:58 +08:00
Joel
651547c3ef fix: number var picker and other tiny css problem 2024-08-22 10:49:56 +08:00
Joel
8fbdaa604c feat: file array variable choose vars 2024-08-21 17:29:03 +08:00
Joel
1bcb30647f chore: select ui 2024-08-21 17:29:03 +08:00
JzoNg
bc245a25bf new style of tables 2024-08-21 17:07:14 +08:00
Joel
85b25ebe1b chore: file select trigger 2024-08-21 16:45:18 +08:00
Joel
b50e94d681 feat: file array sub variable select 2024-08-21 16:22:58 +08:00
Joel
91c0657cf6 fix: select default trigger problem 2024-08-21 15:35:28 +08:00
StyleZhang
0da06128e3 agent tool in chat 2024-08-21 15:32:13 +08:00
Joel
0c4af3a1d2 feat: support sub variable operate changes with key and value support 2024-08-21 15:27:08 +08:00
Joel
5628b293f8 feat: sub var if condition position 2024-08-21 14:54:06 +08:00
JzoNg
fff40aae58 Merge branch 'main' into jzh 2024-08-21 13:40:04 +08:00
Joel
b3b87b3e4c chore: sub variable trigger 2024-08-21 11:07:13 +08:00
Joel
9a23cd08d8 fix: sub variable select trigger 2024-08-19 16:49:52 +08:00
JzoNg
cf61ca24e3 new style of table 2024-08-19 16:05:37 +08:00
Joel
58a56add9c feat: can support value 2024-08-19 15:15:44 +08:00
JzoNg
b362031baf chip 2024-08-19 14:11:25 +08:00
Joel
7ad409b3d9 fix: update operate and value 2024-08-19 13:53:23 +08:00
Joel
876ea90fe9 feat: support update sub variable value 2024-08-19 11:43:52 +08:00
JzoNg
0eb442f954 new style of user inputs 2024-08-16 17:48:31 +08:00
Joel
4554ac3ef8 feat: can add sub variable 2024-08-16 17:12:44 +08:00
Joel
eaa7d114dc feat: file array not sub vars 2024-08-16 11:39:23 +08:00
Joel
581228be74 feat: abstract condition logic to components 2024-08-16 10:14:43 +08:00
StyleZhang
02da0219ff workflow debug and preview panel style 2024-08-15 17:30:17 +08:00
JzoNg
d0bbe43dab chore: fix type 2024-08-15 16:36:47 +08:00
JzoNg
16acdc9be4 new style of workflow process 2024-08-15 16:33:33 +08:00
Joel
a6999b5d02 fix: old not set vision data 2024-08-15 11:37:02 +08:00
StyleZhang
33bfa4758e Merge branch 'main' into feat/attachments 2024-08-15 10:57:59 +08:00
JzoNg
db63c2c219 new style of status 2024-08-14 18:40:11 +08:00
JzoNg
bea4ec5998 style update of log 2024-08-13 17:11:17 +08:00
JzoNg
74333db4c8 update input in env & conversation var 2024-08-13 16:12:12 +08:00
JzoNg
0019fb9f8b Merge branch 'main' into jzh 2024-08-13 15:59:19 +08:00
JzoNg
47615ac8fb meta data style update 2024-08-13 15:25:02 +08:00
StyleZhang
d7c8bced9b file uploader 2024-08-13 15:24:32 +08:00
Joel
57f178902f feat: if node select value 2024-08-13 14:14:38 +08:00
Joel
4586de48d6 feat: default var type 2024-08-13 14:04:43 +08:00
Joel
6549519fa5 feat: files attr select 2024-08-13 13:59:19 +08:00
Joel
ae098ad121 feat: condition operation 2024-08-13 11:16:27 +08:00
Joel
20922fde1c feat: detect file type 2024-08-12 18:22:07 +08:00
StyleZhang
079c802b5c file uploader 2024-08-12 16:24:13 +08:00
JzoNg
efcd462a69 fix style of switch 2024-08-12 13:15:36 +08:00
Joel
843c8ad306 feat: file obj 2024-08-09 17:46:33 +08:00
StyleZhang
594bf96922 file uploader hooks 2024-08-09 16:48:58 +08:00
JzoNg
ade385c9c1 replace input in workflow blocks 2024-08-09 16:19:10 +08:00
JzoNg
baed068231 replace form input 2024-08-09 12:35:01 +08:00
Joel
42f5334ae4 feat: iteration file array input 2024-08-09 11:45:14 +08:00
Joel
3c4ab0632d feat: tool support file type 2024-08-09 11:38:40 +08:00
Joel
bc5f109308 feat: http support body binary 2024-08-09 10:53:59 +08:00
Joel
97b2a42cc3 feat: form data support file type 2024-08-09 10:29:29 +08:00
JzoNg
939df16655 refactor input and replace search input 2024-08-08 17:39:09 +08:00
JzoNg
9362ae045c fix textarea onchange 2024-08-08 17:39:09 +08:00
Joel
257c515178 fix: old no vision data 2024-08-08 15:47:55 +08:00
Joel
6b7520ccc2 fix: old llm code 2024-08-08 15:44:39 +08:00
Joel
85eeaee95a feat: vision valid 2024-08-08 14:40:08 +08:00
Joel
99bf3ff565 feat: params support vision 2024-08-08 14:22:54 +08:00
Joel
36ae154ca2 feat: classify support vision 2024-08-08 11:57:07 +08:00
Joel
ef93d60534 chore: vision logic hooks 2024-08-08 11:36:44 +08:00
JzoNg
6c9a6b99e0 refactor textarea 2024-08-08 11:14:52 +08:00
JzoNg
b73f05fdf0 new style of textarea 2024-08-08 11:14:52 +08:00
StyleZhang
26bca75884 file uploader 2024-08-08 10:27:43 +08:00
Joel
e2962da1b8 chore: add vision disabled tip check 2024-08-07 11:33:19 +08:00
NFish
1b9ebb8037 feat: add disabled support to tooltip-plus component (#7036) 2024-08-07 11:33:19 +08:00
crazywoola
a945a45b06 doc: correct typos in mdx files (#7029) 2024-08-07 11:33:19 +08:00
Achim
be829a8103 Provide output data also in json property of workflow tool (#6924) (#7027) 2024-08-07 11:33:19 +08:00
crazywoola
9432d41e60 fix: typos in wenxin llm (#7021) 2024-08-07 11:33:19 +08:00
Sa Zhang
0beeb4ab3e fix: Fix incorrect context size for jina-reranker-v2 model (#7006) 2024-08-07 11:33:19 +08:00
Bryan
d7e057be44 fix: tran list issue (#7009)
Co-authored-by: libing <libing@healink.cn>
2024-08-07 11:33:19 +08:00
Jyong
81b11c08d0 Fix/reranking mode is null (#7012) 2024-08-07 11:33:19 +08:00
Joel
83a5cdfff9 feat: agent app support generate prompt (#7007) 2024-08-07 11:33:19 +08:00
yanghx
c837218bc9 fix #6902 .docx handles images within tables and handles cross-column tables (#6951) 2024-08-07 11:33:19 +08:00
crazywoola
68552893ef fix: code-block-missing-checks (#7002) 2024-08-07 11:33:19 +08:00
灰灰
5ba93ed064 fix: code tool fails when null property exists in object (#6988) 2024-08-07 11:33:19 +08:00
Yi Xiao
959107f553 Feat/new confirm (#6984) 2024-08-07 11:33:19 +08:00
Yefori
443d929137 feat: add function calling for deepseek models (#6990) 2024-08-07 11:33:19 +08:00
Vico Chu
1e04418023 Chores: fix name typo (#6987) 2024-08-07 11:33:19 +08:00
小羽
aeda8869bc feat:nvidia add nemotron4-340b and microsoft/phi-3 (#6973) 2024-08-07 11:33:19 +08:00
非法操作
10eed02ec4 chore: update duckduckgo tool (#6983) 2024-08-07 11:33:19 +08:00
Dr. Artificial曾小健
2472c4f890 fix doc (#6974) 2024-08-07 11:33:19 +08:00
Joel
0455e4e1a5 feat: llm support vision 2024-08-07 11:33:19 +08:00
StyleZhang
251ab5418f file-uploader i18n 2024-08-06 18:03:38 +08:00
Joel
38e6e40900 feat: config vision comp 2024-08-06 17:33:02 +08:00
Joel
b3a3672857 chore: new required field 2024-08-06 15:42:22 +08:00
Joel
53a3c199ec chore: input slider 2024-08-06 15:33:38 +08:00
Joel
fca5af5073 feat: max number with slider 2024-08-06 15:25:53 +08:00
Joel
77d0aac1d3 feat: support custom file type 2024-08-06 14:59:26 +08:00
StyleZhang
fd0f8f33b5 Merge branch 'main' into feat/attachments 2024-08-06 09:58:53 +08:00
Joel
0be99ad01c feat: select file types 2024-08-02 18:17:13 +08:00
StyleZhang
a05d16375e Merge branch 'main' into feat/attachments 2024-08-02 16:30:10 +08:00
Joel
0480bb03c3 feat: new input types 2024-08-02 11:43:01 +08:00
StyleZhang
19dfc6d9a8 file uploader 2024-08-02 10:21:20 +08:00
Joel
d361675159 chore: some select style 2024-08-01 17:54:33 +08:00
Joel
23ae150298 feat: filter condition 2024-08-01 17:10:02 +08:00
Joel
81383d7c74 feat: sub var picker 2024-08-01 14:43:17 +08:00
Joel
573f653789 feat: order by 2024-08-01 11:40:55 +08:00
Joel
f1b61861b6 Merge branch 'main' into feat/attachments 2024-07-31 17:59:10 +08:00
Joel
8ecee8abce fix: ts problem 2024-07-31 17:02:11 +08:00
Joel
e9ce9c1f47 feat: limit config 2024-07-31 17:01:26 +08:00
Joel
944fea4cc9 feat: list filter type and output var 2024-07-31 16:31:51 +08:00
StyleZhang
25c029877a progress circle 2024-07-31 15:15:26 +08:00
StyleZhang
9c31c56115 file uploader 2024-07-30 16:19:20 +08:00
StyleZhang
56507c9f7a chat input area 2024-07-30 13:48:39 +08:00
StyleZhang
b322dda3f6 Merge branch 'main' into feat/attachments 2024-07-30 10:06:40 +08:00
StyleZhang
52d69dd55b file uploader 2024-07-29 17:22:11 +08:00
StyleZhang
0451c5590c Merge branch 'main' into feat/attachments 2024-07-29 10:19:11 +08:00
StyleZhang
2498c238b2 file-uploader 2024-07-26 16:54:45 +08:00
Joel
6e15d7f777 feat: doc extract inputs 2024-07-26 15:00:29 +08:00
Joel
f6caf0915b chore: block bg to utils color 2024-07-26 14:23:34 +08:00
Joel
09aa14ca82 feat: node icons 2024-07-26 14:11:03 +08:00
Joel
394f06a27a feat: list filter 2024-07-26 11:55:09 +08:00
Joel
6fafd410d2 feat: doc extract struct 2024-07-26 11:21:17 +08:00
StyleZhang
1668df104f Merge branch 'main' into feat/attachments 2024-07-26 08:49:17 +08:00
StyleZhang
d376b8540e add file-uploader 2024-07-25 16:41:09 +08:00
1221 changed files with 16708 additions and 38593 deletions


@@ -27,18 +27,19 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
cache-dependency-path: |
api/pyproject.toml
api/poetry.lock
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Check Poetry lockfile
- name: Poetry check
run: |
poetry check -C api --lock
poetry show -C api
@@ -46,9 +47,6 @@ jobs:
- name: Install dependencies
run: poetry install -C api --with dev
- name: Check dependencies in pyproject.toml
run: poetry run -C api bash dev/pytest/pytest_artifacts.sh
- name: Run Unit tests
run: poetry run -C api bash dev/pytest/pytest_unit_tests.sh


@@ -5,6 +5,7 @@ on:
branches:
- "main"
- "deploy/dev"
- "release/0.10.0-beta"
release:
types: [published]


@@ -23,17 +23,18 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
cache-dependency-path: |
api/pyproject.toml
api/poetry.lock
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Install dependencies
run: poetry install -C api


@@ -24,16 +24,15 @@ jobs:
with:
files: api/**
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python
uses: actions/setup-python@v5
if: steps.changed-files.outputs.any_changed == 'true'
with:
python-version: '3.10'
- name: Install Poetry
if: steps.changed-files.outputs.any_changed == 'true'
uses: abatilo/actions-poetry@v3
- name: Python dependencies
if: steps.changed-files.outputs.any_changed == 'true'
run: poetry install -C api --only lint

.gitignore

@@ -175,8 +175,6 @@ docker/volumes/pgvector/data/*
docker/volumes/pgvecto_rs/data/*
docker/nginx/conf.d/default.conf
docker/nginx/ssl/*
!docker/nginx/ssl/.gitkeep
docker/middleware.env
sdks/python-client/build


@@ -6,9 +6,8 @@ Dify is licensed under the Apache License 2.0, with the following additional con
a. Multi-tenant service: Unless explicitly authorized by Dify in writing, you may not use the Dify source code to operate a multi-tenant environment.
- Tenant Definition: Within the context of Dify, one tenant corresponds to one workspace. The workspace provides a separated area for each tenant's data and configurations.
b. LOGO and copyright information: In the process of using Dify's frontend, you may not remove or modify the LOGO or copyright information in the Dify console or applications. This restriction is inapplicable to uses of Dify that do not involve its frontend.
- Frontend Definition: For the purposes of this license, the "frontend" of Dify includes all components located in the `web/` directory when running Dify from the raw source code, or the "web" image when running Dify with Docker.
b. LOGO and copyright information: In the process of using Dify's frontend components, you may not remove or modify the LOGO or copyright information in the Dify console or applications. This restriction is inapplicable to uses of Dify that do not involve its frontend components.
Please contact business@dify.ai by email to inquire about licensing matters.


@@ -1,9 +1,5 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: Recreate Google NotebookLM Podcast</a>
</p>
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">Self-hosting</a> ·
@@ -21,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -172,7 +168,7 @@ Star Dify on GitHub and be instantly notified of new releases.
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
>- CPU >= 2 Core
>- RAM >= 4 GiB
>- RAM >= 4GB
</br>
@@ -200,14 +196,10 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Using Terraform for Deployment
Deploy Dify to Cloud Platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
Deploy Dify to Azure with a single click using [terraform](https://www.terraform.io/).
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@@ -227,7 +219,7 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star history


@@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -179,13 +179,10 @@ docker compose up -d
#### استخدام Terraform للتوزيع
انشر Dify إلى منصة السحابة بنقرة واحدة باستخدام [terraform](https://www.terraform.io/)
##### Azure Global
استخدم [terraform](https://www.terraform.io/) لنشر Dify على Azure بنقرة واحدة.
- [Azure Terraform بواسطة @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform بواسطة @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## المساهمة


@@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -154,7 +154,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
我们提供[ Dify 云服务](https://dify.ai),任何人都可以零设置尝试。它提供了自部署版本的所有功能,并在沙盒计划中包含 200 次免费的 GPT-4 调用。
- **自托管 Dify 社区版</br>**
使用这个[入门指南](#快速启动)快速在您的环境中运行 Dify。
使用这个[入门指南](#quick-start)快速在您的环境中运行 Dify。
使用我们的[文档](https://docs.dify.ai)进行进一步的参考和更深入的说明。
- **面向企业/组织的 Dify</br>**
@@ -174,7 +174,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
在安装 Dify 之前,请确保您的机器满足以下最低系统要求:
- CPU >= 2 Core
- RAM >= 4 GiB
- RAM >= 4GB
### 快速启动
@@ -202,14 +202,10 @@ docker compose up -d
#### 使用 Terraform 部署
使用 [terraform](https://www.terraform.io/) 一键将 Dify 部署到云平台
##### Azure Global
使用 [terraform](https://www.terraform.io/) 一键部署 Dify 到 Azure。
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
@@ -236,7 +232,7 @@ docker compose up -d
- [GitHub Issues](https://github.com/langgenius/dify/issues)。👉:使用 Dify.AI 时遇到的错误和问题,请参阅[贡献指南](CONTRIBUTING.md)。
- [电子邮件支持](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify)。👉:关于使用 Dify.AI 的问题。
- [Discord](https://discord.gg/FngNHpbcY7)。👉:分享您的应用程序并与社区交流。
- [X(Twitter)](https://twitter.com/dify_ai)。👉:分享您的应用程序并与社区交流。
- [Twitter](https://twitter.com/dify_ai)。👉:分享您的应用程序并与社区交流。
- [商业许可](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)。👉:有关商业用途许可 Dify.AI 的商业咨询。
- [微信]() 👉:扫描下方二维码,添加微信好友,备注 Dify我们将邀请您加入 Dify 社区。
<img src="./images/wechat.png" alt="wechat" width="100"/>


@@ -17,7 +17,7 @@
alt="chat en Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="seguir en X(Twitter)"></a>
alt="seguir en Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Descargas de Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -204,13 +204,10 @@ Si desea configurar una configuración de alta disponibilidad, la comunidad prop
#### Uso de Terraform para el despliegue
Despliega Dify en una plataforma en la nube con un solo clic utilizando [terraform](https://www.terraform.io/)
##### Azure Global
Utiliza [terraform](https://www.terraform.io/) para desplegar Dify en Azure con un solo clic.
- [Azure Terraform por @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform por @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuir
@@ -231,7 +228,7 @@ Al mismo tiempo, considera apoyar a Dify compartiéndolo en redes sociales y en
* [Discusión en GitHub](https://github.com/langgenius/dify/discussions). Lo mejor para: compartir comentarios y hacer preguntas.
* [Reporte de problemas en GitHub](https://github.com/langgenius/dify/issues). Lo mejor para: errores que encuentres usando Dify.AI y propuestas de características. Consulta nuestra [Guía de contribución](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [X(Twitter)](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [Twitter](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
## Historial de Estrellas


@@ -17,7 +17,7 @@
alt="chat sur Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="suivre sur X(Twitter)"></a>
alt="suivre sur Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Tirages Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -202,13 +202,10 @@ Si vous souhaitez configurer une configuration haute disponibilité, la communau
#### Utilisation de Terraform pour le déploiement
Déployez Dify sur une plateforme cloud en un clic en utilisant [terraform](https://www.terraform.io/)
##### Azure Global
Utilisez [terraform](https://www.terraform.io/) pour déployer Dify sur Azure en un clic.
- [Azure Terraform par @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform par @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuer
@@ -229,7 +226,7 @@ Dans le même temps, veuillez envisager de soutenir Dify en le partageant sur le
* [Discussion GitHub](https://github.com/langgenius/dify/discussions). Meilleur pour: partager des commentaires et poser des questions.
* [Problèmes GitHub](https://github.com/langgenius/dify/issues). Meilleur pour: les bogues que vous rencontrez en utilisant Dify.AI et les propositions de fonctionnalités. Consultez notre [Guide de contribution](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [X(Twitter)](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [Twitter](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.
## Historique des étoiles


@@ -17,7 +17,7 @@
alt="Discordでチャット"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="X(Twitter)でフォロー"></a>
alt="Twitterでフォロー"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -68,7 +68,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
プロンプトの作成、モデルパフォーマンスの比較が行え、チャットベースのアプリに音声合成などの機能も追加できます。
**4. RAGパイプライン**:
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサートも提供します。
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
**5. エージェント機能**:
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DALL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
@@ -201,13 +201,10 @@ docker compose up -d
#### Terraformを使用したデプロイ
[terraform](https://www.terraform.io/) を使用して、ワンクリックでDifyをクラウドプラットフォームにデプロイします
##### Azure Global
- [@nikawangによるAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
[terraform](https://www.terraform.io/) を使用して、AzureにDifyをワンクリックでデプロイします。
- [nikawangのAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [@sotazumによるGoogle Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 貢献
@@ -228,7 +225,7 @@ docker compose up -d
* [Github Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AIを使用する際に発生するエラーや問題については、[貢献ガイド](CONTRIBUTING_JA.md)を参照してください
* [Discord](https://discord.gg/FngNHpbcY7). 主に: アプリケーションの共有やコミュニティとの交流。
* [X(Twitter)](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。
* [Twitter](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。


@@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -202,13 +202,10 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Terraform atorlugu pilersitsineq
wa'logh nIqHom neH ghun deployment toy'wI' [terraform](https://www.terraform.io/) lo'laH.
##### Azure Global
- [Azure Terraform mung @nikawang](https://github.com/nikawang/dify-azure-terraform)
Atoruk [terraform](https://www.terraform.io/) Dify-mik Azure-mut ataatsikkut ikkussuilluarlugu.
- [Azure Terraform atorlugu @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform qachlot @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing
@@ -231,7 +228,7 @@ At the same time, please consider supporting Dify by sharing it on social media
). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star History


@@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -39,6 +39,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
</p>
@@ -194,14 +195,10 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
#### Terraform을 사용한 배포
[terraform](https://www.terraform.io/)을 사용하여 단 한 번의 클릭으로 Dify를 클라우드 플랫폼에 배포하십시오
##### Azure Global
[terraform](https://www.terraform.io/)을 사용하여 Azure에 Dify를 원클릭으로 배포하세요.
- [nikawang의 Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [sotazum의 Google Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 기여
코드에 기여하고 싶은 분들은 [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.


@@ -17,7 +17,7 @@
alt="Discord'da sohbet et"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="X(Twitter)'da takip et"></a>
alt="Twitter'da takip et"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Çekmeleri" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -200,13 +200,9 @@ Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'
#### Dağıtım için Terraform Kullanımı
Dify'ı bulut platformuna tek tıklamayla dağıtın [terraform](https://www.terraform.io/) kullanarak
##### Azure Global
- [Azure Terraform tarafından @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform tarafından @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
[Terraform](https://www.terraform.io/) kullanarak Dify'ı Azure'a tek tıklamayla dağıtın.
- [@nikawang tarafından Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
## Katkıda Bulunma
@@ -226,7 +222,7 @@ Aynı zamanda, lütfen Dify'ı sosyal medyada, etkinliklerde ve konferanslarda p
* [Github Tartışmaları](https://github.com/langgenius/dify/discussions). En uygun: geri bildirim paylaşmak ve soru sormak için.
* [GitHub Sorunları](https://github.com/langgenius/dify/issues). En uygun: Dify.AI kullanırken karşılaştığınız hatalar ve özellik önerileri için. [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakın.
* [Discord](https://discord.gg/FngNHpbcY7). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
* [X(Twitter)](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
* [Twitter](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
## Star history


@@ -17,7 +17,7 @@
alt="chat trên Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="theo dõi trên X(Twitter)"></a>
alt="theo dõi trên Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@@ -196,14 +196,10 @@ Nếu bạn muốn cấu hình một cài đặt có độ sẵn sàng cao, có
#### Sử dụng Terraform để Triển khai
Triển khai Dify lên nền tảng đám mây với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/)
##### Azure Global
Triển khai Dify lên Azure chỉ với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/).
- [Azure Terraform bởi @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform bởi @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Đóng góp
Đối với những người muốn đóng góp mã, xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
@@ -223,7 +219,7 @@ Triển khai Dify lên nền tảng đám mây với một cú nhấp chuột b
* [Thảo luận GitHub](https://github.com/langgenius/dify/discussions). Tốt nhất cho: chia sẻ phản hồi và đặt câu hỏi.
* [Vấn đề GitHub](https://github.com/langgenius/dify/issues). Tốt nhất cho: lỗi bạn gặp phải khi sử dụng Dify.AI và đề xuất tính năng. Xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
* [Discord](https://discord.gg/FngNHpbcY7). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
* [X(Twitter)](https://twitter.com/dify_ai). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
* [Twitter](https://twitter.com/dify_ai). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
## Lịch sử Yêu thích


@@ -20,9 +20,6 @@ FILES_URL=http://127.0.0.1:5001
# The time in seconds after the signature is rejected
FILES_ACCESS_TIMEOUT=300
# Access token expiration time in minutes
ACCESS_TOKEN_EXPIRE_MINUTES=60
# celery configuration
CELERY_BROKER_URL=redis://:difyai123456@localhost:6379/1
@@ -31,17 +28,8 @@ REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_USERNAME=
REDIS_PASSWORD=difyai123456
REDIS_USE_SSL=false
REDIS_DB=0
# redis Sentinel configuration.
REDIS_USE_SENTINEL=false
REDIS_SENTINELS=
REDIS_SENTINEL_SERVICE_NAME=
REDIS_SENTINEL_USERNAME=
REDIS_SENTINEL_PASSWORD=
REDIS_SENTINEL_SOCKET_TIMEOUT=0.1
# PostgreSQL database configuration
DB_USERNAME=postgres
DB_PASSWORD=difyai123456
@@ -51,7 +39,7 @@ DB_DATABASE=dify
# Storage configuration
# use for store upload files, private keys...
# storage type: local, s3, aliyun-oss, azure-blob, baidu-obs, google-storage, huawei-obs, oci-storage, tencent-cos, volcengine-tos, supabase
# storage type: local, s3, azure-blob, google-storage, tencent-cos, huawei-obs, volcengine-tos
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage
S3_USE_AWS_MANAGED_IAM=false
@@ -91,12 +79,6 @@ HUAWEI_OBS_SECRET_KEY=your-secret-key
HUAWEI_OBS_ACCESS_KEY=your-access-key
HUAWEI_OBS_SERVER=your-server-url
# Baidu OBS Storage Configuration
BAIDU_OBS_BUCKET_NAME=your-bucket-name
BAIDU_OBS_SECRET_KEY=your-secret-key
BAIDU_OBS_ACCESS_KEY=your-access-key
BAIDU_OBS_ENDPOINT=your-server-url
# OCI Storage configuration
OCI_ENDPOINT=your-endpoint
OCI_BUCKET_NAME=your-bucket-name
@@ -111,16 +93,11 @@ VOLCENGINE_TOS_ACCESS_KEY=your-access-key
VOLCENGINE_TOS_SECRET_KEY=your-secret-key
VOLCENGINE_TOS_REGION=your-region
# Supabase Storage Configuration
SUPABASE_BUCKET_NAME=your-bucket-name
SUPABASE_API_KEY=your-access-key
SUPABASE_URL=your-server-url
# CORS configuration
WEB_API_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
CONSOLE_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
# Vector database configuration, support: weaviate, qdrant, milvus, myscale, relyt, pgvecto_rs, pgvector, pgvector, chroma, opensearch, tidb_vector, vikingdb, upstash
# Vector database configuration, support: weaviate, qdrant, milvus, myscale, relyt, pgvecto_rs, pgvector, pgvector, chroma, opensearch, tidb_vector
VECTOR_STORE=weaviate
# Weaviate configuration
@@ -220,28 +197,6 @@ OPENSEARCH_USER=admin
OPENSEARCH_PASSWORD=admin
OPENSEARCH_SECURE=true
# Baidu configuration
BAIDU_VECTOR_DB_ENDPOINT=http://127.0.0.1:5287
BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS=30000
BAIDU_VECTOR_DB_ACCOUNT=root
BAIDU_VECTOR_DB_API_KEY=dify
BAIDU_VECTOR_DB_DATABASE=dify
BAIDU_VECTOR_DB_SHARD=1
BAIDU_VECTOR_DB_REPLICAS=3
# Upstash configuration
UPSTASH_VECTOR_URL=your-server-url
UPSTASH_VECTOR_TOKEN=your-access-token
# ViKingDB configuration
VIKINGDB_ACCESS_KEY=your-ak
VIKINGDB_SECRET_KEY=your-sk
VIKINGDB_REGION=cn-shanghai
VIKINGDB_HOST=api-vikingdb.xxx.volces.com
VIKINGDB_SCHEMA=http
VIKINGDB_CONNECTION_TIMEOUT=30
VIKINGDB_SOCKET_TIMEOUT=30
# Upload configuration
UPLOAD_FILE_SIZE_LIMIT=15
UPLOAD_FILE_BATCH_LIMIT=5
@@ -252,7 +207,6 @@ UPLOAD_AUDIO_FILE_SIZE_LIMIT=50
# Model Configuration
MULTIMODAL_SEND_IMAGE_FORMAT=base64
PROMPT_GENERATION_MAX_TOKENS=512
CODE_GENERATION_MAX_TOKENS=1024
# Mail configuration, support: resend, smtp
MAIL_TYPE=
@@ -313,15 +267,8 @@ HTTP_REQUEST_MAX_WRITE_TIMEOUT=600
HTTP_REQUEST_NODE_MAX_BINARY_SIZE=10485760
HTTP_REQUEST_NODE_MAX_TEXT_SIZE=1048576
# Respect X-* headers to redirect clients
RESPECT_XFORWARD_HEADERS_ENABLED=false
# Log file path
LOG_FILE=
# Log file max size, the unit is MB
LOG_FILE_MAX_SIZE=20
# Log file max backup count
LOG_FILE_BACKUP_COUNT=5
# Indexing configuration
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=1000
@@ -348,6 +295,3 @@ POSITION_TOOL_EXCLUDES=
POSITION_PROVIDER_PINS=
POSITION_PROVIDER_INCLUDES=
POSITION_PROVIDER_EXCLUDES=
# Reset password token expiry minutes
RESET_PASSWORD_TOKEN_EXPIRY_MINUTES=5


@@ -55,9 +55,7 @@ RUN apt-get update \
&& echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
&& apt-get update \
# For Security
&& apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.3-1 libldap-2.5-0=2.5.18+dfsg-3+b1 perl=5.40.0-6 libsqlite3-0=3.46.1-1 \
# install a chinese font to support the use of tools like matplotlib
&& apt-get install -y fonts-noto-cjk \
&& apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.3-1 libldap-2.5-0=2.5.18+dfsg-3 perl=5.38.2-5 libsqlite3-0=3.46.0-1 \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*


@@ -85,4 +85,3 @@
cd ../
poetry run -C api bash dev/pytest/pytest_all_tests.sh
```


@@ -1,7 +1,5 @@
import os
from configs import dify_config
if os.environ.get("DEBUG", "false").lower() != "true":
from gevent import monkey
@@ -12,20 +10,43 @@ if os.environ.get("DEBUG", "false").lower() != "true":
grpc.experimental.gevent.init_gevent()
import json
import logging
import sys
import threading
import time
import warnings
from logging.handlers import RotatingFileHandler
from flask import Response
from flask import Flask, Response, request
from flask_cors import CORS
from werkzeug.exceptions import Unauthorized
from app_factory import create_app
import contexts
from commands import register_commands
from configs import dify_config
# DO NOT REMOVE BELOW
from events import event_handlers # noqa: F401
from events import event_handlers
from extensions import (
ext_celery,
ext_code_based_extension,
ext_compress,
ext_database,
ext_hosting_provider,
ext_login,
ext_mail,
ext_migrate,
ext_redis,
ext_sentry,
ext_storage,
)
from extensions.ext_database import db
from extensions.ext_login import login_manager
from libs.passport import PassportService
# TODO: Find a way to avoid importing models here
from models import account, dataset, model, source, task, tool, tools, web # noqa: F401
from models import account, dataset, model, source, task, tool, tools, web
from services.account_service import AccountService
# DO NOT REMOVE ABOVE
@@ -38,11 +59,193 @@ if hasattr(time, "tzset"):
time.tzset()
class DifyApp(Flask):
pass
# -------------
# Configuration
# -------------
config_type = os.getenv("EDITION", default="SELF_HOSTED") # ce edition first
# ----------------------------
# Application Factory Function
# ----------------------------
def create_flask_app_with_configs() -> Flask:
"""
create a raw flask app
with configs loaded from .env file
"""
dify_app = DifyApp(__name__)
dify_app.config.from_mapping(dify_config.model_dump())
# populate configs into system environment variables
for key, value in dify_app.config.items():
if isinstance(value, str):
os.environ[key] = value
elif isinstance(value, int | float | bool):
os.environ[key] = str(value)
elif value is None:
os.environ[key] = ""
return dify_app
def create_app() -> Flask:
app = create_flask_app_with_configs()
app.secret_key = app.config["SECRET_KEY"]
log_handlers = None
log_file = app.config.get("LOG_FILE")
if log_file:
log_dir = os.path.dirname(log_file)
os.makedirs(log_dir, exist_ok=True)
log_handlers = [
RotatingFileHandler(
filename=log_file,
maxBytes=1024 * 1024 * 1024,
backupCount=5,
),
logging.StreamHandler(sys.stdout),
]
logging.basicConfig(
level=app.config.get("LOG_LEVEL"),
format=app.config["LOG_FORMAT"],
datefmt=app.config.get("LOG_DATEFORMAT"),
handlers=log_handlers,
force=True,
)
log_tz = app.config.get("LOG_TZ")
if log_tz:
from datetime import datetime
import pytz
timezone = pytz.timezone(log_tz)
def time_converter(seconds):
return datetime.utcfromtimestamp(seconds).astimezone(timezone).timetuple()
for handler in logging.root.handlers:
assert handler.formatter
handler.formatter.converter = time_converter
initialize_extensions(app)
register_blueprints(app)
register_commands(app)
return app
def initialize_extensions(app):
# Since the application instance is now created, pass it to each Flask
# extension instance to bind it to the Flask application instance (app)
ext_compress.init_app(app)
ext_code_based_extension.init()
ext_database.init_app(app)
ext_migrate.init(app, db)
ext_redis.init_app(app)
ext_storage.init_app(app)
ext_celery.init_app(app)
ext_login.init_app(app)
ext_mail.init_app(app)
ext_hosting_provider.init_app(app)
ext_sentry.init_app(app)
# Flask-Login configuration
@login_manager.request_loader
def load_user_from_request(request_from_flask_login):
"""Load user based on the request."""
if request.blueprint not in {"console", "inner_api"}:
return None
# Check if the user_id contains a dot, indicating the old format
auth_header = request.headers.get("Authorization", "")
if not auth_header:
auth_token = request.args.get("_token")
if not auth_token:
raise Unauthorized("Invalid Authorization token.")
else:
if " " not in auth_header:
raise Unauthorized("Invalid Authorization header format. Expected 'Bearer <api-key>' format.")
auth_scheme, auth_token = auth_header.split(None, 1)
auth_scheme = auth_scheme.lower()
if auth_scheme != "bearer":
raise Unauthorized("Invalid Authorization header format. Expected 'Bearer <api-key>' format.")
decoded = PassportService().verify(auth_token)
user_id = decoded.get("user_id")
account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if account:
contexts.tenant_id.set(account.current_tenant_id)
return account
@login_manager.unauthorized_handler
def unauthorized_handler():
"""Handle unauthorized requests."""
return Response(
json.dumps({"code": "unauthorized", "message": "Unauthorized."}),
status=401,
content_type="application/json",
)
# register blueprint routers
def register_blueprints(app):
from controllers.console import bp as console_app_bp
from controllers.files import bp as files_bp
from controllers.inner_api import bp as inner_api_bp
from controllers.service_api import bp as service_api_bp
from controllers.web import bp as web_bp
CORS(
service_api_bp,
allow_headers=["Content-Type", "Authorization", "X-App-Code"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
)
app.register_blueprint(service_api_bp)
CORS(
web_bp,
resources={r"/*": {"origins": app.config["WEB_API_CORS_ALLOW_ORIGINS"]}},
supports_credentials=True,
allow_headers=["Content-Type", "Authorization", "X-App-Code"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
expose_headers=["X-Version", "X-Env"],
)
app.register_blueprint(web_bp)
CORS(
console_app_bp,
resources={r"/*": {"origins": app.config["CONSOLE_CORS_ALLOW_ORIGINS"]}},
supports_credentials=True,
allow_headers=["Content-Type", "Authorization"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
expose_headers=["X-Version", "X-Env"],
)
app.register_blueprint(console_app_bp)
CORS(files_bp, allow_headers=["Content-Type"], methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"])
app.register_blueprint(files_bp)
app.register_blueprint(inner_api_bp)
# create app
app = create_app()
celery = app.extensions["celery"]
if dify_config.TESTING:
if app.config.get("TESTING"):
print("App is running in TESTING mode")
@@ -50,15 +253,15 @@ if dify_config.TESTING:
def after_request(response):
"""Add Version headers to the response."""
response.set_cookie("remember_token", "", expires=0)
response.headers.add("X-Version", dify_config.CURRENT_VERSION)
response.headers.add("X-Env", dify_config.DEPLOY_ENV)
response.headers.add("X-Version", app.config["CURRENT_VERSION"])
response.headers.add("X-Env", app.config["DEPLOY_ENV"])
return response
@app.route("/health")
def health():
return Response(
json.dumps({"pid": os.getpid(), "status": "ok", "version": dify_config.CURRENT_VERSION}),
json.dumps({"pid": os.getpid(), "status": "ok", "version": app.config["CURRENT_VERSION"]}),
status=200,
content_type="application/json",
)
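
The request loader above accepts a token either from the `Authorization` header or, failing that, from a `_token` query parameter. A minimal sketch of just that parsing step, outside Flask (the function name is illustrative, and `ValueError` stands in for `werkzeug.exceptions.Unauthorized`):

def parse_auth_token(auth_header: str, query_token: str | None) -> str:
    """Illustrative restatement of the header-or-query-parameter lookup above."""
    if not auth_header:
        if not query_token:
            raise ValueError("Invalid Authorization token.")
        return query_token
    if " " not in auth_header:
        raise ValueError("Expected 'Bearer <api-key>' format.")
    auth_scheme, auth_token = auth_header.split(None, 1)
    if auth_scheme.lower() != "bearer":
        raise ValueError("Expected 'Bearer <api-key>' format.")
    return auth_token

assert parse_auth_token("Bearer abc123", None) == "abc123"
assert parse_auth_token("", "xyz") == "xyz"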

View File

@@ -1,176 +0,0 @@
import os
if os.environ.get("DEBUG", "false").lower() != "true":
from gevent import monkey
monkey.patch_all()
import grpc.experimental.gevent
grpc.experimental.gevent.init_gevent()
import json
from flask import Flask, Response, request
from flask_cors import CORS
from werkzeug.exceptions import Unauthorized
import contexts
from commands import register_commands
from configs import dify_config
from extensions import (
ext_celery,
ext_code_based_extension,
ext_compress,
ext_database,
ext_hosting_provider,
ext_logging,
ext_login,
ext_mail,
ext_migrate,
ext_proxy_fix,
ext_redis,
ext_sentry,
ext_storage,
)
from extensions.ext_database import db
from extensions.ext_login import login_manager
from libs.passport import PassportService
from services.account_service import AccountService
class DifyApp(Flask):
pass
# ----------------------------
# Application Factory Function
# ----------------------------
def create_flask_app_with_configs() -> Flask:
"""
create a raw flask app
with configs loaded from .env file
"""
dify_app = DifyApp(__name__)
dify_app.config.from_mapping(dify_config.model_dump())
# populate configs into system environment variables
for key, value in dify_app.config.items():
if isinstance(value, str):
os.environ[key] = value
elif isinstance(value, int | float | bool):
os.environ[key] = str(value)
elif value is None:
os.environ[key] = ""
return dify_app
def create_app() -> Flask:
app = create_flask_app_with_configs()
app.secret_key = dify_config.SECRET_KEY
initialize_extensions(app)
register_blueprints(app)
register_commands(app)
return app
def initialize_extensions(app):
# Since the application instance is now created, pass it to each Flask
# extension instance to bind it to the Flask application instance (app)
ext_logging.init_app(app)
ext_compress.init_app(app)
ext_code_based_extension.init()
ext_database.init_app(app)
ext_migrate.init(app, db)
ext_redis.init_app(app)
ext_storage.init_app(app)
ext_celery.init_app(app)
ext_login.init_app(app)
ext_mail.init_app(app)
ext_hosting_provider.init_app(app)
ext_sentry.init_app(app)
ext_proxy_fix.init_app(app)
# Flask-Login configuration
@login_manager.request_loader
def load_user_from_request(request_from_flask_login):
"""Load user based on the request."""
if request.blueprint not in {"console", "inner_api"}:
return None
# Check if the user_id contains a dot, indicating the old format
auth_header = request.headers.get("Authorization", "")
if not auth_header:
auth_token = request.args.get("_token")
if not auth_token:
raise Unauthorized("Invalid Authorization token.")
else:
if " " not in auth_header:
raise Unauthorized("Invalid Authorization header format. Expected 'Bearer <api-key>' format.")
auth_scheme, auth_token = auth_header.split(None, 1)
auth_scheme = auth_scheme.lower()
if auth_scheme != "bearer":
raise Unauthorized("Invalid Authorization header format. Expected 'Bearer <api-key>' format.")
decoded = PassportService().verify(auth_token)
user_id = decoded.get("user_id")
logged_in_account = AccountService.load_logged_in_account(account_id=user_id)
if logged_in_account:
contexts.tenant_id.set(logged_in_account.current_tenant_id)
return logged_in_account
@login_manager.unauthorized_handler
def unauthorized_handler():
"""Handle unauthorized requests."""
return Response(
json.dumps({"code": "unauthorized", "message": "Unauthorized."}),
status=401,
content_type="application/json",
)
# register blueprint routers
def register_blueprints(app):
from controllers.console import bp as console_app_bp
from controllers.files import bp as files_bp
from controllers.inner_api import bp as inner_api_bp
from controllers.service_api import bp as service_api_bp
from controllers.web import bp as web_bp
CORS(
service_api_bp,
allow_headers=["Content-Type", "Authorization", "X-App-Code"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
)
app.register_blueprint(service_api_bp)
CORS(
web_bp,
resources={r"/*": {"origins": dify_config.WEB_API_CORS_ALLOW_ORIGINS}},
supports_credentials=True,
allow_headers=["Content-Type", "Authorization", "X-App-Code"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
expose_headers=["X-Version", "X-Env"],
)
app.register_blueprint(web_bp)
CORS(
console_app_bp,
resources={r"/*": {"origins": dify_config.CONSOLE_CORS_ALLOW_ORIGINS}},
supports_credentials=True,
allow_headers=["Content-Type", "Authorization"],
methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
expose_headers=["X-Version", "X-Env"],
)
app.register_blueprint(console_app_bp)
CORS(files_bp, allow_headers=["Content-Type"], methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"])
app.register_blueprint(files_bp)
app.register_blueprint(inner_api_bp)
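
One detail worth noting in the deleted factory above: `create_flask_app_with_configs` mirrors every config value back into `os.environ`, so code that reads environment variables directly still sees the values loaded from `.env`. A standalone sketch of that mirroring rule, with a plain dict standing in for `dify_config.model_dump()`:

import os

def populate_env(config: dict) -> None:
    # Strings pass through, numbers and booleans are stringified, None becomes "".
    for key, value in config.items():
        if isinstance(value, str):
            os.environ[key] = value
        elif isinstance(value, int | float | bool):
            os.environ[key] = str(value)
        elif value is None:
            os.environ[key] = ""

populate_env({"DEBUG": False, "WORKERS": 4, "SENTRY_DSN": None})
assert os.environ["DEBUG"] == "False" and os.environ["SENTRY_DSN"] == ""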

View File

@@ -259,26 +259,6 @@ def migrate_knowledge_vector_database():
skipped_count = 0
total_count = 0
vector_type = dify_config.VECTOR_STORE
upper_colletion_vector_types = {
VectorType.MILVUS,
VectorType.PGVECTOR,
VectorType.RELYT,
VectorType.WEAVIATE,
VectorType.ORACLE,
VectorType.ELASTICSEARCH,
}
lower_colletion_vector_types = {
VectorType.ANALYTICDB,
VectorType.CHROMA,
VectorType.MYSCALE,
VectorType.PGVECTO_RS,
VectorType.TIDB_VECTOR,
VectorType.OPENSEARCH,
VectorType.TENCENT,
VectorType.BAIDU,
VectorType.VIKINGDB,
VectorType.UPSTASH,
}
page = 1
while True:
try:
@@ -304,9 +284,11 @@ def migrate_knowledge_vector_database():
skipped_count = skipped_count + 1
continue
collection_name = ""
dataset_id = dataset.id
if vector_type in upper_colletion_vector_types:
if vector_type == VectorType.WEAVIATE:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": VectorType.WEAVIATE, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.QDRANT:
if dataset.collection_binding_id:
dataset_collection_binding = (
@@ -319,15 +301,55 @@ def migrate_knowledge_vector_database():
else:
raise ValueError("Dataset Collection Binding not found")
else:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": VectorType.QDRANT, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type in lower_colletion_vector_types:
collection_name = Dataset.gen_collection_name_by_id(dataset_id).lower()
elif vector_type == VectorType.MILVUS:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": VectorType.MILVUS, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.RELYT:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": "relyt", "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.TENCENT:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": VectorType.TENCENT, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.PGVECTOR:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": VectorType.PGVECTOR, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.OPENSEARCH:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": VectorType.OPENSEARCH,
"vector_store": {"class_prefix": collection_name},
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.ANALYTICDB:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": VectorType.ANALYTICDB,
"vector_store": {"class_prefix": collection_name},
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.ELASTICSEARCH:
dataset_id = dataset.id
index_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {"type": "elasticsearch", "vector_store": {"class_prefix": index_name}}
dataset.index_struct = json.dumps(index_struct_dict)
else:
raise ValueError(f"Vector store {vector_type} is not supported.")
index_struct_dict = {"type": vector_type, "vector_store": {"class_prefix": collection_name}}
dataset.index_struct = json.dumps(index_struct_dict)
vector = Vector(dataset)
click.echo(f"Migrating dataset {dataset.id}.")
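
The refactor above replaces one nearly identical branch per provider with two sets that differ only in collection-name casing (the Qdrant collection-binding special case is kept separate). A reduced sketch of the resulting dispatch, with abbreviated set contents and a simplified stand-in for the `Dataset` helper:

import json

UPPER_TYPES = {"milvus", "pgvector", "relyt", "weaviate", "oracle", "elasticsearch"}
LOWER_TYPES = {"analyticdb", "chroma", "opensearch", "tencent", "tidb_vector"}

def gen_collection_name_by_id(dataset_id: str) -> str:
    # Stand-in; the real helper derives a name like this from the dataset UUID.
    return f"Vector_index_{dataset_id.replace('-', '_')}_Node"

def build_index_struct(vector_type: str, dataset_id: str) -> str:
    if vector_type in UPPER_TYPES:
        collection_name = gen_collection_name_by_id(dataset_id)
    elif vector_type in LOWER_TYPES:
        collection_name = gen_collection_name_by_id(dataset_id).lower()
    else:
        raise ValueError(f"Vector store {vector_type} is not supported.")
    return json.dumps({"type": vector_type, "vector_store": {"class_prefix": collection_name}})

print(build_index_struct("chroma", "123e4567-e89b-12d3-a456-426614174000"))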

View File

@@ -1,15 +1,6 @@
from typing import Annotated, Literal, Optional
from pydantic import (
AliasChoices,
Field,
HttpUrl,
NegativeInt,
NonNegativeInt,
PositiveFloat,
PositiveInt,
computed_field,
)
from pydantic import AliasChoices, Field, HttpUrl, NegativeInt, NonNegativeInt, PositiveInt, computed_field
from pydantic_settings import BaseSettings
from configs.feature.hosted_service import HostedServiceConfig
@@ -27,24 +18,9 @@ class SecurityConfig(BaseSettings):
default="",
)
RESET_PASSWORD_TOKEN_EXPIRY_MINUTES: PositiveInt = Field(
description="Duration in minutes for which a password reset token remains valid",
default=5,
)
LOGIN_DISABLED: bool = Field(
description="Whether to disable login checks",
default=False,
)
ADMIN_API_KEY_ENABLE: bool = Field(
description="Whether to enable admin api key for authentication",
default=False,
)
ADMIN_API_KEY: Optional[str] = Field(
description="admin api key for authentication",
default=None,
RESET_PASSWORD_TOKEN_EXPIRY_HOURS: PositiveInt = Field(
description="Duration in hours for which a password reset token remains valid",
default=24,
)
@@ -281,12 +257,6 @@ class HttpConfig(BaseSettings):
default=None,
)
RESPECT_XFORWARD_HEADERS_ENABLED: bool = Field(
description="Enable or disable the X-Forwarded-For Proxy Fix middleware from Werkzeug"
" to respect X-* headers to redirect clients",
default=False,
)
class InnerAPIConfig(BaseSettings):
"""
@@ -319,16 +289,6 @@ class LoggingConfig(BaseSettings):
default=None,
)
LOG_FILE_MAX_SIZE: PositiveInt = Field(
description="Maximum file size for file rotation retention, the unit is megabytes (MB)",
default=20,
)
LOG_FILE_BACKUP_COUNT: PositiveInt = Field(
description="Maximum file backup count file rotation retention",
default=5,
)
LOG_FORMAT: str = Field(
description="Format string for log messages",
default="%(asctime)s.%(msecs)03d %(levelname)s [%(threadName)s] [%(filename)s:%(lineno)d] - %(message)s",
@@ -404,9 +364,9 @@ class WorkflowConfig(BaseSettings):
)
class AuthConfig(BaseSettings):
class OAuthConfig(BaseSettings):
"""
Configuration for authentication and OAuth
Configuration for OAuth authentication
"""
OAUTH_REDIRECT_PATH: str = Field(
@@ -415,7 +375,7 @@ class AuthConfig(BaseSettings):
)
GITHUB_CLIENT_ID: Optional[str] = Field(
description="GitHub OAuth client ID",
description="GitHub OAuth client secret",
default=None,
)
@@ -434,11 +394,6 @@ class AuthConfig(BaseSettings):
default=None,
)
ACCESS_TOKEN_EXPIRE_MINUTES: PositiveInt = Field(
description="Expiration time for access tokens in minutes",
default=60,
)
class ModerationConfig(BaseSettings):
"""
@@ -517,11 +472,6 @@ class MailConfig(BaseSettings):
default=False,
)
EMAIL_SEND_IP_LIMIT_PER_MINUTE: PositiveInt = Field(
description="Maximum number of emails allowed to be sent from the same IP address in a minute",
default=50,
)
class RagEtlConfig(BaseSettings):
"""
@@ -556,26 +506,16 @@ class DataSetConfig(BaseSettings):
Configuration for dataset management
"""
PLAN_SANDBOX_CLEAN_DAY_SETTING: PositiveInt = Field(
description="Interval in days for dataset cleanup operations - plan: sandbox",
CLEAN_DAY_SETTING: PositiveInt = Field(
description="Interval in days for dataset cleanup operations",
default=30,
)
PLAN_PRO_CLEAN_DAY_SETTING: PositiveInt = Field(
description="Interval in days for dataset cleanup operations - plan: pro and team",
default=7,
)
DATASET_OPERATOR_ENABLED: bool = Field(
description="Enable or disable dataset operator functionality",
default=False,
)
TIDB_SERVERLESS_NUMBER: PositiveInt = Field(
description="number of tidb serverless cluster",
default=500,
)
class WorkspaceConfig(BaseSettings):
"""
@@ -669,37 +609,9 @@ class PositionConfig(BaseSettings):
return {item.strip() for item in self.POSITION_TOOL_EXCLUDES.split(",") if item.strip() != ""}
class LoginConfig(BaseSettings):
ENABLE_EMAIL_CODE_LOGIN: bool = Field(
description="whether to enable email code login",
default=False,
)
ENABLE_EMAIL_PASSWORD_LOGIN: bool = Field(
description="whether to enable email password login",
default=True,
)
ENABLE_SOCIAL_OAUTH_LOGIN: bool = Field(
description="whether to enable github/google oauth login",
default=False,
)
EMAIL_CODE_LOGIN_TOKEN_EXPIRY_MINUTES: PositiveInt = Field(
description="expiry time in minutes for email code login token",
default=5,
)
ALLOW_REGISTER: bool = Field(
description="whether to enable register",
default=False,
)
ALLOW_CREATE_WORKSPACE: bool = Field(
description="whether to enable create workspace",
default=False,
)
class FeatureConfig(
# place the configs in alphabet order
AppExecutionConfig,
AuthConfig, # Changed from OAuthConfig to AuthConfig
BillingConfig,
CodeExecutionSandboxConfig,
DataSetConfig,
@@ -714,14 +626,14 @@ class FeatureConfig(
MailConfig,
ModelLoadBalanceConfig,
ModerationConfig,
PositionConfig,
OAuthConfig,
RagEtlConfig,
SecurityConfig,
ToolConfig,
UpdateConfig,
WorkflowConfig,
WorkspaceConfig,
LoginConfig,
PositionConfig,
# hosted services config
HostedServiceConfig,
CeleryBeatConfig,
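
Every settings class in this file follows the same pydantic-settings shape: a typed `Field` with a description and a default, overridable through an environment variable of the same name. A minimal sketch of that pattern, with an invented field name:

import os

from pydantic import Field, PositiveInt
from pydantic_settings import BaseSettings

class DemoConfig(BaseSettings):
    DEMO_TOKEN_EXPIRY_MINUTES: PositiveInt = Field(
        description="Duration in minutes for which a demo token remains valid",
        default=5,
    )

assert DemoConfig().DEMO_TOKEN_EXPIRY_MINUTES == 5
os.environ["DEMO_TOKEN_EXPIRY_MINUTES"] = "10"
assert DemoConfig().DEMO_TOKEN_EXPIRY_MINUTES == 10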

View File

@@ -8,11 +8,9 @@ from configs.middleware.cache.redis_config import RedisConfig
from configs.middleware.storage.aliyun_oss_storage_config import AliyunOSSStorageConfig
from configs.middleware.storage.amazon_s3_storage_config import S3StorageConfig
from configs.middleware.storage.azure_blob_storage_config import AzureBlobStorageConfig
from configs.middleware.storage.baidu_obs_storage_config import BaiduOBSStorageConfig
from configs.middleware.storage.google_cloud_storage_config import GoogleCloudStorageConfig
from configs.middleware.storage.huawei_obs_storage_config import HuaweiCloudOBSStorageConfig
from configs.middleware.storage.oci_storage_config import OCIStorageConfig
from configs.middleware.storage.supabase_storage_config import SupabaseStorageConfig
from configs.middleware.storage.tencent_cos_storage_config import TencentCloudCOSStorageConfig
from configs.middleware.storage.volcengine_tos_storage_config import VolcengineTOSStorageConfig
from configs.middleware.vdb.analyticdb_config import AnalyticdbConfig
@@ -27,18 +25,14 @@ from configs.middleware.vdb.pgvectors_config import PGVectoRSConfig
from configs.middleware.vdb.qdrant_config import QdrantConfig
from configs.middleware.vdb.relyt_config import RelytConfig
from configs.middleware.vdb.tencent_vector_config import TencentVectorDBConfig
from configs.middleware.vdb.tidb_on_qdrant_config import TidbOnQdrantConfig
from configs.middleware.vdb.tidb_vector_config import TiDBVectorConfig
from configs.middleware.vdb.upstash_config import UpstashConfig
from configs.middleware.vdb.vikingdb_config import VikingDBConfig
from configs.middleware.vdb.weaviate_config import WeaviateConfig
class StorageConfig(BaseSettings):
STORAGE_TYPE: str = Field(
description="Type of storage to use."
" Options: 'local', 's3', 'aliyun-oss', 'azure-blob', 'baidu-obs', 'google-storage', 'huawei-obs', "
"'oci-storage', 'tencent-cos', 'volcengine-tos', 'supabase'. Default is 'local'.",
" Options: 'local', 's3', 'azure-blob', 'aliyun-oss', 'google-storage'. Default is 'local'.",
default="local",
)
@@ -55,11 +49,6 @@ class VectorStoreConfig(BaseSettings):
default=None,
)
VECTOR_STORE_WHITELIST_ENABLE: Optional[bool] = Field(
description="Enable whitelist for vector store.",
default=False,
)
class KeywordStoreConfig(BaseSettings):
KEYWORD_STORE: str = Field(
@@ -201,22 +190,6 @@ class CeleryConfig(DatabaseConfig):
return self.CELERY_BROKER_URL.startswith("rediss://") if self.CELERY_BROKER_URL else False
class InternalTestConfig(BaseSettings):
"""
Configuration settings for Internal Test
"""
AWS_SECRET_ACCESS_KEY: Optional[str] = Field(
description="Internal test AWS secret access key",
default=None,
)
AWS_ACCESS_KEY_ID: Optional[str] = Field(
description="Internal test AWS access key ID",
default=None,
)
class MiddlewareConfig(
# place the configs in alphabet order
CeleryConfig,
@@ -227,14 +200,12 @@ class MiddlewareConfig(
StorageConfig,
AliyunOSSStorageConfig,
AzureBlobStorageConfig,
BaiduOBSStorageConfig,
GoogleCloudStorageConfig,
HuaweiCloudOBSStorageConfig,
OCIStorageConfig,
S3StorageConfig,
SupabaseStorageConfig,
TencentCloudCOSStorageConfig,
HuaweiCloudOBSStorageConfig,
VolcengineTOSStorageConfig,
S3StorageConfig,
OCIStorageConfig,
# configs of vdb and vdb providers
VectorStoreConfig,
AnalyticdbConfig,
@@ -251,9 +222,5 @@ class MiddlewareConfig(
TiDBVectorConfig,
WeaviateConfig,
ElasticsearchConfig,
InternalTestConfig,
VikingDBConfig,
UpstashConfig,
TidbOnQdrantConfig,
):
pass
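
`MiddlewareConfig` above (like `FeatureConfig` in the previous file) assembles its surface by multiply inheriting many small `BaseSettings` classes, so a single object exposes every group's fields. A two-class sketch of that composition; the field defaults here are illustrative:

from pydantic import Field
from pydantic_settings import BaseSettings

class StorageConfig(BaseSettings):
    STORAGE_TYPE: str = Field(description="Type of storage to use.", default="local")

class KeywordStoreConfig(BaseSettings):
    KEYWORD_STORE: str = Field(description="Keyword store backend.", default="jieba")

class MiddlewareConfig(StorageConfig, KeywordStoreConfig):
    pass

cfg = MiddlewareConfig()
print(cfg.STORAGE_TYPE, cfg.KEYWORD_STORE)  # local jieba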

View File

@@ -1,29 +0,0 @@
from typing import Optional
from pydantic import BaseModel, Field
class BaiduOBSStorageConfig(BaseModel):
"""
Configuration settings for Baidu Object Storage Service (OBS)
"""
BAIDU_OBS_BUCKET_NAME: Optional[str] = Field(
description="Name of the Baidu OBS bucket to store and retrieve objects (e.g., 'my-obs-bucket')",
default=None,
)
BAIDU_OBS_ACCESS_KEY: Optional[str] = Field(
description="Access Key ID for authenticating with Baidu OBS",
default=None,
)
BAIDU_OBS_SECRET_KEY: Optional[str] = Field(
description="Secret Access Key for authenticating with Baidu OBS",
default=None,
)
BAIDU_OBS_ENDPOINT: Optional[str] = Field(
description="URL of the Baidu OSS endpoint for your chosen region (e.g., 'https://.bj.bcebos.com')",
default=None,
)

View File

@@ -1,24 +0,0 @@
from typing import Optional
from pydantic import BaseModel, Field
class SupabaseStorageConfig(BaseModel):
"""
Configuration settings for Supabase Object Storage Service
"""
SUPABASE_BUCKET_NAME: Optional[str] = Field(
description="Name of the Supabase bucket to store and retrieve objects (e.g., 'dify-bucket')",
default=None,
)
SUPABASE_API_KEY: Optional[str] = Field(
description="API KEY for authenticating with Supabase",
default=None,
)
SUPABASE_URL: Optional[str] = Field(
description="URL of the Supabase",
default=None,
)

View File

@@ -1,45 +0,0 @@
from typing import Optional
from pydantic import Field, NonNegativeInt, PositiveInt
from pydantic_settings import BaseSettings
class BaiduVectorDBConfig(BaseSettings):
"""
Configuration settings for Baidu Vector Database
"""
BAIDU_VECTOR_DB_ENDPOINT: Optional[str] = Field(
description="URL of the Baidu Vector Database service (e.g., 'http://vdb.bj.baidubce.com')",
default=None,
)
BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS: PositiveInt = Field(
description="Timeout in milliseconds for Baidu Vector Database operations (default is 30000 milliseconds)",
default=30000,
)
BAIDU_VECTOR_DB_ACCOUNT: Optional[str] = Field(
description="Account for authenticating with the Baidu Vector Database",
default=None,
)
BAIDU_VECTOR_DB_API_KEY: Optional[str] = Field(
description="API key for authenticating with the Baidu Vector Database service",
default=None,
)
BAIDU_VECTOR_DB_DATABASE: Optional[str] = Field(
description="Name of the specific Baidu Vector Database to connect to",
default=None,
)
BAIDU_VECTOR_DB_SHARD: PositiveInt = Field(
description="Number of shards for the Baidu Vector Database (default is 1)",
default=1,
)
BAIDU_VECTOR_DB_REPLICAS: NonNegativeInt = Field(
description="Number of replicas for the Baidu Vector Database (default is 3)",
default=3,
)

View File

@@ -14,7 +14,7 @@ class OracleConfig(BaseSettings):
default=None,
)
ORACLE_PORT: PositiveInt = Field(
ORACLE_PORT: Optional[PositiveInt] = Field(
description="Port number on which the Oracle database server is listening (default is 1521)",
default=1521,
)

View File

@@ -14,7 +14,7 @@ class PGVectorConfig(BaseSettings):
default=None,
)
PGVECTOR_PORT: PositiveInt = Field(
PGVECTOR_PORT: Optional[PositiveInt] = Field(
description="Port number on which the PostgreSQL server is listening (default is 5433)",
default=5433,
)

View File

@@ -14,7 +14,7 @@ class PGVectoRSConfig(BaseSettings):
default=None,
)
PGVECTO_RS_PORT: PositiveInt = Field(
PGVECTO_RS_PORT: Optional[PositiveInt] = Field(
description="Port number on which the PostgreSQL server with PGVecto.RS is listening (default is 5431)",
default=5431,
)
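
The port fields in this file and the two preceding ones change from `PositiveInt` to `Optional[PositiveInt]` while keeping their defaults; the practical effect is that an explicit null no longer fails validation, while an unset variable still falls back to the default. A sketch with an invented field name:

from typing import Optional

from pydantic import Field, PositiveInt
from pydantic_settings import BaseSettings

class DbConfig(BaseSettings):
    DEMO_DB_PORT: Optional[PositiveInt] = Field(
        description="Port number on which the server is listening (default is 5432)",
        default=5432,
    )

assert DbConfig().DEMO_DB_PORT == 5432                   # unset: default applies
assert DbConfig(DEMO_DB_PORT=None).DEMO_DB_PORT is None  # explicit None now validates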

View File

@@ -1,65 +0,0 @@
from typing import Optional
from pydantic import Field, NonNegativeInt, PositiveInt
from pydantic_settings import BaseSettings
class TidbOnQdrantConfig(BaseSettings):
"""
Tidb on Qdrant configs
"""
TIDB_ON_QDRANT_URL: Optional[str] = Field(
description="Tidb on Qdrant url",
default=None,
)
TIDB_ON_QDRANT_API_KEY: Optional[str] = Field(
description="Tidb on Qdrant api key",
default=None,
)
TIDB_ON_QDRANT_CLIENT_TIMEOUT: NonNegativeInt = Field(
description="Tidb on Qdrant client timeout in seconds",
default=20,
)
TIDB_ON_QDRANT_GRPC_ENABLED: bool = Field(
description="whether enable grpc support for Tidb on Qdrant connection",
default=False,
)
TIDB_ON_QDRANT_GRPC_PORT: PositiveInt = Field(
description="Tidb on Qdrant grpc port",
default=6334,
)
TIDB_PUBLIC_KEY: Optional[str] = Field(
description="Tidb account public key",
default=None,
)
TIDB_PRIVATE_KEY: Optional[str] = Field(
description="Tidb account private key",
default=None,
)
TIDB_API_URL: Optional[str] = Field(
description="Tidb API url",
default=None,
)
TIDB_IAM_API_URL: Optional[str] = Field(
description="Tidb IAM API url",
default=None,
)
TIDB_REGION: Optional[str] = Field(
description="Tidb serverless region",
default="regions/aws-us-east-1",
)
TIDB_PROJECT_ID: Optional[str] = Field(
description="Tidb project id",
default=None,
)

View File

@@ -1,20 +0,0 @@
from typing import Optional
from pydantic import Field
from pydantic_settings import BaseSettings
class UpstashConfig(BaseSettings):
"""
Configuration settings for Upstash vector database
"""
UPSTASH_VECTOR_URL: Optional[str] = Field(
description="URL of the upstash server (e.g., 'https://vector.upstash.io')",
default=None,
)
UPSTASH_VECTOR_TOKEN: Optional[str] = Field(
description="Token for authenticating with the upstash server",
default=None,
)

View File

@@ -1,49 +0,0 @@
from typing import Optional
from pydantic import BaseModel, Field
class VikingDBConfig(BaseModel):
"""
Configuration for connecting to Volcengine VikingDB.
Refer to the following documentation for details on obtaining credentials:
https://www.volcengine.com/docs/6291/65568
"""
VIKINGDB_ACCESS_KEY: Optional[str] = Field(
description="The Access Key provided by Volcengine VikingDB for API authentication."
"Refer to the following documentation for details on obtaining credentials:"
"https://www.volcengine.com/docs/6291/65568",
default=None,
)
VIKINGDB_SECRET_KEY: Optional[str] = Field(
description="The Secret Key provided by Volcengine VikingDB for API authentication.",
default=None,
)
VIKINGDB_REGION: str = Field(
description="The region of the Volcengine VikingDB service.(e.g., 'cn-shanghai', 'cn-beijing').",
default="cn-shanghai",
)
VIKINGDB_HOST: str = Field(
description="The host of the Volcengine VikingDB service.(e.g., 'api-vikingdb.volces.com', \
'api-vikingdb.mlp.cn-shanghai.volces.com')",
default="api-vikingdb.mlp.cn-shanghai.volces.com",
)
VIKINGDB_SCHEME: str = Field(
description="The scheme of the Volcengine VikingDB service.(e.g., 'http', 'https').",
default="http",
)
VIKINGDB_CONNECTION_TIMEOUT: int = Field(
description="The connection timeout of the Volcengine VikingDB service.",
default=30,
)
VIKINGDB_SOCKET_TIMEOUT: int = Field(
description="The socket timeout of the Volcengine VikingDB service.",
default=30,
)

View File

@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="0.10.2",
default="0.10.0-beta2",
)
COMMIT_SHA: str = Field(

View File

@@ -12,13 +12,10 @@ VIDEO_EXTENSIONS.extend([ext.upper() for ext in VIDEO_EXTENSIONS])
AUDIO_EXTENSIONS = ["mp3", "m4a", "wav", "webm", "amr"]
AUDIO_EXTENSIONS.extend([ext.upper() for ext in AUDIO_EXTENSIONS])
DOCUMENT_EXTENSIONS = ["txt", "markdown", "md", "pdf", "html", "htm", "xlsx", "xls", "docx", "csv"]
DOCUMENT_EXTENSIONS.extend([ext.upper() for ext in DOCUMENT_EXTENSIONS])
if dify_config.ETL_TYPE == "Unstructured":
DOCUMENT_EXTENSIONS = ["txt", "markdown", "md", "pdf", "html", "htm", "xlsx", "xls"]
DOCUMENT_EXTENSIONS.extend(("docx", "csv", "eml", "msg", "pptx", "xml", "epub"))
if dify_config.UNSTRUCTURED_API_URL:
DOCUMENT_EXTENSIONS.append("ppt")
DOCUMENT_EXTENSIONS.extend([ext.upper() for ext in DOCUMENT_EXTENSIONS])
else:
DOCUMENT_EXTENSIONS = ["txt", "markdown", "md", "pdf", "html", "htm", "xlsx", "xls", "docx", "csv"]
DOCUMENT_EXTENSIONS.extend(("docx", "csv", "eml", "msg", "pptx", "ppt", "xml", "epub"))
DOCUMENT_EXTENSIONS.extend([ext.upper() for ext in DOCUMENT_EXTENSIONS])
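
Both branches above end by appending the uppercase twin of every extension gathered so far, so membership tests against `DOCUMENT_EXTENSIONS` cover either casing without lowercasing the input. A condensed sketch of the pattern:

DOCUMENT_EXTENSIONS = ["txt", "markdown", "md", "pdf", "html", "htm", "xlsx", "xls"]
DOCUMENT_EXTENSIONS.extend(("docx", "csv", "eml", "msg", "pptx", "xml", "epub"))
# Mirror every entry collected so far with its uppercase form in one pass.
DOCUMENT_EXTENSIONS.extend([ext.upper() for ext in DOCUMENT_EXTENSIONS])
assert "PDF" in DOCUMENT_EXTENSIONS and "epub" in DOCUMENT_EXTENSIONS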

View File

@@ -1,10 +1,10 @@
import os
from functools import wraps
from flask import request
from flask_restful import Resource, reqparse
from werkzeug.exceptions import NotFound, Unauthorized
from configs import dify_config
from constants.languages import supported_language
from controllers.console import api
from controllers.console.wraps import only_edition_cloud
@@ -15,7 +15,7 @@ from models.model import App, InstalledApp, RecommendedApp
def admin_required(view):
@wraps(view)
def decorated(*args, **kwargs):
if not dify_config.ADMIN_API_KEY:
if not os.getenv("ADMIN_API_KEY"):
raise Unauthorized("API key is invalid.")
auth_header = request.headers.get("Authorization")
@@ -31,7 +31,7 @@ def admin_required(view):
if auth_scheme != "bearer":
raise Unauthorized("Invalid Authorization header format. Expected 'Bearer <api-key>' format.")
if dify_config.ADMIN_API_KEY != auth_token:
if os.getenv("ADMIN_API_KEY") != auth_token:
raise Unauthorized("API key is invalid.")
return view(*args, **kwargs)
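
On the older side above, `admin_required` reads `ADMIN_API_KEY` from the environment on every request rather than from `dify_config`. A trimmed sketch of the decorator's check order, with the header passed in as an argument instead of read from `flask.request`:

import os
from functools import wraps

class Unauthorized(Exception):  # stand-in for werkzeug.exceptions.Unauthorized
    pass

def admin_required(view):
    @wraps(view)
    def decorated(*args, auth_header: str = "", **kwargs):
        admin_api_key = os.getenv("ADMIN_API_KEY")
        if not admin_api_key:
            raise Unauthorized("API key is invalid.")
        auth_scheme, _, auth_token = auth_header.partition(" ")
        if auth_scheme.lower() != "bearer" or admin_api_key != auth_token:
            raise Unauthorized("API key is invalid.")
        return view(*args, **kwargs)
    return decorated

@admin_required
def ping():
    return "pong"

os.environ["ADMIN_API_KEY"] = "secret"
print(ping(auth_header="Bearer secret"))  # pong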

View File

@@ -189,7 +189,6 @@ class ChatConversationApi(Resource):
subquery.c.from_end_user_session_id.ilike(keyword_filter),
),
)
.group_by(Conversation.id)
)
account = current_user

View File

@@ -52,39 +52,4 @@ class RuleGenerateApi(Resource):
return rules
class RuleCodeGenerateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("instruction", type=str, required=True, nullable=False, location="json")
parser.add_argument("model_config", type=dict, required=True, nullable=False, location="json")
parser.add_argument("no_variable", type=bool, required=True, default=False, location="json")
parser.add_argument("code_language", type=str, required=False, default="javascript", location="json")
args = parser.parse_args()
account = current_user
CODE_GENERATION_MAX_TOKENS = int(os.getenv("CODE_GENERATION_MAX_TOKENS", "1024"))
try:
code_result = LLMGenerator.generate_code(
tenant_id=account.current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
code_language=args["code_language"],
max_tokens=CODE_GENERATION_MAX_TOKENS,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
raise ProviderQuotaExceededError()
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
except InvokeError as e:
raise CompletionRequestError(e.description)
return code_result
api.add_resource(RuleGenerateApi, "/rule-generate")
api.add_resource(RuleCodeGenerateApi, "/rule-code-generate")

View File

@@ -105,8 +105,6 @@ class ChatMessageListApi(Resource):
if rest_count > 0:
has_more = True
history_messages = list(reversed(history_messages))
return InfiniteScrollPagination(data=history_messages, limit=args["limit"], has_more=has_more)
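
The surrounding handler fetches messages newest-first, peeks whether rows remain beyond the page (`rest_count > 0`), then reverses the page for display. A list-based sketch of that pagination contract:

def paginate_history(messages_desc: list, limit: int) -> dict:
    # messages_desc is newest-first, as the query in the handler fetches it.
    page = messages_desc[:limit]
    has_more = len(messages_desc) > limit  # mirrors the rest_count > 0 check
    return {"data": list(reversed(page)), "limit": limit, "has_more": has_more}

print(paginate_history([5, 4, 3, 2, 1], limit=2))
# {'data': [4, 5], 'limit': 2, 'has_more': True}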

View File

@@ -10,10 +10,10 @@ from controllers.console import api
from controllers.console.app.wraps import get_app_model
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from enums import WorkflowRunTriggeredFrom
from extensions.ext_database import db
from libs.helper import DatetimeString
from libs.login import login_required
from models.enums import WorkflowRunTriggeredFrom
from models.model import AppMode

View File

@@ -1,15 +1,17 @@
import base64
import datetime
import secrets
from flask import request
from flask_restful import Resource, reqparse
from constants.languages import supported_language
from controllers.console import api
from controllers.console.error import AlreadyActivateError
from extensions.ext_database import db
from libs.helper import StrLen, email, extract_remote_ip, timezone
from models.account import AccountStatus, Tenant
from services.account_service import AccountService, RegisterService
from libs.helper import StrLen, email, timezone
from libs.password import hash_password, valid_password
from models.account import AccountStatus
from services.account_service import RegisterService
class ActivateCheckApi(Resource):
@@ -25,18 +27,8 @@ class ActivateCheckApi(Resource):
token = args["token"]
invitation = RegisterService.get_invitation_if_token_valid(workspaceId, reg_email, token)
if invitation:
data = invitation.get("data", {})
tenant: Tenant = invitation.get("tenant", None)
workspace_name = tenant.name if tenant else None
workspace_id = tenant.id if tenant else None
invitee_email = data.get("email") if data else None
return {
"is_valid": invitation is not None,
"data": {"workspace_name": workspace_name, "workspace_id": workspace_id, "email": invitee_email},
}
else:
return {"is_valid": False}
return {"is_valid": invitation is not None, "workspace_name": invitation["tenant"].name if invitation else None}
class ActivateApi(Resource):
@@ -46,6 +38,7 @@ class ActivateApi(Resource):
parser.add_argument("email", type=email, required=False, nullable=True, location="json")
parser.add_argument("token", type=str, required=True, nullable=False, location="json")
parser.add_argument("name", type=StrLen(30), required=True, nullable=False, location="json")
parser.add_argument("password", type=valid_password, required=True, nullable=False, location="json")
parser.add_argument(
"interface_language", type=supported_language, required=True, nullable=False, location="json"
)
@@ -61,6 +54,15 @@ class ActivateApi(Resource):
account = invitation["account"]
account.name = args["name"]
# generate password salt
salt = secrets.token_bytes(16)
base64_salt = base64.b64encode(salt).decode()
# encrypt password with salt
password_hashed = hash_password(args["password"], salt)
base64_password_hashed = base64.b64encode(password_hashed).decode()
account.password = base64_password_hashed
account.password_salt = base64_salt
account.interface_language = args["interface_language"]
account.timezone = args["timezone"]
account.interface_theme = "light"
@@ -68,9 +70,7 @@ class ActivateApi(Resource):
account.initialized_at = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
db.session.commit()
token_pair = AccountService.login(account, ip_address=extract_remote_ip(request))
return {"result": "success", "data": token_pair.model_dump()}
return {"result": "success"}
api.add_resource(ActivateCheckApi, "/activate/check")
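
The activation flow above salts and hashes the submitted password, then stores both values base64-encoded on the account. A self-contained sketch of that sequence; `hash_password` here is a stand-in for Dify's `libs.password` helper, with PBKDF2 assumed purely for illustration:

import base64
import hashlib
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # Stand-in for libs.password.hash_password; algorithm chosen for the sketch only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

salt = secrets.token_bytes(16)
base64_salt = base64.b64encode(salt).decode()
password_hashed = hash_password("example-password", salt)
base64_password_hashed = base64.b64encode(password_hashed).decode()
# Both strings would be persisted on the account record, as in the diff above.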

View File

@@ -27,29 +27,5 @@ class InvalidTokenError(BaseHTTPException):
class PasswordResetRateLimitExceededError(BaseHTTPException):
error_code = "password_reset_rate_limit_exceeded"
description = "Too many password reset emails have been sent. Please try again in 1 minutes."
code = 429
class EmailCodeError(BaseHTTPException):
error_code = "email_code_error"
description = "Email code is invalid or expired."
code = 400
class EmailOrPasswordMismatchError(BaseHTTPException):
error_code = "email_or_password_mismatch"
description = "The email or password is mismatched."
code = 400
class EmailPasswordLoginLimitError(BaseHTTPException):
error_code = "email_code_login_limit"
description = "Too many incorrect password attempts. Please try again later."
code = 429
class EmailCodeLoginRateLimitExceededError(BaseHTTPException):
error_code = "email_code_login_rate_limit_exceeded"
description = "Too many login emails have been sent. Please try again in 5 minutes."
description = "Password reset rate limit exceeded. Try again later."
code = 429

View File

@@ -1,82 +1,65 @@
import base64
import logging
import secrets
from flask import request
from flask_restful import Resource, reqparse
from constants.languages import languages
from controllers.console import api
from controllers.console.auth.error import (
EmailCodeError,
InvalidEmailError,
InvalidTokenError,
PasswordMismatchError,
PasswordResetRateLimitExceededError,
)
from controllers.console.error import EmailSendIpLimitError, NotAllowedRegister
from controllers.console.setup import setup_required
from events.tenant_event import tenant_was_created
from extensions.ext_database import db
from libs.helper import email, extract_remote_ip
from libs.helper import email as email_validate
from libs.password import hash_password, valid_password
from models.account import Account
from services.account_service import AccountService, TenantService
from services.errors.workspace import WorkSpaceNotAllowedCreateError
from services.feature_service import FeatureService
from models import Account
from services.account_service import AccountService
from services.errors.account import RateLimitExceededError
class ForgotPasswordSendEmailApi(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")
parser.add_argument("language", type=str, required=False, location="json")
parser.add_argument("email", type=str, required=True, location="json")
args = parser.parse_args()
ip_address = extract_remote_ip(request)
if AccountService.is_email_send_ip_limit(ip_address):
raise EmailSendIpLimitError()
email = args["email"]
if args["language"] is not None and args["language"] == "zh-Hans":
language = "zh-Hans"
if not email_validate(email):
raise InvalidEmailError()
account = Account.query.filter_by(email=email).first()
if account:
try:
AccountService.send_reset_password_email(account=account)
except RateLimitExceededError:
logging.warning(f"Rate limit exceeded for email: {account.email}")
raise PasswordResetRateLimitExceededError()
else:
language = "en-US"
# Return success to avoid revealing email registration status
logging.warning(f"Attempt to reset password for unregistered email: {email}")
account = Account.query.filter_by(email=args["email"]).first()
token = None
if account is None:
if FeatureService.get_system_features().is_allow_register:
token = AccountService.send_reset_password_email(email=args["email"], language=language)
return {"result": "fail", "data": token, "code": "account_not_found"}
else:
raise NotAllowedRegister()
else:
token = AccountService.send_reset_password_email(account=account, email=args["email"], language=language)
return {"result": "success", "data": token}
return {"result": "success"}
class ForgotPasswordCheckApi(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=str, required=True, location="json")
parser.add_argument("code", type=str, required=True, location="json")
parser.add_argument("token", type=str, required=True, nullable=False, location="json")
args = parser.parse_args()
token = args["token"]
user_email = args["email"]
reset_data = AccountService.get_reset_password_data(token)
token_data = AccountService.get_reset_password_data(args["token"])
if token_data is None:
raise InvalidTokenError()
if user_email != token_data.get("email"):
raise InvalidEmailError()
if args["code"] != token_data.get("code"):
raise EmailCodeError()
return {"is_valid": True, "email": token_data.get("email")}
if reset_data is None:
return {"is_valid": False, "email": None}
return {"is_valid": True, "email": reset_data.get("email")}
class ForgotPasswordResetApi(Resource):
@@ -109,26 +92,9 @@ class ForgotPasswordResetApi(Resource):
base64_password_hashed = base64.b64encode(password_hashed).decode()
account = Account.query.filter_by(email=reset_data.get("email")).first()
if account:
account.password = base64_password_hashed
account.password_salt = base64_salt
db.session.commit()
tenant = TenantService.get_join_tenants(account)
if not tenant and not FeatureService.get_system_features().is_allow_create_workspace:
tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(tenant, account, role="owner")
account.current_tenant = tenant
tenant_was_created.send(tenant)
else:
try:
account = AccountService.create_account_and_tenant(
email=reset_data.get("email"),
name=reset_data.get("email"),
password=password_confirm,
interface_language=languages[0],
)
except WorkSpaceNotAllowedCreateError:
pass
account.password = base64_password_hashed
account.password_salt = base64_salt
db.session.commit()
return {"result": "success"}
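
The reworked `ForgotPasswordCheckApi` above validates three things in order: the token is known, the email matches the token payload, and the emailed code matches. A sketch of that check, with plain exceptions standing in for the HTTP error classes:

def check_reset_request(email: str, code: str, token_data: dict | None) -> dict:
    if token_data is None:
        raise ValueError("invalid token")     # InvalidTokenError in the diff
    if email != token_data.get("email"):
        raise ValueError("invalid email")     # InvalidEmailError
    if code != token_data.get("code"):
        raise ValueError("email code error")  # EmailCodeError
    return {"is_valid": True, "email": token_data.get("email")}

print(check_reset_request("a@b.c", "1234", {"email": "a@b.c", "code": "1234"}))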

View File

@@ -5,29 +5,12 @@ from flask import request
from flask_restful import Resource, reqparse
import services
from constants.languages import languages
from controllers.console import api
from controllers.console.auth.error import (
EmailCodeError,
EmailOrPasswordMismatchError,
EmailPasswordLoginLimitError,
InvalidEmailError,
InvalidTokenError,
)
from controllers.console.error import (
AccountBannedError,
EmailSendIpLimitError,
NotAllowedCreateWorkspace,
NotAllowedRegister,
)
from controllers.console.setup import setup_required
from events.tenant_event import tenant_was_created
from libs.helper import email, extract_remote_ip
from libs.helper import email, get_remote_ip
from libs.password import valid_password
from models.account import Account
from services.account_service import AccountService, RegisterService, TenantService
from services.errors.workspace import WorkSpaceNotAllowedCreateError
from services.feature_service import FeatureService
from models import Account
from services.account_service import AccountService, TenantService
class LoginApi(Resource):
@@ -40,43 +23,15 @@ class LoginApi(Resource):
parser.add_argument("email", type=email, required=True, location="json")
parser.add_argument("password", type=valid_password, required=True, location="json")
parser.add_argument("remember_me", type=bool, required=False, default=False, location="json")
parser.add_argument("invite_token", type=str, required=False, default=None, location="json")
parser.add_argument("language", type=str, required=False, default="en-US", location="json")
args = parser.parse_args()
is_login_error_rate_limit = AccountService.is_login_error_rate_limit(args["email"])
if is_login_error_rate_limit:
raise EmailPasswordLoginLimitError()
invitation = args["invite_token"]
if invitation:
invitation = RegisterService.get_invitation_if_token_valid(None, args["email"], invitation)
if args["language"] is not None and args["language"] == "zh-Hans":
language = "zh-Hans"
else:
language = "en-US"
# todo: Verify the recaptcha
try:
if invitation:
data = invitation.get("data", {})
invitee_email = data.get("email") if data else None
if invitee_email != args["email"]:
raise InvalidEmailError()
account = AccountService.authenticate(args["email"], args["password"], args["invite_token"])
else:
account = AccountService.authenticate(args["email"], args["password"])
except services.errors.account.AccountLoginError:
raise AccountBannedError()
except services.errors.account.AccountPasswordError:
AccountService.add_login_error_rate_limit(args["email"])
raise EmailOrPasswordMismatchError()
except services.errors.account.AccountNotFoundError:
if FeatureService.get_system_features().is_allow_register:
token = AccountService.send_reset_password_email(email=args["email"], language=language)
return {"result": "fail", "data": token, "code": "account_not_found"}
else:
raise NotAllowedRegister()
account = AccountService.authenticate(args["email"], args["password"])
except services.errors.account.AccountLoginError as e:
return {"code": "unauthorized", "message": str(e)}, 401
# SELF_HOSTED only have one workspace
tenants = TenantService.get_join_tenants(account)
if len(tenants) == 0:
@@ -85,138 +40,71 @@ class LoginApi(Resource):
"data": "workspace not found, please contact system admin to invite you to join in a workspace",
}
token_pair = AccountService.login(account=account, ip_address=extract_remote_ip(request))
AccountService.reset_login_error_rate_limit(args["email"])
return {"result": "success", "data": token_pair.model_dump()}
token = AccountService.login(account, ip_address=get_remote_ip(request))
return {"result": "success", "data": token}
class LogoutApi(Resource):
@setup_required
def get(self):
account = cast(Account, flask_login.current_user)
if isinstance(account, flask_login.AnonymousUserMixin):
return {"result": "success"}
AccountService.logout(account=account)
token = request.headers.get("Authorization", "").split(" ")[1]
AccountService.logout(account=account, token=token)
flask_login.logout_user()
return {"result": "success"}
class ResetPasswordSendEmailApi(Resource):
class ResetPasswordApi(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")
parser.add_argument("language", type=str, required=False, location="json")
args = parser.parse_args()
def get(self):
# parser = reqparse.RequestParser()
# parser.add_argument('email', type=email, required=True, location='json')
# args = parser.parse_args()
if args["language"] is not None and args["language"] == "zh-Hans":
language = "zh-Hans"
else:
language = "en-US"
# import mailchimp_transactional as MailchimpTransactional
# from mailchimp_transactional.api_client import ApiClientError
account = AccountService.get_user_through_email(args["email"])
if account is None:
if FeatureService.get_system_features().is_allow_register:
token = AccountService.send_reset_password_email(email=args["email"], language=language)
else:
raise NotAllowedRegister()
else:
token = AccountService.send_reset_password_email(account=account, language=language)
# account = {'email': args['email']}
# account = AccountService.get_by_email(args['email'])
# if account is None:
# raise ValueError('Email not found')
# new_password = AccountService.generate_password()
# AccountService.update_password(account, new_password)
return {"result": "success", "data": token}
# todo: Send email
# MAILCHIMP_API_KEY = dify_config.MAILCHIMP_TRANSACTIONAL_API_KEY
# mailchimp = MailchimpTransactional(MAILCHIMP_API_KEY)
# message = {
# 'from_email': 'noreply@example.com',
# 'to': [{'email': account['email']}],
# 'subject': 'Reset your Dify password',
# 'html': """
# <p>Dear User,</p>
# <p>The Dify team has generated a new password for you, details as follows:</p>
# <p><strong>{new_password}</strong></p>
# <p>Please change your password to log in as soon as possible.</p>
# <p>Regards,</p>
# <p>The Dify Team</p>
# """
# }
class EmailCodeLoginSendEmailApi(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")
parser.add_argument("language", type=str, required=False, location="json")
args = parser.parse_args()
# response = mailchimp.messages.send({
# 'message': message,
# # required for transactional email
# ' settings': {
# 'sandbox_mode': dify_config.MAILCHIMP_SANDBOX_MODE,
# },
# })
ip_address = extract_remote_ip(request)
if AccountService.is_email_send_ip_limit(ip_address):
raise EmailSendIpLimitError()
# Check if MSG was sent
# if response.status_code != 200:
# # handle error
# pass
if args["language"] is not None and args["language"] == "zh-Hans":
language = "zh-Hans"
else:
language = "en-US"
account = AccountService.get_user_through_email(args["email"])
if account is None:
if FeatureService.get_system_features().is_allow_register:
token = AccountService.send_email_code_login_email(email=args["email"], language=language)
else:
raise NotAllowedRegister()
else:
token = AccountService.send_email_code_login_email(account=account, language=language)
return {"result": "success", "data": token}
class EmailCodeLoginApi(Resource):
@setup_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=str, required=True, location="json")
parser.add_argument("code", type=str, required=True, location="json")
parser.add_argument("token", type=str, required=True, location="json")
args = parser.parse_args()
user_email = args["email"]
token_data = AccountService.get_email_code_login_data(args["token"])
if token_data is None:
raise InvalidTokenError()
if token_data["email"] != args["email"]:
raise InvalidEmailError()
if token_data["code"] != args["code"]:
raise EmailCodeError()
AccountService.revoke_email_code_login_token(args["token"])
account = AccountService.get_user_through_email(user_email)
if account:
tenant = TenantService.get_join_tenants(account)
if not tenant:
if not FeatureService.get_system_features().is_allow_create_workspace:
raise NotAllowedCreateWorkspace()
else:
tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(tenant, account, role="owner")
account.current_tenant = tenant
tenant_was_created.send(tenant)
if account is None:
try:
account = AccountService.create_account_and_tenant(
email=user_email, name=user_email, interface_language=languages[0]
)
except WorkSpaceNotAllowedCreateError:
return NotAllowedCreateWorkspace()
token_pair = AccountService.login(account, ip_address=extract_remote_ip(request))
AccountService.reset_login_error_rate_limit(args["email"])
return {"result": "success", "data": token_pair.model_dump()}
class RefreshTokenApi(Resource):
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("refresh_token", type=str, required=True, location="json")
args = parser.parse_args()
try:
new_token_pair = AccountService.refresh_token(args["refresh_token"])
return {"result": "success", "data": new_token_pair.model_dump()}
except Exception as e:
return {"result": "fail", "data": str(e)}, 401
return {"result": "success"}
api.add_resource(LoginApi, "/login")
api.add_resource(LogoutApi, "/logout")
api.add_resource(EmailCodeLoginSendEmailApi, "/email-code-login")
api.add_resource(EmailCodeLoginApi, "/email-code-login/validity")
api.add_resource(ResetPasswordSendEmailApi, "/reset-password")
api.add_resource(RefreshTokenApi, "/refresh-token")
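
The newer side of this file returns a `token_pair` (access plus refresh token) from login and adds a `/refresh-token` endpoint that exchanges a refresh token for a fresh pair. A minimal sketch of that exchange shape; `TokenPair` and the issuing callable are simplified stand-ins:

from pydantic import BaseModel

class TokenPair(BaseModel):
    access_token: str
    refresh_token: str

def refresh_endpoint(refresh_token: str, issue_pair) -> tuple[dict, int]:
    # issue_pair stands in for AccountService.refresh_token; it raises on bad tokens.
    try:
        pair: TokenPair = issue_pair(refresh_token)
        return {"result": "success", "data": pair.model_dump()}, 200
    except Exception as e:  # the endpoint maps any failure to HTTP 401
        return {"result": "fail", "data": str(e)}, 401

def issue(token: str) -> TokenPair:
    return TokenPair(access_token="new-access", refresh_token="new-refresh")

print(refresh_endpoint("old-refresh", issue))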

View File

@@ -5,20 +5,15 @@ from typing import Optional
import requests
from flask import current_app, redirect, request
from flask_restful import Resource
from werkzeug.exceptions import Unauthorized
from configs import dify_config
from constants.languages import languages
from events.tenant_event import tenant_was_created
from extensions.ext_database import db
from libs.helper import extract_remote_ip
from libs.helper import get_remote_ip
from libs.oauth import GitHubOAuth, GoogleOAuth, OAuthUserInfo
from models import Account
from models.account import AccountStatus
from services.account_service import AccountService, RegisterService, TenantService
from services.errors.account import AccountNotFoundError
from services.errors.workspace import WorkSpaceNotAllowedCreateError, WorkSpaceNotFoundError
from services.feature_service import FeatureService
from .. import api
@@ -48,7 +43,6 @@ def get_oauth_providers():
class OAuthLogin(Resource):
def get(self, provider: str):
invite_token = request.args.get("invite_token") or None
OAUTH_PROVIDERS = get_oauth_providers()
with current_app.app_context():
oauth_provider = OAUTH_PROVIDERS.get(provider)
@@ -56,7 +50,7 @@ class OAuthLogin(Resource):
if not oauth_provider:
return {"error": "Invalid provider"}, 400
auth_url = oauth_provider.get_authorization_url(invite_token=invite_token)
auth_url = oauth_provider.get_authorization_url()
return redirect(auth_url)
@@ -69,11 +63,6 @@ class OAuthCallback(Resource):
return {"error": "Invalid provider"}, 400
code = request.args.get("code")
state = request.args.get("state")
invite_token = None
if state:
invite_token = state
try:
token = oauth_provider.get_access_token(code)
user_info = oauth_provider.get_user_info(token)
@@ -81,52 +70,21 @@ class OAuthCallback(Resource):
logging.exception(f"An error occurred during the OAuth process with {provider}: {e.response.text}")
return {"error": "OAuth process failed"}, 400
if invite_token and RegisterService.is_valid_invite_token(invite_token):
invitation = RegisterService._get_invitation_by_token(token=invite_token)
if invitation:
invitation_email = invitation.get("email", None)
if invitation_email != user_info.email:
return redirect(f"{dify_config.CONSOLE_WEB_URL}/signin?message=Invalid invitation token.")
return redirect(f"{dify_config.CONSOLE_WEB_URL}/signin/invite-settings?invite_token={invite_token}")
try:
account = _generate_account(provider, user_info)
except AccountNotFoundError:
return redirect(f"{dify_config.CONSOLE_WEB_URL}/signin?message=Account not found.")
except (WorkSpaceNotFoundError, WorkSpaceNotAllowedCreateError):
return redirect(
f"{dify_config.CONSOLE_WEB_URL}/signin"
"?message=Workspace not found, please contact system admin to invite you to join in a workspace."
)
account = _generate_account(provider, user_info)
# Check account status
if account.status == AccountStatus.BANNED.value:
return redirect(f"{dify_config.CONSOLE_WEB_URL}/signin?message=Account is banned.")
if account.status in {AccountStatus.BANNED.value, AccountStatus.CLOSED.value}:
return {"error": "Account is banned or closed."}, 403
if account.status == AccountStatus.PENDING.value:
account.status = AccountStatus.ACTIVE.value
account.initialized_at = datetime.now(timezone.utc).replace(tzinfo=None)
db.session.commit()
try:
TenantService.create_owner_tenant_if_not_exist(account)
except Unauthorized:
return redirect(f"{dify_config.CONSOLE_WEB_URL}/signin?message=Workspace not found.")
except WorkSpaceNotAllowedCreateError:
return redirect(
f"{dify_config.CONSOLE_WEB_URL}/signin"
"?message=Workspace not found, please contact system admin to invite you to join in a workspace."
)
TenantService.create_owner_tenant_if_not_exist(account)
token_pair = AccountService.login(
account=account,
ip_address=extract_remote_ip(request),
)
token = AccountService.login(account, ip_address=get_remote_ip(request))
return redirect(
f"{dify_config.CONSOLE_WEB_URL}?access_token={token_pair.access_token}&refresh_token={token_pair.refresh_token}"
)
return redirect(f"{dify_config.CONSOLE_WEB_URL}?console_token={token}")
def _get_account_by_openid_or_email(provider: str, user_info: OAuthUserInfo) -> Optional[Account]:
@@ -142,20 +100,8 @@ def _generate_account(provider: str, user_info: OAuthUserInfo):
# Get account by openid or email.
account = _get_account_by_openid_or_email(provider, user_info)
if account:
tenant = TenantService.get_join_tenants(account)
if not tenant:
if not FeatureService.get_system_features().is_allow_create_workspace:
raise WorkSpaceNotAllowedCreateError()
else:
tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(tenant, account, role="owner")
account.current_tenant = tenant
tenant_was_created.send(tenant)
if not account:
if not FeatureService.get_system_features().is_allow_register:
raise AccountNotFoundError()
# Create account
account_name = user_info.name or "Dify"
account = RegisterService.register(
email=user_info.email, name=account_name, password=None, open_id=user_info.id, provider=provider
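
On its newer side, the callback above exchanges the `code` for a provider token, resolves the account, logs it in, and redirects to the console with both tokens in the query string. A compressed sketch of that happy path; the provider object and both helpers are stand-ins:

from dataclasses import dataclass

@dataclass
class TokenPair:
    access_token: str
    refresh_token: str

def oauth_callback(provider, code: str, console_web_url: str) -> str:
    token = provider.get_access_token(code)    # exchange the authorization code
    user_info = provider.get_user_info(token)  # fetch the user's profile
    account = generate_account(provider, user_info)
    pair = login(account)                      # stand-in for AccountService.login
    return f"{console_web_url}?access_token={pair.access_token}&refresh_token={pair.refresh_token}"

def generate_account(provider, user_info):
    return user_info  # stand-in: the real code matches by open_id or email

def login(account) -> TokenPair:
    return TokenPair("access.example", "refresh.example")

class DemoProvider:
    def get_access_token(self, code):
        return f"tok-{code}"
    def get_user_info(self, token):
        return {"email": "user@example.com"}

print(oauth_callback(DemoProvider(), "abc", "https://console.example.com"))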

View File

@@ -102,13 +102,6 @@ class DatasetListApi(Resource):
help="type is required. Name must be between 1 to 40 characters.",
type=_validate_name,
)
parser.add_argument(
"description",
type=str,
nullable=True,
required=False,
default="",
)
parser.add_argument(
"indexing_technique",
type=str,
@@ -147,7 +140,6 @@ class DatasetListApi(Resource):
dataset = DatasetService.create_empty_dataset(
tenant_id=current_user.current_tenant_id,
name=args["name"],
description=args["description"],
indexing_technique=args["indexing_technique"],
account=current_user,
permission=DatasetPermissionEnum.ONLY_ME,
@@ -625,9 +617,6 @@ class DatasetRetrievalSettingApi(Resource):
| VectorType.CHROMA
| VectorType.TENCENT
| VectorType.PGVECTO_RS
| VectorType.BAIDU
| VectorType.VIKINGDB
| VectorType.UPSTASH
):
return {"retrieval_method": [RetrievalMethod.SEMANTIC_SEARCH.value]}
case (
@@ -639,7 +628,6 @@ class DatasetRetrievalSettingApi(Resource):
| VectorType.ORACLE
| VectorType.ELASTICSEARCH
| VectorType.PGVECTOR
| VectorType.TIDB_ON_QDRANT
):
return {
"retrieval_method": [
@@ -665,9 +653,6 @@ class DatasetRetrievalSettingMockApi(Resource):
| VectorType.CHROMA
| VectorType.TENCENT
| VectorType.PGVECTO_RS
| VectorType.BAIDU
| VectorType.VIKINGDB
| VectorType.UPSTASH
):
return {"retrieval_method": [RetrievalMethod.SEMANTIC_SEARCH.value]}
case (

View File

@@ -13,7 +13,6 @@ from libs.login import login_required
from services.dataset_service import DatasetService
from services.external_knowledge_service import ExternalDatasetService
from services.hit_testing_service import HitTestingService
from services.knowledge_service import ExternalDatasetTestService
def _validate_name(name):
@@ -233,31 +232,8 @@ class ExternalKnowledgeHitTestingApi(Resource):
raise InternalServerError(str(e))
class BedrockRetrievalApi(Resource):
# this api is only for internal testing
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("retrieval_setting", nullable=False, required=True, type=dict, location="json")
parser.add_argument(
"query",
nullable=False,
required=True,
type=str,
)
parser.add_argument("knowledge_id", nullable=False, required=True, type=str)
args = parser.parse_args()
# Call the knowledge retrieval service
result = ExternalDatasetTestService.knowledge_retrieval(
args["retrieval_setting"], args["query"], args["knowledge_id"]
)
return result, 200
api.add_resource(ExternalKnowledgeHitTestingApi, "/datasets/<uuid:dataset_id>/external-hit-testing")
api.add_resource(ExternalDatasetCreateApi, "/datasets/external")
api.add_resource(ExternalApiTemplateListApi, "/datasets/external-knowledge-api")
api.add_resource(ExternalApiTemplateApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>")
api.add_resource(ExternalApiUseCheckApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>/use-check")
# this api is only for internal test
api.add_resource(BedrockRetrievalApi, "/test/retrieval")

View File

@@ -2,7 +2,7 @@ import urllib.parse
from flask import request
from flask_login import current_user
from flask_restful import Resource, marshal_with, reqparse
from flask_restful import Resource, marshal_with
import services
from configs import dify_config
@@ -30,12 +30,13 @@ class FileApi(Resource):
@account_initialization_required
@marshal_with(upload_config_fields)
def get(self):
file_size_limit = dify_config.UPLOAD_FILE_SIZE_LIMIT
batch_count_limit = dify_config.UPLOAD_FILE_BATCH_LIMIT
image_file_size_limit = dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT
return {
"file_size_limit": dify_config.UPLOAD_FILE_SIZE_LIMIT,
"batch_count_limit": dify_config.UPLOAD_FILE_BATCH_LIMIT,
"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT,
"video_file_size_limit": dify_config.UPLOAD_VIDEO_FILE_SIZE_LIMIT,
"audio_file_size_limit": dify_config.UPLOAD_AUDIO_FILE_SIZE_LIMIT,
"file_size_limit": file_size_limit,
"batch_count_limit": batch_count_limit,
"image_file_size_limit": image_file_size_limit,
}, 200
@setup_required
@@ -47,10 +48,6 @@ class FileApi(Resource):
# get file from request
file = request.files["file"]
parser = reqparse.RequestParser()
parser.add_argument("source", type=str, required=False, location="args")
source = parser.parse_args().get("source")
# check file
if "file" not in request.files:
raise NoFileUploadedError()
@@ -58,7 +55,7 @@ class FileApi(Resource):
if len(request.files) > 1:
raise TooManyFilesError()
try:
upload_file = FileService.upload_file(file=file, user=current_user, source=source)
upload_file = FileService.upload_file(file=file, user=current_user)
except services.errors.file.FileTooLargeError as file_too_large_error:
raise FileTooLargeError(file_too_large_error.description)
except services.errors.file.UnsupportedFileTypeError:

View File

@@ -1,24 +1,88 @@
from flask_restful import Resource
import logging
from flask_login import current_user
from flask_restful import Resource, marshal, reqparse
from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services
from controllers.console import api
from controllers.console.datasets.hit_testing_base import DatasetsHitTestingBase
from controllers.console.app.error import (
CompletionRequestError,
ProviderModelCurrentlyNotSupportError,
ProviderNotInitializeError,
ProviderQuotaExceededError,
)
from controllers.console.datasets.error import DatasetNotInitializedError
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from core.errors.error import (
LLMBadRequestError,
ModelCurrentlyNotSupportError,
ProviderTokenNotInitError,
QuotaExceededError,
)
from core.model_runtime.errors.invoke import InvokeError
from fields.hit_testing_fields import hit_testing_record_fields
from libs.login import login_required
from services.dataset_service import DatasetService
from services.hit_testing_service import HitTestingService
class HitTestingApi(Resource, DatasetsHitTestingBase):
class HitTestingApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self, dataset_id):
dataset_id_str = str(dataset_id)
dataset = self.get_and_validate_dataset(dataset_id_str)
args = self.parse_args()
self.hit_testing_args_check(args)
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
return self.perform_hit_testing(dataset, args)
try:
DatasetService.check_dataset_permission(dataset, current_user)
except services.errors.account.NoPermissionError as e:
raise Forbidden(str(e))
parser = reqparse.RequestParser()
parser.add_argument("query", type=str, location="json")
parser.add_argument("retrieval_model", type=dict, required=False, location="json")
parser.add_argument("external_retrieval_model", type=dict, required=False, location="json")
args = parser.parse_args()
HitTestingService.hit_testing_args_check(args)
try:
response = HitTestingService.retrieve(
dataset=dataset,
query=args["query"],
account=current_user,
retrieval_model=args["retrieval_model"],
external_retrieval_model=args["external_retrieval_model"],
limit=10,
)
return {"query": response["query"], "records": marshal(response["records"], hit_testing_record_fields)}
except services.errors.index.IndexNotInitializedError:
raise DatasetNotInitializedError()
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
raise ProviderQuotaExceededError()
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
except LLMBadRequestError:
raise ProviderNotInitializeError(
"No Embedding Model or Reranking Model available. Please configure a valid provider "
"in the Settings -> Model Provider."
)
except InvokeError as e:
raise CompletionRequestError(e.description)
except ValueError as e:
raise ValueError(str(e))
except Exception as e:
logging.exception("Hit testing failed.")
raise InternalServerError(str(e))
api.add_resource(HitTestingApi, "/datasets/<uuid:dataset_id>/hit-testing")
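For reference, a client call against the hit-testing route registered above might look like the sketch below. The base URL, console URL prefix, token, and payload values are assumptions for illustration; only the route and the parser's argument names come from the diff.
import requests
resp = requests.post(
    "http://localhost:5001/console/api/datasets/<dataset_id>/hit-testing",
    headers={"Authorization": "Bearer <console-token>"},
    json={
        "query": "What is the refund policy?",
        "retrieval_model": {"search_method": "semantic_search", "top_k": 10},  # optional; shape assumed
        "external_retrieval_model": None,  # optional
    },
)
print(resp.status_code, resp.json())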

View File

@@ -1,85 +0,0 @@
import logging
from flask_login import current_user
from flask_restful import marshal, reqparse
from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services.dataset_service
from controllers.console.app.error import (
CompletionRequestError,
ProviderModelCurrentlyNotSupportError,
ProviderNotInitializeError,
ProviderQuotaExceededError,
)
from controllers.console.datasets.error import DatasetNotInitializedError
from core.errors.error import (
LLMBadRequestError,
ModelCurrentlyNotSupportError,
ProviderTokenNotInitError,
QuotaExceededError,
)
from core.model_runtime.errors.invoke import InvokeError
from fields.hit_testing_fields import hit_testing_record_fields
from services.dataset_service import DatasetService
from services.hit_testing_service import HitTestingService
class DatasetsHitTestingBase:
@staticmethod
def get_and_validate_dataset(dataset_id: str):
dataset = DatasetService.get_dataset(dataset_id)
if dataset is None:
raise NotFound("Dataset not found.")
try:
DatasetService.check_dataset_permission(dataset, current_user)
except services.errors.account.NoPermissionError as e:
raise Forbidden(str(e))
return dataset
@staticmethod
def hit_testing_args_check(args):
HitTestingService.hit_testing_args_check(args)
@staticmethod
def parse_args():
parser = reqparse.RequestParser()
parser.add_argument("query", type=str, location="json")
parser.add_argument("retrieval_model", type=dict, required=False, location="json")
parser.add_argument("external_retrieval_model", type=dict, required=False, location="json")
return parser.parse_args()
@staticmethod
def perform_hit_testing(dataset, args):
try:
response = HitTestingService.retrieve(
dataset=dataset,
query=args["query"],
account=current_user,
retrieval_model=args["retrieval_model"],
external_retrieval_model=args["external_retrieval_model"],
limit=10,
)
return {"query": response["query"], "records": marshal(response["records"], hit_testing_record_fields)}
except services.errors.index.IndexNotInitializedError:
raise DatasetNotInitializedError()
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
raise ProviderQuotaExceededError()
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
except LLMBadRequestError:
raise ProviderNotInitializeError(
"No Embedding Model or Reranking Model available. Please configure a valid provider "
"in the Settings -> Model Provider."
)
except InvokeError as e:
raise CompletionRequestError(e.description)
except ValueError as e:
raise ValueError(str(e))
except Exception as e:
logging.exception("Hit testing failed.")
raise InternalServerError(str(e))

View File

@@ -38,27 +38,3 @@ class AlreadyActivateError(BaseHTTPException):
error_code = "already_activate"
description = "Auth Token is invalid or account already activated, please check again."
code = 403
class NotAllowedCreateWorkspace(BaseHTTPException):
error_code = "not_allowed_create_workspace"
description = "Workspace not found, please contact system admin to invite you to join in a workspace."
code = 400
class AccountBannedError(BaseHTTPException):
error_code = "account_banned"
description = "Account is banned."
code = 400
class NotAllowedRegister(BaseHTTPException):
error_code = "unauthorized"
description = "Account not found."
code = 400
class EmailSendIpLimitError(BaseHTTPException):
error_code = "email_send_ip_limit"
description = "Too many emails have been sent from this IP address recently. Please try again later."
code = 429

View File

@@ -21,12 +21,7 @@ class AppParameterApi(InstalledAppResource):
"options": fields.List(fields.String),
}
system_parameters_fields = {
"image_file_size_limit": fields.Integer,
"video_file_size_limit": fields.Integer,
"audio_file_size_limit": fields.Integer,
"file_size_limit": fields.Integer,
}
system_parameters_fields = {"image_file_size_limit": fields.String}
parameters_fields = {
"opening_statement": fields.String,
@@ -87,12 +82,7 @@ class AppParameterApi(InstalledAppResource):
}
},
),
"system_parameters": {
"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT,
"video_file_size_limit": dify_config.UPLOAD_VIDEO_FILE_SIZE_LIMIT,
"audio_file_size_limit": dify_config.UPLOAD_AUDIO_FILE_SIZE_LIMIT,
"file_size_limit": dify_config.UPLOAD_FILE_SIZE_LIMIT,
},
"system_parameters": {"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT},
}

View File

@@ -18,7 +18,7 @@ message_fields = {
"inputs": fields.Raw,
"query": fields.String,
"answer": fields.String,
"message_files": fields.List(fields.Nested(message_file_fields)),
"message_files": fields.List(fields.Nested(message_file_fields), attribute="files"),
"feedback": fields.Nested(feedback_fields, attribute="user_feedback", allow_null=True),
"created_at": TimestampField,
}
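The attribute="files" change above is flask_restful's field remapping: the output key "message_files" is populated from the source object's files attribute instead of a message_files attribute. A minimal sketch (message_file_fields is simplified here; the real field set is larger):
from types import SimpleNamespace
from flask_restful import fields, marshal
message_file_fields = {"id": fields.String, "url": fields.String}
message_fields = {
    "query": fields.String,
    "message_files": fields.List(fields.Nested(message_file_fields), attribute="files"),
}
msg = SimpleNamespace(
    query="hello",
    files=[SimpleNamespace(id="f1", url="/files/f1/file-preview")],
)
# "message_files" in the output is read from msg.files
print(marshal(msg, message_fields))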

View File

@@ -4,7 +4,7 @@ from flask import request
from flask_restful import Resource, reqparse
from configs import dify_config
from libs.helper import StrLen, email, extract_remote_ip
from libs.helper import StrLen, email, get_remote_ip
from libs.password import valid_password
from models.model import DifySetup
from services.account_service import RegisterService, TenantService
@@ -46,7 +46,7 @@ class SetupApi(Resource):
# setup
RegisterService.setup(
email=args["email"], name=args["name"], password=args["password"], ip_address=extract_remote_ip(request)
email=args["email"], name=args["name"], password=args["password"], ip_address=get_remote_ip(request)
)
return {"result": "success"}, 201

View File

@@ -126,12 +126,13 @@ class ModelProviderIconApi(Resource):
Get model provider icon
"""
@setup_required
@login_required
@account_initialization_required
def get(self, provider: str, icon_type: str, lang: str):
model_provider_service = ModelProviderService()
icon, mimetype = model_provider_service.get_model_provider_icon(
provider=provider,
icon_type=icon_type,
lang=lang,
provider=provider, icon_type=icon_type, lang=lang
)
return send_file(io.BytesIO(icon), mimetype=mimetype)

View File

@@ -1,7 +0,0 @@
from libs.exception import BaseHTTPException
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415

View File

@@ -1,19 +1,15 @@
from flask import Response, request
from flask_restful import Resource, reqparse
from flask_restful import Resource
from werkzeug.exceptions import NotFound
import services
from controllers.files import api
from controllers.files.error import UnsupportedFileTypeError
from libs.exception import BaseHTTPException
from services.account_service import TenantService
from services.file_service import FileService
class ImagePreviewApi(Resource):
"""
Deprecated
"""
def get(self, file_id):
file_id = str(file_id)
@@ -41,39 +37,24 @@ class FilePreviewApi(Resource):
def get(self, file_id):
file_id = str(file_id)
parser = reqparse.RequestParser()
parser.add_argument("timestamp", type=str, required=True, location="args")
parser.add_argument("nonce", type=str, required=True, location="args")
parser.add_argument("sign", type=str, required=True, location="args")
parser.add_argument("as_attachment", type=bool, required=False, default=False, location="args")
timestamp = request.args.get("timestamp")
nonce = request.args.get("nonce")
sign = request.args.get("sign")
args = parser.parse_args()
if not args["timestamp"] or not args["nonce"] or not args["sign"]:
if not timestamp or not nonce or not sign:
return {"content": "Invalid request."}, 400
try:
generator, upload_file = FileService.get_file_generator_by_file_id(
generator, mimetype = FileService.get_signed_file_preview(
file_id=file_id,
timestamp=args["timestamp"],
nonce=args["nonce"],
sign=args["sign"],
timestamp=timestamp,
nonce=nonce,
sign=sign,
)
except services.errors.file.UnsupportedFileTypeError:
raise UnsupportedFileTypeError()
response = Response(
generator,
mimetype=upload_file.mime_type,
direct_passthrough=True,
headers={},
)
if upload_file.size > 0:
response.headers["Content-Length"] = str(upload_file.size)
if args["as_attachment"]:
response.headers["Content-Disposition"] = f"attachment; filename={upload_file.name}"
return response
return Response(generator, mimetype=mimetype)
class WorkspaceWebappLogoApi(Resource):
@@ -99,3 +80,9 @@ class WorkspaceWebappLogoApi(Resource):
api.add_resource(ImagePreviewApi, "/files/<uuid:file_id>/image-preview")
api.add_resource(FilePreviewApi, "/files/<uuid:file_id>/file-preview")
api.add_resource(WorkspaceWebappLogoApi, "/files/workspaces/<uuid:workspace_id>/webapp-logo")
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415
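The timestamp/nonce/sign query parameters above are verified inside FileService, whose implementation is not shown in this diff. One common way such signed preview links work is an HMAC over the file id plus the freshness parameters; the message format, secret, and lifetime below are assumptions, not Dify's actual scheme:
import base64
import hashlib
import hmac
import time
SECRET_KEY = b"change-me"  # assumption: shared signing secret
EXPIRY_SECONDS = 300       # assumption: five-minute link lifetime
def sign(file_id: str, timestamp: str, nonce: str) -> str:
    msg = f"file-preview|{file_id}|{timestamp}|{nonce}".encode()
    return base64.urlsafe_b64encode(hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()).decode()
def verify(file_id: str, timestamp: str, nonce: str, candidate: str) -> bool:
    if int(time.time()) - int(timestamp) > EXPIRY_SECONDS:
        return False  # stale link
    # constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(sign(file_id, timestamp, nonce), candidate)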

View File

@@ -3,8 +3,8 @@ from flask_restful import Resource, reqparse
from werkzeug.exceptions import Forbidden, NotFound
from controllers.files import api
from controllers.files.error import UnsupportedFileTypeError
from core.tools.tool_file_manager import ToolFileManager
from libs.exception import BaseHTTPException
class ToolFilePreviewApi(Resource):
@@ -16,7 +16,6 @@ class ToolFilePreviewApi(Resource):
parser.add_argument("timestamp", type=str, required=True, location="args")
parser.add_argument("nonce", type=str, required=True, location="args")
parser.add_argument("sign", type=str, required=True, location="args")
parser.add_argument("as_attachment", type=bool, required=False, default=False, location="args")
args = parser.parse_args()
@@ -29,27 +28,24 @@ class ToolFilePreviewApi(Resource):
raise Forbidden("Invalid request.")
try:
stream, tool_file = ToolFileManager.get_file_generator_by_tool_file_id(
result = ToolFileManager.get_file_generator_by_tool_file_id(
file_id,
)
if not stream or not tool_file:
if not result:
raise NotFound("file is not found")
generator, mimetype = result
except Exception:
raise UnsupportedFileTypeError()
response = Response(
stream,
mimetype=tool_file.mimetype,
direct_passthrough=True,
headers={},
)
if tool_file.size > 0:
response.headers["Content-Length"] = str(tool_file.size)
if args["as_attachment"]:
response.headers["Content-Disposition"] = f"attachment; filename={tool_file.name}"
return response
return Response(generator, mimetype=mimetype)
api.add_resource(ToolFilePreviewApi, "/files/tools/<uuid:file_id>.<string:extension>")
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415
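One branch of the diff above streams the file through a Flask Response and sets size and attachment headers when they are known. A standalone sketch of that pattern, extracted from the longer variant shown in the diff:
from collections.abc import Generator
from flask import Response
def build_file_response(stream: Generator[bytes, None, None], *, mimetype: str,
                        size: int = 0, filename: str = "", as_attachment: bool = False) -> Response:
    # direct_passthrough streams the generator without buffering the whole body
    response = Response(stream, mimetype=mimetype, direct_passthrough=True)
    if size > 0:
        response.headers["Content-Length"] = str(size)
    if as_attachment and filename:
        response.headers["Content-Disposition"] = f"attachment; filename={filename}"
    return response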

View File

@@ -21,7 +21,7 @@ class EnterpriseWorkspace(Resource):
if account is None:
return {"message": "owner account not found."}, 404
tenant = TenantService.create_tenant(args["name"], is_from_dashboard=True)
tenant = TenantService.create_tenant(args["name"])
TenantService.create_tenant_member(tenant, account, role="owner")
tenant_was_created.send(tenant)

View File

@@ -5,6 +5,7 @@ from libs.external_api import ExternalApi
bp = Blueprint("service_api", __name__, url_prefix="/v1")
api = ExternalApi(bp)
from . import index
from .app import app, audio, completion, conversation, file, message, workflow
from .dataset import dataset, document, hit_testing, segment
from .dataset import dataset, document, segment

View File

@@ -21,12 +21,7 @@ class AppParameterApi(Resource):
"options": fields.List(fields.String),
}
system_parameters_fields = {
"image_file_size_limit": fields.Integer,
"video_file_size_limit": fields.Integer,
"audio_file_size_limit": fields.Integer,
"file_size_limit": fields.Integer,
}
system_parameters_fields = {"image_file_size_limit": fields.String}
parameters_fields = {
"opening_statement": fields.String,
@@ -86,12 +81,7 @@ class AppParameterApi(Resource):
}
},
),
"system_parameters": {
"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT,
"video_file_size_limit": dify_config.UPLOAD_VIDEO_FILE_SIZE_LIMIT,
"audio_file_size_limit": dify_config.UPLOAD_AUDIO_FILE_SIZE_LIMIT,
"file_size_limit": dify_config.UPLOAD_FILE_SIZE_LIMIT,
},
"system_parameters": {"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT},
}

View File

@@ -48,7 +48,7 @@ class MessageListApi(Resource):
"tool_input": fields.String,
"created_at": TimestampField,
"observation": fields.String,
"message_files": fields.List(fields.Nested(message_file_fields)),
"message_files": fields.List(fields.String, attribute="files"),
}
message_fields = {
@@ -58,7 +58,7 @@ class MessageListApi(Resource):
"inputs": fields.Raw,
"query": fields.String,
"answer": fields.String(attribute="re_sign_file_url_answer"),
"message_files": fields.List(fields.Nested(message_file_fields)),
"message_files": fields.List(fields.Nested(message_file_fields), attribute="files"),
"feedback": fields.Nested(feedback_fields, attribute="user_feedback", allow_null=True),
"retriever_resources": fields.List(fields.Nested(retriever_resource_fields)),
"created_at": TimestampField,

View File

@@ -66,13 +66,6 @@ class DatasetListApi(DatasetApiResource):
help="type is required. Name must be between 1 to 40 characters.",
type=_validate_name,
)
parser.add_argument(
"description",
type=str,
nullable=True,
required=False,
default="",
)
parser.add_argument(
"indexing_technique",
type=str,
@@ -115,7 +108,6 @@ class DatasetListApi(DatasetApiResource):
dataset = DatasetService.create_empty_dataset(
tenant_id=tenant_id,
name=args["name"],
description=args["description"],
indexing_technique=args["indexing_technique"],
account=current_user,
permission=args["permission"],

View File

@@ -1,17 +0,0 @@
from controllers.console.datasets.hit_testing_base import DatasetsHitTestingBase
from controllers.service_api import api
from controllers.service_api.wraps import DatasetApiResource
class HitTestingApi(DatasetApiResource, DatasetsHitTestingBase):
def post(self, tenant_id, dataset_id):
dataset_id_str = str(dataset_id)
dataset = self.get_and_validate_dataset(dataset_id_str)
args = self.parse_args()
self.hit_testing_args_check(args)
return self.perform_hit_testing(dataset, args)
api.add_resource(HitTestingApi, "/datasets/<uuid:dataset_id>/hit-testing")

View File

@@ -21,12 +21,7 @@ class AppParameterApi(WebApiResource):
"options": fields.List(fields.String),
}
system_parameters_fields = {
"image_file_size_limit": fields.Integer,
"video_file_size_limit": fields.Integer,
"audio_file_size_limit": fields.Integer,
"file_size_limit": fields.Integer,
}
system_parameters_fields = {"image_file_size_limit": fields.String}
parameters_fields = {
"opening_statement": fields.String,
@@ -85,12 +80,7 @@ class AppParameterApi(WebApiResource):
}
},
),
"system_parameters": {
"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT,
"video_file_size_limit": dify_config.UPLOAD_VIDEO_FILE_SIZE_LIMIT,
"audio_file_size_limit": dify_config.UPLOAD_AUDIO_FILE_SIZE_LIMIT,
"file_size_limit": dify_config.UPLOAD_FILE_SIZE_LIMIT,
},
"system_parameters": {"image_file_size_limit": dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT},
}

View File

@@ -1,7 +1,7 @@
import urllib.parse
from flask import request
from flask_restful import marshal_with, reqparse
from flask_restful import marshal_with
import services
from controllers.web import api
@@ -18,10 +18,6 @@ class FileApi(WebApiResource):
# get file from request
file = request.files["file"]
parser = reqparse.RequestParser()
parser.add_argument("source", type=str, required=False, location="args")
source = parser.parse_args().get("source")
# check file
if "file" not in request.files:
raise NoFileUploadedError()
@@ -29,7 +25,7 @@ class FileApi(WebApiResource):
if len(request.files) > 1:
raise TooManyFilesError()
try:
upload_file = FileService.upload_file(file=file, user=end_user, source=source)
upload_file = FileService.upload_file(file, end_user)
except services.errors.file.FileTooLargeError as file_too_large_error:
raise FileTooLargeError(file_too_large_error.description)
except services.errors.file.UnsupportedFileTypeError:
@@ -46,7 +42,7 @@ class RemoteFileInfoApi(WebApiResource):
response = ssrf_proxy.head(decoded_url)
return {
"file_type": response.headers.get("Content-Type", "application/octet-stream"),
"file_length": int(response.headers.get("Content-Length", -1)),
"file_length": int(response.headers.get("Content-Length", 0)),
}
except Exception as e:
return {"error": str(e)}, 400

View File

@@ -62,7 +62,7 @@ class MessageListApi(WebApiResource):
"inputs": FilesContainedField,
"query": fields.String,
"answer": fields.String(attribute="re_sign_file_url_answer"),
"message_files": fields.List(fields.Nested(message_file_fields)),
"message_files": fields.List(fields.Nested(message_file_fields), attribute="files"),
"feedback": fields.Nested(feedback_fields, attribute="user_feedback", allow_null=True),
"retriever_resources": fields.List(fields.Nested(retriever_resource_fields)),
"created_at": TimestampField,

View File

@@ -17,7 +17,7 @@ message_fields = {
"inputs": fields.Raw,
"query": fields.String,
"answer": fields.String,
"message_files": fields.List(fields.Nested(message_file_fields)),
"message_files": fields.List(fields.Nested(message_file_fields), attribute="files"),
"feedback": fields.Nested(feedback_fields, attribute="user_feedback", allow_null=True),
"created_at": TimestampField,
}

View File

@@ -43,7 +43,7 @@ from core.tools.tool.tool import Tool
from core.tools.tool_manager import ToolManager
from extensions.ext_database import db
from factories import file_factory
from models.model import Conversation, Message, MessageAgentThought, MessageFile
from models.model import Conversation, Message, MessageAgentThought
from models.tools import ToolConversationVariables
logger = logging.getLogger(__name__)
@@ -165,12 +165,6 @@ class BaseAgentRunner(AppRunner):
continue
parameter_type = parameter.type.as_normal_type()
if parameter.type in {
ToolParameter.ToolParameterType.SYSTEM_FILES,
ToolParameter.ToolParameterType.FILE,
ToolParameter.ToolParameterType.FILES,
}:
continue
enum = []
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
@@ -256,12 +250,6 @@ class BaseAgentRunner(AppRunner):
continue
parameter_type = parameter.type.as_normal_type()
if parameter.type in {
ToolParameter.ToolParameterType.SYSTEM_FILES,
ToolParameter.ToolParameterType.FILE,
ToolParameter.ToolParameterType.FILES,
}:
continue
enum = []
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
@@ -507,7 +495,7 @@ class BaseAgentRunner(AppRunner):
return result
def organize_agent_user_prompt(self, message: Message) -> UserPromptMessage:
files = db.session.query(MessageFile).filter(MessageFile.message_id == message.id).all()
files = message.message_files
if files:
file_extra_config = FileUploadConfigManager.convert(message.app_model_config.to_dict())

View File

@@ -369,7 +369,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
return message
def _organize_historic_prompt_messages(
self, current_session_messages: Optional[list[PromptMessage]] = None
self, current_session_messages: list[PromptMessage] = None
) -> list[PromptMessage]:
"""
organize historic prompt messages

View File

@@ -29,7 +29,7 @@ class CotChatAgentRunner(CotAgentRunner):
return SystemPromptMessage(content=system_prompt)
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
"""
Organize user query
"""

View File

@@ -1,5 +1,4 @@
import json
from typing import Optional
from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage, UserPromptMessage
@@ -22,7 +21,7 @@ class CotCompletionAgentRunner(CotAgentRunner):
return system_prompt
def _organize_historic_prompt(self, current_session_messages: Optional[list[PromptMessage]] = None) -> str:
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] = None) -> str:
"""
Organize historic prompt
"""

View File

@@ -2,7 +2,7 @@ import json
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any, Optional, Union
from typing import Any, Union
from core.agent.base_agent_runner import BaseAgentRunner
from core.app.apps.base_app_queue_manager import PublishFrom
@@ -375,7 +375,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return tool_calls
def _init_system_message(
self, prompt_template: str, prompt_messages: Optional[list[PromptMessage]] = None
self, prompt_template: str, prompt_messages: list[PromptMessage] = None
) -> list[PromptMessage]:
"""
Initialize system message
@@ -390,7 +390,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return prompt_messages
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
"""
Organize user query
"""

View File

@@ -14,7 +14,7 @@ class CotAgentOutputParser:
) -> Generator[Union[str, AgentScratchpadUnit.Action], None, None]:
def parse_action(json_str):
try:
action = json.loads(json_str, strict=False)
action = json.loads(json_str)
action_name = None
action_input = None
@@ -62,8 +62,6 @@ class CotAgentOutputParser:
thought_str = "thought:"
thought_idx = 0
last_character = ""
for response in llm_response:
if response.delta.usage:
usage_dict["usage"] = response.delta.usage
@@ -76,38 +74,35 @@ class CotAgentOutputParser:
while index < len(response):
steps = 1
delta = response[index : index + steps]
yield_delta = False
last_character = response[index - 1] if index > 0 else ""
if delta == "`":
last_character = delta
code_block_cache += delta
code_block_delimiter_count += 1
else:
if not in_code_block:
if code_block_delimiter_count > 0:
last_character = delta
yield code_block_cache
code_block_cache = ""
else:
last_character = delta
code_block_cache += delta
code_block_delimiter_count = 0
if not in_code_block and not in_json:
if delta.lower() == action_str[action_idx] and action_idx == 0:
if last_character not in {"\n", " ", ""}:
yield_delta = True
else:
last_character = delta
action_cache += delta
action_idx += 1
if action_idx == len(action_str):
action_cache = ""
action_idx = 0
index += steps
yield delta
continue
action_cache += delta
action_idx += 1
if action_idx == len(action_str):
action_cache = ""
action_idx = 0
index += steps
continue
elif delta.lower() == action_str[action_idx] and action_idx > 0:
last_character = delta
action_cache += delta
action_idx += 1
if action_idx == len(action_str):
@@ -117,25 +112,24 @@ class CotAgentOutputParser:
continue
else:
if action_cache:
last_character = delta
yield action_cache
action_cache = ""
action_idx = 0
if delta.lower() == thought_str[thought_idx] and thought_idx == 0:
if last_character not in {"\n", " ", ""}:
yield_delta = True
else:
last_character = delta
thought_cache += delta
thought_idx += 1
if thought_idx == len(thought_str):
thought_cache = ""
thought_idx = 0
index += steps
yield delta
continue
thought_cache += delta
thought_idx += 1
if thought_idx == len(thought_str):
thought_cache = ""
thought_idx = 0
index += steps
continue
elif delta.lower() == thought_str[thought_idx] and thought_idx > 0:
last_character = delta
thought_cache += delta
thought_idx += 1
if thought_idx == len(thought_str):
@@ -145,20 +139,12 @@ class CotAgentOutputParser:
continue
else:
if thought_cache:
last_character = delta
yield thought_cache
thought_cache = ""
thought_idx = 0
if yield_delta:
index += steps
last_character = delta
yield delta
continue
if code_block_delimiter_count == 3:
if in_code_block:
last_character = delta
yield from extra_json_from_code_block(code_block_cache)
code_block_cache = ""
@@ -170,10 +156,8 @@ class CotAgentOutputParser:
if delta == "{":
json_quote_count += 1
in_json = True
last_character = delta
json_cache += delta
elif delta == "}":
last_character = delta
json_cache += delta
if json_quote_count > 0:
json_quote_count -= 1
@@ -184,19 +168,16 @@ class CotAgentOutputParser:
continue
else:
if in_json:
last_character = delta
json_cache += delta
if got_json:
got_json = False
last_character = delta
yield parse_action(json_cache)
json_cache = ""
json_quote_count = 0
in_json = False
if not in_code_block and not in_json:
last_character = delta
yield delta.replace("`", "")
index += steps
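The parser above scans the LLM stream character by character, buffering characters while they still prefix-match a keyword such as "action:" and flushing the buffer as plain text on a mismatch. A simplified, standalone sketch of that incremental scan (the real parser also tracks code blocks, JSON braces, and "thought:"):
from collections.abc import Generator, Iterable
def scan_action(chunks: Iterable[str], keyword: str = "action:") -> Generator[str, None, None]:
    cache = ""  # characters that currently prefix-match the keyword
    for chunk in chunks:
        for ch in chunk:
            if ch.lower() == keyword[len(cache)]:
                cache += ch
                if len(cache) == len(keyword):
                    yield "<ACTION>"  # placeholder for "start collecting the action JSON"
                    cache = ""
            else:
                if cache:
                    yield cache  # partial match turned out to be plain text
                    cache = ""
                # like the original, the mismatched character is emitted as-is
                # rather than re-tested as the start of a new keyword
                yield ch
    if cache:
        yield cache
print("".join(scan_action(["ac", 'tion: {"tool": "x"}'])))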

View File

@@ -53,11 +53,11 @@ class BasicVariablesConfigManager:
VariableEntity(
type=variable_type,
variable=variable.get("variable"),
description=variable.get("description") or "",
description=variable.get("description", ""),
label=variable.get("label"),
required=variable.get("required", False),
max_length=variable.get("max_length"),
options=variable.get("options") or [],
options=variable.get("options", []),
)
)
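The two variants in the one-line changes above differ subtly: dict.get(key, default) applies the default only when the key is absent, while `.get(key) or default` also normalizes an explicit None value. A quick demonstration:
variable = {"description": None, "options": None}
print(variable.get("description", ""))    # None: default only applies when the key is missing
print(variable.get("description") or "")  # '':   also normalizes an explicit None
print(variable.get("options", []))        # None
print(variable.get("options") or [])      # []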

View File

@@ -2,7 +2,7 @@ from collections.abc import Sequence
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel, Field, field_validator
from pydantic import BaseModel, Field
from core.file import FileExtraConfig, FileTransferMethod, FileType
from core.model_runtime.entities.message_entities import PromptMessageRole
@@ -114,16 +114,6 @@ class VariableEntity(BaseModel):
allowed_file_extensions: Sequence[str] = Field(default_factory=list)
allowed_file_upload_methods: Sequence[FileTransferMethod] = Field(default_factory=list)
@field_validator("description", mode="before")
@classmethod
def convert_none_description(cls, v: Any) -> str:
return v or ""
@field_validator("options", mode="before")
@classmethod
def convert_none_options(cls, v: Any) -> Sequence[str]:
return v or []
class ExternalDataVariableEntity(BaseModel):
"""

View File

@@ -17,13 +17,10 @@ class FileUploadConfigManager:
file_upload_dict = config.get("file_upload")
if file_upload_dict:
if file_upload_dict.get("enabled"):
transform_methods = file_upload_dict.get("allowed_file_upload_methods") or file_upload_dict.get(
"allowed_upload_methods", []
)
data = {
"image_config": {
"number_limits": file_upload_dict["number_limits"],
"transfer_methods": transform_methods,
"transfer_methods": file_upload_dict["allowed_file_upload_methods"],
}
}

View File

@@ -10,7 +10,6 @@ from flask import Flask, current_app
from pydantic import ValidationError
import contexts
from constants import UUID_NIL
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager
from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner
@@ -23,10 +22,10 @@ from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity,
from core.app.entities.task_entities import ChatbotAppBlockingResponse, ChatbotAppStreamResponse
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.ops.ops_trace_manager import TraceQueueManager
from enums import CreatedByRole
from extensions.ext_database import db
from factories import file_factory
from models.account import Account
from models.enums import CreatedByRole
from models.model import App, Conversation, EndUser, Message
from models.workflow import Workflow
@@ -122,7 +121,6 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
# always enable retriever resource in debugger mode
app_config.additional_features.show_retrieve_source = True
workflow_run_id = str(uuid.uuid4())
# init application generate entity
application_generate_entity = AdvancedChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
@@ -133,13 +131,12 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
else self._prepare_user_inputs(user_inputs=inputs, app_config=app_config, user_id=user.id, role=role),
query=query,
files=file_objs,
parent_message_id=args.get("parent_message_id") if invoke_from != InvokeFrom.SERVICE_API else UUID_NIL,
parent_message_id=args.get("parent_message_id"),
user_id=user.id,
stream=stream,
invoke_from=invoke_from,
extras=extras,
trace_manager=trace_manager,
workflow_run_id=workflow_run_id,
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)

View File

@@ -20,8 +20,8 @@ from core.workflow.callbacks import WorkflowCallback, WorkflowLoggingCallback
from core.workflow.entities.variable_pool import VariablePool
from core.workflow.enums import SystemVariableKey
from core.workflow.workflow_entry import WorkflowEntry
from enums import UserFrom
from extensions.ext_database import db
from models.enums import UserFrom
from models.model import App, Conversation, EndUser, Message
from models.workflow import ConversationVariable, WorkflowType
@@ -135,9 +135,6 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
SystemVariableKey.CONVERSATION_ID: self.conversation.id,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.DIALOGUE_COUNT: conversation_dialogue_count,
SystemVariableKey.APP_ID: app_config.app_id,
SystemVariableKey.WORKFLOW_ID: app_config.workflow_id,
SystemVariableKey.WORKFLOW_RUN_ID: self.application_generate_entity.workflow_run_id,
}
# init variable pool

View File

@@ -9,7 +9,6 @@ from core.app.apps.advanced_chat.app_generator_tts_publisher import AppGenerator
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.app_invoke_entities import (
AdvancedChatAppGenerateEntity,
InvokeFrom,
)
from core.app.entities.queue_entities import (
QueueAdvancedChatMessageEndEvent,
@@ -46,20 +45,17 @@ from core.app.entities.task_entities import (
from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline
from core.app.task_pipeline.message_cycle_manage import MessageCycleManage
from core.app.task_pipeline.workflow_cycle_manage import WorkflowCycleManage
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
from core.ops.ops_trace_manager import TraceQueueManager
from core.workflow.enums import SystemVariableKey
from core.workflow.graph_engine.entities.graph_runtime_state import GraphRuntimeState
from core.workflow.nodes import NodeType
from enums.workflow_nodes import NodeType
from events.message_event import message_was_created
from extensions.ext_database import db
from models import Conversation, EndUser, Message, MessageFile
from models.account import Account
from models.enums import CreatedByRole
from models.model import Conversation, EndUser, Message
from models.workflow import (
Workflow,
WorkflowNodeExecution,
WorkflowRunStatus,
)
@@ -76,7 +72,6 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
_workflow: Workflow
_user: Union[Account, EndUser]
_workflow_system_variables: dict[SystemVariableKey, Any]
_wip_workflow_node_executions: dict[str, WorkflowNodeExecution]
def __init__(
self,
@@ -113,14 +108,9 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
SystemVariableKey.FILES: application_generate_entity.files,
SystemVariableKey.CONVERSATION_ID: conversation.id,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.DIALOGUE_COUNT: conversation.dialogue_count,
SystemVariableKey.APP_ID: application_generate_entity.app_config.app_id,
SystemVariableKey.WORKFLOW_ID: workflow.id,
SystemVariableKey.WORKFLOW_RUN_ID: application_generate_entity.workflow_run_id,
}
self._task_state = WorkflowTaskState()
self._wip_workflow_node_executions = {}
self._conversation_name_generate_thread = None
self._recorded_files: list[Mapping[str, Any]] = []
@@ -498,6 +488,10 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._conversation_name_generate_thread.join()
def _save_message(self, graph_runtime_state: Optional[GraphRuntimeState] = None) -> None:
"""
Save message.
:return:
"""
self._refetch_message()
self._message.answer = self._task_state.answer
@@ -505,22 +499,6 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._message.message_metadata = (
json.dumps(jsonable_encoder(self._task_state.metadata)) if self._task_state.metadata else None
)
message_files = [
MessageFile(
message_id=self._message.id,
type=file["type"],
transfer_method=file["transfer_method"],
url=file["remote_url"],
belongs_to="assistant",
upload_file_id=file["related_id"],
created_by_role=CreatedByRole.ACCOUNT
if self._message.invoke_from in {InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER}
else CreatedByRole.END_USER,
created_by=self._message.from_account_id or self._message.from_end_user_id or "",
)
for file in self._recorded_files
]
db.session.add_all(message_files)
if graph_runtime_state and graph_runtime_state.llm_usage:
usage = graph_runtime_state.llm_usage
@@ -533,10 +511,6 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._message.total_price = usage.total_price
self._message.currency = usage.currency
self._task_state.metadata["usage"] = jsonable_encoder(usage)
else:
self._task_state.metadata["usage"] = jsonable_encoder(LLMUsage.empty_usage())
db.session.commit()
message_was_created.send(

View File

@@ -8,7 +8,6 @@ from typing import Any, Literal, Union, overload
from flask import Flask, current_app
from pydantic import ValidationError
from constants import UUID_NIL
from core.app.app_config.easy_ui_based_app.model_config.converter import ModelConfigConverter
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfigManager
@@ -20,10 +19,10 @@ from core.app.apps.message_based_app_queue_manager import MessageBasedAppQueueMa
from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity, InvokeFrom
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.ops.ops_trace_manager import TraceQueueManager
from enums import CreatedByRole
from extensions.ext_database import db
from factories import file_factory
from models import Account, App, EndUser
from models.enums import CreatedByRole
logger = logging.getLogger(__name__)
@@ -141,7 +140,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
else self._prepare_user_inputs(user_inputs=inputs, app_config=app_config, user_id=user.id, role=role),
query=query,
files=file_objs,
parent_message_id=args.get("parent_message_id") if invoke_from != InvokeFrom.SERVICE_API else UUID_NIL,
parent_message_id=args.get("parent_message_id"),
user_id=user.id,
stream=stream,
invoke_from=invoke_from,

View File

@@ -7,7 +7,7 @@ from factories import file_factory
if TYPE_CHECKING:
from core.app.app_config.entities import AppConfig, VariableEntity
from models.enums import CreatedByRole
from enums import CreatedByRole
class BaseAppGenerator:

View File

@@ -8,7 +8,6 @@ from typing import Any, Literal, Union, overload
from flask import Flask, current_app
from pydantic import ValidationError
from constants import UUID_NIL
from core.app.app_config.easy_ui_based_app.model_config.converter import ModelConfigConverter
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.base_app_queue_manager import AppQueueManager, GenerateTaskStoppedError, PublishFrom
@@ -20,10 +19,10 @@ from core.app.apps.message_based_app_queue_manager import MessageBasedAppQueueMa
from core.app.entities.app_invoke_entities import ChatAppGenerateEntity, InvokeFrom
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.ops.ops_trace_manager import TraceQueueManager
from enums import CreatedByRole
from extensions.ext_database import db
from factories import file_factory
from models.account import Account
from models.enums import CreatedByRole
from models.model import App, EndUser
logger = logging.getLogger(__name__)
@@ -139,7 +138,7 @@ class ChatAppGenerator(MessageBasedAppGenerator):
else self._prepare_user_inputs(user_inputs=inputs, app_config=app_config, user_id=user.id, role=role),
query=query,
files=file_objs,
parent_message_id=args.get("parent_message_id") if invoke_from != InvokeFrom.SERVICE_API else UUID_NIL,
parent_message_id=args.get("parent_message_id"),
user_id=user.id,
invoke_from=invoke_from,
extras=extras,

View File

@@ -19,10 +19,10 @@ from core.app.apps.message_based_app_queue_manager import MessageBasedAppQueueMa
from core.app.entities.app_invoke_entities import CompletionAppGenerateEntity, InvokeFrom
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.ops.ops_trace_manager import TraceQueueManager
from enums import CreatedByRole
from extensions.ext_database import db
from factories import file_factory
from models import Account, App, EndUser, Message
from models.enums import CreatedByRole
from services.errors.app import MoreLikeThisDisabledError
from services.errors.message import MessageNotExistsError
@@ -263,7 +263,7 @@ class CompletionAppGenerator(MessageBasedAppGenerator):
file_extra_config = FileUploadConfigManager.convert(override_model_config_dict)
if file_extra_config:
file_objs = file_factory.build_from_mappings(
mappings=message.message_files,
mappings=message.files,
tenant_id=app_model.tenant_id,
user_id=user.id,
role=role,

View File

@@ -27,7 +27,6 @@ from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBa
from core.prompt.utils.prompt_template_parser import PromptTemplateParser
from extensions.ext_database import db
from models import Account
from models.enums import CreatedByRole
from models.model import App, AppMode, AppModelConfig, Conversation, EndUser, Message, MessageFile
from services.errors.app_model_config import AppModelConfigBrokenError
from services.errors.conversation import ConversationCompletedError, ConversationNotExistsError
@@ -236,13 +235,13 @@ class MessageBasedAppGenerator(BaseAppGenerator):
for file in application_generate_entity.files:
message_file = MessageFile(
message_id=message.id,
type=file.type,
transfer_method=file.transfer_method,
type=file.type.value,
transfer_method=file.transfer_method.value,
belongs_to="user",
url=file.remote_url,
upload_file_id=file.related_id,
created_by_role=(CreatedByRole.ACCOUNT if account_id else CreatedByRole.END_USER),
created_by=account_id or end_user_id or "",
created_by_role=("account" if account_id else "end_user"),
created_by=account_id or end_user_id,
)
db.session.add(message_file)
db.session.commit()

View File

@@ -22,10 +22,10 @@ from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerat
from core.app.entities.task_entities import WorkflowAppBlockingResponse, WorkflowAppStreamResponse
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.ops.ops_trace_manager import TraceQueueManager
from enums import CreatedByRole
from extensions.ext_database import db
from factories import file_factory
from models import Account, App, EndUser, Workflow
from models.enums import CreatedByRole
logger = logging.getLogger(__name__)
@@ -95,7 +95,6 @@ class WorkflowAppGenerator(BaseAppGenerator):
)
inputs: Mapping[str, Any] = args["inputs"]
workflow_run_id = str(uuid.uuid4())
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
task_id=str(uuid.uuid4()),
@@ -107,7 +106,6 @@ class WorkflowAppGenerator(BaseAppGenerator):
invoke_from=invoke_from,
call_depth=call_depth,
trace_manager=trace_manager,
workflow_run_id=workflow_run_id,
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)

View File

@@ -13,8 +13,8 @@ from core.workflow.callbacks import WorkflowCallback, WorkflowLoggingCallback
from core.workflow.entities.variable_pool import VariablePool
from core.workflow.enums import SystemVariableKey
from core.workflow.workflow_entry import WorkflowEntry
from enums import UserFrom
from extensions.ext_database import db
from models.enums import UserFrom
from models.model import App, EndUser
from models.workflow import WorkflowType
@@ -89,9 +89,6 @@ class WorkflowAppRunner(WorkflowBasedAppRunner):
system_inputs = {
SystemVariableKey.FILES: files,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.APP_ID: app_config.app_id,
SystemVariableKey.WORKFLOW_ID: app_config.workflow_id,
SystemVariableKey.WORKFLOW_RUN_ID: self.application_generate_entity.workflow_run_id,
}
variable_pool = VariablePool(

View File

@@ -51,7 +51,6 @@ from models.workflow import (
Workflow,
WorkflowAppLog,
WorkflowAppLogCreatedFrom,
WorkflowNodeExecution,
WorkflowRun,
WorkflowRunStatus,
)
@@ -69,7 +68,6 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
_task_state: WorkflowTaskState
_application_generate_entity: WorkflowAppGenerateEntity
_workflow_system_variables: dict[SystemVariableKey, Any]
_wip_workflow_node_executions: dict[str, WorkflowNodeExecution]
def __init__(
self,
@@ -98,13 +96,9 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
self._workflow_system_variables = {
SystemVariableKey.FILES: application_generate_entity.files,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.APP_ID: application_generate_entity.app_config.app_id,
SystemVariableKey.WORKFLOW_ID: workflow.id,
SystemVariableKey.WORKFLOW_RUN_ID: application_generate_entity.workflow_run_id,
}
self._task_state = WorkflowTaskState()
self._wip_workflow_node_executions = {}
def process(self) -> Union[WorkflowAppBlockingResponse, Generator[WorkflowAppStreamResponse, None, None]]:
"""

View File

@@ -40,10 +40,11 @@ from core.workflow.graph_engine.entities.event import (
ParallelBranchRunSucceededEvent,
)
from core.workflow.graph_engine.entities.graph import Graph
from core.workflow.nodes import NodeType
from core.workflow.nodes.iteration import IterationNodeData
from core.workflow.nodes.node_mapping import node_type_classes_mapping
from core.workflow.nodes.base_node import BaseNode
from core.workflow.nodes.iteration.entities import IterationNodeData
from core.workflow.nodes.node_mapping import node_classes
from core.workflow.workflow_entry import WorkflowEntry
from enums import NodeType
from extensions.ext_database import db
from models.model import App
from models.workflow import Workflow
@@ -136,8 +137,9 @@ class WorkflowBasedAppRunner(AppRunner):
raise ValueError("iteration node id not found in workflow graph")
# Get node class
node_type = NodeType(iteration_node_config.get("data", {}).get("type"))
node_cls = node_type_classes_mapping[node_type]
node_type = NodeType.value_of(iteration_node_config.get("data", {}).get("type"))
node_cls = node_classes.get(node_type)
node_cls = cast(type[BaseNode], node_cls)
# init variable pool
variable_pool = VariablePool(

View File

@@ -2,9 +2,8 @@ from collections.abc import Mapping, Sequence
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator
from pydantic import BaseModel, ConfigDict
from constants import UUID_NIL
from core.app.app_config.entities import AppConfig, EasyUIBasedAppConfig, WorkflowUIBasedAppConfig
from core.entities.provider_configuration import ProviderModelBundle
from core.file.models import File
@@ -117,36 +116,13 @@ class EasyUIBasedAppGenerateEntity(AppGenerateEntity):
model_config = ConfigDict(protected_namespaces=())
class ConversationAppGenerateEntity(AppGenerateEntity):
"""
Base entity for conversation-based app generation.
"""
conversation_id: Optional[str] = None
parent_message_id: Optional[str] = Field(
default=None,
description=(
"Starting from v0.9.0, parent_message_id is used to support message regeneration for internal chat API."
"For service API, we need to ensure its forward compatibility, "
"so passing in the parent_message_id as request arg is not supported for now. "
"It needs to be set to UUID_NIL so that the subsequent processing will treat it as legacy messages."
),
)
@field_validator("parent_message_id")
@classmethod
def validate_parent_message_id(cls, v, info: ValidationInfo):
if info.data.get("invoke_from") == InvokeFrom.SERVICE_API and v != UUID_NIL:
raise ValueError("parent_message_id should be UUID_NIL for service API")
return v
class ChatAppGenerateEntity(ConversationAppGenerateEntity, EasyUIBasedAppGenerateEntity):
class ChatAppGenerateEntity(EasyUIBasedAppGenerateEntity):
"""
Chat Application Generate Entity.
"""
pass
conversation_id: Optional[str] = None
parent_message_id: Optional[str] = None
class CompletionAppGenerateEntity(EasyUIBasedAppGenerateEntity):
@@ -157,15 +133,16 @@ class CompletionAppGenerateEntity(EasyUIBasedAppGenerateEntity):
pass
class AgentChatAppGenerateEntity(ConversationAppGenerateEntity, EasyUIBasedAppGenerateEntity):
class AgentChatAppGenerateEntity(EasyUIBasedAppGenerateEntity):
"""
Agent Chat Application Generate Entity.
"""
pass
conversation_id: Optional[str] = None
parent_message_id: Optional[str] = None
class AdvancedChatAppGenerateEntity(ConversationAppGenerateEntity):
class AdvancedChatAppGenerateEntity(AppGenerateEntity):
"""
Advanced Chat Application Generate Entity.
"""
@@ -173,7 +150,8 @@ class AdvancedChatAppGenerateEntity(ConversationAppGenerateEntity):
# app config
app_config: WorkflowUIBasedAppConfig
workflow_run_id: Optional[str] = None
conversation_id: Optional[str] = None
parent_message_id: Optional[str] = None
query: str
class SingleIterationRunEntity(BaseModel):
@@ -194,7 +172,6 @@ class WorkflowAppGenerateEntity(AppGenerateEntity):
# app config
app_config: WorkflowUIBasedAppConfig
workflow_run_id: Optional[str] = None
class SingleIterationRunEntity(BaseModel):
"""

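The parent_message_id validator in the diff above is a pydantic cross-field check: a field_validator can consult previously validated fields through ValidationInfo. A minimal sketch, with the enum replaced by a plain string and UUID_NIL assumed to be the nil UUID:
from typing import Optional
from pydantic import BaseModel, ValidationInfo, field_validator
UUID_NIL = "00000000-0000-0000-0000-000000000000"  # assumption: nil UUID constant
class GenerateEntity(BaseModel):
    invoke_from: str  # simplified stand-in for the InvokeFrom enum
    parent_message_id: Optional[str] = None
    @field_validator("parent_message_id")
    @classmethod
    def validate_parent_message_id(cls, v, info: ValidationInfo):
        # fields validate in declaration order, so invoke_from is already in info.data
        if info.data.get("invoke_from") == "service-api" and v != UUID_NIL:
            raise ValueError("parent_message_id should be UUID_NIL for service API")
        return v
GenerateEntity(invoke_from="service-api", parent_message_id=UUID_NIL)   # ok
# GenerateEntity(invoke_from="service-api", parent_message_id="123")    # would raise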
View File

@@ -5,10 +5,10 @@ from typing import Any, Optional
from pydantic import BaseModel, field_validator
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeRunMetadataKey
from core.workflow.graph_engine.entities.graph_runtime_state import GraphRuntimeState
from core.workflow.nodes import NodeType
from core.workflow.nodes.base import BaseNodeData
from enums import NodeType
class QueueEvent(str, Enum):

View File

@@ -1,4 +1,4 @@
from collections.abc import Mapping, Sequence
from collections.abc import Mapping
from enum import Enum
from typing import Any, Optional
@@ -120,7 +120,7 @@ class MessageEndStreamResponse(StreamResponse):
event: StreamEvent = StreamEvent.MESSAGE_END
id: str
metadata: dict = {}
files: Optional[Sequence[Mapping[str, Any]]] = None
files: Optional[list[Mapping[str, Any]]] = None
class MessageFileStreamResponse(StreamResponse):
@@ -213,7 +213,7 @@ class WorkflowFinishStreamResponse(StreamResponse):
created_by: Optional[dict] = None
created_at: int
finished_at: int
files: Optional[Sequence[Mapping[str, Any]]] = []
files: Optional[list[dict]] = []
event: StreamEvent = StreamEvent.WORKFLOW_FINISHED
workflow_run_id: str
@@ -298,7 +298,7 @@ class NodeFinishStreamResponse(StreamResponse):
execution_metadata: Optional[dict] = None
created_at: int
finished_at: int
files: Optional[Sequence[Mapping[str, Any]]] = []
files: Optional[list[dict]] = []
parallel_id: Optional[str] = None
parallel_start_node_id: Optional[str] = None
parent_parallel_id: Optional[str] = None
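On the files annotations changed above: Sequence[Mapping[str, Any]] is the looser of the two forms, accepting list[dict] and tuple[dict, ...] alike while signaling read-only intent, whereas list[dict] pins the concrete types. A small illustration:
from collections.abc import Mapping, Sequence
from typing import Any
def total_size(files: Sequence[Mapping[str, Any]]) -> int:
    return sum(int(f.get("size", 0)) for f in files)
print(total_size([{"size": 10}, {"size": 32}]))  # a list[dict] is a Sequence[Mapping]
print(total_size(({"size": 5},)))                # so is a tuple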

View File

@@ -53,7 +53,7 @@ class BasedGenerateTaskPipeline:
self._output_moderation_handler = self._init_output_moderation()
self._stream = stream
def _handle_error(self, event: QueueErrorEvent, message: Optional[Message] = None):
def _handle_error(self, event: QueueErrorEvent, message: Optional[Message] = None) -> Exception:
"""
Handle error event.
:param event: event
@@ -100,7 +100,7 @@ class BasedGenerateTaskPipeline:
return message
def _error_to_stream_response(self, e: Exception):
def _error_to_stream_response(self, e: Exception) -> ErrorStreamResponse:
"""
Error to stream response.
:param e: exception

View File

@@ -1,10 +1,8 @@
import logging
from threading import Thread
from typing import Optional, Union
from flask import Flask, current_app
from configs import dify_config
from core.app.entities.app_invoke_entities import (
AdvancedChatAppGenerateEntity,
AgentChatAppGenerateEntity,
@@ -84,9 +82,7 @@ class MessageCycleManage:
try:
name = LLMGenerator.generate_conversation_name(app_model.tenant_id, query)
conversation.name = name
except Exception as e:
if dify_config.DEBUG:
logging.exception(f"generate conversation name failed: {e}")
except:
pass
db.session.merge(conversation)

View File

@@ -1,11 +1,9 @@
import json
import time
from collections.abc import Mapping, Sequence
from collections.abc import Mapping
from datetime import datetime, timezone
from typing import Any, Optional, Union, cast
from sqlalchemy.orm import Session
from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity
from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
@@ -36,14 +34,14 @@ from core.ops.entities.trace_entity import TraceTaskName
from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.tools.tool_manager import ToolManager
from core.workflow.enums import SystemVariableKey
from core.workflow.nodes import NodeType
from core.workflow.nodes.tool.entities import ToolNodeData
from core.workflow.workflow_entry import WorkflowEntry
from enums import NodeType, WorkflowRunTriggeredFrom
from extensions.ext_database import db
from models.account import Account
from models.enums import CreatedByRole, WorkflowRunTriggeredFrom
from models.model import EndUser
from models.workflow import (
CreatedByRole,
Workflow,
WorkflowNodeExecution,
WorkflowNodeExecutionStatus,
@@ -59,7 +57,6 @@ class WorkflowCycleManage:
_user: Union[Account, EndUser]
_task_state: WorkflowTaskState
_workflow_system_variables: dict[SystemVariableKey, Any]
_wip_workflow_node_executions: dict[str, WorkflowNodeExecution]
def _handle_workflow_run_start(self) -> WorkflowRun:
max_sequence = (
@@ -88,9 +85,6 @@ class WorkflowCycleManage:
# init workflow run
workflow_run = WorkflowRun()
workflow_run_id = self._workflow_system_variables[SystemVariableKey.WORKFLOW_RUN_ID]
if workflow_run_id:
workflow_run.id = workflow_run_id
workflow_run.tenant_id = self._workflow.tenant_id
workflow_run.app_id = self._workflow.app_id
workflow_run.sequence_number = new_sequence_number
@@ -138,7 +132,7 @@ class WorkflowCycleManage:
outputs = WorkflowEntry.handle_special_values(outputs)
workflow_run.status = WorkflowRunStatus.SUCCEEDED.value
workflow_run.outputs = json.dumps(outputs or {})
workflow_run.outputs = json.dumps(outputs) if outputs else None
workflow_run.elapsed_time = time.perf_counter() - start_at
workflow_run.total_tokens = total_tokens
workflow_run.total_steps = total_steps
@@ -234,30 +228,28 @@ class WorkflowCycleManage:
self, workflow_run: WorkflowRun, event: QueueNodeStartedEvent
) -> WorkflowNodeExecution:
# init workflow node execution
workflow_node_execution = WorkflowNodeExecution()
workflow_node_execution.tenant_id = workflow_run.tenant_id
workflow_node_execution.app_id = workflow_run.app_id
workflow_node_execution.workflow_id = workflow_run.workflow_id
workflow_node_execution.triggered_from = WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN.value
workflow_node_execution.workflow_run_id = workflow_run.id
workflow_node_execution.predecessor_node_id = event.predecessor_node_id
workflow_node_execution.index = event.node_run_index
workflow_node_execution.node_execution_id = event.node_execution_id
workflow_node_execution.node_id = event.node_id
workflow_node_execution.node_type = event.node_type.value
workflow_node_execution.title = event.node_data.title
workflow_node_execution.status = WorkflowNodeExecutionStatus.RUNNING.value
workflow_node_execution.created_by_role = workflow_run.created_by_role
workflow_node_execution.created_by = workflow_run.created_by
workflow_node_execution.created_at = datetime.now(timezone.utc).replace(tzinfo=None)
with Session(db.engine, expire_on_commit=False) as session:
workflow_node_execution = WorkflowNodeExecution()
workflow_node_execution.tenant_id = workflow_run.tenant_id
workflow_node_execution.app_id = workflow_run.app_id
workflow_node_execution.workflow_id = workflow_run.workflow_id
workflow_node_execution.triggered_from = WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN.value
workflow_node_execution.workflow_run_id = workflow_run.id
workflow_node_execution.predecessor_node_id = event.predecessor_node_id
workflow_node_execution.index = event.node_run_index
workflow_node_execution.node_execution_id = event.node_execution_id
workflow_node_execution.node_id = event.node_id
workflow_node_execution.node_type = event.node_type.value
workflow_node_execution.title = event.node_data.title
workflow_node_execution.status = WorkflowNodeExecutionStatus.RUNNING.value
workflow_node_execution.created_by_role = workflow_run.created_by_role
workflow_node_execution.created_by = workflow_run.created_by
workflow_node_execution.created_at = datetime.now(timezone.utc).replace(tzinfo=None)
db.session.add(workflow_node_execution)
db.session.commit()
db.session.refresh(workflow_node_execution)
db.session.close()
session.add(workflow_node_execution)
session.commit()
session.refresh(workflow_node_execution)
self._wip_workflow_node_executions[workflow_node_execution.node_execution_id] = workflow_node_execution
return workflow_node_execution
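The with Session(db.engine, expire_on_commit=False) form above scopes the unit of work to a context manager; expire_on_commit=False keeps the row's attributes loaded after the session closes. A self-contained sketch of the same pattern against an in-memory database (the model is an illustrative stand-in for WorkflowNodeExecution):
from sqlalchemy import Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column
class Base(DeclarativeBase):
    pass
class NodeExecution(Base):  # illustrative stand-in
    __tablename__ = "node_execution"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    status: Mapped[str] = mapped_column(String, default="running")
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine, expire_on_commit=False) as session:
    row = NodeExecution(status="running")
    session.add(row)
    session.commit()
    session.refresh(row)
print(row.id, row.status)  # attributes remain usable after the session closes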
def _handle_workflow_node_execution_success(self, event: QueueNodeSucceededEvent) -> WorkflowNodeExecution:
@@ -269,39 +261,22 @@ class WorkflowCycleManage:
         workflow_node_execution = self._refetch_workflow_node_execution(event.node_execution_id)

         inputs = WorkflowEntry.handle_special_values(event.inputs)
-        process_data = WorkflowEntry.handle_special_values(event.process_data)
         outputs = WorkflowEntry.handle_special_values(event.outputs)
-        execution_metadata = (
-            json.dumps(jsonable_encoder(event.execution_metadata)) if event.execution_metadata else None
-        )
-        finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
-        elapsed_time = (finished_at - event.start_at).total_seconds()
-        db.session.query(WorkflowNodeExecution).filter(WorkflowNodeExecution.id == workflow_node_execution.id).update(
-            {
-                WorkflowNodeExecution.status: WorkflowNodeExecutionStatus.SUCCEEDED.value,
-                WorkflowNodeExecution.inputs: json.dumps(inputs) if inputs else None,
-                WorkflowNodeExecution.process_data: json.dumps(process_data) if event.process_data else None,
-                WorkflowNodeExecution.outputs: json.dumps(outputs) if outputs else None,
-                WorkflowNodeExecution.execution_metadata: execution_metadata,
-                WorkflowNodeExecution.finished_at: finished_at,
-                WorkflowNodeExecution.elapsed_time: elapsed_time,
-            }
-        )
-        db.session.commit()
-        db.session.close()
+        process_data = WorkflowEntry.handle_special_values(event.process_data)

         workflow_node_execution.status = WorkflowNodeExecutionStatus.SUCCEEDED.value
         workflow_node_execution.inputs = json.dumps(inputs) if inputs else None
         workflow_node_execution.process_data = json.dumps(process_data) if process_data else None
         workflow_node_execution.outputs = json.dumps(outputs) if outputs else None
-        workflow_node_execution.execution_metadata = execution_metadata
-        workflow_node_execution.finished_at = finished_at
-        workflow_node_execution.elapsed_time = elapsed_time
+        workflow_node_execution.execution_metadata = (
+            json.dumps(jsonable_encoder(event.execution_metadata)) if event.execution_metadata else None
+        )
+        workflow_node_execution.finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
+        workflow_node_execution.elapsed_time = (workflow_node_execution.finished_at - event.start_at).total_seconds()

-        self._wip_workflow_node_executions.pop(workflow_node_execution.node_execution_id)
+        db.session.commit()
+        db.session.refresh(workflow_node_execution)
+        db.session.close()

         return workflow_node_execution
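The removed code persisted the result with a bulk query(...).update(...) and then patched the detached object by hand; the added code mutates the tracked instance and lets commit()/refresh() sync it. A sketch of the two styles, reusing the hypothetical NodeExecution model from the previous example:

from sqlalchemy.orm import Session

def mark_succeeded_bulk(session: Session, execution_id: int) -> None:
    # Emits one UPDATE statement directly; the identity map is bypassed, so
    # any in-memory instance keeps stale attributes until set separately.
    session.query(NodeExecution).filter(NodeExecution.id == execution_id).update(
        {NodeExecution.title: "succeeded"}
    )
    session.commit()

def mark_succeeded_orm(session: Session, execution: NodeExecution) -> None:
    # Mutates the tracked instance; commit() flushes it and refresh()
    # re-reads the row, so the returned object matches the database.
    execution.title = "succeeded"
    session.commit()
    session.refresh(execution)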
@@ -314,36 +289,20 @@ class WorkflowCycleManage:
         workflow_node_execution = self._refetch_workflow_node_execution(event.node_execution_id)

         inputs = WorkflowEntry.handle_special_values(event.inputs)
-        process_data = WorkflowEntry.handle_special_values(event.process_data)
         outputs = WorkflowEntry.handle_special_values(event.outputs)
-        finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
-        elapsed_time = (finished_at - event.start_at).total_seconds()
-        db.session.query(WorkflowNodeExecution).filter(WorkflowNodeExecution.id == workflow_node_execution.id).update(
-            {
-                WorkflowNodeExecution.status: WorkflowNodeExecutionStatus.FAILED.value,
-                WorkflowNodeExecution.error: event.error,
-                WorkflowNodeExecution.inputs: json.dumps(inputs) if inputs else None,
-                WorkflowNodeExecution.process_data: json.dumps(process_data) if event.process_data else None,
-                WorkflowNodeExecution.outputs: json.dumps(outputs) if outputs else None,
-                WorkflowNodeExecution.finished_at: finished_at,
-                WorkflowNodeExecution.elapsed_time: elapsed_time,
-            }
-        )
-        db.session.commit()
-        db.session.close()
+        process_data = WorkflowEntry.handle_special_values(event.process_data)

         workflow_node_execution.status = WorkflowNodeExecutionStatus.FAILED.value
         workflow_node_execution.error = event.error
+        workflow_node_execution.finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
         workflow_node_execution.inputs = json.dumps(inputs) if inputs else None
         workflow_node_execution.process_data = json.dumps(process_data) if process_data else None
         workflow_node_execution.outputs = json.dumps(outputs) if outputs else None
-        workflow_node_execution.finished_at = finished_at
-        workflow_node_execution.elapsed_time = elapsed_time
+        workflow_node_execution.elapsed_time = (workflow_node_execution.finished_at - event.start_at).total_seconds()

-        self._wip_workflow_node_executions.pop(workflow_node_execution.node_execution_id)
+        db.session.commit()
+        db.session.refresh(workflow_node_execution)
+        db.session.close()

         return workflow_node_execution
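Both handlers stamp finished_at the same way: an aware UTC "now" with tzinfo stripped to match naive DATETIME columns, and elapsed time as a float of seconds. A standalone sketch of that idiom:

from datetime import datetime, timezone

def naive_utc_now() -> datetime:
    # Aware UTC time with tzinfo dropped, comparable to naive DB datetimes.
    return datetime.now(timezone.utc).replace(tzinfo=None)

start_at = naive_utc_now()
finished_at = naive_utc_now()
elapsed_time = (finished_at - start_at).total_seconds()
print(f"elapsed: {elapsed_time:.6f}s")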
@@ -645,7 +604,7 @@ class WorkflowCycleManage:
             ),
         )

-    def _fetch_files_from_node_outputs(self, outputs_dict: dict) -> Sequence[Mapping[str, Any]]:
+    def _fetch_files_from_node_outputs(self, outputs_dict: dict) -> list[dict]:
         """
         Fetch files from node outputs
         :param outputs_dict: node outputs dict
@@ -662,7 +621,7 @@ class WorkflowCycleManage:
         return files

-    def _fetch_files_from_variable_value(self, value: Union[dict, list]) -> Sequence[Mapping[str, Any]]:
+    def _fetch_files_from_variable_value(self, value: Union[dict, list]) -> list[dict]:
         """
         Fetch files from variable value
         :param value: variable value
@@ -674,17 +633,17 @@ class WorkflowCycleManage:
         files = []
         if isinstance(value, list):
             for item in value:
-                file = self._get_file_var_from_value(item)
-                if file:
-                    files.append(file)
+                file_var = self._get_file_var_from_value(item)
+                if file_var:
+                    files.append(file_var)
         elif isinstance(value, dict):
-            file = self._get_file_var_from_value(value)
-            if file:
-                files.append(file)
+            file_var = self._get_file_var_from_value(value)
+            if file_var:
+                files.append(file_var)
         return files

-    def _get_file_var_from_value(self, value: Union[dict, list]) -> Mapping[str, Any] | None:
+    def _get_file_var_from_value(self, value: Union[dict, list]) -> Mapping[str, str | int | None] | None:
         """
         Get file var from value
         :param value: variable value
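The traversal above scans lists item by item and checks single dicts directly. A minimal sketch of that shape; the marker key comes from the branch's "dify_model_identity" rename, but the exact predicate and marker value are illustrative guesses, not confirmed Dify constants:

from collections.abc import Mapping
from typing import Any

def get_file_var_from_value(value: Any) -> Mapping[str, Any] | None:
    # Hypothetical predicate: treat a dict as a file when it carries a
    # known identity marker (the real check in the diff is not shown).
    if isinstance(value, dict) and "dify_model_identity" in value:
        return value
    return None

def fetch_files_from_variable_value(value: dict | list) -> list[dict]:
    files = []
    if isinstance(value, list):
        for item in value:
            file_var = get_file_var_from_value(item)
            if file_var:
                files.append(dict(file_var))
    elif isinstance(value, dict):
        file_var = get_file_var_from_value(value)
        if file_var:
            files.append(dict(file_var))
    return files

print(fetch_files_from_variable_value([{"dify_model_identity": "file"}, 42]))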
@@ -717,7 +676,17 @@ class WorkflowCycleManage:
         :param node_execution_id: workflow node execution id
         :return:
         """
-        workflow_node_execution = self._wip_workflow_node_executions.get(node_execution_id)
+        workflow_node_execution = (
+            db.session.query(WorkflowNodeExecution)
+            .filter(
+                WorkflowNodeExecution.tenant_id == self._application_generate_entity.app_config.tenant_id,
+                WorkflowNodeExecution.app_id == self._application_generate_entity.app_config.app_id,
+                WorkflowNodeExecution.workflow_id == self._workflow.id,
+                WorkflowNodeExecution.triggered_from == WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN.value,
+                WorkflowNodeExecution.node_execution_id == node_execution_id,
+            )
+            .first()
+        )

         if not workflow_node_execution:
             raise Exception(f"Workflow node execution not found: {node_execution_id}")
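The removed line resolved executions from an in-memory map of work-in-progress rows (filled at node start, popped on success or failure); the added query goes back to the database, scoped by tenant, app, and workflow. A small sketch of the cache side of that trade-off, with a generic object standing in for the ORM row:

class WipExecutionCache:
    """Minimal sketch of the removed in-memory lookup, assuming executions
    are registered when a node starts and popped when it finishes."""

    def __init__(self) -> None:
        self._wip: dict[str, object] = {}

    def register(self, node_execution_id: str, execution: object) -> None:
        self._wip[node_execution_id] = execution

    def refetch(self, node_execution_id: str) -> object:
        execution = self._wip.get(node_execution_id)
        if not execution:
            raise Exception(f"Workflow node execution not found: {node_execution_id}")
        return execution

    def finish(self, node_execution_id: str) -> object:
        # Mirrors self._wip_workflow_node_executions.pop(...) in the handlers.
        return self._wip.pop(node_execution_id)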

View File

@@ -1,9 +1,9 @@
+import os
 from collections.abc import Mapping, Sequence
 from typing import Any, Optional, TextIO, Union

 from pydantic import BaseModel

-from configs import dify_config
 from core.ops.entities.trace_entity import TraceTaskName
 from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
 from core.tools.entities.tool_entities import ToolInvokeMessage
@@ -50,8 +50,7 @@ class DifyAgentCallbackHandler(BaseModel):
         tool_inputs: Mapping[str, Any],
     ) -> None:
         """Do nothing."""
-        if dify_config.DEBUG:
-            print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
+        print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)

     def on_tool_end(
         self,
@@ -63,12 +62,11 @@ class DifyAgentCallbackHandler(BaseModel):
         trace_manager: Optional[TraceQueueManager] = None,
     ) -> None:
         """If not the final action, print out observation."""
-        if dify_config.DEBUG:
-            print_text("\n[on_tool_end]\n", color=self.color)
-            print_text("Tool: " + tool_name + "\n", color=self.color)
-            print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
-            print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
-            print_text("\n")
+        print_text("\n[on_tool_end]\n", color=self.color)
+        print_text("Tool: " + tool_name + "\n", color=self.color)
+        print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
+        print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
+        print_text("\n")

         if trace_manager:
             trace_manager.add_trace_task(
@@ -84,33 +82,30 @@ class DifyAgentCallbackHandler(BaseModel):
     def on_tool_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
         """Do nothing."""
-        if dify_config.DEBUG:
-            print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")
+        print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")

     def on_agent_start(self, thought: str) -> None:
         """Run on agent start."""
-        if dify_config.DEBUG:
-            if thought:
-                print_text(
-                    "\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
-                    color=self.color,
-                )
-            else:
-                print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)
+        if thought:
+            print_text(
+                "\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
+                color=self.color,
+            )
+        else:
+            print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)

     def on_agent_finish(self, color: Optional[str] = None, **kwargs: Any) -> None:
         """Run on agent end."""
-        if dify_config.DEBUG:
-            print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)
+        print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)

         self.current_loop += 1

     @property
     def ignore_agent(self) -> bool:
         """Whether to ignore agent callbacks."""
-        return not dify_config.DEBUG
+        return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"

     @property
     def ignore_chat_model(self) -> bool:
         """Whether to ignore chat model callbacks."""
-        return not dify_config.DEBUG
+        return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"
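The branch side gates callback noise on the raw DEBUG environment variable, while the main side reads the parsed dify_config.DEBUG flag. A rough equivalence of the env-var predicate, assuming DEBUG holds "true" or "false":

import os

def debug_enabled() -> bool:
    # (os.environ.get("DEBUG") or "").lower() == "true" is the positive form
    # of the double-negative expression used in ignore_agent above.
    return (os.environ.get("DEBUG") or "").lower() == "true"

def ignore_agent() -> bool:
    # Callbacks are ignored unless DEBUG is exactly "true" (case-insensitive).
    return not debug_enabled()

os.environ["DEBUG"] = "True"
assert ignore_agent() is False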

View File

@@ -44,6 +44,7 @@ class DatasetIndexToolCallbackHandler:
                 DocumentSegment.index_node_id == document.metadata["doc_id"]
             )
-            # if 'dataset_id' in document.metadata:
+            if "dataset_id" in document.metadata:
+                query = query.filter(DocumentSegment.dataset_id == document.metadata["dataset_id"])
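The added guard only narrows the segment query when the retrieved document actually carries a dataset_id. The same pattern in miniature, with plain dicts standing in for the ORM query:

def filter_segments(segments: list[dict], metadata: dict) -> list[dict]:
    # Only apply the dataset filter when the key is present; documents from
    # sources without a dataset_id pass through unfiltered.
    if "dataset_id" in metadata:
        return [s for s in segments if s.get("dataset_id") == metadata["dataset_id"]]
    return segments

segments = [{"dataset_id": "a"}, {"dataset_id": "b"}]
print(filter_segments(segments, {"doc_id": "1"}))                     # both kept
print(filter_segments(segments, {"doc_id": "1", "dataset_id": "a"}))  # narrowed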

View File

@@ -5,12 +5,11 @@ from typing import Optional, cast
 import numpy as np
 from sqlalchemy.exc import IntegrityError

-from configs import dify_config
-from core.entities.embedding_type import EmbeddingInputType
+from core.embedding.embedding_constant import EmbeddingInputType
 from core.model_manager import ModelInstance
 from core.model_runtime.entities.model_entities import ModelPropertyKey
 from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
-from core.rag.embedding.embedding_base import Embeddings
+from core.rag.datasource.entity.embedding import Embeddings
 from extensions.ext_database import db
 from extensions.ext_redis import redis_client
 from libs import helper
@@ -111,8 +110,6 @@ class CacheEmbedding(Embeddings):
                 embedding_results = embedding_result.embeddings[0]
                 embedding_results = (embedding_results / np.linalg.norm(embedding_results)).tolist()
             except Exception as ex:
-                if dify_config.DEBUG:
-                    logging.exception(f"Failed to embed query text: {ex}")
                 raise ex

             try:
@@ -125,8 +122,6 @@ class CacheEmbedding(Embeddings):
                 encoded_str = encoded_vector.decode("utf-8")
                 redis_client.setex(embedding_cache_key, 600, encoded_str)
             except Exception as ex:
-                if dify_config.DEBUG:
-                    logging.exception("Failed to add embedding to redis %s", ex)
-                raise ex
+                logging.exception("Failed to add embedding to redis %s", ex)

         return embedding_results
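For context on the surrounding code: query embeddings are L2-normalized, then byte-serialized, base64-encoded, and cached with a ten-minute TTL via setex(..., 600, ...). A self-contained sketch of that encode/decode round trip; the float64 dtype is an assumption, not taken from Dify:

import base64

import numpy as np

def normalize(vector: list[float]) -> list[float]:
    # Divide by the Euclidean norm, as in the diff above.
    arr = np.asarray(vector, dtype=np.float64)
    return (arr / np.linalg.norm(arr)).tolist()

def encode_for_cache(vector: list[float]) -> str:
    # Raw bytes -> base64 text, the shape of value handed to redis setex.
    return base64.b64encode(np.asarray(vector, dtype=np.float64).tobytes()).decode("utf-8")

def decode_from_cache(cached: str) -> list[float]:
    return np.frombuffer(base64.b64decode(cached), dtype=np.float64).tolist()

vec = normalize([3.0, 4.0])
assert decode_from_cache(encode_for_cache(vec)) == vec  # [0.6, 0.8]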

View File

@@ -1,11 +1,10 @@
 import base64

 from configs import dify_config
-from core.file import file_repository
-from core.helper import ssrf_proxy
-from core.model_runtime.entities import AudioPromptMessageContent, ImagePromptMessageContent
+from core.model_runtime.entities.message_entities import ImagePromptMessageContent
 from extensions.ext_database import db
 from extensions.ext_storage import storage
+from models import UploadFile

 from . import helpers
 from .enums import FileAttribute
@@ -13,7 +12,7 @@ from .models import File, FileTransferMethod, FileType
 from .tool_file_parser import ToolFileParser

-def get_attr(*, file: File, attr: FileAttribute):
+def get_attr(*, file: "File", attr: "FileAttribute"):
     match attr:
         case FileAttribute.TYPE:
             return file.type.value
@@ -33,7 +32,7 @@ def get_attr(*, file: File, attr: FileAttribute):
             raise ValueError(f"Invalid file attribute: {attr}")

-def to_prompt_message_content(f: File, /):
+def to_prompt_message_content(file: "File", /):
     """
     Convert a File object to an ImagePromptMessageContent object.
@@ -53,42 +52,34 @@ def to_prompt_message_content(f: File, /):
     The detail level of the image prompt is determined by the file's extra_config.
     If not specified, it defaults to ImagePromptMessageContent.DETAIL.LOW.
     """
-    match f.type:
-        case FileType.IMAGE:
-            if dify_config.MULTIMODAL_SEND_IMAGE_FORMAT == "url":
-                data = _to_url(f)
-            else:
-                data = _to_base64_data_string(f)
+    if file.type != FileType.IMAGE:
+        raise ValueError("Only image file can convert to prompt message content")
-            if f._extra_config and f._extra_config.image_config and f._extra_config.image_config.detail:
-                detail = f._extra_config.image_config.detail
-            else:
-                detail = ImagePromptMessageContent.DETAIL.LOW
+    url_or_b64_data = _get_url_or_b64_data(file=file)
+    if url_or_b64_data is None:
+        raise ValueError("Missing file data")
-            return ImagePromptMessageContent(data=data, detail=detail)
-        case FileType.AUDIO:
-            encoded_string = _file_to_encoded_string(f)
-            if f.extension is None:
-                raise ValueError("Missing file extension")
-            return AudioPromptMessageContent(data=encoded_string, format=f.extension.lstrip("."))
-        case _:
-            raise ValueError(f"file type {f.type} is not supported")
+    # decide the detail of image prompt message content
+    if file._extra_config and file._extra_config.image_config and file._extra_config.image_config.detail:
+        detail = file._extra_config.image_config.detail
+    else:
+        detail = ImagePromptMessageContent.DETAIL.LOW
+    return ImagePromptMessageContent(data=url_or_b64_data, detail=detail)

-def download(f: File, /):
-    if f.transfer_method == FileTransferMethod.TOOL_FILE:
-        tool_file = file_repository.get_tool_file(session=db.session(), file=f)
-        return _download_file_content(tool_file.file_key)
-    elif f.transfer_method == FileTransferMethod.LOCAL_FILE:
-        upload_file = file_repository.get_upload_file(session=db.session(), file=f)
-        return _download_file_content(upload_file.key)
-    # remote file
-    response = ssrf_proxy.get(f.remote_url, follow_redirects=True)
-    response.raise_for_status()
-    return response.content
+def download(*, upload_file_id: str, tenant_id: str):
+    upload_file = (
+        db.session.query(UploadFile).filter(UploadFile.id == upload_file_id, UploadFile.tenant_id == tenant_id).first()
+    )
+    if not upload_file:
+        raise ValueError("upload file not found")
+    return _download(upload_file.key)

-def _download_file_content(path: str, /):
+def _download(path: str, /):
     """
     Download and return the contents of a file as bytes.
@@ -109,56 +100,37 @@ def _download_file_content(path: str, /):
     return data

-def _get_encoded_string(f: File, /):
-    match f.transfer_method:
-        case FileTransferMethod.REMOTE_URL:
-            response = ssrf_proxy.get(f.remote_url)
-            response.raise_for_status()
-            content = response.content
-            encoded_string = base64.b64encode(content).decode("utf-8")
-            return encoded_string
-        case FileTransferMethod.LOCAL_FILE:
-            upload_file = file_repository.get_upload_file(session=db.session(), file=f)
-            data = _download_file_content(upload_file.key)
-            encoded_string = base64.b64encode(data).decode("utf-8")
-            return encoded_string
-        case FileTransferMethod.TOOL_FILE:
-            tool_file = file_repository.get_tool_file(session=db.session(), file=f)
-            data = _download_file_content(tool_file.file_key)
-            encoded_string = base64.b64encode(data).decode("utf-8")
-            return encoded_string
-        case _:
-            raise ValueError(f"Unsupported transfer method: {f.transfer_method}")
+def _get_base64(*, upload_file_id: str, tenant_id: str) -> str | None:
+    upload_file = (
+        db.session.query(UploadFile).filter(UploadFile.id == upload_file_id, UploadFile.tenant_id == tenant_id).first()
+    )
+    if not upload_file:
+        return None
+    data = _download(upload_file.key)
+    if data is None:
+        return None
+    encoded_string = base64.b64encode(data).decode("utf-8")
+    return f"data:{upload_file.mime_type};base64,{encoded_string}"

-def _to_base64_data_string(f: File, /):
-    encoded_string = _get_encoded_string(f)
-    return f"data:{f.mime_type};base64,{encoded_string}"
+def _get_url_or_b64_data(file: "File"):
+    if file.type == FileType.IMAGE:
+        if file.transfer_method == FileTransferMethod.REMOTE_URL:
+            return file.remote_url
+        elif file.transfer_method == FileTransferMethod.LOCAL_FILE:
+            if file.related_id is None:
+                raise ValueError("Missing file related_id")
-def _file_to_encoded_string(f: File, /):
-    match f.type:
-        case FileType.IMAGE:
-            return _to_base64_data_string(f)
-        case FileType.AUDIO:
-            return _get_encoded_string(f)
-        case _:
-            raise ValueError(f"file type {f.type} is not supported")
-def _to_url(f: File, /):
-    if f.transfer_method == FileTransferMethod.REMOTE_URL:
-        if f.remote_url is None:
-            raise ValueError("Missing file remote_url")
-        return f.remote_url
-    elif f.transfer_method == FileTransferMethod.LOCAL_FILE:
-        if f.related_id is None:
-            raise ValueError("Missing file related_id")
-        return helpers.get_signed_file_url(upload_file_id=f.related_id)
-    elif f.transfer_method == FileTransferMethod.TOOL_FILE:
-        # add sign url
-        if f.related_id is None or f.extension is None:
-            raise ValueError("Missing file related_id or extension")
-        return ToolFileParser.get_tool_file_manager().sign_file(tool_file_id=f.related_id, extension=f.extension)
-    else:
-        raise ValueError(f"Unsupported transfer method: {f.transfer_method}")
+            if dify_config.MULTIMODAL_SEND_IMAGE_FORMAT == "url":
+                return helpers.get_signed_image_url(upload_file_id=file.related_id)
+            return _get_base64(upload_file_id=file.related_id, tenant_id=file.tenant_id)
+        elif file.transfer_method == FileTransferMethod.TOOL_FILE:
+            # add sign url
+            if file.related_id is None or file.extension is None:
+                raise ValueError("Missing file related_id or extension")
+            return ToolFileParser.get_tool_file_manager().sign_file(
+                tool_file_id=file.related_id, extension=file.extension
+            )
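_get_base64 above returns images as data: URLs when MULTIMODAL_SEND_IMAGE_FORMAT is not "url", trading a fetchable signed link for an inlined payload. A minimal sketch of that encoding; the byte string is dummy data:

import base64

def to_image_data_url(data: bytes, mime_type: str) -> str:
    # data: URL -- the whole image payload is inlined into the prompt message.
    encoded_string = base64.b64encode(data).decode("utf-8")
    return f"data:{mime_type};base64,{encoded_string}"

print(to_image_data_url(b"\x89PNG\r\n\x1a\n", "image/png"))
# data:image/png;base64,iVBORw0KGgo=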

Some files were not shown because too many files have changed in this diff.