Compare commits

...

1421 Commits

Author SHA1 Message Date
Joel
91649c9bfd fix: skill preview error
2026-02-09 17:12:01 +08:00
Harry
8a057ac242 fix: apply ruff 2026-02-09 17:01:07 +08:00
Harry
83c260ee4c Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-09 17:00:56 +08:00
Stephen Zhou
ff7b62f2de chore: fix type for useTranslation in #i18n (#32134)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-09 16:47:23 +08:00
yyh
363802aa66 chore(web): comprehensive unit tests 2026-02-09 16:47:23 +08:00
Joel
3a1eefa477 feat: in-editor preview supports switching to file preview 2026-02-09 16:45:29 +08:00
Stephen Zhou
4e0a7a7f9e chore: fix type for useTranslation in #i18n (#32134)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-09 16:42:53 +08:00
Harry
3c0b50ee77 feat(sandbox): add SSH agentbox provider for middleware and docker deployments 2026-02-09 16:38:05 +08:00
Joel
b014e91740 chore: refactor tool and file preview context to zustand to reduce re-renders 2026-02-09 16:06:23 +08:00
wangxiaolei
8f6a8997f4 fix: fix trigger output schema miss (#32116) 2026-02-09 15:46:59 +08:00
GuanMu
63d965bc44 fix: pass user timezone from app context to the date picker component. (#31831)
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
2026-02-09 15:39:24 +08:00
盐粒 Yanli
a303560b98 feat: Service API - add end-user lookup endpoint (#32015)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 15:39:16 +08:00
Stephen Zhou
6f50915d2b test: stable test (#32108) 2026-02-09 15:39:08 +08:00
Yessenia-d
bc9ca4e0dd style: update banner item styles and enhance dark/light theme variables (#32111)
Co-authored-by: Crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-09 15:34:31 +08:00
yyh
ca243d7efc chore(web): pre-align HITL frontend from build/feat/hitl 2026-02-09 15:34:15 +08:00
Yessenia-d
e4ab6e0919 style: update banner item styles and enhance dark/light theme variables (#32111)
Co-authored-by: Crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-09 15:32:40 +08:00
Joel
b289e6a2b6 fix: basic app crash caused by LLM editor using the workflow context 2026-02-09 15:19:52 +08:00
QuantumGhost
6fa943fe75 chore(api): update launch.json.template (#32124) 2026-02-09 15:10:29 +08:00
QuantumGhost
a1fc280102 feat: Human Input Node (#32060)
The frontend and backend implementation for the human input node.

Co-authored-by: twwu <twwu@dify.ai>
Co-authored-by: JzoNg <jzongcode@gmail.com>
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
Co-authored-by: zhsama <torvalds@linux.do>
2026-02-09 14:57:23 +08:00
wangxiaolei
56e3a55023 fix: fix trigger output schema miss (#32116) 2026-02-09 14:54:21 +08:00
Joel
2d6b30f3b8 fix: after stop, tracing is still loading and does not show the current tracing result 2026-02-09 14:39:06 +08:00
GuanMu
6c63c6a221 fix: pass user timezone from app context to the date picker component. (#31831)
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
2026-02-09 14:10:24 +08:00
盐粒 Yanli
5b06203ef5 feat: Service API - add end-user lookup endpoint (#32015)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 14:01:22 +08:00
wangxiaolei
3348b89436 refactor: decouple database operations from knowledge retrieval nodes (#31981)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 13:56:55 +08:00
Stephen Zhou
0428ac5f3a test: stable test (#32108) 2026-02-09 13:36:37 +08:00
yyh
f6b036b121 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-09 11:35:42 +08:00
Joel
0868f941f6 chore: item hover padding 2026-02-09 11:30:14 +08:00
Joel
9fba10036e chore: remove unsupported shortcut icon 2026-02-09 10:50:24 +08:00
wangxiaolei
aead4fe65c refactor: document_indexing_update_task split database session (#32105)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 10:49:23 +08:00
zxhlyh
bdf6739b86 fix: search model provider list (#32106) 2026-02-09 10:35:40 +08:00
wangxiaolei
483db22b97 feat: extract mcp tool usage (#31802)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 09:52:14 +08:00
yyh
6e750814b3 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
2026-02-08 21:26:41 +08:00
yyh
404ac3aeab feat(skill-editor): hide tab dividers when only Start tab is visible
Remove border-right on Start tab and border-bottom on tab bar when no
file tabs are open, making the tab area blend seamlessly with the
content area below.
2026-02-08 13:56:31 +08:00
yyh
7d2e630fc8 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-08 13:27:02 +08:00
zhsama
68f7f2f19b feat: Unify sandbox detection and apply Agent icon override 2026-02-08 02:59:58 +08:00
zhsama
e528112394 fix: Hide Agent node in sandboxed apps and relabel LLM 2026-02-08 02:59:58 +08:00
yyh
e9bff0b7b7 feat(sandbox): use official brand assets for provider icons
Replace placeholder sandbox provider icons with official brand assets:
- Docker: white whale SVG on brand blue (#1D63ED) background
- E2B: official PNG logo via CSS module
- Local: Dify-branded SVG icon (SandboxLocal)
2026-02-08 02:04:20 +08:00
yyh
d23a94982d Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-08 01:30:21 +08:00
yyh
724700acc4 test(workflow): add regression coverage for artifacts download query reset
Add a component-level regression test for variable inspect artifacts tab:
- verifies selected file path is used before reset
- verifies stale selected path is dropped after files are cleared
- verifies download url query call keeps retry disabled in this component
2026-02-07 22:50:47 +08:00
yyh
2b848d7e93 fix(workflow): prevent redundant sandbox download refetch after reset
Problem:
- In variable inspect artifacts view, clicking Reset All invalidates sandbox download query keys.
- If a previously selected file has been removed, the download-url query may still refetch with stale path and return 400.
- Default query retry amplifies this into repeated failed requests in this scenario.

Solution:
- Extend sandbox file invalidation with an option to skip download query refetch.
- Use that option in Reset All flow so download-url queries are marked stale without immediate refetch.
- Derive selected file path from latest sandbox flat data and disable download-url query when file no longer exists.
- Disable retry only for artifacts-tab download-url query to avoid repeated 400 retries in this path.
- Align tree selectedPath with derived selectedFilePath and add hook tests for invalidation behavior.

Validation:
- pnpm vitest --run service/use-sandbox-file.spec.tsx
2026-02-07 22:43:13 +08:00
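For reference, the "mark stale without immediate refetch" behavior described in the commit above maps onto TanStack Query's `invalidateQueries` with `refetchType: 'none'`. This is a minimal sketch under that assumption; the hook name and query keys are hypothetical, not Dify's actual code.

```ts
import { useQueryClient } from '@tanstack/react-query'

// Hypothetical invalidation helper: marks sandbox file queries stale, and can
// skip the immediate refetch of download-url queries so a removed file's
// stale path is not re-requested (avoiding repeated 400s).
export const useInvalidateSandboxFiles = () => {
  const queryClient = useQueryClient()
  return ({ skipDownloadRefetch = false } = {}) => Promise.all([
    queryClient.invalidateQueries({ queryKey: ['sandbox', 'files'] }),
    queryClient.invalidateQueries({
      queryKey: ['sandbox', 'download-url'],
      // 'none' marks matching queries stale without refetching them now;
      // they refetch lazily the next time they are enabled and observed.
      refetchType: skipDownloadRefetch ? 'none' : 'active',
    }),
  ])
}
```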
yyh
a761ab5cee test(skill): add comprehensive unit tests for file-tree domain 2026-02-07 16:53:58 +08:00
yyh
f5a29b69a8 chore: prune expressions 2026-02-07 14:23:31 +08:00
yyh
11d5efc13e refactor(skill): regroup skill body, file tree, and tree hooks 2026-02-07 14:20:01 +08:00
hjlarry
e10996c368 chore: log 20 recent crdt import changes
2026-02-07 10:12:47 +08:00
hjlarry
865b221ce6 fix: make sure restarting the server does not show ghost online users 2026-02-07 09:48:07 +08:00
hjlarry
bb9ae66f81 fix: ensure leader online to accept graph change 2026-02-07 09:34:13 +08:00
Joel
c5439a3739 fix: tool icon hover 2026-02-06 18:35:01 +08:00
Joel
776fb04bf0 chore: use better availableNodes 2026-02-06 18:10:32 +08:00
Joel
dae2e3b6fb feat: support choose var in tool config in sandbox prompt editor
2026-02-06 17:52:35 +08:00
zhsama
971bfa4758 chore: Add ENABLE_SOURCE_MAP env var with priority over legacy var 2026-02-06 17:37:16 +08:00
yyh
3bc574234f feat(web): use terminal-square icon for bash tool calls 2026-02-06 17:34:29 +08:00
Joel
81715426d2 chore: plugin in sandbox auto set to true 2026-02-06 17:23:30 +08:00
Harry
c61129590d fix: improve download filename handling in S3 storage and asset service 2026-02-06 16:32:55 +08:00
Joel
fef42a05ee chore: max iterations default to 100 2026-02-06 16:13:22 +08:00
yyh
287c1bbc35 fix(skill): use nuqs query state for fileId param
2026-02-06 16:03:32 +08:00
yyh
d3b32645f4 feat(skill-editor): add opacity style to dragged node's original position
Apply the same opacity-50 visual feedback used for cut nodes to nodes
being dragged, so the source position is visually dimmed during drag.
2026-02-06 15:50:04 +08:00
yyh
3bfa495795 refactor(skill-editor): add single/double click and optimize re-renders in search results
Extract SearchResultRow component with useDelayedClick to match file
tree behavior (single-click preview, double-click pin). Subscribe to
derived boolean instead of raw activeTabId to avoid unnecessary
re-renders across all rows.
2026-02-06 15:39:00 +08:00
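A generic sketch of the single-click-preview / double-click-pin split mentioned above: delay the single-click action briefly so a double-click can cancel it. The real useDelayedClick hook may differ; this is only an illustration.

```ts
import { useRef } from 'react'

// Hypothetical helper: run onSingleClick only if no double-click follows
// within `delay` ms; a double-click cancels the pending single-click.
function useDelayedClick(onSingleClick: () => void, onDoubleClick: () => void, delay = 250) {
  const timer = useRef<ReturnType<typeof setTimeout> | null>(null)
  const handleClick = () => {
    if (timer.current) clearTimeout(timer.current)
    timer.current = setTimeout(() => {
      timer.current = null
      onSingleClick() // e.g. open the file as a preview tab
    }, delay)
  }
  const handleDoubleClick = () => {
    if (timer.current) {
      clearTimeout(timer.current)
      timer.current = null
    }
    onDoubleClick() // e.g. pin the tab
  }
  return { handleClick, handleDoubleClick }
}
```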
yyh
dc213ca76c refactor(skill)!: add file node view-state flow and mode-based file data hook
- introduce resolving/ready/missing node view-state to avoid unsupported flicker

- switch useSkillFileData to explicit mode: none/content/download

- add hook tests for view-state transitions and mode query gating

BREAKING CHANGE: useSkillFileData signature changed from (appId, nodeId, isEditable) to (appId, nodeId, mode).
2026-02-06 15:39:00 +08:00
yyh
f1100b82f9 feat(skill-editor): render flat search result list in file tree
Replace the tree-filtered search with a flat list that shows icon + name
on the left and parent path on the right, matching the Figma design.
Clicking a file opens its tab; clicking a folder clears the search and
reveals the folder in the tree.
2026-02-06 15:39:00 +08:00
Joel
ad3a5ad473 fix: placeholder position 2026-02-06 14:59:23 +08:00
Joel
c5d1b2a02e fix: folder not-found handling not the same as for files 2026-02-06 14:52:03 +08:00
Joel
768bfa8a7e chore: show tool icon on hover 2026-02-06 14:42:51 +08:00
yyh
871ec3b0ca Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/controllers/console/app/app.py
#	web/eslint-suppressions.json
#	web/eslint.config.mjs
2026-02-06 14:40:44 +08:00
yyh
44fd58853c fix(skill-editor): remove redundant focus ring and suppress row outline
Remove isFocused ring style from TreeNode since focus-visible already
handles keyboard focus indication. Add rowClassName="outline-none" to
suppress the default browser outline on react-arborist row containers.
2026-02-06 14:31:56 +08:00
yyh
4d1d83b509 test(skill-editor): add tests for TreeEditInput filename stem selection 2026-02-06 14:31:38 +08:00
yyh
f0ba739e44 fix(skill-editor): select only filename stem when renaming files
Use setSelectionRange to exclude the file extension from the initial
selection, matching the behavior of VS Code and Finder.
2026-02-06 14:27:52 +08:00
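Selecting only the filename stem on rename, as described above, comes down to `HTMLInputElement.setSelectionRange`. A generic sketch, not the component's actual code:

```ts
// Select "report" in "report.pdf" so typing replaces the stem but keeps the extension.
function selectFilenameStem(input: HTMLInputElement) {
  const name = input.value
  const dot = name.lastIndexOf('.')
  // Dotfiles like ".env" or names without an extension select the whole value.
  const end = dot > 0 ? dot : name.length
  input.focus()
  input.setSelectionRange(0, end)
}
```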
yyh
799d0c0d0b feat(skill-editor): auto-focus editor on file creation and improve tree-tab sync
Add editorAutoFocusFileId state to automatically focus the editor when
a new text file is created. Improve tree-tab synchronization by adding
syncSignal/isTreeLoading guards, deduplicating rAF calls, and skipping
redundant select/openParents operations when the node is already active.
2026-02-06 14:21:33 +08:00
yyh
92c3656fe5 fix: vertically center empty search state to match Figma design 2026-02-06 14:21:33 +08:00
yyh
ecbcd5803b fix(workflow): avoid nested button in skill file tree menu 2026-02-06 14:21:32 +08:00
yyh
30981dfa7c feat: add empty state for skill template search with no results 2026-02-06 14:21:32 +08:00
Joel
3eba0c561e feat: support showing deleted files and folders 2026-02-06 14:20:25 +08:00
Joel
a0984a779f feat: support file and folder not-found states 2026-02-06 11:13:07 +08:00
Stream
6ac9bbfd5f fix: fetch LLM node input correctly
2026-02-06 03:14:57 +08:00
yyh
c9c826d0d2 fix: render pdf preview in skill file panel
2026-02-05 18:12:35 +08:00
Harry
cb7c086377 feat: enhance file upload process with content type detection and command building 2026-02-05 18:00:45 +08:00
Harry
306ef79526 feat: add current user retrieval in EnvironmentVariableCollectionApi post method 2026-02-05 17:47:48 +08:00
Harry
06c31dfdb2 feat: refactor sandbox file handling to use pipeline execution and improve script utilities 2026-02-05 17:47:48 +08:00
yyh
7161c3dd80 fix(web): exclude PDF from text-like file detection
PDF files were incorrectly parsed as text because isTextLikeFile
did not exclude PDF after removing it from BINARY_EXTENSIONS.
2026-02-05 17:45:38 +08:00
yyh
94c354e36d feat(web): add inline PDF preview support for skill file viewer
Enable PDF files to be previewed directly in the file content panel
instead of showing as unsupported files requiring download. Uses the
existing react-pdf-highlighter library with zoom controls and keyboard
shortcuts (up/down arrows).
2026-02-05 17:21:01 +08:00
hjlarry
083154e57b Revert "fix: image preview in artifacts panel"
This reverts commit 71f15741b0.
2026-02-05 17:05:25 +08:00
zhsama
7446779198 perf: Update agent extraction UI styling and labels 2026-02-05 16:57:20 +08:00
zhsama
8235ad9e62 fix: Fix variable availability resolution for child nodes in workflow 2026-02-05 16:57:19 +08:00
yyh
b60f9c7703 refactor(web): use FileAdd and FolderAdd icons in skill menus
Replace RiFileAddLine and RiFolderAddLine with custom FileAdd and
FolderAdd icons for new file/folder menu items in skill sidebar.
2026-02-05 16:56:27 +08:00
yyh
23f7f188bd chore(web): add FileAdd and FolderAdd icons 2026-02-05 16:56:27 +08:00
yyh
9893bf267e feat(web): add import skills menu item with tooltip to skill file tree
Add "Import skills(.zip)" option to root-level context menu and sidebar
add menu with a question mark tooltip showing usage hint. Update menu
item labels and icons for consistency with design.
2026-02-05 16:56:27 +08:00
yyh
7dcb0897c4 chore(web): add UploadCloud02 icon 2026-02-05 16:56:27 +08:00
Joel
6913d5b88c chore: folder supports preview 2026-02-05 16:42:18 +08:00
Joel
9e08f5827b fix: can edit in a disabled skill editor 2026-02-05 16:26:19 +08:00
Joel
befefb04b4 chore: open in editor tooltip 2026-02-05 16:26:18 +08:00
yyh
733c8a0d76 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-05 16:06:38 +08:00
yyh
7f018a3e7f Revert "fix(web): resolve serwist precaching 404 errors"
This reverts commit 561d8d301f.
2026-02-05 16:06:33 +08:00
yyh
3c214f762a chore: update skills 2026-02-05 16:05:36 +08:00
Joel
ce3d2b581b feat: support open file in new tab 2026-02-05 16:04:06 +08:00
Joel
882ad92c24 feat: can show file preview 2026-02-05 15:44:10 +08:00
yyh
561d8d301f fix(web): resolve serwist precaching 404 errors
- Use defaultCache for service worker caching strategy
- Update serwist route handler configuration
- Simplify sw.ts caching logic
2026-02-05 15:37:38 +08:00
yyh
e92b9afd4e Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-05 15:33:01 +08:00
hjlarry
71f15741b0 fix: image preview in artifacts panel 2026-02-05 15:28:04 +08:00
Harry
d690b97568 feat: enhance file download functionality with pipeline execution and improved error handling
2026-02-05 15:15:37 +08:00
Harry
469fda8327 refactor: optimize unique name assignment for batch upload nodes using a stack 2026-02-05 15:15:37 +08:00
Harry
a750d87ae4 feat: ensure unique names for asset nodes during creation and batch upload 2026-02-05 15:15:37 +08:00
Harry
6b0e6b2785 fix: update file reading method to handle bytes format in E2BEnvironment 2026-02-05 15:15:37 +08:00
yyh
eb87512122 fix: scope Reset All button visibility to current tab's data
Show the button only when the active tab has data, preventing
the empty-list-with-button scenario on the Variables tab when
only artifacts exist.
2026-02-05 11:43:54 +08:00
yyh
aad15a0777 fix: return invalidate promises and parallelize invalidations 2026-02-05 11:42:59 +08:00
yyh
740fafc926 feat: show Reset All button on both variable inspect tabs
- Change Reset All button visibility from Variables-tab-only to both tabs,
  displaying when either variables or artifacts have data
- Invalidate sandbox files cache in deleteAllInspectorVars alongside
  existing conversation/system var invalidations
2026-02-05 11:30:53 +08:00
yyh
61cfbd1c8d Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-05 10:51:28 +08:00
hjlarry
b2df9ba9d3 fix: expand status display different icon 2026-02-05 10:10:04 +08:00
hjlarry
6840c7e37f chore: rm unused console.warn 2026-02-05 09:13:26 +08:00
hjlarry
21a723fb26 fix: icon_url of llm node log detail page incorrect 2026-02-05 09:09:32 +08:00
zhsama
a2380c4fd3 fix: ensure sub-graph modal syncs immediately when
applying generated code.
2026-02-05 06:07:15 +08:00
zhsama
52b34b1fdb fix: Handle format for context generate outputs 2026-02-05 06:07:13 +08:00
Stream
5fdcedcbed fix: generate code node that returns correctly 2026-02-05 02:01:31 +08:00
zhsama
0618b2532f feat: Add Enter key handler support to assemble variables generate modal 2026-02-05 01:36:13 +08:00
Stream
47fffedd2e fix: context generate questions 2026-02-05 01:26:54 +08:00
Stream
15c0011897 feat: implement file structured output 2026-02-05 00:11:39 +08:00
zhsama
10fb482351 perf: Remove deprecated optional props in LLM node tool config
2026-02-04 22:36:19 +08:00
zhsama
9bd714623e feat: Add mutual exclusion between structured output and tools in LLM
node
2026-02-04 22:36:19 +08:00
Stream
e0082dbf18 revert: add tools for output in agent mode
feat: hide output tools and improve JSON formatting for structured output
feat: hide output tools and improve JSON formatting for structured output
fix: handle prompt template correctly to extract selectors for step run
fix: emit StreamChunkEvent correctly for sandbox agent
chore: better debug message
fix: incorrect output tool runtime selection
fix: type issues
fix: align parameter list
fix: align parameter list
fix: hide internal builtin providers from tool list
vibe: implement file structured output
vibe: implement file structured output
fix: refix parameter for tool
fix: crash
fix: crash
refactor: remove union types
fix: type check
Merge branch 'feat/structured-output-with-sandbox' into feat/support-agent-sandbox
fix: provide json as text
fix: provide json as text
fix: get AgentResult correctly
fix: provides correct prompts, tools and terminal predicates
fix: provides correct prompts, tools and terminal predicates
fix: circular import
feat: support structured output in sandbox and tool mode
2026-02-04 21:43:53 +08:00
yyh
25065a4f2f Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	web/app/components/workflow-app/hooks/use-nodes-sync-draft.ts
#	web/contract/router.ts
2026-02-04 21:12:53 +08:00
yyh
8a9e0e3b31 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	web/eslint-suppressions.json
2026-02-04 18:15:32 +08:00
yyh
49de78a20b fix: use oRPC query keys for sandbox file download and invalidation
Replaced plain array query key in useSandboxFileDownloadUrl with
oRPC-generated queryKey for type safety and consistency. Added
downloadFile cache invalidation to useInvalidateSandboxFiles so
stale download URLs are cleared after workflow/chatflow runs.
2026-02-04 18:14:20 +08:00
zhsama
23f98652e1 perf: Add null check for onAssembleVariables callback 2026-02-04 17:52:13 +08:00
yyh
2df0d540a9 fix: remove unreachable polling from artifacts-section
The Skill view is locked (ViewPicker disabled) while a workflow
is running or chatflow is responding, so ArtifactsSection is never
mounted during runs. Polling there is dead code.
2026-02-04 17:31:44 +08:00
yyh
625163705b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-02-04 16:59:23 +08:00
yyh
d0200e90d2 feat: poll sandbox files every 5s during workflow/chatflow runs
Add conditional refetchInterval to Artifacts components so the file
list refreshes automatically while a workflow debug run or chatflow
preview is in progress, stopping once the run completes.
2026-02-04 16:59:09 +08:00
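Conditional polling like the commit above describes maps onto TanStack Query's `refetchInterval`. A hedged sketch with hypothetical query keys, endpoint, and an `isRunning` flag standing in for the workflow/chatflow run state:

```ts
import { useQuery } from '@tanstack/react-query'

// Hypothetical hook: refresh the artifacts file list every 5s while a run is
// in progress, and stop polling automatically once the run completes.
function useSandboxFiles(appId: string, isRunning: boolean) {
  return useQuery({
    queryKey: ['sandbox', 'files', appId],
    queryFn: () => fetch(`/apps/${appId}/sandbox/files`).then(res => res.json()),
    // A number polls at that interval; false disables polling entirely.
    refetchInterval: isRunning ? 5000 : false,
  })
}
```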
Stream
8850a0c5c7 feat: hide output tools and improve JSON formatting for structured output 2026-02-04 15:42:55 +08:00
zhsama
ecf4c06ed7 chore: Update prompt editor context labels to Chat History 2026-02-04 15:22:27 +08:00
zhsama
b96459b656 fix: Fix sub-graph variable null check logic 2026-02-04 15:22:27 +08:00
Stream
4466688e97 feat: hide output tools and improve JSON formatting for structured output 2026-02-04 15:12:50 +08:00
yyh
d84aaff825 feat: add loading state to Publish button during workflow publishing
Leverage React Query mutation's isPending to disable the Publish button,
header trigger, and keyboard shortcut while a publish is in progress,
preventing duplicate submissions even when the menu is closed and reopened.
2026-02-04 14:34:06 +08:00
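The pattern above relies on `useMutation().isPending` (TanStack Query v5). A minimal sketch with a hypothetical `publishWorkflow` call, not the real Publish component:

```tsx
import { useMutation } from '@tanstack/react-query'

declare function publishWorkflow(): Promise<void> // hypothetical API call

function PublishButton() {
  const { mutate, isPending } = useMutation({ mutationFn: publishWorkflow })
  // isPending stays true for the whole in-flight request, so the button, the
  // header trigger, and the keyboard shortcut can share the same guard.
  return (
    <button disabled={isPending} onClick={() => mutate()}>
      {isPending ? 'Publishing…' : 'Publish'}
    </button>
  )
}
```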
hjlarry
dee3e026a6 chore: use AccountWithRole instead of account_with_role_fields 2026-02-04 14:11:32 +08:00
Harry
75d2148ef6 fix: add account_with_role_fields for enhanced user data representation 2026-02-04 13:54:04 +08:00
yyh
00e9dce3ad feat: add isPreviewable guard for binary file preview in artifacts
Add a unified isPreviewable flag to useFileTypeInfo that guards against
rendering binary files as text in both skill artifacts and variable
inspect artifacts preview. Upgrade extension arrays to Sets for O(1)
lookups.
2026-02-04 13:32:22 +08:00
Harry
60a5d5c67c refactor: replace reqparse with Pydantic models in dsl and sandbox_providers 2026-02-04 13:23:07 +08:00
Harry
8c6139fa21 chore: update pnpm-lock and remove unused import after merge 2026-02-04 13:16:55 +08:00
Harry
c111079624 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/core/file/file_manager.py
#	api/core/workflow/graph_engine/response_coordinator/coordinator.py
#	api/core/workflow/nodes/llm/node.py
#	api/core/workflow/nodes/tool/tool_node.py
#	api/pyproject.toml
#	web/package.json
#	web/pnpm-lock.yaml
2026-02-04 13:15:49 +08:00
yyh
d056b23168 fix: use neutral title for artifacts empty state to fit both scenarios 2026-02-04 12:02:39 +08:00
hjlarry
36bfb08f34 fix: collaboration user's name display in editor line 1 2026-02-04 10:11:40 +08:00
Stream
7535b67423 fix: handle prompt template correctly to extract selectors for step run
idk why. This may cause further issues.
2026-02-04 07:23:02 +08:00
Stream
9172997f24 fix: emit StreamChunkEvent correctly for sandbox agent 2026-02-03 21:52:15 +08:00
yyh
f1d099d50d refactor: extract skill save context, stabilize mutation dependency, and deduplicate cache updates
Split SkillSaveContext and useSkillSaveManager into a separate file to
fix react-refresh/only-export-components lint error. Destructure
mutateAsync from useUpdateAppAssetFileContent for a stable callback
reference, preventing unnecessary useCallback cascade rebuilds. Extract
shared patchFileContentCache helper to unify setQueryData logic between
updateCachedContent and the collaboration event handler.
2026-02-03 21:09:35 +08:00
Harry
b6b2af45a7 refactor: use jsonable_encoder for consistent JSON response formatting in SandboxFilesApi
2026-02-03 18:32:49 +08:00
hjlarry
c9b0d90ca7 Merge branch 'feat/support-agent-sandbox' of github.com:langgenius/dify into feat/support-agent-sandbox 2026-02-03 18:00:44 +08:00
hjlarry
70c887ed5c fix undo/redo 2026-02-03 18:00:06 +08:00
yyh
178421a8ac fix: pass appId instead of userId to sandbox file API calls
The backend route /apps/{app_id}/sandbox/files expects the actual app ID
as the URL parameter and derives sandbox_id from the logged-in user
internally. The frontend was incorrectly passing userProfile.id (user ID)
as the appId, resulting in wrong storage paths.
2026-02-03 17:59:21 +08:00
yyh
0fcddfe9a5 feat: invalidate sandbox files cache on workflow/chatflow run completion
Add useInvalidateSandboxFiles hook and call it alongside
fetchInspectVars/invalidAllLastRun so the Artifacts tab refreshes
automatically when a chatflow preview or workflow debug run finishes.
2026-02-03 16:47:05 +08:00
yyh
19a5aee38e fix: hide divider when OnlineUsers component is not rendered
Move the Divider into the OnlineUsers component so it conditionally
renders together with the online users content, preventing an orphaned
divider from appearing next to the preview button.
2026-02-03 15:35:49 +08:00
Novice
31177462e1 fix: clean up mixed variable extractor code 2026-02-03 15:26:15 +08:00
Harry
10f5d9e7ad fix: stream never ruff 2026-02-03 14:42:30 +08:00
Harry
49befa6d3f feat: enhance download URL generation with optional filename parameter
Added support for an optional `download_filename` parameter in the `get_download_url` and `get_download_urls` methods across various storage classes. This allows users to specify a custom filename for downloads, improving user experience by enabling better file naming during downloads. Updated related methods and tests to accommodate this new functionality.
2026-02-03 14:40:14 +08:00
Novice
5441b9c3ad fix: add computer_use property to mixed variable extractor 2026-02-03 10:33:47 +08:00
Stream
beba89cc0a chore: better debug message 2026-02-02 16:39:50 +08:00
yyh
808a32c457 fix: add pending state to export button to prevent duplicate clicks
Use useTransition to disable the export button and show loading state
in the DSL export confirm modal during async export operations.
2026-02-02 15:52:03 +08:00
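A generic sketch of the `useTransition`-based pending state described above, assuming React 19 async transitions (where isPending stays true until the awaited action settles); `exportDSL` is a hypothetical stand-in for the real export call:

```tsx
import { useTransition } from 'react'

declare function exportDSL(): Promise<void> // hypothetical async export call

function ExportConfirmButton() {
  const [isExporting, startTransition] = useTransition()
  return (
    <button
      disabled={isExporting}
      // The async action runs inside the transition, so isExporting disables
      // the button for the full duration and prevents duplicate clicks.
      onClick={() => startTransition(async () => { await exportDSL() })}
    >
      {isExporting ? 'Exporting…' : 'Export'}
    </button>
  )
}
```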
Stream
6ea16837ff fix: incorrect output tool runtime selection
2026-02-02 09:09:22 +08:00
Stream
987ea0f764 fix: type issues
Signed-off-by: Stream <Stream_2@qq.com>
2026-02-02 06:45:21 +08:00
Stream
303e1e3eb0 fix: align parameter list
Signed-off-by: Stream <Stream_2@qq.com>
2026-02-02 06:40:33 +08:00
Stream
85b9661b82 fix: align parameter list
Signed-off-by: Stream <Stream_2@qq.com>
2026-02-02 06:37:36 +08:00
Stream
5e808f6f31 fix: hide internal builtin providers from tool list
2026-02-01 22:46:55 +08:00
yyh
f1c15e0a17 Revert "refactor!: migrate commonLayout to SSR prefetch with TanStack Query hydration"
This reverts commit 2833965815.
2026-02-01 19:06:45 +08:00
yyh
e2913d9ee1 Revert "refactor!: replace Zustand global store with TanStack Query for systemFeatures"
This reverts commit 806ece9a67.
2026-02-01 19:06:45 +08:00
yyh
806ece9a67 refactor!: replace Zustand global store with TanStack Query for systemFeatures
Follow-up to SSR prefetch migration (2833965). Eliminates the Zustand
middleman that was syncing TanStack Query data into a separate store.

- Remove useGlobalPublicStore Zustand store entirely
- Create hooks/use-global-public.ts with useSystemFeatures,
  useSystemFeaturesQuery, useIsSystemFeaturesPending, useSetupStatusQuery
- Migrate all 93 consumers to import from @/hooks/use-global-public
- Simplify global-public-context.tsx to a thin provider component
- Update 18 test files to mock the new hook interface
- Fix SetupStatusResponse.setup_at type from Date to string (JSON)
- Fix setup-status.spec.ts mock target to match consoleClient

BREAKING CHANGE: useGlobalPublicStore is removed. Use useSystemFeatures()
from @/hooks/use-global-public instead.
2026-02-01 19:06:08 +08:00
yyh
2833965815 refactor!: migrate commonLayout to SSR prefetch with TanStack Query hydration
BREAKING CHANGE: commonLayout is now an async Server Component that
prefetches user-profile and current-workspace on the server via
TanStack Query's prefetchQuery + HydrationBoundary pattern. This
replaces the previous purely client-side data fetching approach.

Key changes:

- **SSR data prefetch (root layout)**: prefetch systemFeatures and
  setupStatus in the root layout server component, wrap children with
  HydrationBoundary to hydrate TanStack Query cache on the client.

- **SSR data prefetch (commonLayout)**: convert commonLayout from a
  client component to an async server component that prefetches
  user-profile (with x-version/x-env response headers) and
  current-workspace. Client-side providers/UI extracted to a new
  layout-client.tsx component.

- **Add loading.tsx (Next.js convention)**: add a Next.js loading.tsx
  file in commonLayout that shows a centered spinner. This replaces the
  deleted Splash component but works via Next.js built-in Suspense
  boundary for route segments, not a client-side overlay.

- **Extract shared SSR fetch utilities (utils/ssr-fetch.ts)**: create
  serverFetch (unauthenticated) and serverFetchWithAuth (with cookie
  forwarding + CSRF token). getAuthHeaders is wrapped with React.cache()
  for per-request deduplication across multiple SSR fetches.

- **Refactor AppInitializer**: split single monolithic async IIFE effect
  into three independent useEffects (oauth tracking, education verify,
  setup status check). Use useReducer for init flag, useRef to prevent
  duplicate tracking in StrictMode. Now reads setupStatus from TanStack
  Query cache (useSetupStatusQuery) instead of fetching independently.

- **Refactor global-public-context**: move Zustand store sync from
  queryFn side-effect to a dedicated useEffect, keeping queryFn pure.
  fetchSystemFeatures now simply returns the API response.

- **Fix usePSInfo SSR crash**: defer globalThis.location access from
  hook top-level to callback execution time via getDomain() helper,
  preventing "Cannot read properties of undefined" during server render.

- **Remove Splash component**: delete the client-side loading overlay
  that relied on useIsLogin polling, replaced by Next.js loading.tsx.

- **Remove staleTime/gcTime overrides in useUserProfile**: allow the
  SSR-prefetched data to be reused via default cache policy instead of
  forcing refetch on every mount.

- **Revert middleware auth guard**: remove the cookie-based session
  check in proxy.ts that caused false redirects to /signin for
  authenticated users (Dify's auth uses token refresh, not simple
  cookie presence).
2026-02-01 18:29:41 +08:00
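The prefetch-and-hydrate flow referenced above follows TanStack Query's documented server-component pattern (prefetchQuery + dehydrate + HydrationBoundary). A simplified sketch with hypothetical fetchers and query keys, not the actual commonLayout:

```tsx
import type { ReactNode } from 'react'
import { QueryClient, dehydrate, HydrationBoundary } from '@tanstack/react-query'

declare function fetchUserProfile(): Promise<unknown>      // hypothetical SSR fetcher
declare function fetchCurrentWorkspace(): Promise<unknown> // hypothetical SSR fetcher

// Async Server Component: prefetch on the server, then hydrate the client cache.
export default async function CommonLayout({ children }: { children: ReactNode }) {
  const queryClient = new QueryClient()
  await Promise.all([
    queryClient.prefetchQuery({ queryKey: ['user-profile'], queryFn: fetchUserProfile }),
    queryClient.prefetchQuery({ queryKey: ['current-workspace'], queryFn: fetchCurrentWorkspace }),
  ])
  // Client components using the same query keys read the prefetched data
  // from the hydrated cache instead of refetching on mount.
  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      {children}
    </HydrationBoundary>
  )
}
```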
yyh
3ca767de47 refactor: migrate localStorage calls to storage utility module
Replace direct localStorage.getItem/setItem/removeItem with the
centralized storage module which provides versioned keys, automatic
JSON serialization, SSR safety, and error handling.
2026-02-01 17:34:37 +08:00
Stream
726fc1851f vibe: implement file structured output 2026-02-01 02:49:48 +08:00
Stream
b66db183c9 vibe: implement file structured output 2026-02-01 02:47:28 +08:00
zhsama
b6465327c1 fix: Fix race condition in prompt editor reference sync
2026-01-31 22:10:38 +08:00
Stream
03774a7bd0 fix: refix parameter for tool 2026-01-31 21:53:08 +08:00
zhsama
4d82769baa fix: Fix null safety issues in workflow variable components 2026-01-31 21:26:32 +08:00
Stream
d353feb172 fix: crash 2026-01-31 01:51:25 +08:00
Stream
db56fe546b fix: crash 2026-01-31 01:21:53 +08:00
zhsama
f76f4252e0 Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-31 01:17:14 +08:00
zhsama
be96e6032e refactor: add json schema type guard 2026-01-31 00:57:45 +08:00
Stream
9ad49340bf refactor: remove union types
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-31 00:41:23 +08:00
zhsama
5572de1d3c chore: prune eslint suppressions 2026-01-31 00:31:57 +08:00
zhsama
078e2d7150 refactor: streamline variable inspect editor state 2026-01-31 00:31:26 +08:00
zhsama
52d0716159 chore: prune eslint suppressions 2026-01-31 00:04:58 +08:00
zhsama
473262d70e refactor: type event emitter payloads
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-31 00:04:57 +08:00
zhsama
f880ef0052 chore: prune eslint suppressions
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-31 00:04:57 +08:00
zhsama
51ffab8a1a refactor: type variable inspect handlers
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-31 00:04:57 +08:00
Harry
a87560d667 fix: revert stupid changes
2026-01-30 23:56:24 +08:00
zhsama
7be3c4c7b4 chore: remove eslint suppressions for type-safe validateJSONSchema
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 23:33:04 +08:00
zhsama
3bdc16ac5f refactor: make validateJSONSchema type-safe
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 23:30:51 +08:00
zhsama
03ec2f64cd refactor: Refactor storage keys into hierarchical structure 2026-01-30 23:23:21 +08:00
zhsama
a052c414ac refactor: Replace hardcoded localStorage key with constant 2026-01-30 23:07:41 +08:00
zhsama
d0d553ba38 refactor: Refactor context generate modal storage key management 2026-01-30 22:59:53 +08:00
Harry
b67d0d8c45 refactor(sandbox): update sandbox service to use sandbox_id instead of workflow_execution_id
- Modified the SandboxService and related app generators to replace workflow_execution_id with sandbox_id for improved clarity and consistency in sandbox handling.
- Adjusted the AdvancedChatAppGenerator and WorkflowAppGenerator to align with the new parameter naming convention.
2026-01-30 22:45:28 +08:00
Harry
bb4dd85ae3 feat(sandbox): refactor sandbox file handling to include app_id
- Updated API routes to use app_id instead of sandbox_id for file operations, aligning with user-specific sandbox workspaces.
- Enhanced SandboxFileService and related classes to accommodate app_id in file listing and download functionalities.
- Refactored storage key generation for sandbox archives to include app_id, ensuring proper file organization.
- Adjusted frontend contracts and services to reflect the new app_id parameter in API calls.
2026-01-30 22:45:28 +08:00
zhsama
bc1d3bdf57 refactor: Extract nested node ID parsing into shared utility 2026-01-30 22:24:30 +08:00
zhsama
618dde1e3d refactor: Refactor chat view to use useMemo for version mapping 2026-01-30 22:24:30 +08:00
yyh
9d5db4993d fix: disable SSR for ImportSkillModal dynamic import 2026-01-30 22:02:45 +08:00
yyh
ea88bcfbd2 feat: add ZIP skill import with client-side extraction
Add import skill modal that accepts .zip files via drag-and-drop or
file picker, extracts them client-side using fflate, validates structure
and security constraints, then batch uploads via presigned URLs.

- Add fflate dependency for browser-side ZIP decompression
- Create zip-extract.ts with fflate filter API for validation
- Create zip-to-upload-tree.ts for BatchUploadNodeInput tree building
- Create import-skill-modal.tsx with drag-and-drop support
- Lazy-load ImportSkillModal via next/dynamic for bundle optimization
- Add en-US and zh-Hans i18n keys for import modal
2026-01-30 21:54:00 +08:00
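A minimal sketch of client-side ZIP extraction with fflate, the library named in the commit above; the entry-count cap and path checks are illustrative assumptions, and the real zip-extract.ts will differ.

```ts
import { unzipSync, strFromU8 } from 'fflate'

// Extract a user-provided .zip entirely in the browser, with a couple of
// illustrative safety checks (entry count cap, no path traversal).
export async function extractSkillZip(file: File): Promise<Record<string, Uint8Array>> {
  const data = new Uint8Array(await file.arrayBuffer())
  const entries = unzipSync(data) // { "path/in/zip": Uint8Array, ... }
  const names = Object.keys(entries)
  if (names.length > 2000)
    throw new Error('Too many files in archive')
  for (const name of names) {
    if (name.includes('..') || name.startsWith('/'))
      throw new Error(`Unsafe path in archive: ${name}`)
  }
  return entries
}

// Text entries (e.g. a SKILL.md) can then be decoded for validation:
// const skillMd = strFromU8(entries['my-skill/SKILL.md'])
```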
zhsama
ea91f96924 refactor: Replace hardcoded string checks with VarType enum 2026-01-30 21:51:03 +08:00
zhsama
73b78c9edc refactor: Improve type safety in context generate modal hooks 2026-01-30 21:41:04 +08:00
zhsama
617b64bb93 refactor: Improve type safety in context generate modal hooks 2026-01-30 21:41:04 +08:00
Stream
0265cc0403 fix: type check
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 21:17:19 +08:00
zhsama
c1de8f75ca fix: Fix missing current_user variable in conversation variable API 2026-01-30 20:49:14 +08:00
zhsama
304d8e5fe7 feat: Add @ and keyboard navigation to tool picker in prompt editor 2026-01-30 20:49:14 +08:00
yyh
dbc32af932 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/app.py
#	api/controllers/console/app/generator.py
#	api/core/llm_generator/llm_generator.py
#	web/eslint-suppressions.json
#	web/pnpm-lock.yaml
#	web/tailwind-common-config.ts
2026-01-30 20:08:35 +08:00
Harry
40b58dfcde feat(storage): enhance AWS S3 client configuration for presigned URLs
- Updated the AWS S3 client initialization to enforce SigV4 for presigned URLs, ensuring consistent header signing behavior across S3-compatible providers.
- Introduced a dedicated configuration for the S3 client to manage addressing styles and signature versions more effectively.
2026-01-30 17:34:40 +08:00
Harry
3f5f893e6c feat: add exists method to sandbox sources for existence checks
- Implemented the `exists` method in `SandboxFileSource` and its subclasses to verify the availability of sandbox sources.
- Updated `SandboxFileService` to utilize the new `exists` method for improved error handling when listing files and downloading files.
- Removed the previous check for storage existence in `archive_source.py` and replaced it with the new method.
2026-01-30 17:34:40 +08:00
yyh
25ee3f7bc4 fix(skill): restore flex spacer to keep search input right-aligned
Add placeholder div for future CategoryTabs from marketplace API.
2026-01-30 17:18:18 +08:00
yyh
a4b0e4a2a0 fix(skill): clear file tree highlight when switching to start tab
The sync hook skipped deselectAll() when activeTabId was START_TAB_ID,
leaving stale highlights after closing the last file tab.
2026-01-30 17:12:51 +08:00
yyh
0f6f46b1f0 fix: use kebab-case in skills 2026-01-30 16:18:33 +08:00
yyh
03c8387830 fix(skill): make template header and search sticky on scroll 2026-01-30 16:12:47 +08:00
yyh
142b72f435 refactor(skill): remove tags/icons/categories, use kebab-case folder names
Drop CategoryTabs component, SkillTemplateTag type, icon/tags fields,
and UI_CONFIG from the fetch script. Upload folders now use the
kebab-case skill id (e.g. "skill-creator") instead of the display name.
Card shows the human-readable name from SKILL.md frontmatter while the
created folder uses the id for consistent naming.
2026-01-30 16:10:19 +08:00
yyh
4338632a78 fix(skill): use Dialog initialFocus to focus input on modal open
Expose initialFocus prop on Modal component (passthrough to Headless
UI Dialog) so the create blank skill modal reliably focuses the name
input when opened, replacing the ineffective autoFocus attribute.
2026-01-30 16:10:19 +08:00
yyh
5c0023b788 feat(skill): add create blank skill modal with name validation
Wire up the "Create Blank Skill" action card to open a modal where
users enter a skill name. The modal validates against existing skill
names in real-time and creates a folder with a SKILL.md file via
batchUpload, then opens the file as a pinned tab.
2026-01-30 16:10:19 +08:00
yyh
e9608532bd feat(skill): guard template cards against duplicate skill addition
Add useExistingSkillNames hook that derives root folder names from the
cached asset tree via TanStack Query select, then use it to show an
"Added" state on hover for already-present skills and block re-upload.
2026-01-30 16:10:19 +08:00
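Deriving data from an already-cached query with TanStack Query's `select`, as the commit above describes. The query key, endpoint, and tree shape below are assumptions for illustration only:

```ts
import { useQuery } from '@tanstack/react-query'

type AssetNode = { name: string; children?: AssetNode[] } // assumed tree shape

// Hypothetical hook: reuse the cached asset tree query and derive only the
// root-level folder names, so consumers re-render when that set changes.
function useExistingSkillNames(appId: string) {
  return useQuery({
    queryKey: ['app', appId, 'asset-tree'],
    queryFn: () => fetch(`/apps/${appId}/assets/tree`).then(res => res.json() as Promise<AssetNode[]>),
    select: nodes => new Set(nodes.map(node => node.name)),
  })
}
```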
yyh
60b4b10622 fix(skill): disable template buttons during upload to prevent duplicates
Pass disabled/loading props to TemplateCard declaratively from
loadingId state. All cards are disabled while any upload is in
progress, and the active card shows a loading spinner. Remove the
imperative pointer-events overlay in favor of native button disabled.
2026-01-30 16:10:19 +08:00
yyh
abe2b37e3a fix(skill): use SearchInput with debounce and align card to Figma
Replace custom search input with SearchInput component (built-in clear
button) and add 300ms debounce. Fix template card: use Tailwind token
for icon background, fix Badge to use children with badge-s class and
uppercase, match empty-tag fallback height to badge size.
2026-01-30 16:10:18 +08:00
yyh
c33d27938d fix(skill): align category tabs with actual skill tags
Remove unused categories (Search, Security, Analysis) and add
real ones (Document, Design, Creative). Consolidate xlsx tags to
Document/Productivity and webapp-testing to Development only,
eliminating orphan tags with single-skill coverage.
2026-01-30 16:10:18 +08:00
yyh
32329cf27b perf(skill): stabilize useCallback refs and memoize filtered list
Use useRef for batchUpload and emitTreeUpdate to remove unstable
dependencies from useCallback, preventing unnecessary memo invalidation
on all 16 TemplateCard components. Wrap filtered list in useMemo and
replace && conditional with ternary for rendering safety.
2026-01-30 16:10:18 +08:00
yyh
038b03fa8e feat(skill): add script-driven full skill template generation
Add fetch-skill-templates.ts script that clones anthropics/skills repo
and generates complete directory trees (scripts, references, assets)
for all 16 skills with base64 encoding for binary files, replacing
the previous single-SKILL.md-only approach. Generated files are
lazy-loaded per skill on user click.
2026-01-30 16:10:18 +08:00
yyh
acc8671c28 fix: hide artifacts tab in variable inspect panel for classic mode
Guard variable-inspect from rendering artifacts-related UI and API calls
when sandbox is not enabled, preventing unnecessary sandbox-file requests.
2026-01-30 16:10:17 +08:00
yyh
66b4fa102b feat(skill): add skill template types, card component and upload utility
Introduce type definitions separating raw skill data (SkillTemplate)
from UI metadata (SkillTemplateWithMetadata) to match the actual
skill format from upstream repos. Add template card component with
hover state and file count display, template-to-upload conversion
utility, and i18n keys for en-US/zh-Hans.
2026-01-30 16:10:17 +08:00
Joel
f5b84384cf feat: support search tool after @ 2026-01-30 15:49:30 +08:00
Joel
bf2e3d5151 fix: remove show file translation 2026-01-30 15:17:08 +08:00
hjlarry
ed9efba039 fix: variable of sync display in prompt editor 2026-01-30 15:13:32 +08:00
Joel
2a46bf26b5 fix: choose file has extra bg 2026-01-30 14:49:08 +08:00
Joel
fb97bcfdc7 fix: do not show more action in app 2026-01-30 14:21:03 +08:00
Joel
0711af20f2 feat: add feature provider to rag pipeline to reduce problems 2026-01-30 14:15:33 +08:00
zxhlyh
e9c2279b80 fix: chat message generation render 2026-01-30 13:19:56 +08:00
yyh
561f383cbc Revert "refactor(skill): replace React icon components with CSS Icons"
This reverts commit 919d7ef5cd.
2026-01-30 12:42:20 +08:00
yyh
d2a60b3b94 Revert "fix(icons): normalize SVG fill/stroke colors to black for CSS Icons pipeline"
This reverts commit a886bfef8a.
2026-01-30 12:42:20 +08:00
Novice
27664ec37a feat: pull variable add sandbox file support
2026-01-30 11:21:08 +08:00
Joel
e8f45a9685 fix: css icon run error 2026-01-30 11:06:45 +08:00
Joel
2e63e20131 chore: set computer use to true 2026-01-30 11:06:44 +08:00
Stream
c577e2a5ec Merge branch 'feat/structured-output-with-sandbox' into feat/support-agent-sandbox 2026-01-30 10:49:43 +08:00
Stream
cb4391f705 fix: provide json as text 2026-01-30 10:47:58 +08:00
Stream
0035587fed fix: provide json as text
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 10:45:14 +08:00
Stream
171551cfb3 fix: get AgentResult correctly 2026-01-30 10:42:29 +08:00
hjlarry
ac9985321e feat: send email when user mentioned in comment 2026-01-30 10:39:30 +08:00
hjlarry
cdb1449a96 feat: add online users to skills 2026-01-30 10:00:38 +08:00
Stream
2008768cb9 fix: provides correct prompts, tools and terminal predicates
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 08:44:26 +08:00
Stream
22b0a08a5f fix: provides correct prompts, tools and terminal predicates
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 08:15:42 +08:00
Stream
ec9ade62f5 fix: circular import 2026-01-30 07:11:20 +08:00
Stream
7926024569 feat: support structured output in sandbox and tool mode
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 06:46:52 +08:00
Stream
869b43a95b refactor: remove streaming structured output unused function
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-30 03:47:22 +08:00
zhsama
4a7f6597c4 chore: Add script to start API server without debug mode 2026-01-30 02:58:59 +08:00
zhsama
206d56358d feat(prompt-editor): add external search and keyboard navigation to variable picker
2026-01-30 02:54:50 +08:00
Harry
d3fc457331 optimize: brand new cli api mechanism 2026-01-30 02:36:18 +08:00
Harry
cb12ada689 refactor all 2026-01-30 02:36:18 +08:00
Harry
806016244f refactor(storage): integrate SilentStorage for improved file handling
- Replaced direct storage references with SilentStorage in various components to enhance fallback mechanisms.
- Updated storage key formats for sandbox archives and files to improve clarity and consistency.
- Refactored related classes and methods to utilize the new SandboxFilePath structure.
- Adjusted unit tests to reflect changes in the StorageTicket model and its serialization methods.
2026-01-30 02:36:18 +08:00
yyh
ff478b6fef perf(workflow): optimize plugin installation hooks to reduce unnecessary queries and memo invalidations
Only enable the tool query matching the node's provider_type instead of
all four, and use primitive useMemo dependencies instead of the whole
data object to avoid redundant recomputations on every render.
2026-01-30 01:56:07 +08:00
zhsama
e9c9f0d7cc perf(workflow): optimize nested-node cascade deletion 2026-01-30 01:53:19 +08:00
yyh
c8a0a2c00d perf(workflow): skip unnecessary query subscriptions for non-plugin nodes
Add `enabled` parameter to tool query hooks so non-plugin nodes
(LLM, Code, IfElse, etc.) avoid registering React Query observers.
Extract shared matching functions into plugin-install-check utils to
eliminate duplicate logic between the hook and the checklist.
2026-01-30 01:46:50 +08:00
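A hedged sketch of what such an `enabled` gate can look like with TanStack Query; the query key and fetcher are placeholders:

```ts
import { useQuery } from '@tanstack/react-query'

type Tool = { provider_id: string; name: string }
declare function fetchBuiltInTools(): Promise<Tool[]> // assumed fetcher

// Non-plugin nodes pass `enabled: false`, so React Query never registers an
// observer for them or fires the request.
export const useAllBuiltInTools = (enabled = true) =>
  useQuery({
    queryKey: ['builtin-tools'],
    queryFn: fetchBuiltInTools,
    enabled,
  })
```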
zhsama
5d8ba8f8cc feat: Recursively delete nested children (sub-graph) when removing nodes 2026-01-30 01:40:57 +08:00
yyh
03f1197d77 fix(workflow): memoize useCollaborativeWorkflow return value
Wrap the returned object in useMemo to maintain referential stability
and prevent unnecessary re-renders in consumers.
2026-01-30 01:31:25 +08:00
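The memoization itself is small; a sketch with illustrative fields:

```ts
import { useMemo } from 'react'

// Wrap the hook's return object in useMemo so consumers that depend on the
// object identity do not re-render on every call.
export const useStableResult = (isConnected: boolean, send: (msg: string) => void) =>
  useMemo(() => ({ isConnected, send }), [isConnected, send])
```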
yyh
464b92da32 fix(workflow): eliminate infinite loop in plugin install state management
Replace useEffect-based state sync (_pluginInstallLocked/_dimmed) with
render-time derived computation in BaseNode, breaking the cycle of
effect → node data update → re-render → effect. Extract plugin missing
check into a pure utility function for checklist reuse.
2026-01-30 01:30:57 +08:00
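A sketch of the render-time derivation, with an invented node-data shape standing in for the real one:

```ts
type NodeData = { provider_id?: string; installedPlugins: string[] }

// Pure utility: reusable by both the node and the checklist.
export const isPluginMissing = (data: NodeData): boolean =>
  Boolean(data.provider_id && !data.installedPlugins.includes(data.provider_id))

// Inside the node component, derive the flags during render instead of writing
// them back into node data from a useEffect, which is what caused the loop.
export const getRenderFlags = (data: NodeData) => {
  const pluginInstallLocked = isPluginMissing(data)
  return { pluginInstallLocked, dimmed: pluginInstallLocked }
}
```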
zhsama
1a51f52061 Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-29 23:53:34 +08:00
Stream
edce6d4152 refactor: remove streaming structured output from invoke_llm_with_structured_output
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-29 23:42:06 +08:00
Stream
749cebe60d refactor: remove streaming structured output from invoke_llm_with_pydantic_model
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-29 23:42:06 +08:00
Harry
6be800e14f refactor(storage): replace storage proxy with ticket-based URL system
- Removed the storage proxy controller and its associated endpoints for file download and upload.
- Updated the file controller to use the new storage ticket service for generating download and upload URLs.
- Modified the file presign storage to fallback to ticket-based URLs instead of signed proxy URLs.
- Enhanced unit tests to validate the new ticket generation and retrieval logic.
2026-01-29 23:39:24 +08:00
zhsama
20a4a83129 feat: Refactor app export to support sandboxed bundle format 2026-01-29 23:36:19 +08:00
Harry
f52fb919d1 refactor(storage): remove signer, using general file storage
- Removed unused app asset download and upload endpoints, along with sandbox archive and file download endpoints.
- Updated imports in the file controller to reflect the removal of these endpoints.
- Simplified the generator.py file by consolidating the code context field definition.
- Enhanced the storage layer with a unified presign wrapper for better handling of presigned URLs.
2026-01-29 23:01:12 +08:00
Harry
4aea4071a8 fix: monkey patch
- Moved the import of Queue and Empty from the top of the file to within the QueueTransportReadCloser class.
- This change improves encapsulation and ensures that the imports are only available where needed.
2026-01-29 22:33:31 +08:00
Harry
f198540357 feat(bundle): manifest-driven import with sandbox upload
- Add BundleManifest with dsl_filename for 100% tree ID restoration
- Implement two-step import flow: prepare (get upload URL) + confirm
- Use sandbox for zip extraction and file upload via presigned URLs
- Store import session in Redis with 1h TTL
- Add SandboxUploadItem for symmetric download/upload API
- Remove legacy source_zip_extractor, inline logic in service
- Update frontend to use new prepare/confirm API flow
2026-01-29 22:33:31 +08:00
yyh
919d7ef5cd refactor(skill): replace React icon components with CSS Icons
Migrate all icon usage in the skill directory from @remixicon/react
and custom SVG components to Tailwind CSS icon classes (i-ri-*, i-custom-*).
Update MenuItem API to accept string class names instead of React.ElementType.
2026-01-29 21:57:17 +08:00
yyh
a886bfef8a fix(icons): normalize SVG fill/stroke colors to black for CSS Icons pipeline
Hardcoded colors (#354052, #676F83, #98A2B3, #155EEF, white) prevent
the Iconify parseColors callback from converting them to currentColor,
causing the icons to render as background-image instead of mask-image
and making text-* color utilities ineffective.
2026-01-29 21:56:52 +08:00
yyh
f1321765c6 fix: migrations 2026-01-29 21:43:00 +08:00
yyh
6ee9078349 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/.env.example
#	api/uv.lock
#	web/app/components/app/create-app-modal/index.tsx
#	web/app/components/app/create-from-dsl-modal/index.tsx
#	web/app/components/apps/app-card.tsx
#	web/pnpm-lock.yaml
2026-01-29 21:25:28 +08:00
yyh
ff71816373 refactor: use query data for selected file download, keep mutation for tree downloads
The toolbar download button now uses the already-fetched download URL
from useQuery (zero extra requests), while tree node downloads keep
using useMutation with React Query-managed isPending state instead of
a hand-rolled useState wrapper.
2026-01-29 20:48:58 +08:00
zhsama
9b62be2eb1 Refetch suggested questions after reset in context generate modal 2026-01-29 19:17:44 +08:00
zhsama
8f7b9e2de4 feat: Trigger single run in sub-graph after modal opens 2026-01-29 19:17:43 +08:00
zhsama
e47f690cd2 refactor: Replace hardcoded null strategy strings with constant 2026-01-29 19:17:42 +08:00
yyh
92731bffba feat: add ArtifactSlice and integrate artifact preview into skill editor tabs
Introduce a dedicated Zustand ArtifactSlice to manage artifact selection
state with mutual exclusion against the main file tree. Artifact files
from the sandbox can now be opened as tabs in the skill editor, rendered
via a lightweight ArtifactContentPanel that reuses ReadOnlyFilePreview.
2026-01-29 17:52:41 +08:00
Novice
9d99675a1d chore: enhance NestedNodeGraphService with additional node data fields 2026-01-29 17:46:43 +08:00
yyh
fb034a1324 fix: strip leading dot from file extension before matching
Backend returns extensions with a leading dot (e.g., `.png` from
`os.path.splitext`), causing binary/media files to be misclassified
as text since they didn't match the dot-free extension lists.
2026-01-29 17:07:40 +08:00
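A minimal sketch of the normalization, with an abbreviated extension list:

```ts
// The backend reports extensions like ".png" (from os.path.splitext),
// while the matching lists are dot-free, so strip the leading dot first.
const MEDIA_EXTENSIONS = ['png', 'jpg', 'jpeg', 'gif', 'webp', 'mp4']

export const isMediaFile = (extension: string): boolean => {
  const normalized = extension.replace(/^\./, '').toLowerCase()
  return MEDIA_EXTENSIONS.includes(normalized)
}
```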
yyh
5c91311077 fix: use downloadUrl utility instead of window.open for file downloads
Replace window.open with downloadUrl from utils/download to trigger
proper file downloads instead of opening files in a new browser tab.
2026-01-29 17:05:01 +08:00
Joel
807d0e5fba chore: llm editor prompt placeholder support sandbox 2026-01-29 16:57:04 +08:00
yyh
76484406a2 feat(inspect): add read-only file preview in ArtifactsTab
Implement ReadOnlyFilePreview to render sandbox files by type
(code, markdown, image, video, SQLite, unsupported) using existing
skill viewer components with readOnly support. Add
useSandboxFileDownloadUrl and useFetchTextContent hooks for data
fetching, and generalize useFileTypeInfo to accept any file-like
object.
2026-01-29 16:42:22 +08:00
hjlarry
079484d21c feat: sync file tree 2026-01-29 16:33:33 +08:00
zhsama
72c712b3bb refactor: Remove thought process display from context generate chat 2026-01-29 16:03:30 +08:00
zhsama
b93e21f99f feat: Add sourcemap build arg for web dev deployments 2026-01-29 15:43:56 +08:00
Joel
8a5983e071 chore: update no data ui 2026-01-29 15:39:45 +08:00
Joel
9fa42a65e1 chore: no referenced tools data text and loading 2026-01-29 15:23:17 +08:00
Joel
459d9b5842 chore: do not show upgrade to sandbox 2026-01-29 15:00:04 +08:00
Joel
2973968cc6 feat: handle upgrade confirm and icon 2026-01-29 15:00:04 +08:00
Joel
255b7511ae chore: do not show migration again after closing 2026-01-29 15:00:04 +08:00
Joel
37f35bced2 feat: migrate classic to sandbox engine 2026-01-29 15:00:03 +08:00
yyh
8185d146b6 fix(inspect): extract ArtifactsEmpty component and align split-panel empty state
Extract shared empty state card into ArtifactsEmpty component to
deduplicate the no-files and no-selection empty states. Align the
split-panel right-side empty state with the variables tab pattern.
Remove FC type annotations in favor of inline parameter types.
2026-01-29 14:38:08 +08:00
hjlarry
a5ace48f96 feat: code editor cursor sync 2026-01-29 14:28:30 +08:00
yyh
d73a36d6bc fix(inspect): add aria-hidden to decorative icon and use stable keys for path breadcrumb
Mark the empty state SearchLinesSparkle icon as aria-hidden for screen
readers. Replace array-index keys with cumulative path keys (O(n) vs
O(n²)) to satisfy react/no-array-index-key and improve key stability.
2026-01-29 14:20:12 +08:00
yyh
3ad05be9ca fix(inspect): align artifacts empty state with variables empty state design
Replace the minimal centered text card in artifacts tab empty state with
a full-height card layout matching the variables tab, including icon
container (SearchLinesSparkle), title, description, and learn more link.
2026-01-29 14:04:30 +08:00
yyh
bacc5c32f5 feat(portal): add useContextMenuFloating hook for coordinate-based context menus
Replace useClickAway + fixed positioning in file tree context menu with
a floating-ui based hook that provides collision detection (flip/shift),
ARIA role="menu", Escape/outside-click dismiss, and scroll dismiss via
passive capture listener with ref-stabilized callback.
2026-01-29 14:01:36 +08:00
yyh
efb3657cfe fix(skill): use downloadUrl utility instead of window.open for file downloads
Replaces window.open with the downloadUrl helper from utils/download.ts
to trigger proper browser download behavior via <a download> instead of
opening a new tab that may display garbled content.
2026-01-29 12:49:15 +08:00
yyh
bd4b76db5c Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-29 12:44:37 +08:00
yyh
8a96f9f8df Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-29 11:44:01 +08:00
hjlarry
90bb7bf2f3 feat: code/txt editor sync 2026-01-29 09:16:52 +08:00
hjlarry
26dd6c128c feat: mouse right click can add new comment 2026-01-29 09:13:12 +08:00
Harry
0495dc5085 feat(skill): tool switcher for llm node
- Added an `enabled` field to `DifyCliToolConfig` and `ToolDependency` to manage tool activation status.
- Updated `DifyCliConfig` to handle tool dependencies more effectively, ensuring only enabled tools are processed.
- Refactored `SkillCompiler` to utilize `tool_id` for better identification of tools and improved handling of disabled tools.
- Introduced a new method `_extract_disabled_tools` in `LLMNode` to streamline the extraction of disabled tools from node data.
- Enhanced metadata parsing to account for tool enablement, improving overall tool management.
2026-01-29 01:21:18 +08:00
zhsama
23ee9e618b fix: Remove default completion params and omit empty params from model config
2026-01-29 01:10:03 +08:00
yyh
8326b9e3e5 refactor(skill): remove React.FC type annotations from all components
Replace FC<Props> pattern with direct props typing in function parameters
for better TypeScript inference and modern React best practices.
2026-01-28 23:34:08 +08:00
yyh
999587fbdd fix(workflow): replace hardcoded strings with i18n in test run panel title
Use existing i18n keys for internationalization:
- singleRun.testRun: "Test Run" / "测试运行"
- common.running: "Running" / "运行中"
2026-01-28 23:03:49 +08:00
yyh
f16516549e feat(workflow): add clear button to workflow test run panel
Features:
- Add refresh button to clear test run history (data, inputs, node highlights)
- Persist workflowRunningData when closing panel (from previous commit)

Code quality improvements:
- Refactor to declarative pattern: effectiveTab derived from state, not set in effects
- Replace && with ternary operators for conditional rendering (Vercel best practices)
- Fix created_by type: change from string to object to match backend API
- Remove `as any` type assertion, use proper type-safe access
- Title now declaratively shows status based on workflowRunningData presence

Files changed:
- use-workflow-interactions.ts: add handleClearWorkflowRunHistory hook
- workflow-preview.tsx: declarative tab state, clear button, type-safe props
- types.ts: fix created_by type definition
- test files: update mock data to match corrected types
2026-01-28 22:48:27 +08:00
Stream
2df4445aa7 fix: structured output should be non-streaming 2026-01-28 21:38:58 +08:00
yyh
d63a012680 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-28 21:29:25 +08:00
zhsama
2aa6dcaa1a feat: Improve error messages for missing workflow outputs 2026-01-28 21:23:57 +08:00
Harry
144089d3ed feat(skill): tool switcher implementation
- Introduced a new regex pattern for tool groups to support multiple tool placeholders.
- Updated the DefaultToolResolver to format outputs for specific built-in tools (bash, python).
- Enhanced the SkillCompiler to filter out disabled tools in tool groups, ensuring only enabled tools are rendered.
- Added tests to verify the correct behavior of tool group filtering and rendering.
2026-01-28 21:16:10 +08:00
yyh
3dad0b38df refactor(workflow)!: persist workflowRunningData when closing debug panel
Keep test run results (RESULT/DETAIL/TRACING tabs) available after
closing and reopening the debug panel, or switching between Graph
and Skill views.
2026-01-28 21:15:45 +08:00
Joel
c600fdafcd fix: skill always same 2026-01-28 21:10:50 +08:00
zxhlyh
3af927556e fix: workflow result generation render 2026-01-28 21:01:47 +08:00
Harry
0c1e812d21 fix: defer sandbox SDK imports for gevent 2026-01-28 20:51:22 +08:00
Harry
e95241b94f fix: command node 2026-01-28 20:51:22 +08:00
Harry
2513e191fb feat: add computer use feature to LLMNodeData
- Introduced a new boolean field `computer_use` in LLMNodeData to indicate whether the computer use feature should be enabled.
- Updated LLMNode to check the `computer_use` field when determining sandbox usage, ensuring proper error handling if sandbox is not available.
- Removed the obsolete `_has_skill_prompt` method to streamline the code.
2026-01-28 20:51:22 +08:00
yyh
f9f3d33911 fix(variable-inspect): anchor clear button to right side of tab header
Position the clear button relative to the right divider instead of
following the tab labels, ensuring consistent positioning across
different language translations. Also fix tab switching jitter by
setting a fixed header height.
2026-01-28 20:34:23 +08:00
yyh
acec271e88 fix(skill): resolve race condition in upload progress tracking
Use shared object reference instead of separate variables to track
upload progress across concurrent Promise.all operations, preventing
progress bar from showing incorrect or regressing values.
2026-01-28 20:25:12 +08:00
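A sketch of the shared-reference fix under the assumption that uploads run through Promise.all; the uploader is a placeholder:

```ts
type Progress = { done: number; total: number }

// All concurrent uploads mutate one counter object, so the reported progress
// can only move forward and never regresses between branches.
export const uploadAllWithProgress = async (
  files: File[],
  uploadOne: (file: File) => Promise<void>, // assumed uploader
  onProgress: (progress: Progress) => void,
) => {
  const progress: Progress = { done: 0, total: files.length }
  await Promise.all(files.map(async (file) => {
    await uploadOne(file)
    progress.done += 1 // every branch updates the same object
    onProgress({ ...progress })
  }))
}
```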
yyh
76c4d7f62c feat(skill): add preprocessing for markdown files before upload
Introduce prepareSkillUploadFile utility that wraps markdown file content
in a JSON payload format before uploading. This ensures consistent handling
of skill files across file upload, folder upload, and drag-and-drop operations.
2026-01-28 20:17:21 +08:00
yyh
fb78a4450d feat: implement node reordering functionality in file tree component 2026-01-28 19:38:41 +08:00
Novice
209e4f8f7b fix: structured output prompt skill should be false 2026-01-28 19:07:01 +08:00
zhsama
636156f5da fix: Fix workflow inspect vars to include parent nodes in subgraph mode 2026-01-28 18:24:04 +08:00
zhsama
7408405c91 feat: Add subgraph output validation for single-run debugging 2026-01-28 18:24:04 +08:00
yyh
135fc45ae9 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-28 17:50:58 +08:00
yyh
a7890c140e fix: i18n in canvas/skill toggle
2026-01-28 17:22:04 +08:00
Joel
bd80cd217d fix: no auth label text and default handle auth not back 2026-01-28 16:48:05 +08:00
Joel
ceea593010 feat: tool no auth node 2026-01-28 16:48:05 +08:00
yyh
3c7f641f60 fix(skill): remove non-functional copy menu item from file tree context menu 2026-01-28 16:34:57 +08:00
yyh
000bdf6bc0 style: uploading state for upload folder 2026-01-28 16:34:57 +08:00
hjlarry
4c77b5f5c5 feat: sync the markdown file dirty status 2026-01-28 16:29:17 +08:00
Joel
7cf54238c3 feat: handle provider label and action label 2026-01-28 16:14:57 +08:00
yyh
f00d9186e4 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/commands.py
2026-01-28 16:03:44 +08:00
Joel
807697c664 chore: icon to fn 2026-01-28 16:01:22 +08:00
Joel
116e075b94 feat: tool icon map 2026-01-28 16:01:22 +08:00
yyh
a784121070 fix(skill): align upload tooltip with Figma design
Replace animated pulse background with static progress bar using
design tokens and increase upload icon to 24px.
2026-01-28 15:58:49 +08:00
yyh
543802cc65 feat(skill): add three-state upload progress tooltip
Replace simple uploading/success indicator with a full three-state
tooltip (uploading, success, partial_error) that overlays the DropTip
position. Add upload slice to skill editor store and wire progress
tracking into file/folder upload operations.
2026-01-28 15:52:08 +08:00
Joel
156b779a1d chore: enhance ui 2026-01-28 15:43:03 +08:00
Joel
bdb923984f chore: data reload problem 2026-01-28 15:43:03 +08:00
Joel
680b238a36 chore: only pass enabled tools to draft 2026-01-28 15:43:02 +08:00
Joel
c88acf48bf feat: add tools sync config 2026-01-28 15:43:01 +08:00
Joel
0ae02938e6 feat: fetch tools and set tools enabled from api 2026-01-28 15:43:01 +08:00
Novice
ca95b6684f chore: improve assemble variable context generation 2026-01-28 15:34:28 +08:00
yyh
b7fc738cfb fix(skill): indicator css 2026-01-28 15:12:11 +08:00
yyh
d07ba03a2a test(web): add unit tests for storage utility 2026-01-28 15:12:11 +08:00
yyh
a0526143e2 feat(web): add resizable sidebar to skill page with localStorage persistence 2026-01-28 15:12:11 +08:00
Harry
190453d397 fix: add timeout to queue.get() in QueueTransportReadCloser to prevent indefinite blocking 2026-01-28 14:45:17 +08:00
yyh
20d19fead9 fix(web): align variable inspect tabs in empty state
- reserve left column width in empty layout to prevent tab shift
- keep close action aligned with split-panel header
2026-01-28 14:15:55 +08:00
yyh
27639600f9 feat(web): add FileDownload01 icon and use it in artifacts tab
Replace RiDownloadLine with a file-specific download icon for the
artifacts file list and remove the unused copy-path button.
2026-01-28 14:11:18 +08:00
yyh
0b6522df42 refactor(web): extract split layout for variable inspect
- add SplitPanel to share left/right shell and narrow menu handling
- reuse InspectHeaderProps for tab header + actions across tabs
- refactor variables/artifacts tabs to plug into shared split layout
- align right-side header/close behavior and consolidate empty/loading flows
2026-01-28 14:06:34 +08:00
Harry
40a8e8febc fix: add timeout to queue.get() in DockerDemuxer to prevent indefinite blocking 2026-01-28 13:52:31 +08:00
Harry
392cec2f54 Revert "refactor: replace threading with gevent primitives for cooperative scheduling"
This reverts commit 27781d6b7e.
2026-01-28 13:51:48 +08:00
Harry
27781d6b7e refactor: replace threading with gevent primitives for cooperative scheduling
Updated multiple modules to utilize gevent for concurrency, ensuring compatibility with gevent-based WSGI servers. This includes replacing threading.Thread and threading.Event with gevent.spawn and gevent.event.Event, respectively, to prevent blocking and improve performance during I/O operations.

- Refactored SandboxBuilder, Sandbox, CommandFuture, and DockerDemuxer to use gevent.
- Added detailed docstrings explaining the changes and benefits of using gevent primitives.

This change enhances the responsiveness and efficiency of the application in a gevent environment.
2026-01-28 13:29:53 +08:00
yyh
ef6f7f2a6c refactor: extract InspectLayout composition component to eliminate repeated header/close patterns
Consolidate duplicated TabHeader + close button layout (8 occurrences) into a single
InspectLayout wrapper. Replace boolean props with children slots for better composition.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:50:04 +08:00
yyh
d10d3b7021 chore: api linter 2026-01-28 11:42:55 +08:00
yyh
a38b8987b4 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/core/app/apps/workflow/app_runner.py
2026-01-28 11:41:58 +08:00
hjlarry
0d9de79fae feat: skill markdown cursor pos sync 2026-01-28 11:03:21 +08:00
Novice
cd688a0d8f fix: nested node single step run
2026-01-28 10:18:10 +08:00
Stream
a571b3abb2 chore: fix type issues 2026-01-28 06:43:08 +08:00
Stream
9d287647c1 fix: load $context correctly in step run llm node 2026-01-28 06:35:01 +08:00
Stream
403114eee9 chore: run ruff 2026-01-28 05:20:37 +08:00
Stream
efbd92fc7f chore: run ruff 2026-01-28 05:20:19 +08:00
Stream
71d44ec52c fix: union type handling 2026-01-28 02:07:03 +08:00
zhsama
c46856d5ac feat: Validate LLM node context before running workflow 2026-01-27 23:54:42 +08:00
zhsama
ffca687f4e fix: Add type safety for LLM node context variable_selector 2026-01-27 23:12:03 +08:00
Joel
fd078f8853 fix: computer tooltip ui 2026-01-27 20:20:29 +08:00
Joel
af543d2a7f chore: tool ui 2026-01-27 20:20:29 +08:00
Joel
015befad43 feat: config reference tool ui 2026-01-27 20:20:29 +08:00
yyh
ae9c7d4e9f Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
2026-01-27 19:55:03 +08:00
yyh
daa6413353 chore: rm skills 2026-01-27 19:50:26 +08:00
zhsama
5b485d7cd0 Merge branch 'zhsama/dar-es-salaam' into feat/support-agent-sandbox
# Conflicts:
#	api/core/llm_generator/llm_generator.py
#	api/core/llm_generator/output_models.py
#	api/core/llm_generator/output_parser/structured_output.py
#	api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py
2026-01-27 19:45:02 +08:00
zhsama
39ec2b3277 feat: Add file type support to LLM node JSON schema editor 2026-01-27 19:39:32 +08:00
Harry
bf66627204 feat(skills): enhance skill retrieval by incorporating user context and app model in API endpoints 2026-01-27 19:11:52 +08:00
Harry
506163ab2d feat(sandbox): restructure file handling by introducing a new inspector module with runtime and archive sources 2026-01-27 19:11:52 +08:00
Harry
951af125af feat(skills): implement API endpoints for retrieving skill references in workflows and add related data models 2026-01-27 19:11:52 +08:00
Stream
a4a85f7168 feat: improve the suggest-question prompt 2026-01-27 19:10:43 +08:00
Stream
8174b67e24 fix: call get_text_content instead of accessing content directly 2026-01-27 18:56:13 +08:00
Stream
ae23d30da2 fix: call get_text_content instead of accessing content directly 2026-01-27 18:54:36 +08:00
Stream
adf104becd fix: enhanced structured output 2026-01-27 18:33:51 +08:00
zxhlyh
17807dbf18 fix: llm generation log 2026-01-27 17:13:40 +08:00
yyh
5d41f67fe1 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/core/workflow/graph_events/__init__.py
2026-01-27 16:22:02 +08:00
yyh
ab52550abe feat(sandbox): use extension field for file icon type mapping
Enhance getFileIconType to accept an extension parameter and cover all
13 FileAppearanceTypeEnum types using an O(1) Map lookup. Update all
call sites to pass the API-provided extension for accurate icon display.
2026-01-27 16:21:03 +08:00
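A sketch of the Map-based lookup; the enum members and mappings shown here are illustrative, not the full 13-type table:

```ts
enum FileAppearanceTypeEnum {
  image = 'image',
  video = 'video',
  code = 'code',
  markdown = 'markdown',
  custom = 'custom',
}

// O(1) lookup from normalized extension to icon type.
const EXTENSION_ICON_MAP = new Map<string, FileAppearanceTypeEnum>([
  ['png', FileAppearanceTypeEnum.image],
  ['jpg', FileAppearanceTypeEnum.image],
  ['mp4', FileAppearanceTypeEnum.video],
  ['ts', FileAppearanceTypeEnum.code],
  ['md', FileAppearanceTypeEnum.markdown],
])

export const getFileIconType = (extension: string): FileAppearanceTypeEnum =>
  EXTENSION_ICON_MAP.get(extension.replace(/^\./, '').toLowerCase())
    ?? FileAppearanceTypeEnum.custom
```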
Harry
85ecf1a198 feat(sandbox): add file extension attribute to SandboxFileNode and update related logic 2026-01-27 15:58:14 +08:00
Joel
046aff93f6 fix: metadata not set in llm prompt 2026-01-27 15:35:16 +08:00
Harry
57a588265a refactor(archive-storage): streamline archive handling by introducing dynamic archive name and path properties 2026-01-27 15:31:26 +08:00
Novice
2fb391a642 fix: generation stream abort 2026-01-27 15:28:35 +08:00
yyh
c9e428facf Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-27 15:21:22 +08:00
yyh
1beafd8558 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-27 15:17:40 +08:00
Harry
504280995d refactor(tool-access): rename ToolKey to ToolDescription and update ToolAccessPolicy to use mappings for allowed tools and credentials 2026-01-27 15:14:12 +08:00
Harry
b889ab8853 refactor(download): replace wget with curl for asset and archive downloads 2026-01-27 15:14:07 +08:00
Harry
394a811e5e fix(local-env): replace os.rmdir with shutil.rmtree for directory removal 2026-01-27 15:13:55 +08:00
yyh
d098e72c13 feat(variable-inspect): add Artifacts tab with sandbox file tree browser
Refactor the variable inspect panel into a tabbed layout with Variables
and Artifacts tabs. Extract variable logic into VariablesTab, add new
ArtifactsTab with sandbox file tree selection and preview pane, and
improve accessibility across tree nodes and interactive elements.
2026-01-27 15:05:11 +08:00
Harry
a29f569e08 feat(sandbox): enhance logging with colored output and add AppAssetAttrsInitializer 2026-01-27 14:19:27 +08:00
Harry
64b6a5dd31 feat(sandbox-zip-service): using sandbox to zip files
- refactor allllllllll!!!!!!
2026-01-27 14:19:27 +08:00
Harry
9094f9d313 feat(zip-sandbox): special use of sandbox implementation 2026-01-27 14:19:27 +08:00
Harry
89eb7b17db feat(dify-cli): session level tool white list 2026-01-27 14:19:26 +08:00
hjlarry
a9e1394011 add skill markdown file collaboration 2026-01-27 14:08:44 +08:00
yyh
61608e0423 fix: remove fixed width on collaboration avatar tooltip to prevent username truncation 2026-01-27 13:48:48 +08:00
Joel
1ea7d2d9a1 chore: sandbox llm does not show tools 2026-01-27 13:42:33 +08:00
zxhlyh
53bc060cea fix: chat generation render 2026-01-27 13:37:08 +08:00
yyh
74f94633d7 fix(skill): tighten cached content typing 2026-01-27 13:09:23 +08:00
yyh
a6a1ac4fa6 fix(skill): prevent infinite save loop caused by unstable saveFile reference
Use useRef to store saveFile reference and remove it from useEffect
dependencies to prevent cleanup from re-triggering on reference changes.
Also normalize metadata before comparison when clearing dirty state to
ensure filtered tools match correctly.
2026-01-27 13:02:55 +08:00
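A sketch of the ref-based stabilization, with invented names; only the useRef/useEffect structure reflects the fix described above:

```ts
import { useEffect, useRef } from 'react'

// The effect's cleanup reads the latest saveFile through a ref, so it does not
// re-run (and re-save) whenever the function identity changes.
export const useSaveOnUnmount = (fileId: string, saveFile: (id: string) => void) => {
  const saveFileRef = useRef(saveFile)
  saveFileRef.current = saveFile

  useEffect(() => {
    return () => {
      saveFileRef.current(fileId) // cleanup only depends on fileId
    }
  }, [fileId])
}
```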
yyh
c5ccdcc331 update skills 2026-01-27 12:51:21 +08:00
Novice
f0f796fdc0 fix: remove additional sse event 2026-01-27 10:49:37 +08:00
hjlarry
c4e5eba6c3 switch to skills tab, keep ws connected and ensure has leader 2026-01-27 10:22:05 +08:00
Novice
585e11a1fc fix: llm invoke condition 2026-01-27 10:12:51 +08:00
zhsama
54fce5e903 feat: Add @agent icon and implement agent alias variables in workflow inspector
2026-01-27 02:42:37 +08:00
yyh
772dbe620d fix(workflow): disable view switch during preview run instead of mounted guard
Simpler approach: disable the view picker toggle when preview is running,
preventing users from switching views during active runs.

This replaces the previous mounted ref guard approach (commits a0188bd9b5,
b7f1eb9b7b, 8332f0de2b) which added complexity to handle post-unmount
operations. Disabling the toggle is more direct and follows KISS principle.

Changes:
- Add disabled prop to ViewPicker based on isResponding state
- Revert mounted ref guards in use-chat-flow-control.ts
- Revert isMountedRef parameter in use-nodes/edges-interactions-without-sync.ts
- Revert defensive type check in markdown-utils.ts (no longer needed)
2026-01-27 01:31:22 +08:00
yyh
8332f0de2b fix(workflow): reinitialize mounted ref on effect setup for StrictMode
In React StrictMode (dev mode), effects are run twice to help detect
side effects. The cleanup-only pattern left isMountedRef as false after
StrictMode's simulated unmount-remount cycle, causing stop/cancel
operations to be skipped even when the component was mounted.

Now the effect setup explicitly sets isMountedRef.current = true,
ensuring correct behavior in both development and production.
2026-01-27 01:23:08 +08:00
yyh
b7f1eb9b7b fix(markdown)!: return empty string for non-string content in preprocessors
Related to a9c5201485 - when switching views during active preview run,
the markdown preprocessors could receive non-string content (e.g., frozen
arrays from immer). Returning the original value caused ReactMarkdown to
fail with "Cannot assign to read only property" error.

Now both preprocessLaTeX and preprocessThinkTag return '' for non-string
input, preventing runtime errors during view switches.
2026-01-27 01:10:00 +08:00
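A sketch of the guard; the actual tag rewriting is elided since only the non-string early return matters here:

```ts
export const preprocessThinkTag = (content: unknown): string => {
  if (typeof content !== 'string')
    return '' // prevents ReactMarkdown from receiving a frozen array
  // ...actual <think> tag rewriting happens here (elided)
  return content
}
```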
yyh
a0188bd9b5 fix(workflow)!: add mounted guard to prevent ReactFlow operations after unmount
When switching from graph view to skill view during an active preview run,
SSE callbacks continue executing and attempt to update ReactFlow node/edge
states. This could cause errors since the component is unmounted.

Add optional `isMountedRef` parameter to `useNodesInteractionsWithoutSync`
and `useEdgesInteractionsWithoutSync` hooks. When provided, operations are
skipped if the component has unmounted, preventing potential errors while
allowing the SSE connection to continue running in the background.

BREAKING CHANGE: `useNodesInteractionsWithoutSync` and
`useEdgesInteractionsWithoutSync` now accept an optional `isMountedRef`
parameter. Existing callers are unaffected as the parameter is optional.
2026-01-27 00:43:58 +08:00
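A sketch of how such an optional mounted guard can be threaded through a hook; the handler name and body are illustrative:

```ts
import { useCallback, type MutableRefObject } from 'react'

// When the ref is provided and the component has unmounted,
// the ReactFlow update becomes a no-op.
export const useNodesInteractionsWithoutSync = (
  isMountedRef?: MutableRefObject<boolean>,
) => {
  const handleNodeCancelRunningStatus = useCallback(() => {
    if (isMountedRef && !isMountedRef.current)
      return
    // ...update ReactFlow node state here (elided)
  }, [isMountedRef])

  return { handleNodeCancelRunningStatus }
}
```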
Stream
6b439b1699 fix: behave correctly when user is not talking Dify 2026-01-27 00:33:41 +08:00
yyh
bf12445960 fix(workflow): make FileTree and ArtifactsSection scroll independently
The sidebar layout was broken when ArtifactsSection expanded - it would
squeeze the FileTree and neither area could scroll. This restructures the
layout so each section has its own scroll container with proper height
constraints.
2026-01-27 00:14:33 +08:00
Stream
b57b1a6926 feat: generate better instructions 2026-01-27 00:09:11 +08:00
Stream
a9fb73fa31 fix: avoid flask backend error 2026-01-27 00:02:41 +08:00
yyh
a9c5201485 refactor(workflow)!: persist the debug state of the chatflow preview panel to the zustand store and split useChat hook into modular files 2026-01-26 23:21:44 +08:00
yyh
87d033e186 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-26 23:12:44 +08:00
Joel
55120ea9f7 chore: prompt change also clear useless tool id 2026-01-26 18:29:38 +08:00
Joel
b99311baa0 chore: remove useless toolid 2026-01-26 18:09:46 +08:00
zhsama
ec5964c419 feat: Add support for array[message](List[promptMessage]) variable type in workflow
2026-01-26 17:50:18 +08:00
yyh
5ac70633a2 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	web/pnpm-lock.yaml
2026-01-26 17:09:36 +08:00
Joel
c9edd71395 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-26 16:59:31 +08:00
Joel
2e954388f5 merge 2026-01-26 16:57:45 +08:00
yyh
2c02c8ac18 fix(sandbox): fetch artifacts on mount for blue dot indicator
Remove enabled condition so data is fetched when component mounts,
allowing blue dot to show when files exist even before expanding.
TanStack Query handles request deduplication automatically.
2026-01-26 16:53:16 +08:00
zhsama
5cdd69e7e0 refactor: Refactor context generation to use available vars 2026-01-26 16:50:44 +08:00
yyh
e5e43bc2b9 fix(sandbox): fix guide line offset and use spinner for loading
- Adjust TreeGuideLines lineOffset to 2 to center lines under folder icons
- Replace skeleton loading with spinner in header area
2026-01-26 16:37:51 +08:00
yyh
70d88bc522 refactor(sandbox): align artifacts tree with file tree styles
Reuse TreeGuideLines for indent lines, use FileTypeIcon for file icons,
and match folder open/close icons with the main file tree component.
2026-01-26 16:19:28 +08:00
yyh
765b548be4 fix(a11y): improve accessibility for artifacts tree
Add aria-label, aria-expanded attributes and focus-visible styles
for folder buttons. Hide loading skeleton from assistive tech.
2026-01-26 16:01:22 +08:00
Joel
830c286062 feat: support credential 2026-01-26 15:55:24 +08:00
Joel
f90f3287d8 fix: provider and tool auth not shown on click 2026-01-26 15:55:23 +08:00
yyh
d396d92059 feat(sandbox): implement artifacts section UI
- Replace placeholder with functional ArtifactsSection component
- Add ArtifactsTree component for file tree rendering
- Support expand/collapse with lazy loading
- Show blue dot indicator when collapsed with files
- Add empty state card with hint text
- Add download button on file hover
- Add i18n translations (en-US, zh-Hans)
2026-01-26 15:43:06 +08:00
yyh
166b4a5a2b feat(sandbox): add sandbox file API service layer
- Add types for sandbox file API (SandboxFileNode, SandboxFileDownloadTicket)
- Add oRPC contracts for listFiles and downloadFile endpoints
- Add TanStack Query hooks (useGetSandboxFiles, useDownloadSandboxFile)
- Add useSandboxFilesTree hook with flat-to-tree conversion
2026-01-26 15:40:27 +08:00
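One way a flat-to-tree conversion like the one mentioned above can look; the field names are assumptions:

```ts
type SandboxFileNode = { path: string; is_dir: boolean }
type TreeNode = { name: string; path: string; isDir: boolean; children: TreeNode[] }

// Build a nested tree from a flat list of paths, creating intermediate
// directory nodes on demand and deduplicating them by full path.
export const buildFileTree = (files: SandboxFileNode[]): TreeNode[] => {
  const root: TreeNode = { name: '', path: '', isDir: true, children: [] }
  const byPath = new Map<string, TreeNode>([['', root]])

  for (const file of files) {
    const parts = file.path.split('/').filter(Boolean)
    let parentPath = ''
    parts.forEach((name, index) => {
      const path = parentPath ? `${parentPath}/${name}` : name
      if (!byPath.has(path)) {
        const node: TreeNode = {
          name,
          path,
          isDir: index < parts.length - 1 || file.is_dir,
          children: [],
        }
        byPath.get(parentPath)!.children.push(node)
        byPath.set(path, node)
      }
      parentPath = path
    })
  }
  return root.children
}
```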
Joel
694ed4f5e3 chore: small ui 2026-01-26 15:06:33 +08:00
yyh
e39711f9ea perf: remove unnecessary tree cache invalidation on file save
The tree invalidation was causing redundant network requests since the
file content cache is already managed via setQueryData in the save manager.
2026-01-26 15:02:42 +08:00
Joel
154018fe31 chore: fix tool ui 2026-01-26 14:52:56 +08:00
Joel
010cbd0a73 chore: remove useless meta config when saving file 2026-01-26 14:28:35 +08:00
Novice
87bcd70f59 feat: add tool call based structured output 2026-01-26 14:17:57 +08:00
Harry
39799b9db7 feat(sandbox): artifact browser 2026-01-26 14:13:36 +08:00
Joel
453844b9e8 chore: editor config in new slide 2026-01-26 14:07:35 +08:00
yyh
677775079f fix: use IS_CLOUD_EDITION for 'Managed by SaaS' tag visibility
Change from !IS_CE_EDITION to IS_CLOUD_EDITION to ensure the tag only
shows in cloud edition, not in enterprise or other self-hosted variants.
2026-01-26 11:42:55 +08:00
Joel
3f4d6b9452 fix: clicking readme hides config 2026-01-26 11:40:28 +08:00
yyh
4f75d7f8e2 fix: hide 'Managed by SaaS' tag in CE edition for sandbox providers
The tag should only display in SaaS version since CE edition also has
system config from migrations but the label is misleading for self-hosted.
2026-01-26 11:33:40 +08:00
Joel
902468e3e0 chore: tool picker height 2026-01-26 11:30:48 +08:00
Joel
c75afdb321 chore: no auth no choose show tools 2026-01-26 11:30:48 +08:00
yyh
567634f2a8 update skills 2026-01-26 11:20:03 +08:00
yyh
83c3c23c27 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-26 11:18:41 +08:00
Joel
3bde614bd3 feat: enabled tool count 2026-01-26 11:00:29 +08:00
Joel
9a68243fcc feat: show provider config 2026-01-26 10:44:09 +08:00
hjlarry
b44169de41 fix mysql-connector-python 9.6 not working 2026-01-26 09:37:21 +08:00
Stream
bd2ee70c63 fix: remove output tokens constraint
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-26 02:53:40 +08:00
Harry
87dba2765b fix(tests): remove unnecessary noqa directive from test function 2026-01-26 01:12:07 +08:00
Harry
af17e20f99 feat(sandbox): implement sandbox archive upload/download endpoints and security enhancements
- Added sandbox archive upload and download proxy endpoints with signed URL verification.
- Introduced security helpers for generating and verifying signed URLs.
- Updated file-related API routes to include sandbox archive functionality.
- Refactored app asset storage methods to streamline download/upload URL generation.
2026-01-26 01:11:53 +08:00
yyh
a471caf787 test(skill-editor): add keyboard shortcut tests for SkillSaveProvider
Cover Ctrl+S and Cmd+S save triggers, guard clauses for start tab and
null active tab, success/error toast notifications, and fallback
registry integration.
2026-01-25 21:21:35 +08:00
yyh
cdcd9fd1a2 refactor(skill-editor): lift Ctrl+S handler to Provider and remove redundant hook
Move global keyboard shortcut handling from component-level hook to
SkillSaveProvider, eliminating duplicate event listener registrations
and race conditions. Delete use-skill-file-save hook as its logic is
now consolidated in the provider with direct store access.
2026-01-25 21:17:25 +08:00
yyh
84d032c104 fix: test
2026-01-25 19:54:09 +08:00
yyh
b305abdc8f fix(skill-editor): align autosave fallbacks
- use cleanup-based save on tab switch with stable fallback snapshots
- add fallback registry for metadata-only autosave consistency
- add autosave/save-manager tests
2026-01-25 19:51:56 +08:00
yyh
e1e7b7e88a refactor(skill): extract save logic into SkillSaveProvider with auto-save support
Centralize file save operations using Context/Provider pattern for better
maintainability. Add auto-save on tab switch, visibility change, page unload,
and component unmount.
2026-01-25 19:09:33 +08:00
yyh
150730d322 rm 2026-01-25 17:19:55 +08:00
zhsama
8754b321df fix: Add sandbox feature to workflow features mapping 2026-01-25 16:30:50 +08:00
Harry
47835aaad9 feat(app-assets): add upload functionality and update proxy handling 2026-01-25 15:34:53 +08:00
Harry
c035133353 refactor(asset-storage): fix security problems 2026-01-25 03:44:36 +08:00
hjlarry
22287e16e2 fix header 2026-01-25 00:16:30 +08:00
hjlarry
1c943eb89f fix migration file version 2026-01-25 00:05:19 +08:00
hjlarry
4c596aaac2 Merge branch 'feat/collaboration2' into feat/support-agent-sandbox 2026-01-25 00:00:03 +08:00
hjlarry
f4321279b9 fix migration file 2026-01-24 19:51:43 +08:00
hjlarry
2a372df33c fix web unittest 2026-01-24 19:49:20 +08:00
hjlarry
ef536ba909 fix 2026-01-24 15:30:45 +08:00
hjlarry
b192c6e658 fix package version 2026-01-24 15:26:59 +08:00
autofix-ci[bot]
89b2ae01a6 [autofix.ci] apply automated fixes 2026-01-24 07:26:47 +00:00
hjlarry
edb4457684 Merge remote-tracking branch 'myori/main' into feat/collaboration2 2026-01-24 15:22:07 +08:00
hjlarry
bb6d6a4f96 improve compute nodes diff speed 2026-01-24 15:04:51 +08:00
zhsama
a36ea5addc Merge branch 'main' into feat/support-agent-sandbox
# Conflicts:
#	api/pyproject.toml
#	api/uv.lock
2026-01-23 22:31:01 +08:00
yyh
98a050e664 chore: support folder upload in root file tree 2026-01-23 21:17:49 +08:00
Yeuoly
0419dc9632 fix(docker): warn about localhost FILES_URL and forward port to container 2026-01-23 19:57:41 +08:00
Harry
eed9faedaa refactor: replace AppAssetsInitializer with DraftAppAssetsInitializer and ensure assets directory creation in app_assets_initializer
2026-01-23 18:51:31 +08:00
Harry
41dd93c6b2 fix(e2b): e2b env variable not working 2026-01-23 18:42:13 +08:00
yyh
00ae975f0b fix: comments 2026-01-23 18:25:43 +08:00
yyh
c51c40ede7 fix: migrations 2026-01-23 18:04:42 +08:00
yyh
65ffc5b3d9 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-23 18:04:32 +08:00
yyh
d39708b3f6 fix(csp): add wasm-unsafe-eval to enable WebAssembly execution
SQLite preview feature requires WebAssembly to run wa-sqlite, but CSP
policy was blocking WebAssembly.instantiate() without wasm-unsafe-eval
directive in script-src.
2026-01-23 17:38:38 +08:00
Harry
e1b0ab5c3f feat(app_asset_service): implement asynchronous file deletion in asset management
- Added a threaded function to handle the deletion of storage files asynchronously after asset removal.
- Updated the asset removal logic to include a call to the new deletion function, improving performance and responsiveness during asset management operations.
2026-01-23 17:27:13 +08:00
Yeuoly
8a6e1a695b fix: import storage module and ensure file record creation for presigned URL uploads 2026-01-23 17:26:26 +08:00
Novice
e9f4bde18f fix: assemble variable support nested node format 2026-01-23 17:22:43 +08:00
Joel
6551814396 feat: add all can bundle to provider 2026-01-23 17:17:28 +08:00
yyh
88887ea58e feat(icons): add custom FileImageFill icon for image file type
Replace RiFileImageFill with a custom FileImageFill icon to provide
a more distinctive visual representation for image files in the file tree.
2026-01-23 17:09:40 +08:00
yyh
dcd79df9fb fix: upload folder support parent_id 2026-01-23 17:09:39 +08:00
Harry
63ffac6afe fix(asset_download_service): suppress error output during download command execution
- Updated the download script to redirect error output to /dev/null, preventing unnecessary error messages from being displayed.
- Added an explicit exit command to ensure the script terminates correctly after execution.
2026-01-23 17:06:11 +08:00
Harry
248fa38c34 refactor(storage): unified storage cache layer and preasign interface
- Updated storage wrappers to utilize a new base class, StorageWrapper, for better delegation of methods.
- Introduced SilentStorage to handle read operations gracefully by returning empty values instead of raising exceptions.
- Enhanced CachedPresignStorage to support batch caching of download URLs, improving performance.
- Refactored FilePresignStorage to support both presigned URLs and signed proxy URLs for downloads.
- Updated AppAssetService to utilize the new storage structure, ensuring consistent asset management.
2026-01-23 17:01:10 +08:00
Harry
3165f3adbe feat(app_assets): enhance asset management with CachedPresignStorage
- Introduced CachedPresignStorage to cache presigned download URLs, reducing repeated API calls.
- Updated AppAssetService to utilize CachedPresignStorage for improved performance in asset download URL generation.
- Refactored asset builders and packagers to support the new storage mechanism.
- Removed unused AppAssetsAttrsInitializer to streamline initialization processes.
- Added unit tests for CachedPresignStorage to ensure functionality and reliability.
2026-01-23 16:10:28 +08:00
Joel
b5d843b1fd feat: combine 2 exports 2026-01-23 15:50:33 +08:00
yyh
c4714d757d style(file-tree): node menu item 2026-01-23 15:49:11 +08:00
yyh
5ac6dc62e7 fix(app-asset): refresh tree on upload failure to show orphaned nodes
Change onSuccess to onSettled for upload mutations so the file tree
refreshes regardless of success or failure, ensuring consistency when
backend creates nodes but storage upload fails.
2026-01-23 15:26:28 +08:00
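A sketch of the onSettled change with TanStack Query; the mutation function and query key are placeholders:

```ts
import { useMutation, useQueryClient } from '@tanstack/react-query'

declare function uploadAssetFile(file: File): Promise<void> // assumed API call

// Invalidate the tree whether the upload succeeded or failed, so orphaned
// nodes created by the backend still show up in the file tree.
export const useUploadAssetFile = (appId: string) => {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: uploadAssetFile,
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ['app-asset-tree', appId] })
    },
  })
}
```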
Joel
788deffa2b feat: support import zip 2026-01-23 15:23:14 +08:00
yyh
f8438704a6 refactor(app-asset): migrate file upload to presigned URL and batch upload
- Replace FormData file upload with presigned URL two-step upload
- Add batch-upload contract for folder uploads (reduces N+M to 1+M requests)
- Remove deprecated createFile contract and useCreateAppAssetFile hook
- Remove checksum field from AppAssetNode and AppAssetTreeView types
- Add upload-to-presigned-url utility for direct storage uploads
2026-01-23 15:11:04 +08:00
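A sketch of the two-step flow; the endpoint path and response shape are assumptions:

```ts
type UploadUrlResponse = { upload_url: string }

export const uploadViaPresignedUrl = async (appId: string, file: File) => {
  // Step 1: ask the API for a pre-signed upload URL for this file.
  const res = await fetch(
    `/apps/${appId}/assets/upload-url?name=${encodeURIComponent(file.name)}`,
  )
  const { upload_url }: UploadUrlResponse = await res.json()

  // Step 2: PUT the file body directly to storage, bypassing the API server.
  await fetch(upload_url, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type || 'application/octet-stream' },
  })
}
```

Folder uploads then batch the node creation into a single request and reuse the same presigned PUT per file.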
Harry
4448737bd8 refactor(app_asset): remove file upload resource and related methods
- Deleted `AppAssetFileResource` class and its associated file upload logic.
- Removed the `create_file` method from `AppAssetService` to streamline asset management.
- Updated `AppAssetBatchUploadResource` for improved readability by condensing method calls.
2026-01-23 14:56:39 +08:00
Harry
c3decbab32 feat(app): introduce runtime type handling for apps
- Added `RuntimeType` enum to define app runtime types: CLASSIC and SANDBOXED.
- Updated `AppPartial` model to include `runtime_type` field.
- Enhanced `AppListApi` to determine and assign the appropriate runtime type based on sandbox feature availability.
2026-01-23 14:56:38 +08:00
yyh
a91d709aa5 feat(skill-editor): add CategoryTabs and TemplateSearch to skill templates section
Add filter controls for skill templates:
- CategoryTabs: tab navigation with mock categories (All, Productivity, etc.)
- TemplateSearch: search input with accessibility attributes
- Grid layout fix to prevent tab width changes on font-weight switch

Update SectionHeader to accept className prop for flexible styling.
Add search placeholder i18n translations.
2026-01-23 14:39:53 +08:00
yyh
4d465d6cf9 feat(skill-editor): implement StartTabContent with modular component structure
Refactor StartTabContent into separate components following Figma design specs:
- ActionCard: reusable card with icon, title, description
- SectionHeader: title/xl-semi-bold header with description
- CreateImportSection: 3-column grid layout for Create/Import cards
- SkillTemplatesSection: templates area with placeholder

Align styles with Figma: 3-col grid, 16px title, proper spacing and padding.
Add i18n translations for all user-facing text (en-US, zh-Hans).
2026-01-23 14:39:53 +08:00
yyh
083f45678d prune suppressions 2026-01-23 14:39:53 +08:00
Harry
225c33633a feat(app_asset): add batch upload and file upload URL generation
- Introduced `GetUploadUrlPayload` and `BatchUploadPayload` models for handling file uploads.
- Implemented `AppAssetFileUploadUrlResource` for generating pre-signed upload URLs.
- Added `AppAssetBatchUploadResource` to support batch creation of asset nodes from a tree structure.
- Enhanced `AppAssetService` with methods for obtaining upload URLs and batch creation of assets.
- Removed checksum handling from file creation to streamline the process.
2026-01-23 14:34:27 +08:00
Joel
a522327662 chore: i18n 2026-01-23 14:33:17 +08:00
hjlarry
486a30402b remove forceUpload 2026-01-23 14:33:15 +08:00
Joel
64c4f7302d chore: import app ui 2026-01-23 14:24:23 +08:00
hjlarry
e105dc6289 new restore 2026-01-23 14:22:58 +08:00
yyh
aa3cc9b9a0 fix(skill-editor): add START_TAB_ID guards to prevent invalid metadata operations
- Add guards in tool-block component to skip metadata read/write when Start tab is active
- Add guard in tool-picker-block to prevent writing tool config to Start tab
- Add guard in use-sync-tree-with-active-tab to skip tree sync for Start tab
2026-01-23 13:15:39 +08:00
yyh
98d1aac765 feat(skill-editor): add persistent Start tab and optimize store subscriptions
- Add START_TAB_ID constant and StartTabItem/StartTabContent components
- Default to Start tab when no file tabs are open
- Optimize zustand selectors to subscribe to specific Map values instead of
  entire Map objects, reducing unnecessary re-renders when other tabs change
- Refactor useSkillFileSave to accept precise values instead of Map/Set
2026-01-23 13:12:22 +08:00
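The selector narrowing mentioned above, sketched against a hypothetical zustand slice (store shape and names are assumptions): subscribing to a single Map entry re-renders only when that tab's metadata changes, whereas subscribing to the whole Map re-renders on every update.

```ts
import { create } from 'zustand'

type FileMetadata = { language?: string; tools?: string[] }

type SkillEditorState = {
  fileMetadata: Map<string, FileMetadata>
  setFileMetadata: (tabId: string, meta: FileMetadata) => void
}

const useSkillEditorStore = create<SkillEditorState>(set => ({
  fileMetadata: new Map(),
  setFileMetadata: (tabId, meta) =>
    set(state => ({ fileMetadata: new Map(state.fileMetadata).set(tabId, meta) })),
}))

// Before: any tab's metadata change re-renders every consumer.
// const allMetadata = useSkillEditorStore(s => s.fileMetadata)

// After: only the active tab's entry is subscribed to.
export function useActiveFileMetadata(activeTabId: string) {
  return useSkillEditorStore(s => s.fileMetadata.get(activeTabId))
}
```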
yyh
693a9c5b95 rm i18n .ts file 2026-01-23 12:42:42 +08:00
yyh
f555492292 update skill 2026-01-23 12:36:47 +08:00
yyh
c52755e747 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-23 11:36:16 +08:00
Joel
61fa20d6a9 fix: search text blink

2026-01-23 11:05:22 +08:00
Joel
85cf995011 fix: search click blur 2026-01-23 10:59:12 +08:00
Novice
8f75be52a1 fix: allow repeated tool calls with same tool_call_id 2026-01-23 10:04:21 +08:00
zhsama
4707a319e5 refactor: use bivariance to normalize node metadata types 2026-01-23 06:57:00 +08:00
zhsama
ef8d0f497d feat: Merge parent workflow nodes into subgraph variable scope, and some performance improvements.
2026-01-23 06:56:59 +08:00
zhsama
e22996735f fix: Prevent workflow data updates in subgraph interaction mode 2026-01-23 06:56:59 +08:00
zhsama
5e78aaaec3 perf: Update context generate modal UI styling 2026-01-23 06:56:59 +08:00
Harry
5f0a21d2d4 refactor(sandbox): enhance system default configuration retrieval
- Updated the `get_system_default_config` method to accept a `provider_type` parameter for more precise querying.
- Improved error handling to raise a ValueError if no system default provider is configured for the specified tenant and provider type.
- Added fallback logic to ensure a system default configuration is returned when available.
2026-01-23 02:06:13 +08:00
Stream
a409e3d32e refactor: better /context-generate with frontend support
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-23 01:44:12 +08:00
Harry
71f811930f feat(web): add app bundle import/export UI support 2026-01-23 01:09:05 +08:00
Harry
cbac914649 refactor(sandbox): rename delete_storage to delete_draft_storage for clarity
- Updated the SandboxManager to rename the method for deleting storage to better reflect its purpose.
- Adjusted the WorkflowVariableCollectionApi to utilize the new method name.
- Improved error handling in ArchiveSandboxStorage's delete method to log exceptions during deletion.
2026-01-23 00:12:37 +08:00
yyh
2f01107b09 feat(sqlite-preview): add truncation notice when row limit is reached
Display a notice at the bottom of SQLite table preview when data
is truncated due to PREVIEW_ROW_LIMIT (1000 rows), informing users
that additional rows are not displayed.
2026-01-22 23:49:06 +08:00
Harry
dde2bea2cc fix(llm-skill): prompt tool call
- Renamed `build_skill_artifact_set` to `build_skill_bundle` for improved clarity in asset management.
- Updated references in `SkillManager` to reflect the new method name and ensure consistent handling of skill bundles.
- Added `AppAssetsAttrsInitializer` to `SandboxManager` to enhance asset initialization processes.
- Implemented output truncation in `SandboxBashTool` to manage long command outputs effectively.
2026-01-22 23:36:32 +08:00
Harry
6ec4a093c2 fix(app_asset_service): correct parameter passing in get_or_create_assets method and remove unused method for published assets
2026-01-22 22:46:57 +08:00
Harry
521b66c488 feat(app-bundle): implement app bundle import/export functionality
- Introduced AppBundleService for managing app bundle publishing and importing, integrating workflow and asset services.
- Added methods for exporting app bundles as ZIP files, including DSL and asset management.
- Implemented source zip extraction and validation to enhance asset import processes.
- Refactored asset packaging to utilize AssetZipPackager for improved performance and organization.
- Enhanced error handling for bundle format and security during import operations.
2026-01-22 22:46:57 +08:00
Harry
a43efef9f0 refactor(skill): transition from artifact set to bundle structure
- Replaced SkillArtifactSet with SkillBundle across various components, enhancing the organization of skill dependencies and references.
- Updated SkillManager methods to load and save bundles instead of artifacts, improving clarity in asset management.
- Refactored SkillCompiler to compile skills into bundles, streamlining the dependency resolution process.
- Adjusted DifyCli and SandboxBashSession to utilize ToolDependencies, ensuring consistent handling of tool references.
- Introduced AssetReferences for better management of file dependencies within skill bundles.
2026-01-22 22:46:57 +08:00
Harry
17404e0956 chore: update binary files and refactor LLMNode skill compilation
- Updated binary files for Dify CLI on various platforms (darwin amd64, darwin arm64, linux amd64, linux arm64).
- Refactored skill compilation in LLMNode to improve clarity and maintainability by explicitly naming parameters and incorporating AppAssets for base path management.
- Minor fix in AppAssetFileTree to remove unnecessary leading slash in path construction.
2026-01-22 22:46:57 +08:00
yyh
b87e303c00 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	web/eslint-suppressions.json
#	web/service/debug.ts
2026-01-22 22:40:32 +08:00
Yeuoly
d32996b8c9 feat: enhance sandbox initializers with async and sync support 2026-01-22 19:58:38 +08:00
Yeuoly
a3cf73b220 feat: refactor initializers to support async and sync execution 2026-01-22 19:54:54 +08:00
Yeuoly
55c588a03a feat: add async_initialize method to asset initializers 2026-01-22 19:49:41 +08:00
Yeuoly
3058415b4e feat: switch async 2026-01-22 19:42:09 +08:00
Yeuoly
c3b4029d0b feat: DraftAppAssetsInitializer 2026-01-22 19:18:46 +08:00
Yeuoly
5e16d85ff6 refactor(sandbox): async init and draft downloads
Reduce startup latency by deferring sandbox setup and downloading draft assets directly with cached presigned URLs.
2026-01-22 19:18:34 +08:00
zhsama
87f35efa2f chore: Update pnpm to 10.28.1 2026-01-22 19:07:24 +08:00
zhsama
73ce9993f2 refactor: Replace SimpleSelect with PortalToFollowElem in sub-graph
config panel
2026-01-22 18:57:04 +08:00
Harry
9d80770dfc feat(sandbox): enhance sandbox management and tool artifact handling
- Introduced SandboxManager.delete_storage method for improved storage management.
- Refactored skill loading and tool artifact handling in DifyCliInitializer and SandboxBashSession.
- Updated LLMNode to extract and compile tool artifacts, enhancing integration with skills.
- Improved attribute management in AttrMap for better error handling and retrieval methods.
2026-01-22 17:26:09 +08:00
Harry
e7c3e4cd21 feat: introduce attribute management system for sandbox
- Added AttrMap and AttrKey classes for type-safe attribute storage.
- Implemented AppAssetsAttrs and SkillAttrs for managing application and skill attributes.
- Refactored Sandbox and initializers to utilize the new attribute management system, enhancing modularity and clarity in asset handling.
2026-01-22 17:26:09 +08:00
yyh
ecd6c44a32 perf(web): parallelize folder upload for better performance
Optimize folder upload by creating folders at the same depth level in
parallel and uploading all files concurrently. This reduces upload time
from O(n) sequential requests to O(depth) folder requests + 1 file request.
2026-01-22 17:06:22 +08:00
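A minimal sketch of the level-by-level parallelization described above, assuming simple `createFolder`/`uploadFile` stand-ins for the real API helpers.

```ts
type LocalEntry = { path: string; depth: number; file?: File }

type UploadApi = {
  createFolder: (parentPath: string, name: string) => Promise<void>
  uploadFile: (folderPath: string, file: File) => Promise<void>
}

const splitPath = (path: string) => {
  const idx = path.lastIndexOf('/')
  return idx === -1
    ? { parent: '', name: path }
    : { parent: path.slice(0, idx), name: path.slice(idx + 1) }
}

export async function uploadFolderTree(entries: LocalEntry[], api: UploadApi): Promise<void> {
  const folders = entries.filter(e => !e.file)
  const files = entries.filter((e): e is LocalEntry & { file: File } => !!e.file)

  // Folders at the same depth are created in parallel; depths run sequentially
  // so a child folder never races its parent.
  const maxDepth = folders.reduce((max, f) => Math.max(max, f.depth), 0)
  for (let depth = 0; depth <= maxDepth; depth++) {
    const level = folders.filter(f => f.depth === depth)
    await Promise.all(level.map((f) => {
      const { parent, name } = splitPath(f.path)
      return api.createFolder(parent, name)
    }))
  }

  // Every file upload runs concurrently once its folder exists.
  await Promise.all(files.map(f => api.uploadFile(splitPath(f.path).parent, f.file)))
}
```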
Joel
43648b1660 feat: tool config 2026-01-22 17:00:59 +08:00
yyh
9733621301 fix(web): align table selector dropdown style with design
- Update font from system-xs-regular to system-sm-medium
- Add table icon to dropdown items
- Adjust spacing and border radius to match Figma design
2026-01-22 16:46:32 +08:00
yyh
bc22739a96 fix: migrations 2026-01-22 16:42:04 +08:00
yyh
d09d8d34c2 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/controllers/console/app/error.py
2026-01-22 16:40:19 +08:00
yyh
6325a3458f chore: cap sqlite preview rows 2026-01-22 16:37:33 +08:00
yyh
808510746e fix: show columns for empty sqlite tables 2026-01-22 16:25:43 +08:00
yyh
da738dddab refactor: extract sqlite table hook 2026-01-22 16:19:35 +08:00
yyh
aa1ee123b3 refactor: extract sqlite table panel 2026-01-22 16:18:15 +08:00
yyh
e69163d072 refactor: extract sqlite constants 2026-01-22 16:15:58 +08:00
yyh
b6228c99cd refactor: extract sqlite types 2026-01-22 16:15:19 +08:00
yyh
6c75893956 feat: use virtual scroll for db preview 2026-01-22 16:13:10 +08:00
Stephen Zhou
878e34c582 fix: more doc link fix (#31395)
Co-authored-by: Riskey <36894937+RiskeyL@users.noreply.github.com>
2026-01-22 16:13:10 +08:00
Stephen Zhou
f7f4d066dc fix: following docs link fix (#31390)
Co-authored-by: Riskey <36894937+RiskeyL@users.noreply.github.com>
2026-01-22 16:13:10 +08:00
zejiewang
bc3629370d fix: non-auto variable type params of agent node tool are not correctly parsed (#31128)
Co-authored-by: wangzejie <wangzejie@meicai.cn>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-22 16:13:10 +08:00
wangxiaolei
397d6decc0 fix: fix visibilityState event handle (#31354) 2026-01-22 16:13:10 +08:00
wangxiaolei
f23cc6be02 fix: remove _try_resolve_user_from_request (#31360) 2026-01-22 16:13:10 +08:00
Stephen Zhou
512c117395 test: enhance HitTestingPage tests with additional coverage for rende… (#31355) 2026-01-22 16:13:10 +08:00
Stephen Zhou
03f53f2282 chore: revert jsdom update (#31353) 2026-01-22 16:13:10 +08:00
-LAN-
ef97352f71 refactor(db): enforce non-null message annotation questions (#27915) 2026-01-22 16:13:10 +08:00
Shemol
b2071a4352 refactor(web): useClipboard hook to reduce duplication (#31308)
Signed-off-by: SherlockShemol <shemol@163.com>
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-22 16:13:10 +08:00
Stephen Zhou
463060ce52 test: fix test in #30849 (#31350) 2026-01-22 16:13:09 +08:00
github-actions[bot]
dc55591a5e chore(i18n): sync translations with en-US (#31342)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
2026-01-22 16:13:09 +08:00
Stephen Zhou
3de33f7a4e fix: check and update doc links (#30849)
Co-authored-by: Riskey <36894937+RiskeyL@users.noreply.github.com>
2026-01-22 16:13:09 +08:00
github-actions[bot]
deeadb7f8e chore(i18n): sync translations with en-US (#31332)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
2026-01-22 16:13:09 +08:00
Coding On Star
29deee8161 fix(i18n): update model provider tip to only mention OpenAI in English, Japanese, and Simplified Chinese translations (#31339)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-22 16:13:09 +08:00
Bowen Liang
c46b12b234 chore(web): refactor next.config.js to next.config.ts (#31331) 2026-01-22 16:13:09 +08:00
yyh
ff07ca97df fix: prevent horizontal page scroll in skill editor layout
Add overflow-hidden to SkillPageLayout and min-w-0 to flex children
to ensure wide content (like SQLite tables with many columns) scrolls
internally rather than causing the entire page to scroll horizontally.
2026-01-22 16:13:09 +08:00
yyh
ed60a375b5 fix: improve sqlite file preview layout and single table handling
- Add min-w-0 to flex containers for proper text truncation
- Use w-max on table to ensure columns don't collapse
- Simplify table selector when only one table exists (remove dropdown)
2026-01-22 16:13:09 +08:00
yyh
11005ccb63 feat: add sqlite file preview 2026-01-22 16:13:09 +08:00
yyh
4a88ffdf2a feat: align workflow view picker layout 2026-01-22 16:13:09 +08:00
yyh
84b4fed3df chore: add table cells icon to db selector 2026-01-22 16:13:09 +08:00
yyh
3dcb34e462 Revert "fix workflow view switch refresh"
This reverts commit 1341b25e74f8d529e434877afc426ad02abe4e6b.
2026-01-22 16:13:09 +08:00
yyh
b7da988ee0 chore: add wa-sqlite dependencies 2026-01-22 16:13:09 +08:00
yyh
7ec0a36dc2 prune suppressions 2026-01-22 16:13:09 +08:00
yyh
bddb41cd47 feat: add db types in file tree icon 2026-01-22 16:13:09 +08:00
yyh
ee35f72861 fix workflow view switch refresh 2026-01-22 16:13:09 +08:00
yyh
62ec464d91 fix(graph/skill): use push to persist history in browser 2026-01-22 16:13:08 +08:00
Joel
2c95622890 chore: choose tools show 2026-01-22 15:47:28 +08:00
Joel
219f4a2f3b chore: hide featured tools 2026-01-22 15:30:16 +08:00
Harry
e38a4121e7 fix: workflow publish
2026-01-22 13:53:45 +08:00
Harry
b470cca533 feat(skill-builder): enhance skill loading and compilation with parallel processing
- Introduced threading for loading skills and uploading compiled content to improve performance.
- Added data classes for better structure and clarity in handling loaded and compiled skills.
- Refactored the skill compilation process to separate loading and uploading, enhancing maintainability.
2026-01-22 13:41:21 +08:00
Harry
5565546295 feat(skill-compiler): skill compiler 2026-01-22 13:41:21 +08:00
Novice
5cb8d4cc11 refactor: rename mention node to nested_node for generic sub-graph support 2026-01-22 13:15:13 +08:00
hjlarry
51c8c50b82 expire leader key in redis 2026-01-22 09:30:51 +08:00
zhsama
c7d106cfa4 refactor: Refactor context generation modal into composable components 2026-01-22 01:34:44 +08:00
yyh
29e1f5d98b update skills 2026-01-21 23:16:03 +08:00
Harry
aac90133d6 refactor: update session cleanup logic and extend command timeout
- Changed the command timeout duration from 60 seconds to 1 hour for improved session stability.
- Refactored session cleanup logic to utilize the CLI API session object instead of session ID, enhancing clarity and maintainability.
2026-01-21 21:19:46 +08:00
Stream
0ac847fb3c refactor: unify structured output with pydantic model
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-21 21:04:33 +08:00
zhsama
70f5365398 Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-21 20:55:02 +08:00
Harry
b1eecb7051 feat: implement keepalive mechanism for E2B sandbox
- Added a keepalive thread to maintain the E2B sandbox timeout, preventing premature termination.
- Introduced a stop event to manage the lifecycle of the keepalive thread.
- Refactored the sandbox initialization to include the new keepalive functionality.
- Enhanced logging to capture failures in refreshing the sandbox timeout.
2026-01-21 20:51:46 +08:00
zhsama
d82943f48c Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-21 20:50:44 +08:00
zhsama
c4249f94de feat: Add suggested questions to context generate modal 2026-01-21 20:49:12 +08:00
Harry
9ed83a808a refactor: consolidate sandbox management and initialization
- Moved sandbox-related classes and functions into a dedicated module for better organization.
- Updated the sandbox initialization process to streamline asset management and environment setup.
- Removed deprecated constants and refactored related code to utilize new sandbox entities.
- Enhanced the workflow context to support sandbox integration, allowing for improved state management during execution.
- Adjusted various components to utilize the new sandbox structure, ensuring compatibility across the application.
2026-01-21 20:42:44 +08:00
Stream
ea37904c75 refactor: unify structured output with pydantic model
Signed-off-by: Stream <Stream_2@qq.com>
2026-01-21 20:01:52 +08:00
hjlarry
1b70a7e4c7 use contract for api request 2026-01-21 18:20:38 +08:00
zhsama
d7ccea8ac5 refactor: Refactor mixed-variable-text-input to extract hooks 2026-01-21 17:55:41 +08:00
Joel
1fcff5f8d1 fix: click files close 2026-01-21 17:15:43 +08:00
Joel
78c7be09f8 chore: not show switch graph skill map in classical 2026-01-21 17:15:42 +08:00
yyh
a37adddacd Merge branches 'feat/support-agent-sandbox' and 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-21 16:55:13 +08:00
Joel
ccbf908d22 feat: support computer use config 2026-01-21 16:53:04 +08:00
yyh
d444a8eadc feat: use blacklist approach for file editability in Monaco Editor
Switch from whitelist to blacklist pattern for determining editable files.
Files are now editable unless they are known binary types (audio, archives,
executables, Office documents, fonts, etc.), enabling support for any
runtime-generated text files without needing to add extensions one by one.
2026-01-21 16:53:01 +08:00
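A small sketch of the blacklist check described above; the extension list is illustrative, not the exact set used in the editor.

```ts
// Anything not on the blacklist opens in the editor, including runtime-generated
// text files with unknown extensions.
const NON_EDITABLE_EXTENSIONS = new Set([
  'mp3', 'wav', 'ogg',            // audio
  'zip', 'tar', 'gz', '7z',       // archives
  'exe', 'dll', 'so', 'bin',      // executables
  'doc', 'docx', 'xls', 'xlsx', 'ppt', 'pptx', 'pdf', // Office documents
  'ttf', 'otf', 'woff', 'woff2',  // fonts
])

export function isEditableFile(fileName: string): boolean {
  const dot = fileName.lastIndexOf('.')
  if (dot === -1 || dot === fileName.length - 1)
    return true // no extension: assume editable text
  return !NON_EDITABLE_EXTENSIONS.has(fileName.slice(dot + 1).toLowerCase())
}
```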
Yeuoly
b5e31c0f25 feat: parallelize asset packing 2026-01-21 16:23:44 +08:00
Yeuoly
c4943ff4f5 fix: parse uname output for arch/os 2026-01-21 16:09:57 +08:00
Yeuoly
699650565e fix: reduce e2b uname calls 2026-01-21 16:07:12 +08:00
yyh
1c90c729bc feat: add ignore files support in monaco editor 2026-01-21 15:18:56 +08:00
yyh
45a76fa90b fix: improve accessibility for file-tree components
- Convert clickable div to semantic button in artifacts-section
- Add aria-hidden to decorative icons
- Add aria-label to rename inputs and hidden file inputs
- Add i18n keys for artifacts section and rename labels
- Support ignore file extensions (.gitignore, etc.)
2026-01-21 15:13:50 +08:00
Joel
911c1852d5 feat: support choose tools 2026-01-21 15:05:58 +08:00
zxhlyh
e85b0c49d8 fix: llm generation variable 2026-01-21 14:57:54 +08:00
yyh
b0a059250a Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-21 14:52:11 +08:00
Joel
b94b7860d9 chore: remove useless void
2026-01-21 14:07:19 +08:00
Joel
478833f069 fix: switch refresh 2026-01-21 14:06:07 +08:00
Joel
5657bf52f0 fix: can not save when switch to skill 2026-01-21 13:56:18 +08:00
yyh
c3333006cf Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-21 13:52:47 +08:00
yyh
c2885077c2 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-21 13:21:39 +08:00
yyh
8e20ef6cb5 merge 2026-01-21 10:53:11 +08:00
yyh
468d84faba Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	web/app/components/header/account-setting/model-provider-page/model-selector/popup-item.tsx
#	web/package.json
2026-01-21 10:52:43 +08:00
Joel
cf6c089e72 chore: add skill true for sandbox agent llm 2026-01-21 10:15:00 +08:00
zhsama
2f70f778c9 feat: Refactor context generate modal UI and improve UX 2026-01-21 04:18:57 +08:00
zhsama
9400863949 feat: add mention graph API integration for tool parameters 2026-01-21 04:18:57 +08:00
Harry
f831d3bbd6 fix(app_assets_initializer): specify output directory for unzip command to ensure proper asset extraction
2026-01-21 02:58:47 +08:00
Harry
7fd9ef3d22 fix(dify_cli): solve the permission error on e2b 2026-01-21 01:25:21 +08:00
Harry
705d4cbba9 feat(sandbox_provider): add default sandbox provider for CE 2026-01-21 00:37:38 +08:00
Harry
c9e53bf78c fix(llm): update final chunk event condition to include sandbox check
2026-01-20 21:35:10 +08:00
Harry
7cd280557c fix(agent): fix damn bug 2026-01-20 21:10:53 +08:00
hjlarry
eaf888b02a env var NEXT_PUBLIC_SOCKET_URL 2026-01-20 20:34:56 +08:00
zhsama
58da9c3c11 refactor: Refactor context generation modal and improve type safety
# Conflicts:
#	web/i18n/en-US/workflow.json
#	web/i18n/zh-Hans/workflow.json
2026-01-20 20:25:09 +08:00
zhsama
68d36ff3ed refactor: Refactor agent context insertion in prompt editor 2026-01-20 20:25:09 +08:00
zhsama
0ed5ed20b5 feat(workflow): add multi-turn context code generator modal 2026-01-20 20:25:09 +08:00
Harry
18a589003e feat(sandbox): enhance sandbox initialization with draft support and asset management
- Introduced DraftAppAssetsInitializer for handling draft assets.
- Updated SandboxLayer to conditionally set sandbox ID and storage based on workflow version.
- Improved asset initialization logging and error handling.
- Refactored ArchiveSandboxStorage to support exclusion patterns during archiving.
- Modified command and LLM nodes to retrieve sandbox from workflow context, supporting draft workflows.
2026-01-20 19:45:04 +08:00
yyh
da6fdc963c Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-20 19:17:51 +08:00
Harry
1c76ed2c40 feat(sandbox): draft storage 2026-01-20 18:45:13 +08:00
Harry
ceb410fb5c fix: Update archive path for sandbox storage to use a temporary directory 2026-01-20 18:44:19 +08:00
yyh
4fa7843050 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-20 18:42:02 +08:00
yyh
3205f98d05 refactor(web): unify auto-expand trigger for drag-and-drop
Replace event-based auto-expand trigger with Zustand state-driven
approach. Now both external file uploads and internal node drag use
the same isDragOver state as the single source of truth for folder
auto-expand timing (1s blink, 2s expand).
2026-01-20 18:10:52 +08:00
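One way the state-driven timing could look, sketched as a hook keyed off `isDragOver` (hook and setter names are assumptions, not the real store API), roughly matching the 1s blink / 2s expand behavior described above.

```ts
import { useEffect } from 'react'

type DragExpandOptions = {
  isDragOver: boolean
  isFolderOpen: boolean
  setBlinking: (value: boolean) => void
  openFolder: () => void
}

export function useDragAutoExpand({ isDragOver, isFolderOpen, setBlinking, openFolder }: DragExpandOptions) {
  useEffect(() => {
    if (!isDragOver || isFolderOpen) {
      setBlinking(false)
      return
    }
    // Blink during the second half of the hover period, then expand.
    const blinkTimer = setTimeout(() => setBlinking(true), 1000)
    const expandTimer = setTimeout(() => {
      setBlinking(false)
      openFolder()
    }, 2000)
    return () => {
      clearTimeout(blinkTimer)
      clearTimeout(expandTimer)
      setBlinking(false)
    }
  }, [isDragOver, isFolderOpen, setBlinking, openFolder])
}
```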
yyh
0092254007 Revert "refactor(web): remove redundant useUnifiedDrag abstraction layer"
This reverts commit ee91c9d5f1.
2026-01-20 18:09:25 +08:00
yyh
ee91c9d5f1 refactor(web): remove redundant useUnifiedDrag abstraction layer
Simplify file drop hooks by removing the unnecessary useUnifiedDrag
wrapper that became redundant after internal node drag was migrated
to react-arborist's built-in system. Now useFolderFileDrop and
useRootFileDrop directly use useFileDrop, reducing code complexity
and eliminating unused treeChildren prop drilling.
2026-01-20 18:09:08 +08:00
yyh
2151676db1 refactor: use react-arborist built-in drag for internal node moves
Switch from native HTML5 drag to react-arborist's built-in drag system
for internal node drag-and-drop. The HTML5Backend used by react-arborist
was intercepting dragstart events, preventing native drag from working.

- Add onMove callback and disableDrop validation to Tree component
- Sync react-arborist drag state (isDragging, willReceiveDrop) to Zustand
- Simplify use-node-move to only handle API execution
- Update use-unified-drag to only handle external file uploads
- External file drops continue to work via native HTML5 events
2026-01-20 18:09:08 +08:00
yyh
dc9658b003 perf(web): avoid per-node tree query subscription 2026-01-20 18:09:08 +08:00
yyh
b527921f3f feat: unified drag-and-drop for skill file tree
Implement unified drag system that supports both internal node moves
and external file uploads with consistent UI feedback. Uses native
HTML5 drag API with shared visual states (isDragOver, isBlinking,
DragActionTooltip showing 'Move to' or 'Upload to').
2026-01-20 18:09:08 +08:00
zxhlyh
0e66b51ca0 fix: history messages toolcalls 2026-01-20 17:37:23 +08:00
zhsama
33e96fd11a Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-20 17:07:30 +08:00
hjlarry
f99ac24d5c websocket use cookie connect 2026-01-20 17:01:40 +08:00
zhsama
2e037014c3 refactor: Replace manual ref syncing with useLatest hook 2026-01-20 17:00:47 +08:00
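For reference, `useLatest` boils down to a ref that is refreshed on every render; the project presumably uses a library implementation (e.g. ahooks), so this is only the idea.

```ts
import { useRef } from 'react'

// Keeps a ref pointing at the newest value on every render, so long-lived
// callbacks (timers, socket handlers) can read ref.current without the
// manual "sync ref in useEffect" boilerplate.
export function useLatest<T>(value: T) {
  const ref = useRef(value)
  ref.current = value
  return ref
}
```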
Novice
8c4aaa8286 fix: add message tool call icon 2026-01-20 16:59:53 +08:00
zhsama
dc8c018e28 refactor: Refactor agent context insertion to use regex 2026-01-20 16:48:05 +08:00
zhsama
57a8c453b9 fix: Fix variable insertion to only trigger on current line 2026-01-20 16:45:20 +08:00
zhsama
e5dc56c483 Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-20 16:37:04 +08:00
zhsama
812df81d92 feat: Add paramKey prop to VariableReferenceFields component 2026-01-20 16:35:52 +08:00
Novice
67c29be3c6 fix: message answer include tool result 2026-01-20 16:05:28 +08:00
yyh
cf5e8491df chore: optimize code quality and performance 2026-01-20 15:54:31 +08:00
yyh
53f828f00e feat: paste operation for skill file tree 2026-01-20 15:42:53 +08:00
yyh
357489d444 feat: multi select for file tree & clipboard support 2026-01-20 15:42:53 +08:00
Joel
331c65fd1d fix: click file tab caused popup hide 2026-01-20 15:35:08 +08:00
yyh
56b09d9f72 fix: download option trigger open tab 2026-01-20 14:28:05 +08:00
Stephen Zhou
d4ed398e4f fix lint 2026-01-20 14:26:01 +08:00
yyh
951a580907 feat: artifacts section layout 2026-01-20 14:21:31 +08:00
Joel
3b72b45319 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-20 14:01:43 +08:00
Joel
2650ceb0a6 feat: support picker vars files ui in editor 2026-01-20 14:01:30 +08:00
yyh
c5fc3cc08e revert icons 2026-01-20 14:00:46 +08:00
zxhlyh
fdaf471a03 fix: answer node text 2026-01-20 13:59:49 +08:00
hjlarry
bdac6f91dd add socket edit permission validate 2026-01-20 13:56:28 +08:00
Novice
27de07e93d chore: fix the llm node memory issue 2026-01-20 13:52:45 +08:00
yyh
8154d0af53 feat: add FolderSpark icon for workflow 2026-01-20 13:51:49 +08:00
yyh
466f76345b feat: add drag action tooltip 2026-01-20 13:50:51 +08:00
hjlarry
9be496f953 fix publish workflow not sync 2026-01-20 13:20:02 +08:00
yyh
fc83e2b1c4 feat!: file download in skill file tree menu 2026-01-20 13:16:27 +08:00
yyh
552f9a8989 refactor(skill): simplify file tree search state management
Move searchTerm from props drilling to zustand store for cleaner
architecture. Remove unnecessary controlled/uncontrolled pattern
and unused debounce logic since search is pure frontend filtering.

- Add fileTreeSearchTerm state to file-tree-slice
- Remove useState and props from main.tsx
- Simplify sidebar-search-add.tsx to read/write store directly
- Add empty state UI with reset filter button
2026-01-20 12:43:56 +08:00
Novice
4f5b175e55 fix: emoji icon validate error 2026-01-20 11:09:32 +08:00
Novice
13d6923c11 Merge branch 'feat/llm-support-tools' into feat/support-agent-sandbox 2026-01-20 10:27:42 +08:00
hjlarry
4acca22ff0 whether resolved sync to canvas 2026-01-20 10:12:15 +08:00
Novice
1483a51aa1 Merge branch 'feat/pull-a-variable' into feat/support-agent-sandbox 2026-01-20 09:54:41 +08:00
Harry
f5a34e9ee8 feat(skill): skill support
2026-01-20 03:02:34 +08:00
zhsama
d69e7eb12a fix: Fix variable insertion to only remove @ trigger on current line 2026-01-20 01:32:42 +08:00
zhsama
c44aaf1883 fix: Fix prompt editor trigger match to use current selection 2026-01-20 00:42:19 +08:00
zhsama
4b91969d0f refactor: Refactor keyboard navigation in agent and variable lists 2026-01-20 00:41:23 +08:00
zhsama
92c54d3c9d feat: merge app and meta defaults when creating workflow nodes 2026-01-19 23:56:15 +08:00
yyh
bc9ce23fdc refactor(skill): rename components for semantic clarity
Rename components and reorganize directory structure:
- skill-doc-editor.tsx → file-content-panel.tsx (handles edit/preview/download)
- editor-area.tsx → content-area.tsx
- editor-body.tsx → content-body.tsx
- editor-tabs.tsx → file-tabs.tsx
- editor-tab-item.tsx → file-tab-item.tsx

Create viewer/ directory for non-editor components:
- Move media-file-preview.tsx from editor/ to viewer/
- Move unsupported-file-download.tsx from editor/ to viewer/

This clarifies the distinction between:
- editor/: actual file editors (code, markdown)
- viewer/: preview and download components (media, unsupported files)
2026-01-19 23:50:08 +08:00
yyh
cab33d440b refactor(skill): remove Office file special handling, merge into unsupported
Remove the Office file placeholder that only showed "Preview will be
supported in a future update" without any download option. Office files
(pdf, doc, docx, xls, xlsx, ppt, pptx) now fall through to the generic
"unsupported file" handler which provides a download button.

Removed:
- OfficeFilePlaceholder component
- isOfficeFile function and OFFICE_EXTENSIONS constant
- isOffice flag from useFileTypeInfo hook
- i18n keys for officePlaceholder

This simplifies the file type handling to just three categories:
- Editable: markdown, code, text files → editor
- Previewable: image, video files → media preview
- Everything else: download button
2026-01-19 23:39:32 +08:00
zhsama
267de1861d perf: reduce input lag in variable pickers 2026-01-19 23:35:45 +08:00
yyh
b3793b0198 fix(skill): use download URL for all non-editable files
Change useSkillFileData to use isEditable instead of isMediaFile:
- Editable files (markdown, code, text) fetch file content for editing
- Non-editable files (image, video, office, unsupported) fetch download URL

This fixes the download button for unsupported files which was incorrectly
using file content (UTF-8 decoded garbage) instead of the presigned URL.
2026-01-19 23:34:56 +08:00
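A sketch of the `isEditable` branching described above, using react-query's `enabled` flag to pick which request runs; endpoints and hook names are assumptions standing in for the real asset service calls.

```ts
import { useQuery } from '@tanstack/react-query'

// Hypothetical endpoints, not the real asset service paths.
async function getFileContent(fileId: string): Promise<string> {
  const res = await fetch(`/assets/files/${fileId}/content`)
  return res.text()
}

async function getFileDownloadUrl(fileId: string): Promise<string> {
  const res = await fetch(`/assets/files/${fileId}/download-url`)
  const data: { url: string } = await res.json()
  return data.url
}

export function useSkillFileData(fileId: string, isEditable: boolean) {
  // Editable files: fetch raw text for the editor.
  const content = useQuery({
    queryKey: ['file-content', fileId],
    queryFn: () => getFileContent(fileId),
    enabled: isEditable,
  })
  // Everything else: fetch a presigned URL for preview or download.
  const downloadUrl = useQuery({
    queryKey: ['file-download-url', fileId],
    queryFn: () => getFileDownloadUrl(fileId),
    enabled: !isEditable,
  })
  return { content: content.data, downloadUrl: downloadUrl.data }
}
```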
yyh
8486c675c8 refactor(skill): extract hooks from skill-doc-editor for better separation
Extract business logic into dedicated hooks to reduce component complexity:
- useFileTypeInfo: file type detection (markdown, code, image, video, etc.)
- useSkillFileData: data fetching with conditional API calls
- useSkillFileSave: save logic with Ctrl+S keyboard shortcut

Also follow the Vercel best practice of using a ternary instead of && for conditional rendering.
2026-01-19 23:25:48 +08:00
zhsama
5e49b27dba Merge branch 'zhsama/panel-var-popup' into feat/pull-a-variable 2026-01-19 23:15:01 +08:00
yyh
b6df7b3afe fix(skill): use presigned URL for image/video preview in skill editor
Previously, media files were fetched via getFileContent API which decodes
binary data as UTF-8, resulting in corrupted strings that cannot be used
as img/video src. Now media files use getFileDownloadUrl API to get a
presigned URL, enabling proper preview of images and videos of any size.
2026-01-19 23:15:00 +08:00
zhsama
6f74a66c8a feat: enable typeahead filtering and keyboard navigation 2026-01-19 23:12:08 +08:00
yyh
31a7db2657 refactor(skill): unify root/blank constants and eliminate magic strings
- Add constants.ts with ROOT_ID, CONTEXT_MENU_TYPE, NODE_MENU_TYPE
- Add root utilities to tree-utils.ts (isRootId, toApiParentId, etc.)
- Replace '__root__' with ROOT_ID for consistent root identifier
- Replace inline 'blank'/'root' strings with constants
- Use NodeMenuType for type-safe menu type props
- Remove duplicate ContextMenuType from types.ts, use from constants.ts
2026-01-19 23:07:49 +08:00
zhsama
68fd7c021c feat: Remove allowGraphActions check from retry and error panels 2026-01-19 23:07:32 +08:00
zhsama
e1e64ae430 feat: code node output initialization and agent placeholder1 2026-01-19 23:06:08 +08:00
yyh
9080607028 refactor(skill): unify tree selection with VSCode-style single state
Remove redundant createTargetNodeId and use selectedTreeNodeId for both
visual highlight and creation target. This simplifies the state management
by having a single source of truth for tree selection, similar to VSCode's
file explorer behavior where both files and folders can be selected.
2026-01-19 22:36:04 +08:00
zhsama
6e9a5139b4 chore: Remove sonarjs ESLint suppressions and reformat code 2026-01-19 22:31:04 +08:00
zhsama
f44305af0d feat: add AssembleVariablesAlt icon and integrate into sub-graph
components.
2026-01-19 22:31:04 +08:00
yyh
8f4a4214a1 feat(sandbox): preserve user config when switching to system default
Update frontend to use new backend API:
- save_config now accepts optional 'activate' parameter
- activate endpoint now requires 'type' parameter ('system' | 'user')

When switching to managed mode, call activate with type='system' instead
of deleting user config, so custom configurations are preserved for
future use.
2026-01-19 22:27:06 +08:00
yyh
ff210a98db feat(skill): add placeholder for inline tree node input
Display localized placeholder text ("File name" / "Folder name") when
creating new files or folders in the skill editor file tree.
2026-01-19 22:01:31 +08:00
hjlarry
018175ec2d Merge branch 'feat/collaboration2' of github.com:langgenius/dify into feat/collaboration2 2026-01-19 21:54:01 +08:00
hjlarry
faa88dc2f3 fix unittests 2026-01-19 21:53:56 +08:00
Harry
9ad1f30a8c fix(app_asset_service): increase maximum preview content size from 1MB to 5MB 2026-01-19 21:53:48 +08:00
Harry
5053fae5b4 fix(app_asset_service): reduce maximum preview content size from 5MB to 1MB 2026-01-19 21:52:18 +08:00
hjlarry
060c7f2b45 fix pyright 2026-01-19 21:48:05 +08:00
hjlarry
acb603bff7 fix migration file 2026-01-19 21:46:40 +08:00
Harry
d297167fef feat(sandbox): add optional activate argument to sandbox provider config
- Updated the request parser in SandboxProviderListApi to include an optional 'activate' boolean argument for JSON input.
- This enhancement allows users to specify activation status when configuring sandbox providers.
2026-01-19 21:46:26 +08:00
hjlarry
e36ee54a16 fix web style 2026-01-19 21:44:26 +08:00
Harry
41aec357b0 feat(sandbox): add activation functionality for sandbox providers
- Enhanced the SandboxProviderConfigApi to accept an 'activate' argument when saving provider configurations.
- Introduced a new request parser for activating sandbox providers, requiring a 'type' argument.
- Updated the SandboxProviderService to handle the activation state during configuration saving and provider activation.
2026-01-19 21:43:03 +08:00
autofix-ci[bot]
f3fa4f11ba [autofix.ci] apply automated fixes 2026-01-19 13:18:15 +00:00
hjlarry
cb8fc9cf2d Merge remote-tracking branch 'myori/main' into feat/collaboration2 2026-01-19 21:15:53 +08:00
hjlarry
aaa3d2d74f add unittests 2026-01-19 21:11:44 +08:00
hjlarry
c17f564718 add unittests 2026-01-19 20:41:21 +08:00
hjlarry
3389071361 add unittests 2026-01-19 20:25:47 +08:00
yyh
96da3b9560 fix: migration 2026-01-19 20:13:24 +08:00
yyh
3bb9625ced fix(sandbox): prevent revoking active provider config
Hide revoke button for active providers to avoid "no sandbox provider"
error when user deletes the only available configuration.
2026-01-19 20:09:14 +08:00
Novice
1bdc47220b fix: mention graph config don't support structured output 2026-01-19 19:59:19 +08:00
hjlarry
41473ff450 refactor workflow collaboration service 2026-01-19 19:56:18 +08:00
yyh
5aa4088051 fix(sandbox): use deleteConfig when switching to managed mode
Delete user config instead of saving empty config when switching to
managed mode, allowing the system to fall back to system defaults.
2026-01-19 19:51:47 +08:00
yyh
9f444f1f6a refactor(skill): split file operations hook and extract TreeNodeIcon component
Split use-file-operations.ts (248 lines) into smaller focused hooks:
- use-create-operations.ts for file/folder creation and upload
- use-modify-operations.ts for rename and delete operations
- use-file-operations.ts now serves as orchestrator maintaining backward compatibility

Extract TreeNodeIcon component from tree-node.tsx for cleaner separation of concerns.

Add brief comments to drag hooks explaining their purpose and relationships.
2026-01-19 19:13:09 +08:00
Joel
49effca35d fix: auto default 2026-01-19 18:41:05 +08:00
yyh
fb28f03155 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-19 18:37:48 +08:00
Joel
2afc4704ad chore: add limit to tool param auto 2026-01-19 18:35:57 +08:00
yyh
5496fc014c feat(sandbox): add connect mode selection for E2B provider
Add ability to choose between "Managed by Dify" (using system config)
and "Bring Your Own API Key" modes when configuring E2B sandbox provider.
This allows Cloud users to use Dify's pre-configured credentials or
their own E2B account for more control over resources and billing.
2026-01-19 18:35:53 +08:00
yyh
7756c151ed feat: add VSCode-style blink animation before folder auto-expand
When dragging files over a closed folder, the highlight now blinks
during the second half of the 2-second hover period to signal that
the folder is about to expand. This provides better visual feedback
similar to VSCode's drag-and-drop behavior.
2026-01-19 18:35:26 +08:00
Joel
83c458d2fe chore: change tool setting copywriting and ts problem 2026-01-19 18:27:33 +08:00
Harry
956436b943 feat(sandbox): skill initialize & draft run 2026-01-19 18:15:39 +08:00
Harry
3bb9c4b280 feat(constants): introduce DIFY_CLI_ROOT and update paths for Dify CLI and app assets
- Added DIFY_CLI_ROOT constant for the root directory of Dify CLI.
- Updated DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use absolute paths.
- Modified app asset initialization to create directories under DIFY_CLI_ROOT.
- Enhanced Docker and E2B environment file handling to use workspace paths.
2026-01-19 18:15:39 +08:00
Harry
c38463c9a9 refactor: reorganize asset-related classes into entities module and remove unused skill and asset files 2026-01-19 18:15:39 +08:00
yyh
fc49592769 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-19 18:07:15 +08:00
Joel
6643569efc fix: tool can not auth modal 2026-01-19 18:06:23 +08:00
yyh
fe0ea13f70 perf: parallelize file uploads and add consistent drag validation
Use Promise.all for concurrent file uploads instead of sequential
processing, improving upload performance for multiple files. Also
add isFileDrag check to handleFolderDragOver for consistency with
other drag handlers.
2026-01-19 18:05:59 +08:00
yyh
c979b59e1e fix: correct test expectation for model provider setting payload
The test was expecting 'provider' but the actual value passed is
'model-provider' from ACCOUNT_SETTING_TAB.MODEL_PROVIDER constant.
2026-01-19 18:05:59 +08:00
yyh
144ca11c03 refactor file drop handlers into hooks 2026-01-19 18:05:58 +08:00
yyh
a432fa5fcf feat: add external file drag-and-drop upload to file tree
Enable users to drag files from their system directly into the file tree
to upload them. Files can be dropped on the tree container (uploads to root)
or on specific folders. Hovering over a closed folder for 2 seconds auto-
expands it. Uses Zustand for drag state management instead of React Context
for better performance.
2026-01-19 18:05:58 +08:00
hjlarry
805bb7c468 fix node in panel sync 2026-01-19 18:01:43 +08:00
Novice
dbc70f8f05 feat: add inner graph api 2026-01-19 17:13:07 +08:00
Joel
4b67008dba fix: not blank not render tool correct 2026-01-19 17:01:32 +08:00
Joel
f4b683aa2f fix: no blank not render file write 2026-01-19 17:01:32 +08:00
yyh
7de6ecdedf fix: lint 2026-01-19 16:35:50 +08:00
Joel
bd070857ed fix: fold indent style 2026-01-19 16:34:46 +08:00
yyh
d3d1ba2488 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/core/app/apps/workflow/app_generator.py
2026-01-19 16:33:10 +08:00
Joel
eae82b1085 chore: remove sync from left panel tree 2026-01-19 16:11:10 +08:00
Joel
f9fd234cf8 feat: support expand the selected file struct 2026-01-19 15:38:43 +08:00
Joel
1dfee05b7e fix: view file popup place error 2026-01-19 15:25:57 +08:00
Joel
dd42e7706a fix: workflow can not init 2026-01-19 15:15:24 +08:00
zhsama
066d18df7a Merge branch 'main' into feat/pull-a-variable 2026-01-19 15:00:15 +08:00
zhsama
06f6ded20f fix: Fix assemble variables insertion in prompt editor 2026-01-19 14:59:08 +08:00
Joel
3a775fc2bf feat: support choose folders and files 2026-01-19 14:47:57 +08:00
yyh
0d5e971a0c fix(skill): pass root nodeId for blank-area context menu
The previous refactor inadvertently passed undefined nodeId for blank
area menus, causing root-level folder creation/upload to fail. This
restores the original behavior by explicitly passing 'root' when the
context menu type is 'blank'.
2026-01-19 14:23:38 +08:00
yyh
9aed4f830f refactor(skill): merge BlankAreaMenu into NodeMenu
Consolidate menu components by extending NodeMenu to support a 'root'
type, eliminating the redundant BlankAreaMenu component. This reduces
code duplication and simplifies the context menu logic by storing
isFolder in the context menu state instead of re-querying tree data.
2026-01-19 14:22:25 +08:00
yyh
5947e04226 feat: decouple create target from tab selection 2026-01-19 14:09:37 +08:00
yyh
611ff05bde feat: sync tree selection with active tab 2026-01-19 14:05:46 +08:00
yyh
0e890e5692 feat: auto pin created editable files 2026-01-19 13:51:08 +08:00
hjlarry
995d5ccf66 fix graph not sync 2026-01-19 13:45:00 +08:00
yyh
6584dc2480 feat: inline create nodes in skill file tree 2026-01-19 13:43:29 +08:00
yyh
a922e844eb fix(skill): return raw content as fallback for non-JSON file content
When file content is not in JSON format (e.g., newly uploaded files),
return the raw content instead of empty string to ensure files display
correctly.
2026-01-19 12:55:22 +08:00
yyh
4bd05ed96e fix(types): remove unused and misaligned app-asset types
Remove types that don't match backend API:
- AppAssetFileContentResponse (unused, had extra metadata field)
- CreateFilePayload (unused, FormData built manually)
- metadata field from UpdateFileContentPayload
2026-01-19 12:43:44 +08:00
Harry
0de32f682a feat(skill): skill parser & packager 2026-01-19 12:41:01 +08:00
Joel
245567118c chore: struct to wrap with content 2026-01-19 12:19:40 +08:00
yyh
021f055c36 feat(skill-editor): add blank area context menu and align search/add styles
Add right-click context menu for file tree blank area with New File,
New Folder, and Upload Files options. Also align search input and
add button styles to match Figma design specs (24px height, 6px radius).
2026-01-19 11:38:59 +08:00
yyh
5f707c5585 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:53:16 +08:00
yyh
232da66b53 chore: update eslint suppressions 2026-01-19 10:51:53 +08:00
yyh
ebeee92e51 fix(sandbox-provider): align frontend types with backend API after refactor
Remove label, description, and icon fields from SandboxProvider type
as they are no longer returned by the backend API. Use i18n translations
to display provider labels instead of relying on API response data.
2026-01-19 10:50:57 +08:00
yyh
f481947b0d Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:38:36 +08:00
yyh
94ea7031e8 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:31:54 +08:00
hjlarry
0d08f7db97 fix 2026-01-18 18:36:44 +08:00
autofix-ci[bot]
6443366f50 [autofix.ci] apply automated fixes 2026-01-18 10:01:22 +00:00
非法操作
70c41a7dc3 Update api/controllers/console/app/workflow.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-18 17:59:18 +08:00
非法操作
8804623121 Update api/app_factory.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-18 17:59:10 +08:00
hjlarry
1fb6d1286f fix webtest 2026-01-18 17:27:29 +08:00
hjlarry
511df81201 fix web style 2026-01-18 13:40:12 +08:00
yyh
2f081fa6fa refactor(skill-editor): adopt 4-generic StateCreator pattern for type-safe cross-slice access
Use explicit StateCreator<FullStore, [], [], SliceType> pattern instead of
StateCreator<SliceType> for all skill-editor slices. This enables:
- Type-safe cross-slice state access via get()
- Explicit type contracts instead of relying on spread args behavior
- Better maintainability following Lobe-chat's proven pattern

Extract all type definitions to types.ts to avoid circular dependencies.
2026-01-18 13:24:34 +08:00
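The 4-generic pattern above in a toy two-slice store (slice names are illustrative): `StateCreator<FullStore, [], [], Slice>` lets `get()` see the whole store in a type-safe way, without assertions.

```ts
import { create, type StateCreator } from 'zustand'

type TabSlice = {
  activeTabId: string | null
  setActiveTabId: (id: string | null) => void
}

type DirtySlice = {
  dirtyTabIds: Set<string>
  markDirty: (id: string) => void
}

type SkillEditorStore = TabSlice & DirtySlice

const createTabSlice: StateCreator<SkillEditorStore, [], [], TabSlice> = (set, get) => ({
  activeTabId: null,
  setActiveTabId: (id) => {
    set({ activeTabId: id })
    // Cross-slice read without type assertions:
    if (id && get().dirtyTabIds.has(id))
      console.log('switched to a tab with unsaved changes')
  },
})

const createDirtySlice: StateCreator<SkillEditorStore, [], [], DirtySlice> = set => ({
  dirtyTabIds: new Set<string>(),
  markDirty: id => set(state => ({ dirtyTabIds: new Set(state.dirtyTabIds).add(id) })),
})

export const useSkillEditorStore = create<SkillEditorStore>()((...args) => ({
  ...createTabSlice(...args),
  ...createDirtySlice(...args),
}))
```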
yyh
3b27d9e819 refactor(skill-editor): remove type assertions by using spread args pattern
Replace explicit parameter destructuring with spread args pattern to
eliminate `as unknown as` type assertions when composing sub-slices.
This aligns with the pattern used in the main workflow store.
2026-01-18 13:11:06 +08:00
hjlarry
682c93f262 Merge remote-tracking branch 'myori/main' into feat/collaboration2 2026-01-18 10:28:50 +08:00
hjlarry
51c96b0b7e fix CI 2026-01-18 10:12:43 +08:00
hjlarry
224f426765 fix CI 2026-01-18 10:07:46 +08:00
autofix-ci[bot]
e9657cfb48 [autofix.ci] apply automated fixes 2026-01-17 15:00:37 +00:00
hjlarry
4200ac0da3 fix CI 2026-01-17 22:58:27 +08:00
hjlarry
434f7f3bcb fix web style 2026-01-17 22:10:10 +08:00
hjlarry
03cc196965 fix CI 2026-01-17 22:05:14 +08:00
yyh
c0a76220dd fix(skill-editor): resolve React Compiler memoization warnings
Consolidate file type derivations into a single useMemo with stable
dependencies (currentFileNode?.name and currentFileNode?.extension)
to help React Compiler track stability.

Extract originalContent as a separate variable to avoid property access
in useCallback dependencies, which caused Compiler to infer broader
dependencies than specified.
2026-01-17 22:01:33 +08:00
yyh
9d04fb4992 fix(skill-editor): resolve React Compiler memoization warnings
Wrap isEditable in useMemo to help React Compiler track its stability
and preserve memoization for callbacks that depend on it. Also replace
Record<string, any> with Record<string, unknown> to satisfy no-explicit-any.
2026-01-17 21:51:25 +08:00
yyh
02fcf33067 fix(skill-editor): remove unnecessary store subscriptions in tool-picker-block
Move activeTabId and fileMetadata reads from selector subscriptions to
getState() calls inside the callback. These values were only used in the
insertTools callback, not for rendering, causing unnecessary re-renders
when they changed.
2026-01-17 21:47:31 +08:00
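Sketch of the getState()-in-callback pattern described above, with a hypothetical store shape: the component subscribes to nothing, so changes to these values no longer re-render it.

```ts
import { useCallback } from 'react'
import { create } from 'zustand'

type ToolPickerState = {
  activeTabId: string | null
  fileMetadata: Map<string, { tools?: string[] }>
  setFileMetadata: (tabId: string, meta: { tools?: string[] }) => void
}

const useToolPickerStore = create<ToolPickerState>(set => ({
  activeTabId: null,
  fileMetadata: new Map(),
  setFileMetadata: (tabId, meta) =>
    set(state => ({ fileMetadata: new Map(state.fileMetadata).set(tabId, meta) })),
}))

export function useInsertTools() {
  // No selector subscription: activeTabId/fileMetadata are read lazily
  // inside the callback, only when the user actually inserts tools.
  return useCallback((tools: string[]) => {
    const { activeTabId, fileMetadata, setFileMetadata } = useToolPickerStore.getState()
    if (!activeTabId) return
    const current = fileMetadata.get(activeTabId) ?? {}
    setFileMetadata(activeTabId, { ...current, tools })
  }, [])
}
```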
hjlarry
25c88b3f5c fix mypy 2026-01-17 21:41:03 +08:00
hjlarry
2d94904241 fix web unittests 2026-01-17 19:43:40 +08:00
yyh
bbf1247f80 fix(skill-editor): compare content with original to determine dirty state
Previously, any edit would mark the file as dirty even if the content
was restored to its original state. Now we compare against the original
content and clear the dirty flag when they match.
2026-01-17 17:52:00 +08:00
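The comparison itself is small; a sketch of the idea (names are illustrative):

```ts
// Dirty means "differs from what was loaded", not "was ever edited".
export function computeDirty(content: string, originalContent: string): boolean {
  return content !== originalContent
}

// On every editor change, re-derive the flag so undoing back to the original
// text clears it again.
export function handleEditorChange(
  content: string,
  originalContent: string,
  setDirty: (dirty: boolean) => void,
): void {
  setDirty(computeDirty(content, originalContent))
}
```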
yyh
b82b73ef94 refactor(skill-editor): split slice into separate files for better organization
Split the monolithic skill-editor-slice.ts into a dedicated directory with
individual slice files (tab, file-tree, dirty, metadata, file-operations-menu)
to improve maintainability and code organization.
2026-01-17 17:28:25 +08:00
yyh
15d6f60f25 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-17 17:03:32 +08:00
hjlarry
a99e70d96e fix CI 2026-01-17 15:55:27 +08:00
hjlarry
9eeceb2455 fix basedpyright 2026-01-17 15:54:32 +08:00
autofix-ci[bot]
7901e18fa6 [autofix.ci] apply automated fixes 2026-01-17 06:57:16 +00:00
hjlarry
2befef0b21 Merge branch 'feat/collaboration2' of github.com:langgenius/dify into feat/collaboration2 2026-01-17 14:55:22 +08:00
hjlarry
8869cd7008 fix api 2026-01-17 14:55:12 +08:00
hjlarry
91e6ae2a7d fix bug 2026-01-17 14:53:33 +08:00
hjlarry
6ab8e05a5e fix api 2026-01-17 14:47:44 +08:00
hjlarry
717f99a352 fix migration file 2026-01-17 12:54:15 +08:00
hjlarry
735cd78dc2 fix api 2026-01-17 12:45:40 +08:00
autofix-ci[bot]
c820501cbb [autofix.ci] apply automated fixes (attempt 2/3) 2026-01-17 04:29:38 +00:00
autofix-ci[bot]
43ef2395ac [autofix.ci] apply automated fixes 2026-01-17 04:27:34 +00:00
hjlarry
bb3d94f1c5 Merge remote-tracking branch 'myori/main' into feat/collaboration2 2026-01-17 12:24:37 +08:00
hjlarry
c45fbb6491 rm workflow.ts 2026-01-17 10:26:12 +08:00
hjlarry
fc291e4ca2 Merge remote-tracking branch 'myori/main' into feat/collaboration2 2026-01-17 10:22:41 +08:00
yyh
ad8c5f5452 perf: lazy load SkillMain component using next/dynamic
Reduce initial bundle size by dynamically importing SkillMain
component. This prevents loading the entire Skill module (including
Monaco and Lexical editors) when users only access the Graph view.
2026-01-16 21:31:56 +08:00
Harry
721d82b91a refactor(sandbox): modify sandbox provider configuration by adding 'configure_type' column and updating unique constraints 2026-01-16 19:02:16 +08:00
zhsama
0c62c39a1d Merge branch 'zhsama/assemble-var-input' into feat/pull-a-variable 2026-01-16 18:54:53 +08:00
zhsama
8d643e4b85 feat: add assemble variables icon 2026-01-16 18:45:28 +08:00
Joel
d542a74733 feat: panel ui 2026-01-16 18:39:13 +08:00
Harry
16078a9df6 refactor(sandbox): update DifyCliLocator path resolution and enhance sandbox provider configuration logic
2026-01-16 18:37:43 +08:00
Harry
0bd17c6d0f refactor(sandbox): sandbox provider system default configuration 2026-01-16 18:22:44 +08:00
zhsama
77401e6f5c feat: optimize variable picker styling and optimize agent nodes 2026-01-16 18:21:43 +08:00
Joel
8b42435f7a feat: support set default value when choose tool 2026-01-16 18:16:01 +08:00
Joel
3147e850be fix: click tool not show current 2026-01-16 17:52:40 +08:00
Joel
0b33381efb feat: support save settings 2026-01-16 17:44:40 +08:00
yyh
ee7a9a34e0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 17:25:19 +08:00
Joel
148f92f92d fix: allow all fields and not allow model set to auto 2026-01-16 17:20:11 +08:00
Novice
4ee49552ce feat: add prompt variable message 2026-01-16 17:10:18 +08:00
zhsama
40caaaab23 Merge branch 'zhsama/assemble-var-input' into feat/pull-a-variable 2026-01-16 17:04:18 +08:00
zhsama
1bc1c04be5 feat: add assemble variables entry 2026-01-16 17:03:22 +08:00
Novice
18abc66585 feat: add context file support 2026-01-16 17:01:44 +08:00
Joel
f79df6982d feat: support setting show on click 2026-01-16 16:58:58 +08:00
zhsama
e85e31773a Merge branch 'zhsama/llm-warning-ui' into feat/pull-a-variable 2026-01-16 16:22:07 +08:00
zhsama
e5336a2d75 Use warning token borders for mentions 2026-01-16 15:09:42 +08:00
Joel
649283df09 fix: not popup and use new setting 2026-01-16 15:09:25 +08:00
zhsama
7222a896d8 Align warning styles for agent mentions 2026-01-16 15:01:11 +08:00
zhsama
b5712bf8b0 Merge branch 'zhsama/agent-at-nodes' into feat/pull-a-variable 2026-01-16 14:47:37 +08:00
yyh
06b6625c01 feat(skill): implement file tree search with debounced filtering
Add search functionality to skill sidebar using react-arborist's built-in
searchTerm and searchMatch props. Search input is debounced at 300ms and
filters tree nodes by name (case-insensitive). Also add success toast for
rename operations.
2026-01-16 14:44:44 +08:00
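A sketch of the debounced tree search: `searchTerm`/`searchMatch` are the react-arborist props named above, while the debounce hook and data shape are assumptions.

```tsx
import React, { useEffect, useState } from 'react'
import { Tree } from 'react-arborist'

type FileNode = { id: string; name: string; children?: FileNode[] }

function useDebouncedValue<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value)
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs)
    return () => clearTimeout(timer)
  }, [value, delayMs])
  return debounced
}

export function FileTreeWithSearch({ data }: { data: FileNode[] }) {
  const [searchTerm, setSearchTerm] = useState('')
  const debouncedTerm = useDebouncedValue(searchTerm, 300)

  return (
    <div>
      <input
        value={searchTerm}
        onChange={e => setSearchTerm(e.target.value)}
        placeholder="Search files"
      />
      <Tree
        data={data}
        searchTerm={debouncedTerm}
        searchMatch={(node, term) => node.data.name.toLowerCase().includes(term.toLowerCase())}
      >
        {({ node, style, dragHandle }) => (
          <div style={style} ref={dragHandle}>{node.data.name}</div>
        )}
      </Tree>
    </div>
  )
}
```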
zhsama
7bc2e33e83 Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-16 14:43:31 +08:00
Joel
eb4f57fb8b chore: split tool config 2026-01-16 14:39:33 +08:00
yyh
0f5d3f38da refactor(skill): use node.parent chain for ancestor traversal
Replace getAncestorIds(treeData) with node.parent chain traversal
for more efficient ancestor lookup. This avoids re-traversing the
tree data structure and uses react-arborist's built-in parent refs.

Also rename hook to useSyncTreeWithActiveTab for clarity.
2026-01-16 14:27:21 +08:00
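A minimal sketch of the parent-chain traversal this refactor describes; `TreeNodeLike` is an assumed shape standing in for react-arborist's node API, and the real hook in the repo is named useSyncTreeWithActiveTab:

```ts
// Walk upward via parent refs instead of re-traversing the whole tree data.
type TreeNodeLike = { id: string; parent: TreeNodeLike | null; isRoot: boolean }

function getAncestorIds(node: TreeNodeLike): string[] {
  const ids: string[] = []
  let current = node.parent
  while (current && !current.isRoot) {
    ids.push(current.id)
    current = current.parent
  }
  return ids
}
```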
yyh
76da178cc1 refactor(skill): extract tree node handlers into reusable hooks
Extract complex event handling and side effects from file tree components
into dedicated hooks for better separation of concerns and reusability.
2026-01-16 14:15:21 +08:00
yyh
38a2d2fe68 fix(skill): isolate more button click from tree node click handling
Use split button pattern to separate main content area from more button.
This prevents click events on the more button from bubbling up to the
parent element's click/double-click handlers, which caused unintended
file opening when clicking the menu button multiple times.
2026-01-16 14:07:07 +08:00
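A sketch of the split-button idea described above: the "more" trigger is rendered as a sibling of the main row button, so its clicks never reach the row's click/double-click handlers. Component and prop names are illustrative, not the repo's actual API:

```tsx
function TreeRow({ name, onOpen, onOpenMenu }: {
  name: string
  onOpen: () => void
  onOpenMenu: () => void
}) {
  return (
    <div className="flex items-center">
      {/* Main content area: opens the file on click / double-click. */}
      <button type="button" className="grow text-left" onClick={onOpen} onDoubleClick={onOpen}>
        {name}
      </button>
      {/* Separate "more" button: never triggers the row's open handlers. */}
      <button
        type="button"
        onClick={(e) => {
          e.stopPropagation() // extra safety against bubbling into the row
          onOpenMenu()
        }}
      >
        …
      </button>
    </div>
  )
}
```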
yyh
9397ba5bd2 refactor: move skill store to workflow/store/ 2026-01-16 13:51:50 +08:00
yyh
7093962f30 refactor(skill): move skill editor slice to core workflow store
Move SkillEditorSlice from injection pattern to core workflow store,
making it available to all workflow contexts (workflow-app, chatflow,
and future rag-pipeline).

- Add createSkillEditorSlice to core createWorkflowStore
- Remove complex type conversion logic from workflow-app/index.tsx
- Remove optional chaining (?.) and non-null assertions (!) from components
- Simplify slice composition with type assertions via unknown
2026-01-16 13:51:50 +08:00
yyh
7022e4b9ca fix(skill): add key prop to editors to fix content sync on tab switch
Lexical editor only uses initialConfig.editorState on mount, ignoring
subsequent value prop changes when the component is reused by React.
Adding key={activeTabId} forces React to remount editors when switching
tabs, ensuring correct content is displayed.
2026-01-16 13:51:50 +08:00
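A minimal sketch of the fix, assuming a Lexical-based DocEditor component at an illustrative import path: keying the editor by the active tab id forces React to remount it, so `initialConfig.editorState` is re-read for the newly selected file.

```tsx
import DocEditor from './doc-editor' // placeholder path, not the repo's actual import

function EditorPane({ activeTabId, content }: { activeTabId: string; content: string }) {
  // Remount on tab switch so the editor picks up the new file's content.
  return <DocEditor key={activeTabId} value={content} />
}
```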
yyh
b8d67a42bd refactor(skill): migrate skill editor store to workflow store slice injection
Refactor the skill editor state management from a standalone Zustand store
with Context provider pattern to a slice injection pattern that integrates
with the existing workflow store. This aligns with how rag-pipeline already
injects its slice.

- Remove SkillEditorProvider and SkillEditorContext
- Export createSkillEditorSlice for injection into workflow store
- Update all components to use useStore/useWorkflowStore from workflow store
- Add SkillEditorSliceShape to SliceFromInjection union type
- Use type-safe slice creator args without any types
2026-01-16 13:51:49 +08:00
yyh
106cb8e373 refactor(skill): unify node menu components with cva variants
Merge file-node-menu.tsx and folder-node-menu.tsx into a single
declarative NodeMenu component that uses type prop to determine
menu items. Add cva-based variant support to MenuItem for consistent
destructive styling.
2026-01-16 13:51:49 +08:00
Joel
9492eda5ef chore: fix tool format and render problem 2026-01-16 13:50:20 +08:00
Novice
a7826d9ea4 feat: agent add context 2026-01-16 11:47:55 +08:00
Joel
64ddcc8960 chore: fix choose provider id 2026-01-16 11:31:03 +08:00
yyh
c7bca6a3fb fix(skill): restore auto-pin on edit behavior (VS Code style) 2026-01-16 11:26:13 +08:00
yyh
f1ce933b33 fix(skill): address code review issues for tab management
1. Add confirmation dialog when closing dirty tabs
2. Fix file double-click race condition with useDelayedClick hook
3. Fix previewTabId orphan state in closeTab
4. Remove auto-pin on every keystroke (VS Code behavior)
5. Extract shared MenuItem component to eliminate duplication
6. Make nodeId optional when node is provided (reduce props drilling)
2026-01-16 11:20:49 +08:00
yyh
17990512ce fix(skill): add throttle to folder toggle and validate pinTab
- Use es-toolkit throttle with leading edge to prevent folder toggle
  flickering on double-click (3 toggles reduced to 1)
- Add validation in pinTab to check if file exists in openTabIds
2026-01-16 11:20:49 +08:00
yyh
a30fb5909b feat(skill): implement VS Code-style preview/pinned tab management
- Single-click file in tree opens in preview mode (temporary, replaceable)
- Double-click file opens in pinned mode (permanent)
- Preview tabs display with italic filename
- Editing content auto-converts preview tab to pinned
- Double-clicking preview tab header converts to pinned
- Only one preview tab can exist at a time
2026-01-16 11:20:49 +08:00
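A rough sketch of the preview-tab invariant listed above, in plain TypeScript: at most one preview tab exists, and opening another file in preview mode replaces it. Field and function names here are assumptions, not the repo's actual store shape:

```ts
type TabState = {
  openTabIds: string[]
  previewTabId: string | null
}

function openPreview(state: TabState, fileId: string): TabState {
  if (state.openTabIds.includes(fileId))
    return state // already open (pinned or preview): nothing to change here
  const openTabIds = state.previewTabId
    ? state.openTabIds.map(id => (id === state.previewTabId ? fileId : id)) // replace old preview
    : [...state.openTabIds, fileId]
  return { openTabIds, previewTabId: fileId }
}

function pinTab(state: TabState, fileId: string): TabState {
  if (!state.openTabIds.includes(fileId))
    return state // validate: only pin tabs that are actually open
  return { ...state, previewTabId: state.previewTabId === fileId ? null : state.previewTabId }
}
```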
Joel
3dea5adf5c fix: problem caused by the previous change 2026-01-16 11:00:56 +08:00
yyh
5aca563a01 fix: migrations 2026-01-16 10:26:53 +08:00
yyh
bf1ebcdf8f Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 10:05:12 +08:00
yyh
3252748345 feat(skill): add oRPC contract and hook for file download URL
Add frontend oRPC integration for the existing backend download URL
endpoint to enable file downloads from the asset tree.
2026-01-16 09:55:17 +08:00
zhsama
72eb29c01b fix: fix duplicate agent context warnings in tool node 2026-01-16 00:42:42 +08:00
zhsama
0f3156dfbe fix: list multiple @mentions 2026-01-16 00:19:28 +08:00
zhsama
b21875eaaf fix: simplify @llm warning 2026-01-16 00:08:51 +08:00
zhsama
2591615a3c Merge branch 'zhsama/agent-at-nodes' into feat/pull-a-variable 2026-01-15 23:51:35 +08:00
zhsama
691554ad1c feat: display @agent references 2026-01-15 23:32:14 +08:00
zhsama
f43fde5797 feat: Enhance context variable handling for Agent and LLM nodes 2026-01-15 23:26:19 +08:00
yyh
783cdb1357 feat(skill): add inline rename and guide lines to file tree
Add TreeEditInput component for inline file/folder renaming with keyboard
support (Enter to submit, Escape to cancel). Add TreeGuideLines component
to render vertical indent lines based on node depth for better visual
hierarchy in the tree view.

Reorganize file tree components into dedicated `file-tree` subdirectory
for better code organization.
2026-01-15 21:30:02 +08:00
yyh
2de17cb1a4 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 20:47:34 +08:00
yyh
3b6946d3da refactor(skill): centralize asset tree data fetching with custom hooks
Extract repeated appId retrieval and tree data fetching patterns into
dedicated hooks (useSkillAssetTreeData, useSkillAssetNodeMap) to reduce
code duplication across 6 components and leverage TanStack Query's
select option for efficient nodeMap computation.
2026-01-15 19:45:33 +08:00
yyh
b8adc8f498 fix(web): memoize skill sidebar menu offset 2026-01-15 19:45:32 +08:00
yyh
ca7c4d2c86 fix(skill): improve accessibility for file tree and tabs
- Convert div with onClick to proper button elements for keyboard access
- Add focus-visible ring styles to all interactive elements
- Add ARIA attributes (role, aria-selected, aria-expanded) to tree nodes
- Add keyboard navigation (Enter/Space) support to tree items
- Mark decorative icons with aria-hidden="true"
- Add missing i18n keys for accessibility labels
- Fix typography: use ellipsis character (…) instead of three dots
2026-01-15 19:45:32 +08:00
Harry
d8bafb0d1c refactor(app-asset): remove deprecated file download resource and streamline download URL handling with pre-signed storage 2026-01-15 19:28:15 +08:00
Harry
cd0724b827 refactor(app-asset-service): remove unused signed proxy URL generation and improve error handling for download URL 2026-01-15 19:28:15 +08:00
yyh
6e66e2591b feat(skill): disable file tree during mutations
- Add useIsMutating hook to track ongoing mutations
- Apply pointer-events-none and opacity-50 when mutating
- Prevents user interaction during file operations
2026-01-15 18:14:10 +08:00
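A small sketch of gating the tree during file operations with TanStack Query's `useIsMutating`; the wrapper component and class names are illustrative:

```tsx
import type { ReactNode } from 'react'
import { useIsMutating } from '@tanstack/react-query'

function FileTreeContainer({ children }: { children: ReactNode }) {
  // Non-zero while any mutation (create/rename/delete/upload) is in flight.
  const isMutating = useIsMutating() > 0
  return (
    <div className={isMutating ? 'pointer-events-none opacity-50' : ''}>
      {children}
    </div>
  )
}
```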
yyh
fd0556909f fix(skill): default folders to collapsed state on load
- Add openByDefault={false} to Tree component
- react-arborist defaults openByDefault to true, causing all folders
  to be expanded on page refresh
2026-01-15 18:05:42 +08:00
yyh
ac2120da1e refactor(skill): separate DropTip from tree container
- Move DropTip component outside the tree flex container
- Use Fragment to group tree container, DropTip and context menu
- DropTip is now an independent fixed element at the bottom
2026-01-15 18:05:42 +08:00
yyh
f3904a7e39 fix(skill): use dynamic height for file tree to fix scroll issues
- Replace fixed height={1000} with dynamic containerSize.height
- Use useSize hook from ahooks to observe container dimensions
- Fallback to 400px default height for initial render
- Fixes scroll issues when collapsing folders
2026-01-15 18:05:42 +08:00
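A sketch of sizing react-arborist's virtualized Tree from its container with ahooks' `useSize`, including the 400px fallback and the `openByDefault={false}` default from the commit above; the data shape is an assumption:

```tsx
import { useRef } from 'react'
import { useSize } from 'ahooks'
import { Tree } from 'react-arborist'

function SizedTree({ data }: { data: { id: string; name: string }[] }) {
  const containerRef = useRef<HTMLDivElement>(null)
  const containerSize = useSize(containerRef) // observes the container's dimensions

  return (
    <div ref={containerRef} className="h-full">
      <Tree
        data={data}
        height={containerSize?.height ?? 400} // fallback for the initial render
        openByDefault={false}                 // folders collapsed on load
      />
    </div>
  )
}
```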
yyh
b3923ec3ca fix: translations 2026-01-15 18:05:41 +08:00
Joel
9ffdad6465 fix: clicking inside the tool caused blur 2026-01-15 17:58:38 +08:00
zhsama
f247ebfbe1 feat: Await sub-graph save before syncing workflow draft 2026-01-15 17:53:28 +08:00
yyh
713e040481 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 17:26:58 +08:00
yyh
f58f36fc8f feat(skill): add file right-click/more menu and refactor naming
- Add right-click context menu and '...' more button for files
  - Files now support Rename and Delete operations
  - Created file-node-menu.tsx for file-specific menu

- Refactor component naming for consistency
  - file-item-menu.tsx -> file-node-menu.tsx (unify 'node' terminology)
  - file-operations-menu.tsx -> folder-node-menu.tsx (clarify folder menu)
  - file-tree-context-menu.tsx -> tree-context-menu.tsx (simplify)
  - file-tree-node.tsx -> tree-node.tsx (simplify)
  - files.tsx -> file-tree.tsx (more descriptive)
  - Renamed internal components: FileTreeNode -> TreeNode, Files -> FileTree

- Add context menu node highlight
  - When right-clicking a node, it now shows hover highlight
  - Subscribed to contextMenu.nodeId in TreeNode component
2026-01-15 17:26:12 +08:00
Joel
195cd2c898 chore: show line numbers to skill editor 2026-01-15 17:21:12 +08:00
Harry
6bb09dc58c feat(app-assets): add file download functionality with pre-signed URLs and enhance asset management
2026-01-15 17:20:10 +08:00
Harry
33f3374ea6 refactor(sandbox): simplify sandbox_layer by removing ArchiveSandboxStorage and updating event handling 2026-01-15 17:20:10 +08:00
Harry
41baaca21d feat(sandbox): integrate ArchiveSandboxStorage into AdvancedChat and Workflow app generators 2026-01-15 17:20:10 +08:00
Joel
d650cde323 feat: skill editor choose tool 2026-01-15 17:16:01 +08:00
zhsama
d641c845dd feat: Pass workflow draft sync callback to sub-graph 2026-01-15 17:12:30 +08:00
yyh
e651c6cacf fix: css 2026-01-15 16:45:40 +08:00
zhsama
2e10d67610 perf: Replace topOffset prop with withHeader in Panel component 2026-01-15 16:44:15 +08:00
yyh
eab395f58a refactor: sync file tree open state 2026-01-15 16:39:22 +08:00
yyh
2f92957e15 fix: css 2026-01-15 16:14:51 +08:00
zhsama
e89d4e14ea Merge branch 'main' into feat/pull-a-variable 2026-01-15 16:14:15 +08:00
zhsama
5525f63032 refactor: sub-graph panel use shared Panel component 2026-01-15 16:12:39 +08:00
yyh
7bc1390366 feat(skill-editor): enhance + button with full operations and smart target folder
- Refactor sidebar-search-add to reuse useFileOperations hook
- Add getTargetFolderIdFromSelection utility for smart folder targeting
- Expand + button menu: New File, New Folder, Upload File, Upload Folder
- Target folder based on selection: file's parent, folder itself, or root
2026-01-15 16:10:01 +08:00
Joel
e91fb94d0e chore: placeholder 2026-01-15 16:08:26 +08:00
yyh
5c03a2e251 refactor(skill-editor): extract hooks and utils into separate directories
- Extract useFileOperations hook to hooks/use-file-operations.ts
- Move tree utilities to utils/tree-utils.ts
- Move file utilities to utils/file-utils.ts (renamed from utils.ts)
- Remove unnecessary JSDoc comments throughout components
- Simplify type.ts to only contain local type definitions
- Clean up store/index.ts by removing verbose comments
2026-01-15 16:00:42 +08:00
yyh
1741fcf84d feat(skill-editor): add rename and delete operations for folder context menu
- Add Rename using react-arborist native inline editing (node.edit())
- Add Delete with Confirm modal and automatic tab cleanup
- Add getAllDescendantFileIds utility for finding files to close on delete
- Add i18n strings for rename/delete operations (en-US, zh-Hans)
2026-01-15 16:00:41 +08:00
yyh
52215e9166 fix(prompt-editor): show border on hover for better scroll boundary visibility
Add hover state border to prompt editor so users can see the boundary
while scrolling even when the editor is not focused.
2026-01-15 16:00:41 +08:00
Joel
4cfc135652 feat: prompt editor support line num 2026-01-15 15:56:49 +08:00
zhsama
8ee643e88d fix: fix variable inspect panel width in subgraphs 2026-01-15 15:55:55 +08:00
yyh
ff632bf9b8 feat(workflow): persist view tab state to URL search params
Use nuqs to sync graph/skill view selection to URL, enabling
shareable links and browser history navigation. Hoists
SkillEditorProvider to maintain state across view switches.
2026-01-15 15:09:36 +08:00
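A minimal sketch of persisting the Graph/Skill view selection to the URL with nuqs; the `view` param name and `graph` default are assumptions:

```tsx
import { parseAsString, useQueryState } from 'nuqs'

function useWorkflowViewTab() {
  // Syncs the selected view to ?view=… so links are shareable and history works.
  const [view, setView] = useQueryState('view', parseAsString.withDefault('graph'))
  return { view, setView } // e.g. setView('skill') navigates to ?view=skill
}
```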
yyh
ce9ed88b03 refactor(skill-editor): hoist SkillEditorProvider for state persistence
Move SkillEditorProvider from SkillMain to WorkflowAppWrapper so that
store state persists across view switches between Graph and Skill views.
Also add URL query state for view type using nuqs.
2026-01-15 15:09:12 +08:00
yyh
e6a4a08120 refactor(skill-editor): simplify code by extracting MenuItem component and removing dead code
- Extract reusable MenuItem component for menu buttons in FileOperationsMenu
- Remove unused handleUploadFileClick/handleUploadFolderClick callbacks
- Remove unused handleDropdownClose callback, inline directly
- Remove unused _fileId parameter from revealFile function
- Simplify toOpensObject using Object.fromEntries
2026-01-15 15:05:43 +08:00
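A sketch of the `Object.fromEntries` simplification mentioned above: building the tree's "opens" map (`{ [nodeId]: true }`) from a list of expanded ids. `toOpensObject` is the name used in the commit; the input shape is assumed.

```ts
function toOpensObject(expandedIds: string[]): Record<string, boolean> {
  return Object.fromEntries(expandedIds.map(id => [id, true]))
}
```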
yyh
388ee087c0 feat(skill-editor): add folder context menu with file operations
Add right-click context menu and "..." dropdown button for folders in
the file tree, enabling file operations within any folder:

- New File: Create empty file via Blob upload
- New Folder: Create subfolder
- Upload File: Upload multiple files to folder
- Upload Folder: Upload entire folder structure preserving hierarchy

Implementation includes:
- FileOperationsMenu: Shared menu component for both triggers
- FileTreeContextMenu: Right-click menu with absolute positioning
- FileTreeNode: Added context menu and dropdown button for folders
- Store slice for context menu state management
- i18n strings for en-US and zh-Hans
2026-01-15 14:56:31 +08:00
Joel
2fb8883918 feat: split different filetypes 2026-01-15 14:53:00 +08:00
yyh
28ccd42a1c refactor(skill-editor): simplify SkillEditorProvider
Remove verbose comments and appId reset logic since parent component
remounts on appId change. Consolidate imports and use function declaration.
2026-01-15 14:10:41 +08:00
yyh
fcd814a2c3 refactor(skill-editor): simplify state management and remove dead code
- Replace useRef pattern with useMemo for store creation in context.tsx
- Remove unused extension prop from EditorTabItem
- Fix useMemo dependency warnings in editor-tabs.tsx and skill-doc-editor.tsx
- Add proper OnMount type for Monaco editor instead of any
- Delete unused file-item.tsx and fold-item.tsx components
- Remove unused getExtension and fromOpensObject utilities from type.ts
- Refactor auto-reveal effect in files.tsx for better readability
2026-01-15 14:02:15 +08:00
yyh
fe17cbc1a8 feat(skill-editor): implement file tree, tab management, and dirty state tracking
Implement MVP features for skill editor based on design doc:
- Add Zustand store with Tab, FileTree, and Dirty slices
- Rewrite file tree using react-arborist for virtual scrolling
- Implement Tab↔FileTree sync with auto-reveal on tab activation
- Add upload functionality (new folder, upload file)
- Implement Monaco editor with dirty state tracking and Ctrl+S save
- Add i18n translations (en-US and zh-Hans)
2026-01-15 13:53:19 +08:00
Harry
63b3e71909 refactor(sandbox): redesign sandbox_layer & reorganize import paths 2026-01-15 13:22:49 +08:00
hjlarry
b549d669d6 clear logic 2026-01-15 13:17:14 +08:00
hjlarry
802b38eede fix 2026-01-15 13:16:35 +08:00
Harry
c1c8b6af44 chore: remove duplicate secret field in CliApiSession 2026-01-15 12:10:53 +08:00
hjlarry
4b57e7bd53 fix 2026-01-15 11:42:34 +08:00
Joel
3bd434ddf2 chore: ui enhance 2026-01-15 11:35:48 +08:00
Joel
834a5df580 fix: switch zindex 2026-01-15 11:31:08 +08:00
Joel
e40c2354d5 chore: remove useless props 2026-01-15 11:24:59 +08:00
Joel
b0eca12d88 feat: tabs 2026-01-15 11:22:43 +08:00
yyh
3a86983207 refactor(web): nest sandbox provider contracts 2026-01-15 11:04:43 +08:00
Joel
f461ddeb7e missing files 2026-01-15 11:04:15 +08:00
Joel
7b534baf15 chore: file type utils 2026-01-15 11:02:07 +08:00
Joel
74d8bdd3a7 chore: search ui 2026-01-15 11:02:07 +08:00
yyh
657739d48b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/models/model.py
#	web/contract/router.ts
2026-01-15 10:59:45 +08:00
yyh
f8b27dd662 fix(web): accept 2xx status codes in upload function for HTTP semantics
The upload helper was hardcoded to only accept HTTP 201, which broke
PUT requests that return 200. This aligns with standard HTTP semantics
where POST returns 201 Created and PUT returns 200 OK.
2026-01-15 10:54:42 +08:00
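A sketch of the status check described above: treat any 2xx response as success instead of hardcoding 201, so PUT (200 OK) and POST (201 Created) both pass. The XMLHttpRequest-style usage is an assumption about the upload helper's shape.

```ts
function isUploadSuccess(status: number): boolean {
  return status >= 200 && status < 300
}

// Illustrative usage inside an onload handler:
// xhr.onload = () =>
//   isUploadSuccess(xhr.status) ? resolve(xhr.response) : reject(new Error(`HTTP ${xhr.status}`))
```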
yyh
18c7f4698a feat(web): add oRPC contracts and service hooks for app asset API
- Add TypeScript types for app asset management (types/app-asset.ts)
- Add oRPC contract definitions with nested router pattern (contract/console/app-asset.ts)
- Add React Query hooks for all asset operations (service/use-app-asset.ts)
- Integrate app asset contracts into console router

Endpoints covered: tree, createFolder, createFile, getFileContent,
updateFileContent, deleteNode, renameNode, moveNode, reorderNode, publish
2026-01-15 09:50:05 +08:00
zhsama
ccb337e8eb fix: Sync extractor prompt template with tool input text 2026-01-15 04:09:35 +08:00
zhsama
1ff677c300 refactor: Remove unused sub-graph persistence and initialization hooks.
Simplified sub-graph store by removing unused state fields and setters.
2026-01-15 04:08:42 +08:00
zhsama
04145b19a1 refactor: refactor prompt template processing logic 2026-01-15 01:14:46 +08:00
Harry
6cb8d03bf6 feat(sandbox): enhance SandboxLayer with app_id handling and storage integration
- Introduce _app_id attribute to store application ID from system variables
- Add _get_app_id method to retrieve and validate app_id
- Update on_graph_start to log app_id during sandbox initialization
- Integrate ArchiveSandboxStorage for persisting and restoring sandbox files
- Ensure proper error handling for sandbox file operations
2026-01-15 00:28:41 +08:00
Harry
94ff904a04 feat(sandbox): add AppAssetsInitializer and refactor VMFactory to VMBuilder
- Add AppAssetsInitializer to load published app assets into sandbox
- Refactor VMFactory.create() to VMBuilder with builder pattern
- Extract SandboxInitializer base class and DifyCliInitializer
- Simplify SandboxLayer constructor (remove options/environments params)
- Fix circular import in sandbox module by removing eager SandboxBashTool export
- Update SandboxProviderService to return VMBuilder instead of VirtualEnvironment
2026-01-15 00:13:52 +08:00
Harry
a0c388f283 refactor(sandbox): extract connection helpers and move run_command to helper module
- Add helpers.py with connection management utilities:
    - with_connection: context manager for connection lifecycle
    - submit_command: execute command and return CommandFuture
    - execute: run command with auto connection, raise on failure
    - try_execute: run command with auto connection, return result

  - Add CommandExecutionError to exec.py for typed error handling
    with access to exit_code, stderr, and full result

  - Remove run_command method from VirtualEnvironment base class
    (now available as submit_command helper)

  - Update all call sites to use new helper functions:
    - sandbox/session.py
    - sandbox/storage/archive_storage.py
    - sandbox/bash/bash_tool.py
    - workflow/nodes/command/node.py

  - Add comprehensive unit tests for helpers with connection reuse
2026-01-15 00:13:52 +08:00
zhsama
56e537786f feat: Update LLM context selector styling 2026-01-14 23:30:12 +08:00
zhsama
810f9eaaad feat: Enhance sub-graph components with context handling and variable management 2026-01-14 23:23:09 +08:00
yyh
31427e9c42 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 21:15:23 +08:00
yyh
384b99435b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/.env.example
#	api/uv.lock
2026-01-14 21:14:36 +08:00
Harry
425d182f21 refactor: move app_asset_tree module and update imports in app_asset and app_asset_service 2026-01-14 20:31:40 +08:00
Harry
4394ba1fe1 feat(skill): implement app asset management features including folder and file operations, error handling, and database migration for app asset drafts
2026-01-14 20:25:17 +08:00
zhsama
4828348532 feat: Add structured output to sub-graph LLM nodes 2026-01-14 17:25:06 +08:00
Joel
be5a4cf5e3 temp fix: tab change caused the nodes to be emptied 2026-01-14 17:20:40 +08:00
yyh
d17a92f713 refactor(web): split sandbox provider contracts into separate file
Move sandbox provider related contracts from contract/console.ts
to contract/console/sandbox-provider.ts for better organization
2026-01-14 16:46:04 +08:00
hjlarry
bfedee0532 fix 2026-01-14 16:40:52 +08:00
Harry
5ac2230c5d feat: sandbox storage 2026-01-14 16:31:24 +08:00
Joel
ab531d946e feat: add main skill struct 2026-01-14 16:28:14 +08:00
Joel
1a8fd08563 chore: add list define and mock data 2026-01-14 16:28:14 +08:00
yyh
c6ddf89980 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:24:47 +08:00
yyh
71c39ae583 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:23:57 +08:00
yyh
7209ef4aa7 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:16:28 +08:00
Joel
6b55e6781f feat: graph skill main struct 2026-01-14 15:41:02 +08:00
zhsama
c8c048c3a3 perf: Optimize sub-graph store selectors and layout 2026-01-14 15:39:21 +08:00
yyh
4887c9ea6f refactor(web): simplify MCP tool availability context and hook
- Add useMemo to prevent unnecessary re-renders of context value
- Extract ProviderProps type for better readability
- Convert arrow functions to standard function declarations
- Remove unused versionSupported/sandboxEnabled from hook return type
2026-01-14 14:15:07 +08:00
Novice
495d575ebc feat: add assemble variable builder api 2026-01-14 14:12:36 +08:00
yyh
18170a1de5 feat(web): add sandbox mode check for MCP tool availability
Extend MCP tool availability context to include sandbox mode check
alongside version support. MCP tools are now blocked when sandbox
is disabled, with appropriate tooltip messages for each blocking
condition.
2026-01-14 14:01:56 +08:00
yyh
7ce144f493 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 13:40:39 +08:00
yyh
2279b605c6 refactor: import SandboxProvider type from @/types and remove retry:0
Move type imports to @/types/sandbox-provider instead of re-exporting
from service file. Remove unnecessary retry:0 options to use React
Query's default retry behavior.
2026-01-14 10:10:04 +08:00
yyh
3b78f9c2a5 refactor: migrate sandbox-provider API to ORPC
Replace manual fetch calls in use-sandbox-provider.ts with typed ORPC
contracts and client. Adds type definitions to types/sandbox-provider.ts
and registers contracts in the console router for consistent API handling.
2026-01-14 10:07:27 +08:00
yyh
7c029ce808 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/services/workflow_service.py
2026-01-14 09:54:07 +08:00
zhsama
b9052bc244 feat: add sub-graph config panel with variable selection and null handling 2026-01-14 03:22:42 +08:00
zhsama
b7025ad9d6 feat: change sub-graph prompt handling to use user role 2026-01-13 23:23:18 +08:00
zhsama
c5482c2503 Merge branch 'main' into feat/pull-a-variable 2026-01-13 22:57:27 +08:00
zhsama
d394adfaf7 feat: Fix prompt template handling for Jinja2 edition type 2026-01-13 22:57:05 +08:00
zhsama
bc771d9c50 feat: Add onSave prop to SubGraph components for draft sync 2026-01-13 22:51:29 +08:00
zhsama
96ec176b83 feat: sub-graph to use dynamic node generation 2026-01-13 22:28:30 +08:00
hjlarry
1845938e70 fix type issue 2026-01-13 22:18:54 +08:00
hjlarry
fad81ab85e fix type issue 2026-01-13 22:11:36 +08:00
hjlarry
d1c64f5c74 add toast when disconnected 2026-01-13 22:08:59 +08:00
hjlarry
7f6c93bdce reduce CURSOR_THROTTLE_MS 2026-01-13 22:08:07 +08:00
zhsama
f57d2ef31f refactor: refactor workflow nodes state sync and extractor node lifecycle 2026-01-13 18:37:23 +08:00
Harry
f28ded8455 feat(agent-sandbox): new tool resolver and bash execution implementation
2026-01-13 18:16:48 +08:00
hjlarry
7730c88c74 fix leader election concurrently 2026-01-13 18:01:12 +08:00
zhsama
e80bc78780 fix: clear mock llm node functions 2026-01-13 17:57:02 +08:00
hjlarry
ac6b540fd8 CORS config 2026-01-13 17:50:16 +08:00
hjlarry
8c9276370c remove console.log 2026-01-13 17:46:53 +08:00
hjlarry
b91370aff7 fix next config 2026-01-13 17:40:04 +08:00
hjlarry
30424df7ce uuid v7 2026-01-13 17:20:02 +08:00
hjlarry
14f7f4758a fix error display 2026-01-13 17:19:52 +08:00
yyh
c6ba51127f fix(sandbox-provider): allow admin role to manage sandbox providers
Change permission check from isCurrentWorkspaceOwner to
isCurrentWorkspaceManager so both owner and admin roles can
configure sandbox providers.
2026-01-13 17:17:36 +08:00
zhsama
ddbbddbd14 refactor: Update variable syntax to support agent context markers
Extend variable pattern matching to support both `#` and `@` markers,
with `@` specifically used for agent context variables. Update regex
patterns, text processing logic, and add sub-graph persistence for agent
variable handling.
2026-01-13 17:13:45 +08:00
hjlarry
79c19983e0 refactor: fix N+1 query issue in workflow comments 2026-01-13 16:56:54 +08:00
Novice
9b961fb41e feat: structured output support file type 2026-01-13 16:48:01 +08:00
zxhlyh
1db995be0d Merge branch 'main' into feat/llm-support-tools 2026-01-13 16:46:03 +08:00
yyh
5675a44ffd fix(sandbox-provider): use Loading component and add daytona doc link
- Replace hardcoded "Loading..." text with Loading component
- Add daytona documentation link to PROVIDER_DOC_LINKS
2026-01-13 16:37:58 +08:00
hjlarry
aeb3fc6729 add backend logging 2026-01-13 16:25:54 +08:00
yyh
48295e5161 refactor(sandbox-provider): extract shared constants and remove redundant cache invalidation
- Extract PROVIDER_ICONS and PROVIDER_DESCRIPTION_KEYS to constants.ts
- Create shared ProviderIcon component with size and withBorder props
- Remove manual invalidateList() calls from config-modal and switch-modal
  (mutations already invalidate cache in onSuccess)
- Remove unused useInvalidSandboxProviderList hook
2026-01-13 16:18:08 +08:00
Novice
4f79d09d7b chore: change the DSL design 2026-01-13 16:10:18 +08:00
hjlarry
0c18d4e058 fix duplicated status 2026-01-13 15:59:59 +08:00
zhsama
dbed937fc6 Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-13 15:17:24 +08:00
yyh
ffc39b0235 refactor: rename ACCOUNT_SETTING_TAB.PROVIDER to MODEL_PROVIDER
Rename the constant for clarity and consistency with the new
sandbox-provider tab naming convention. Update all references
across the codebase to use the new constant name.
2026-01-13 15:07:04 +08:00
yyh
f72f58dbc4 fix: loading state
2026-01-13 14:38:19 +08:00
yyh
9d0f4a2152 fix(sandbox-provider): prevent permission hint flash on page load
Use strict equality check to only show no-permission message when
isCurrentWorkspaceOwner is explicitly false, not undefined.
2026-01-13 14:23:52 +08:00
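A sketch of the strict-equality guard: while permissions are still loading the flag is `undefined`, so only an explicit `false` should show the no-permission hint. The prop name mirrors the commit message; the component itself is illustrative.

```tsx
function PermissionHint({ isCurrentWorkspaceOwner }: { isCurrentWorkspaceOwner?: boolean }) {
  // Render nothing while the permission flag is undefined (still loading).
  if (isCurrentWorkspaceOwner === false)
    return <p>Only the workspace owner can configure sandbox providers.</p>
  return null
}
```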
yyh
1ed4ab4299 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 14:19:04 +08:00
Novice
969c96b070 feat: add stream response 2026-01-13 14:13:43 +08:00
yyh
3f69d348a1 chore: add translations 2026-01-13 14:05:41 +08:00
yyh
63fff151c7 fix: provider card style 2026-01-13 13:50:28 +08:00
yyh
9920e0b89a fix(sandbox-provider): hide config controls in read-only mode
Hide config button, divider, and enable button for non-owner users.
Adjust right padding to 24px in read-only mode for proper alignment.
2026-01-13 13:32:18 +08:00
yyh
3042f29c15 fix(sandbox-provider): update switch modal warning style to match design
Replace yellow warning box with red text for destructive emphasis.
Bold the provider name in confirmation text using Trans component.
2026-01-13 13:23:03 +08:00
yyh
99273e1118 style: provider card 2026-01-13 13:18:09 +08:00
yyh
041dbd482d fix(sandbox-provider): use i18n for provider card descriptions
Use PROVIDER_DESCRIPTION_KEYS mapping to display localized descriptions
instead of raw backend data, ensuring descriptions match Figma design.
2026-01-13 11:43:49 +08:00
yyh
b4aa1de10a fix(sandbox-provider): update provider descriptions to match Figma design
Update E2B, Daytona, and Docker descriptions with unique copy from design:
- E2B: "E2B Gives AI Agents Secure Computers with Real-World Tools."
- Daytona: "Deploy AI code with confidence using Daytona's lightning-fast infrastructure."
- Docker: "The Easiest Way to Build, Run, and Secure Agents."
2026-01-13 11:41:20 +08:00
yyh
c5a9b98cbe refactor(sandbox-provider): add centralized query keys management
Add sandboxProviderQueryKeys object for type-safe and maintainable
query key management, following the pattern used in use-common.ts.
2026-01-13 11:39:01 +08:00
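A sketch of centralized query keys in the style described above; the exact key values in the repo may differ.

```ts
export const sandboxProviderQueryKeys = {
  all: ['sandbox-provider'] as const,
  list: () => [...sandboxProviderQueryKeys.all, 'list'] as const,
  detail: (provider: string) => [...sandboxProviderQueryKeys.all, 'detail', provider] as const,
}

// e.g. queryClient.invalidateQueries({ queryKey: sandboxProviderQueryKeys.list() })
```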
yyh
21f47fbe58 fix(sandbox-provider): fix config modal header spacing and icon style
- Use custom header with 8px gap between title and subtitle
- Fix icon overflow-clip for proper border-radius
2026-01-13 11:12:51 +08:00
yyh
49f115dce3 fix(sandbox-provider): fix config modal subtitle icon to fill container 2026-01-13 11:11:03 +08:00
yyh
a81d0327d2 feat(sandbox-provider): update UI to match Figma design
- Update settings icon to RiEqualizer2Line
- Add 4px rounded container for provider icons in config modal
- Update section titles to uppercase style
- Change switch modal confirm button to warning variant
- Add i18n keys for setAsActive, readDocLink, securityTip
2026-01-13 11:04:11 +08:00
yyh
9eafe982ee fix: migration 2026-01-13 10:21:38 +08:00
yyh
a46bfdd0fc Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 10:15:59 +08:00
Harry
16f26c4f99 feat(cli_api): implement CLI API for external sandbox interactions, including session management and request handling 2026-01-12 20:57:07 +08:00
zhsama
03e0c4c617 feat: Add VarKindType parameter mention to mixed variable text input 2026-01-12 20:08:41 +08:00
zhsama
47790b49d4 fix: Fix agent context variable insertion to preserve existing text 2026-01-12 18:12:06 +08:00
zhsama
b25b069917 fix: refine agent variable logic 2026-01-12 18:12:06 +08:00
Novice
bb190f9610 feat: add mention type variable 2026-01-12 17:40:37 +08:00
zhsama
d65ae68668 Merge branch 'main' into feat/pull-a-variable
# Conflicts:
#	.nvmrc
2026-01-12 17:15:56 +08:00
zhsama
f625350439 refactor: Refactor agent variable handling in mixed variable text input 2026-01-12 17:05:00 +08:00
zhsama
f4e8f64bf7 refactor: Change sub-graph output handling from skip to default 2026-01-12 17:04:13 +08:00
Harry
42fd0a0a62 refactor(sandbox): simplify command execution by using shlex for command parsing and improve output formatting 2026-01-12 16:35:09 +08:00
Harry
b78439b334 refactor(llm): update model features handling and change agent strategy to FUNCTION_CALLING 2026-01-12 15:52:26 +08:00
Harry
1082d73355 refactor(sandbox): remove unused SANDBOX_WORK_DIR constant and update bash command descriptions for clarity 2026-01-12 15:02:30 +08:00
zhsama
d91087492d Refactor sub-graph components structure 2026-01-12 15:00:41 +08:00
zhsama
cab7cd37b8 feat: Add sub-graph component for workflow 2026-01-12 14:56:53 +08:00
Harry
201a18d6ba refactor(virtual_environment): add cwd parameter to execute_command method across all providers for improved command execution context 2026-01-12 14:20:03 +08:00
Harry
f990f4a8d4 refactor(sandbox): update DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use SANDBOX_WORK_DIR and enhance error handling in SandboxSession 2026-01-12 14:07:54 +08:00
zxhlyh
aa5e37f2db Merge branch 'main' into feat/llm-support-tools 2026-01-12 13:42:58 +08:00
Harry
e7c89b6153 refactor(sandbox): update imports and remove unused bash tool files, adjust DIFY_CLI_CONFIG_PATH 2026-01-12 13:36:19 +08:00
Harry
3e49d6b900 refactor: using initializer to replace hardcoded dify cli initialization 2026-01-12 12:13:56 +08:00
Harry
8aaff7fec1 refactor(sandbox): move VMFactory and related classes, update imports to reflect new structure 2026-01-12 12:01:21 +08:00
Harry
51ac23c9f1 refactor(sandbox): reorganize sandbox-related imports and rename SandboxFactory to VMFactory for clarity 2026-01-12 02:07:31 +08:00
Harry
9dd0361d0e refactor: rename new runtime as sandbox feature 2026-01-12 01:53:39 +08:00
Harry
3d2840edb6 feat: sandbox session and dify cli 2026-01-12 01:49:08 +08:00
Harry
ce0a59b60d feat: add os field to virtual environment 2026-01-12 01:26:55 +08:00
Harry
2d8acf92f0 refactor(sandbox): remove Chinese translation for bash command execution description in SandboxBashTool 2026-01-12 01:16:53 +08:00
Harry
bc2ffa39fc refactor(sandbox): remove unused bash tool methods and streamline sandbox session handling in LLMNode 2026-01-12 00:09:40 +08:00
Harry
390c805ef4 feat(sandbox): implement sandbox runtime checks and integrate bash tool invocation in LLMNode 2026-01-11 22:56:05 +08:00
Harry
5b753dfd6e fix(sandbox): update FIXME comments to specify sandbox context for runtime config checks
2026-01-09 18:12:36 +08:00
Harry
5c8b80b01a feat(app): update default runtime mode and adjust runtime selection component styling 2026-01-09 18:12:36 +08:00
Harry
95d62039b1 feat(ui): change runtime selection component 2026-01-09 18:12:36 +08:00
Harry
78acfb0040 feat(sandbox): add command to setup system-level sandbox provider configuration 2026-01-09 18:12:35 +08:00
Harry
eb821efda7 refactor(encryption): update encryption utility references and clean up sandbox provider service logic 2026-01-09 18:12:35 +08:00
Harry
925825a41b refactor(encryption): using oauth encryption as a general encryption util. 2026-01-09 18:12:34 +08:00
zhsama
f925266c1b Merge branch 'main' into feat/pull-a-variable 2026-01-09 16:20:55 +08:00
zhsama
07ff8df58d Merge branch 'main' into feat/support-agent-sandbox 2026-01-09 16:20:33 +08:00
Harry
0a0f02c0c6 chore(migrations): re-arrange migration of "add llm generation details table" 2026-01-09 15:55:25 +08:00
Harry
d2f41ae9ef Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-09 15:37:29 +08:00
Harry
5a4f5f54a7 chore: apply ruff 2026-01-09 14:47:21 +08:00
Harry
eabfa8f3af fix(migrations): update down_revision for sandbox_providers migration 2026-01-09 14:45:56 +08:00
Novice
1557f48740 Merge branch 'feat/agent-node-v2' into feat/support-agent-sandbox 2026-01-09 14:19:27 +08:00
Harry
00d787a75b feat(workflows): add deployment workflow for agent development
- Created a new GitHub Actions workflow to automate deployment for the agent development branch.
- Configured the workflow to trigger upon successful completion of the "Build and Push API & Web" workflow.
- Implemented SSH deployment steps using appleboy/ssh-action for secure server updates.
2026-01-09 13:11:37 +08:00
Harry
3b454fa95a refactor(sandbox-manager): implement sharded locking for sandbox management
- Enhanced the SandboxManager to use a sharded locking mechanism for improved concurrency and performance.
- Replaced the global lock with shard-specific locks, allowing for lock-free reads and reducing contention.
- Updated methods for registering, retrieving, unregistering, and counting sandboxes to work with the new sharded structure.
- Improved documentation within the class to clarify the purpose and functionality of the sharding approach.
2026-01-09 12:13:41 +08:00
Harry
0da4d64d38 feat(sandbox-layer): refactor sandbox management and integrate with SandboxManager
- Simplified the SandboxLayer initialization by removing unused parameters and consolidating sandbox creation logic.
- Integrated SandboxManager for better lifecycle management of sandboxes during workflow execution.
- Updated error handling to ensure proper initialization and cleanup of sandboxes.
- Enhanced CommandNode to retrieve sandboxes from SandboxManager, improving sandbox availability checks.
- Added unit tests to validate the new sandbox management approach and ensure robust error handling.
2026-01-09 11:23:03 +08:00
zhsama
6e2cf23a73 Merge branch 'main' into feat/pull-a-variable 2026-01-09 02:49:47 +08:00
zhsama
8b0bc6937d feat: enhance component picker and workflow variable block functionality 2026-01-08 18:17:09 +08:00
zhsama
872fd98eda Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-08 18:16:29 +08:00
Novice
5bcd3b6fe6 feat: add mention node executor 2026-01-08 17:36:21 +08:00
zhsama
1aed585a19 feat: enhance agent integration in prompt editor and mixed-variable text input 2026-01-08 17:02:35 +08:00
zhsama
831eba8b1c feat: update agent functionality in mixed-variable text input 2026-01-08 16:59:09 +08:00
Yeuoly
b09a831d15 feat: add tenant_id support to Sandbox and VirtualEnvironment initialization 2026-01-08 16:19:29 +08:00
zxhlyh
4d3d8b35d9 Merge branch 'main' into feat/llm-node-support-tools 2026-01-08 14:28:13 +08:00
zxhlyh
c323028179 feat: llm node support tools 2026-01-08 14:27:37 +08:00
Harry
94dbda503f refactor(llm-panel): update layout and enhance Max Iterations component
- Adjusted padding in the LLM panel for better visual alignment.
- Refactored the Max Iterations component to accept a className prop for flexible styling.
- Maintained the structure of advanced settings while ensuring consistent rendering of fields.
2026-01-08 14:15:58 +08:00
Harry
beefff3d48 feat(docker-demuxer): implement producer-consumer pattern for stream demultiplexing
- Introduced threading to handle Docker's stdout/stderr streams, improving thread safety and preventing race conditions.
- Replaced buffer-based reading with queue-based reading for stdout and stderr.
- Updated read methods to handle errors and end-of-stream conditions more gracefully.
- Enhanced documentation to reflect changes in the demuxing process.
2026-01-08 14:15:41 +08:00
Harry
c2e5081437 feat(llm-panel): collapse panel with advanced settings and max iterations
- Introduced a collapsible section for advanced settings in the LLM panel.
- Added Max Iterations component with conditional rendering based on the new hideMaxIterations prop.
- Updated context field and vision configuration to be part of the advanced settings.
- Added new translation key for advanced settings in the workflow localization file.
2026-01-08 12:16:18 +08:00
Harry
786c3e4137 chore: apply ruff 2026-01-08 11:14:44 +08:00
Harry
0d33714f28 fix(command-node): enhance error message formatting in command execution
- Improved error message handling by assigning the stderr output to a variable for better readability.
- Ensured consistent error reporting when a command fails, maintaining clarity in the output.
2026-01-08 11:14:37 +08:00
Harry
1fbba38436 fix(command-node): improve error reporting in command execution
- Updated error handling to provide detailed stderr output when a command fails.
- Streamlined working directory and command rendering by combining operations into single lines.
2026-01-08 11:14:23 +08:00
Harry
15c3d712d3 feat: sandbox provider configuration 2026-01-08 11:04:12 +08:00
Harry
5b01f544d1 refactor(command-node): streamline command execution and directory checks
- Simplified the command execution logic by removing unnecessary shell invocations.
- Enhanced working directory validation by directly using the `test` command.
- Improved command parsing with `shlex.split` for better handling of raw commands.
2026-01-08 11:04:11 +08:00
zhsama
8b8e521c4e Merge branch 'main' into feat/pull-a-variable 2026-01-07 22:11:05 +08:00
Yeuoly
fe4c591cfd feat(daytona-environment): enhance command management with threading support and default API URL 2026-01-07 18:47:22 +08:00
Yeuoly
0cd613ae52 fix(docker-daemon): update default Docker socket to use Unix socket 2026-01-07 18:35:49 +08:00
Yeuoly
0082f468b4 Refactor code structure for improved readability and maintainability 2026-01-07 18:33:13 +08:00
Novice
eec57e84e4 Merge branch 'main' into feat/agent-node-v2 2026-01-07 17:34:23 +08:00
zxhlyh
70149ea05e Merge branch 'main' into feat/llm-node-support-tools 2026-01-07 16:29:47 +08:00
zxhlyh
1d93f41fcf feat: llm node support tools 2026-01-07 16:28:41 +08:00
Harry
cd0f41a3e0 fix(command-node): improve working directory handling in CommandNode
- Added checks to verify the existence of the specified working directory before executing commands.
- Updated command execution logic to conditionally change the working directory if provided.
- Included FIXME comments to address future enhancements for native cwd support in VirtualEnvironment.run_command.
2026-01-07 15:30:59 +08:00
Harry
094c9fd802 fix: command node single debug run
- Added FIXME comments to indicate the need for unifying runtime config checking in AdvancedChatAppGenerator and WorkflowAppGenerator.
- Introduced sandbox management in WorkflowService with proper error handling for sandbox release.
- Enhanced runtime feature handling in the workflow execution process.
2026-01-07 15:22:12 +08:00
Novice
1584a78fc9 chore: add model name in detail 2026-01-07 15:05:18 +08:00
Novice
88248ad2d3 feat: add node level memory 2026-01-07 13:57:55 +08:00
Harry
1a203031e0 fix(virtual-env): fix Docker stdout/stderr demuxing and exit code parsing
- Add _DockerDemuxer to properly separate stdout/stderr from multiplexed stream
- Fix binary header garbage in Docker exec output (tty=False 8-byte header)
- Fix LocalVirtualEnvironment.get_command_status() to use os.WEXITSTATUS()
- Update tests to use Transport API instead of raw file descriptors
2026-01-07 12:20:07 +08:00
Harry
05c3344554 feat: future interface for easy way to use VM.execute_command 2026-01-07 11:57:00 +08:00
Harry
888be71639 feat: command node output variables 2026-01-07 11:15:52 +08:00
Harry
3902929d9f feat: new runtime options 2026-01-07 00:01:55 +08:00
zhsama
760a739e91 Merge branch 'main' into feat/grouping-branching
# Conflicts:
#	web/package.json
2026-01-06 22:00:01 +08:00
Harry
1c7c475c43 feat: add Command node support
- Introduced Command node type in workflow with associated UI components and translations.
- Enhanced SandboxLayer to manage sandbox attachment for Command nodes during execution.
- Updated various components and constants to integrate Command node functionality across the workflow.
2026-01-06 19:30:38 +08:00
Novice
cef7fd484b chore: add trace metadata and streaming icon 2026-01-06 16:30:33 +08:00
Harry
caabca3f02 feat: sandbox layer for workflow execution 2026-01-06 15:47:20 +08:00
zhsama
d92c476388 feat(workflow): enhance group node availability checks
- Updated `checkMakeGroupAvailability` to include a check for existing group nodes, preventing group creation if a group node is already selected.
- Modified `useMakeGroupAvailability` and `useNodesInteractions` hooks to incorporate the new group node check, ensuring accurate group creation logic.
- Adjusted UI rendering logic in the workflow panel to conditionally display elements based on node type, specifically for group nodes.
2026-01-06 02:07:13 +08:00
Harry
36b7075cf4 Merge feat/llm-node-support-tools and fix type errors
- Merge origin/feat/llm-node-support-tools branch
- Fix unused variable tenant_id in dsl.py
- Add None checks for app and workflow in dsl.py
- Add type ignore for e2b_code_interpreter import

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 18:32:15 +08:00
Harry
f3761c26e9 Merge remote-tracking branch 'origin/main' into feat/llm-node-support-tools 2026-01-05 18:17:05 +08:00
Harry
43daf4f82c refactor: rename construct_environment method to _construct_environment for consistency across virtual environment providers 2026-01-05 18:13:13 +08:00
Harry
932be0ad64 feat: session management for InnerAPI&VM 2026-01-05 18:13:13 +08:00
zhsama
9012dced6a feat(workflow): improve group node interaction handling
- Enhanced `useNodesInteractions` to better manage group node handlers and connections, ensuring accurate identification of leaf nodes and their branches.
- Updated logic to create handlers based on node connections, differentiating between internal and external connections.
- Refined initial node setup to include target branches for group nodes, improving the overall interaction model for grouped elements.
2026-01-05 17:42:31 +08:00
zhsama
50bed78d7a feat(workflow): add group node support and translations
- Introduced GroupDefault node with metadata and default values for group nodes.
- Enhanced useNodeMetaData hook to handle group node author and description using translations.
- Added translations for group node functionality in English, Japanese, Simplified Chinese, and Traditional Chinese.
2026-01-05 16:29:00 +08:00
zhsama
60250355cb feat(workflow): enhance group edge management and validation
- Introduced `createGroupInboundEdges` function to manage edges for group nodes, ensuring proper connections to head nodes.
- Updated edge creation logic to handle group nodes in both inbound and outbound scenarios, including temporary edges.
- Enhanced validation in `useWorkflow` to check connections for group nodes based on their head nodes.
- Refined edge processing in `preprocessNodesAndEdges` to ensure correct handling of source handles for group edges.
2026-01-05 15:48:26 +08:00
zhsama
75afc2dc0e chore: update packageManager version in package.json to pnpm@10.27.0 2026-01-05 14:42:48 +08:00
zhsama
225b13da93 Merge branch 'main' into feat/grouping-branching 2026-01-04 21:56:13 +08:00
zhsama
37c748192d feat(workflow): implement UI-only group functionality
- Added support for UI-only group nodes, including custom-group, custom-group-input, and custom-group-exit-port types.
- Enhanced edge interactions to manage temporary edges connected to groups, ensuring corresponding real edges are deleted when temp edges are removed.
- Updated node interaction hooks to restore hidden edges and remove temp edges efficiently.
- Implemented logic for creating and managing group structures, including entry and exit ports, while maintaining execution graph integrity.
2026-01-04 21:54:15 +08:00
zhsama
b7a2957340 feat(workflow): implement ungroup functionality for group nodes
- Added `handleUngroup`, `getCanUngroup`, and `getSelectedGroupId` methods to manage ungrouping of selected group nodes.
- Integrated ungrouping logic into the `useShortcuts` hook for keyboard shortcut support (Ctrl + Shift + G).
- Updated UI to include ungroup option in the panel operator popup for group nodes.
- Added translations for the ungroup action in multiple languages.
2026-01-04 21:40:34 +08:00
zhsama
a6ce6a249b feat(workflow): refine strokeDasharray logic for temporary edges 2026-01-04 20:59:33 +08:00
zhsama
8834e6e531 feat(workflow): enhance group node functionality with head and leaf node tracking
- Added headNodeIds and leafNodeIds to GroupNodeData to track nodes that receive input and send output outside the group.
- Updated useNodesInteractions hook to include headNodeIds in the group node data.
- Modified isValidConnection logic in useWorkflow to validate connections based on leaf node types for group nodes.
- Enhanced preprocessNodesAndEdges to rebuild temporary edges for group nodes, connecting them to external nodes for visual representation.
2026-01-04 20:45:42 +08:00
zxhlyh
04f40303fd Merge branch 'main' into feat/llm-node-support-tools 2026-01-04 18:04:42 +08:00
zxhlyh
ececc5ec2c feat: llm node support tools 2026-01-04 18:03:47 +08:00
Yeuoly
81547c5981 feat: add tests for QueueTransportReadCloser to handle blocking reads and first chunk returns 2026-01-04 17:58:04 +08:00
Yeuoly
a911b268aa feat: improve read behavior in QueueTransportReadCloser to handle initial data wait and subsequent immediate returns 2026-01-04 17:58:04 +08:00
zhsama
39010fd153 Merge branch 'refs/heads/main' into feat/grouping-branching 2026-01-04 17:25:18 +08:00
Novice
dc8a618b6a feat: add think start end tag 2026-01-04 11:09:43 +08:00
Novice
f3e7fea628 feat: add tool call time 2026-01-04 10:29:02 +08:00
Harry
926349b1f8 feat: transform tool file message for external access 2026-01-02 15:23:16 +08:00
Yeuoly
ec29c24916 feat: enhance QueueTransportReadCloser to handle reading with available data and improve EOF handling 2026-01-02 15:03:17 +08:00
Harry
3842eade67 feat: add API endpoint to fetch list of available tools and corresponding request model 2026-01-02 15:00:42 +08:00
zhsama
bd338a9043 Merge branch 'main' into feat/grouping-branching 2026-01-02 01:34:02 +08:00
Yeuoly
cf7e2d5d75 feat: add unit tests for transport classes including queue, pipe, and socket transports 2026-01-01 18:57:03 +08:00
Yeuoly
2673fe05a5 feat: introduce TransportEOFError for handling closed transport scenarios and update transport classes to raise it 2026-01-01 18:46:08 +08:00
Yeuoly
180fdffab1 feat: update E2BEnvironment options to include default template, list file depth, and API URL 2025-12-31 18:29:22 +08:00
Yeuoly
62e422f75a feat: add NotSupportedOperationError and update E2BEnvironment to raise it for unsupported command status retrieval 2025-12-31 18:09:14 +08:00
Yeuoly
41565e91ed feat: add support for passing environment variables to E2B sandbox 2025-12-31 18:07:43 +08:00
Yeuoly
c9610e9949 feat: implement transport abstractions for virtual environments and add E2B environment provider 2025-12-31 17:51:38 +08:00
Yeuoly
29dc083d8d feat: enhance DockerDaemonEnvironment with options handling and default values 2025-12-31 16:19:47 +08:00
zhsama
39d6383474 Merge branch 'main' into feat/grouping-branching 2025-12-30 22:01:20 +08:00
Yeuoly
f679065d2c feat: extend construct_environment method to accept environments parameter in virtual environment classes 2025-12-30 21:07:16 +08:00
Yeuoly
0a97e87a8e docs: clarify usage of close() method in PipeTransport docstring 2025-12-30 20:58:51 +08:00
Yeuoly
4d81455a83 fix: correct PipeTransport file descriptor assignments and architecture matching case sensitivity 2025-12-30 20:54:39 +08:00
Yeuoly
39091fe4df feat: enhance command execution and status retrieval in virtual environments with transport abstractions 2025-12-30 19:37:30 +08:00
Harry
bac5245cd0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2025-12-30 19:11:29 +08:00
Yeuoly
274f9a3f32 Refactor code structure for improved readability and maintainability 2025-12-30 16:31:34 +08:00
Yeuoly
a513ab9a59 feat: implement DSL prediction API and virtual environment base classes 2025-12-30 15:24:54 +08:00
zxhlyh
e83635ee5a Merge branch 'main' into feat/llm-node-support-tools 2025-12-30 11:47:54 +08:00
zxhlyh
d79372a46d Merge branch 'main' into feat/llm-node-support-tools 2025-12-30 11:47:26 +08:00
zxhlyh
bbd11c9e89 feat: llm node support tools 2025-12-30 10:40:01 +08:00
autofix-ci[bot]
152fd52cd7 [autofix.ci] apply automated fixes 2025-12-30 02:23:25 +00:00
Novice
ccabdbc83b Merge branch 'main' into feat/agent-node-v2 2025-12-30 10:20:42 +08:00
Novice
56c8221b3f chore: remove frontend changes 2025-12-30 10:19:40 +08:00
Stephen Zhou
add8980790 add missing translation 2025-12-30 10:06:49 +08:00
zhsama
5157e1a96c Merge branch 'main' into feat/grouping-branching 2025-12-29 23:33:28 +08:00
zxhlyh
d132abcdb4 merge main 2025-12-29 15:55:45 +08:00
zxhlyh
d60348572e feat: llm node support tools 2025-12-29 14:55:26 +08:00
Novice
f55faae31b chore: strip reasoning from chatflow answers and persist generation details 2025-12-25 13:59:38 +08:00
zxhlyh
0cff94d90e Merge branch 'main' into feat/llm-node-support-tools 2025-12-25 13:45:49 +08:00
Novice
7fc25cafb2 feat: basic app add thought field 2025-12-25 10:28:21 +08:00
zxhlyh
a7859de625 feat: llm node support tools 2025-12-24 14:15:55 +08:00
zhsama
4bb76acc37 Merge branch 'main' into feat/grouping-branching 2025-12-23 23:56:26 +08:00
zhsama
b513933040 Merge branch 'main' into feat/grouping-branching
# Conflicts:
#	web/app/components/workflow/block-icon.tsx
#	web/app/components/workflow/hooks/use-nodes-interactions.ts
#	web/app/components/workflow/index.tsx
#	web/app/components/workflow/nodes/components.ts
#	web/app/components/workflow/selection-contextmenu.tsx
#	web/app/components/workflow/utils/workflow-init.ts
2025-12-23 23:55:21 +08:00
zhsama
18ea9d3f18 feat: Add GROUP node type and update node configuration filtering in Graph class 2025-12-23 20:44:36 +08:00
zhsama
7b660a9ebc feat: Simplify edge creation for group nodes in useNodesInteractions hook 2025-12-23 17:12:09 +08:00
zhsama
783a49bd97 feat: Refactor group node edge creation logic in useNodesInteractions hook 2025-12-23 16:44:11 +08:00
zhsama
d3c6b09354 feat: Implement group node edge handling in useNodesInteractions hook 2025-12-23 16:37:42 +08:00
zhsama
3d61496d25 feat: Enhance CustomGroupNode with exit ports and visual indicators 2025-12-23 15:36:53 +08:00
zhsama
16bff9e82f Merge branch 'refs/heads/main' into feat/grouping-branching 2025-12-23 15:27:54 +08:00
zhsama
22f25731e8 refactor: streamline edge building and node filtering in workflow graph 2025-12-22 18:59:08 +08:00
zhsama
035f51ad58 Merge branch 'main' into feat/grouping-branching 2025-12-22 18:18:37 +08:00
zhsama
e9795bd772 feat: refine workflow graph processing to exclude additional UI-only node types 2025-12-22 18:17:25 +08:00
zhsama
93b516a4ec feat: add UI-only group node types and enhance workflow graph processing 2025-12-22 17:35:33 +08:00
zhsama
fc9d5b2a62 feat: implement group node functionality and enhance grouping interactions 2025-12-19 15:17:45 +08:00
zhsama
e3bfb95c52 feat: implement grouping availability checks in selection context menu 2025-12-18 17:11:34 +08:00
Novice
047ea8c143 chore: improve type checking 2025-12-18 10:09:31 +08:00
zhsama
752cb9e4f4 feat: enhance selection context menu with alignment options and grouping functionality
- Added alignment buttons for nodes with tooltips in the selection context menu.
- Implemented grouping functionality with a new "Make group" option, including keyboard shortcuts.
- Updated translations for the new grouping feature in multiple languages.
- Refactored node selection logic to improve performance and readability.
2025-12-17 19:52:02 +08:00
Novice
f54b9b12b0 feat: add process data 2025-12-17 17:34:02 +08:00
Novice
cb99b8f04d chore: handle migrations 2025-12-17 15:59:09 +08:00
Novice
7c03bcba2b Merge branch 'main' into feat/agent-node-v2 2025-12-17 15:55:27 +08:00
Novice
92fa7271ed refactor(llm node): remove unused args 2025-12-17 15:42:23 +08:00
Novice
d3486cab31 refactor(llm node): tool call tool result entity 2025-12-17 10:30:21 +08:00
Novice
dd0a870969 Merge branch 'main' into feat/agent-node-v2 2025-12-16 15:17:29 +08:00
Novice
0c4c268003 chore: fix ci issues 2025-12-16 15:14:42 +08:00
autofix-ci[bot]
ff57848268 [autofix.ci] apply automated fixes 2025-12-15 07:29:20 +00:00
Novice
d223fee9b9 Merge branch 'main' into feat/agent-node-v2 2025-12-15 15:26:48 +08:00
Novice
ad18d084f3 feat: add sequence output variable. 2025-12-15 14:59:06 +08:00
Novice
9941d1f160 feat: add llm log metadata 2025-12-15 14:18:53 +08:00
Novice
13fa56b5b1 feat: add tracing metadata 2025-12-12 16:24:49 +08:00
Novice
9ce48b4dc4 fix: llm generation variable 2025-12-12 11:08:49 +08:00
Novice
abb2b860f2 chore: remove unused changes 2025-12-10 15:04:19 +08:00
Novice
930c36e757 fix: llm detail store 2025-12-09 20:56:54 +08:00
Novice
2d2ce5df85 feat: generation stream output. 2025-12-09 16:22:17 +08:00
Novice
2b23c43434 feat: add agent package 2025-12-09 11:36:47 +08:00
hjlarry
bd597497e7 prevent comment thread pinch 2025-11-27 15:37:46 +08:00
hjlarry
be1f841b37 control panel should be z-60 2025-11-24 16:27:37 +08:00
hjlarry
d98a428100 Revert "fix model config panel z-index"
This reverts commit f85bf0867c.
2025-11-24 16:23:10 +08:00
hjlarry
26d330e744 setting dialog should be z-index 60 2025-11-24 16:19:29 +08:00
hjlarry
61bed38afb Reapply "fix system model setting modal index"
This reverts commit 16fbc6b270.
2025-11-24 16:16:56 +08:00
hjlarry
16fbc6b270 Revert "fix system model setting modal index"
This reverts commit fe132de3c8.
2025-11-24 16:16:45 +08:00
hjlarry
fe132de3c8 fix system model setting modal index 2025-11-24 16:12:18 +08:00
hjlarry
f85bf0867c fix model config panel z-index 2025-11-24 16:10:46 +08:00
hjlarry
b441a7fbc4 fix style 2025-11-18 10:31:56 +08:00
hjlarry
8497d296b1 feat: can drag avatar to move the comment input 2025-11-18 09:53:15 +08:00
hjlarry
3ee2508ec8 fix comment input also not allow to zoomin canvas 2025-11-17 16:17:34 +08:00
hjlarry
ff8d5ac4b5 fix gesture zoom in 2025-11-17 15:37:43 +08:00
hjlarry
7fc98b2183 fix sync of webhook node 2025-11-14 11:31:08 +08:00
hjlarry
a4adafd8ad remove the single env button 2025-11-14 11:00:33 +08:00
hjlarry
c1bc3aeab9 fix migration file 2025-11-14 10:58:16 +08:00
hjlarry
edf962cdb5 Merge branch 'feat/collaboration' into feat/collaboration2 2025-11-13 15:31:21 +08:00
hjlarry
2fa13cdf86 if session unauthorized, rejoin 2025-11-11 16:38:55 +08:00
hjlarry
39de7673eb add redis key expire time for collaboration 2025-11-11 16:13:05 +08:00
hjlarry
d930d8cc4a fix setting dialog z-index 2025-11-10 18:02:36 +08:00
hjlarry
97626a3ba5 can't zoomOnPinch when mouse over comment preview 2025-11-07 09:27:49 +08:00
hjlarry
b7f7d04639 fix comment input mention not display avatar 2025-11-05 18:09:42 +08:00
hjlarry
13674bd859 comment input mode click empty place can close 2025-11-05 17:41:10 +08:00
hjlarry
fb9cbc0471 comment mode can't click node 2025-11-05 14:14:36 +08:00
hjlarry
2f60288d86 fix: resize workflow canvas cause incorrect comment position 2025-11-05 14:08:21 +08:00
hjlarry
ee3ded0fc2 fix control layer 2025-10-22 10:25:31 +08:00
hjlarry
351bad9ec4 fix minimap disable collaboration 2025-10-22 10:21:25 +08:00
hjlarry
9bf7473bbf hide comments when disable collaboration 2025-10-22 10:10:23 +08:00
hjlarry
fa09c88f5c add CollaborationEnabled for comment shortcut 2025-10-22 09:59:43 +08:00
hjlarry
83df78d0c8 hide comments icon when disable collaboration mode 2025-10-22 09:50:37 +08:00
hjlarry
79266f7302 add note node sync data 2025-10-21 15:34:44 +08:00
hjlarry
7fecc7236c add more collaboration manager unit tests 2025-10-21 14:37:31 +08:00
hjlarry
9c7f6b7b71 add crdt provider unittests 2025-10-21 14:27:13 +08:00
hjlarry
b46da93e99 add unittests for event-emitter 2025-10-21 14:12:13 +08:00
hjlarry
e299a1fb20 add ws manager unit tests 2025-10-21 14:09:25 +08:00
hjlarry
122033cadb sort out code 2025-10-21 12:27:11 +08:00
hjlarry
df9bd1b3b5 add Parameters of ParametersExtractor node sync 2025-10-21 12:14:48 +08:00
hjlarry
f74492eb59 add prompt_template of LLM node sync 2025-10-21 12:00:42 +08:00
hjlarry
eaf1ae37dd add ENABLE_COLLABORATION_MODE 2025-10-21 11:46:28 +08:00
hjlarry
8e3b412ff6 fix websocket cookie auth 2025-10-21 11:46:00 +08:00
hjlarry
ba17f576e9 Merge remote-tracking branch 'myori/main' into feat/collaboration 2025-10-21 08:47:01 +08:00
lyzno1
9415ce4512 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-20 10:04:13 +08:00
lyzno1
239536933b Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-17 19:33:40 +08:00
hjlarry
80b34598e9 try to fix start node collaboration 2025-10-16 10:18:37 +08:00
lyzno1
9c66b92c34 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-15 21:08:08 +08:00
lyzno1
79872ea5e2 Refine workflow comment avatar highlight ring 2025-10-15 14:58:03 +08:00
lyzno1
cbf181bd76 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-15 11:06:23 +08:00
lyzno1
1393d21858 fix(web): adjust online users badge sizing and add pointer cursor to chevron 2025-10-15 11:06:04 +08:00
lyzno1
3a46b7bd18 fix(web): restyle workflow online-users avatar stack and dropdown 2025-10-15 10:48:38 +08:00
lyzno1
0bbfd81d26 fix: tooltip font 2025-10-15 10:35:42 +08:00
lyzno1
86db517142 fix(web): make workflow online-users dropdown click-based with revised spacing 2025-10-15 10:34:00 +08:00
lyzno1
50151f4007 fix(web): adjust workflow online-users icon and label styles 2025-10-15 10:21:54 +08:00
lyzno1
0395d1f91f Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-15 10:02:55 +08:00
lyzno1
5f4c1e4057 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-15 09:33:54 +08:00
hjlarry
d14413f3b0 comment click calculate the panel width 2025-10-15 09:11:44 +08:00
lyzno1
4fd968270c Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-14 18:56:27 +08:00
hjlarry
708a7dd362 fix comment mode can't drag node 2025-10-14 17:31:03 +08:00
hjlarry
cd85b75312 fix control panel hovered by comment icon 2025-10-14 17:16:33 +08:00
hjlarry
d685da377e fix minimap 2025-10-14 17:11:22 +08:00
hjlarry
8583992d23 when new user connected should rebroadcast the graph data 2025-10-14 16:57:02 +08:00
hjlarry
23fec75c90 cache the new created comment 2025-10-14 11:21:18 +08:00
hjlarry
ebe7303894 fix loop variable not sync well 2025-10-14 10:10:34 +08:00
hjlarry
79fb977f10 fix loop/iteration incorrect nodes width 2025-10-14 09:54:37 +08:00
lyzno1
c0af3414a3 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-14 07:54:05 +08:00
hjlarry
1857d37fae sync app published 2025-10-13 16:42:17 +08:00
hjlarry
60fdbb56a9 fix all lines missing 2025-10-13 16:38:50 +08:00
hjlarry
4c7853164d fix mcp server edit modal disappear 2025-10-13 16:36:39 +08:00
hjlarry
6c7a3ce4bb sync workflow publish to mcp server 2025-10-13 14:07:26 +08:00
lyzno1
a9e74b21f1 fix: increase ContentDialog z-index to display above workflow operators
The collaboration feature increased workflow operator z-index from z-10 to z-[60].
This caused the AppInfo ContentDialog (z-30) to appear below the operator buttons.
Increased ContentDialog z-index to z-[70] to ensure proper layer hierarchy.
2025-10-13 14:00:28 +08:00
lyzno1
e6730f7164 fix: dropdown menu border 2025-10-13 13:15:54 +08:00
lyzno1
3344723393 fix: prevent Enter key from triggering submit during IME composition
Add isComposing check at the start of handleKeyDown to ignore keyboard events during IME (Chinese/Japanese/Korean) input composition. This follows the existing pattern used in tag-management component and prevents premature form submission when users press Enter to confirm IME candidates.
2025-10-13 13:09:52 +08:00
lyzno1
c571185a91 fix: extract @mention highlighting from content in real-time to persist after edit 2025-10-13 13:03:55 +08:00
lyzno1
325c1cfa41 fix: prevent Save button flash by maintaining loading state until edit closes 2025-10-13 12:56:18 +08:00
lyzno1
1069421753 refactor: replace keyboard shortcut icons with custom EnterKey icon 2025-10-13 12:52:07 +08:00
lyzno1
b33a97ea5b style: update comment thread UI with design specs
- Fix edit bubble: keep avatar visible and match ThreadMessage layout
- Update edit container: rounded-xl, p-1, shadow-md, backdrop-blur
- Add keyboard shortcut icons (Cmd+Enter) to Save button
- Fix hover background: full-width with -mx-4 negative margin technique
- Apply design tokens consistently across components
2025-10-13 12:42:41 +08:00
lyzno1
d2c1d4c337 style: update mention dropdown UI to match design specs
- Update container: rounded-xl, border-0.5px, backdrop-blur, bg opacity 95%
- Update items: rounded-md with asymmetric padding (py-1 pl-2 pr-3)
- Use project design tokens (shadow-lg, bg-state-base-hover)
2025-10-13 12:24:28 +08:00
lyzno1
67762cf1d8 chore: resolve merge conflict in pnpm-lock.yaml
Merged origin/main into feat/collaboration and resolved dependency lock file conflicts by regenerating pnpm-lock.yaml through clean install.

Changes:
- Resolved eslint version differences (9.36.0 vs 9.35.0)
- Updated lock file reflects current dependency resolution
- All other changes from main branch successfully merged
2025-10-13 11:53:43 +08:00
hjlarry
eadce0287c app meta sync 2025-10-13 11:49:54 +08:00
hjlarry
ecaff5b63f fix loop var change cause collaboration crash 2025-10-13 10:06:50 +08:00
hjlarry
a300c9ef96 fix canvas empty on the bottom 2025-10-13 09:38:59 +08:00
lyzno1
44fe71e4db fix: ensure comment thread always scrolls to bottom on first render 2025-10-12 13:27:42 +08:00
lyzno1
0ac32188c5 feat: implement comprehensive focus management for comment thread
- Add forwardRef support to MentionInput to expose textarea ref
- Auto-focus reply input when thread opens (100ms delay)
- Restore focus after reply submission and edit operations
- Add Esc key handler to close thread with smart guards
- Enhance accessibility with ARIA attributes (dialog, modal, labelledby)
- Improve keyboard navigation and user experience

Implements P0-P3 priorities following WCAG 2.1 AA accessibility standards
2025-10-12 13:21:57 +08:00
lyzno1
9aaace706b feat: optimize comments panel filter UI and interaction logic 2025-10-12 13:04:24 +08:00
lyzno1
b22de5a824 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-12 13:04:07 +08:00
lyzno1
97463661c1 fix: translations 2025-10-11 20:33:55 +08:00
lyzno1
239a11855a fix: prevent dropdown from closing when showing inline delete confirmation
Use pre-rendering strategy with CSS visibility control instead of conditional rendering to avoid race condition between React state update and PortalToFollowElem's click-outside detection.
2025-10-11 20:21:52 +08:00
lyzno1
0632557d91 feat: use inline delete confirm for comment reply deletion(second time) 2025-10-11 18:37:41 +08:00
lyzno1
44be7d4c51 Revert "feat: use inline delete confirm for comment reply deletion"
This reverts commit a077a3f609.
2025-10-11 18:24:15 +08:00
lyzno1
efb4a9d327 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-11 18:18:40 +08:00
lyzno1
a077a3f609 feat: use inline delete confirm for comment reply deletion 2025-10-11 18:06:31 +08:00
lyzno1
3ccec0aab0 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-11 17:21:05 +08:00
hjlarry
3006133f0e sync node title 2025-10-11 15:48:51 +08:00
lyzno1
79beb25530 feat: add tooltips and improve delete button styling in CommentThread
- Add compact tooltips to Delete, Resolve, Previous, and Next buttons
- Change delete button hover to red background and text
- Use existing i18n translations for tooltip content
2025-10-11 15:22:37 +08:00
lyzno1
b47b228164 fix: align dropdown menu styles with design specs in CommentThread
- Update background to blur variant with backdrop filter
- Change border radius from lg to xl (12px)
- Add rounded corners to menu items to prevent hover overflow
2025-10-11 15:10:57 +08:00
lyzno1
be91db14d9 fix: add hover effect to first message in CommentThread
Wrap the root comment message with the same hover container as replies to ensure consistent hover behavior across all messages.
2025-10-11 15:08:27 +08:00
lyzno1
120893209e fix: align CommentPreview styles with design specs
- Update border radius to 24px with 3px bottom-left corner
- Change border width to 0.5px
- Add backdrop blur effect with bg-blur variant
- Replace custom shadow with standard shadow-lg
- Maintain proper Tailwind utility class usage
2025-10-11 15:02:06 +08:00
lyzno1
f19630bcf5 Merge remote-tracking branch 'origin/main' into feat/collaboration 2025-10-11 14:43:20 +08:00
lyzno1
9d93fda471 refactor: separate loading states for comment operations
Separate loading states to distinguish between different operations:
- activeCommentDetailLoading: loading comment details, delete/resolve operations
- replySubmitting: sending new replies
- replyUpdating: editing existing replies

Changes:
- Add replySubmitting and replyUpdating states to comment store
- Restore full-screen loading overlay for comment detail loading
- Use inline spinner (RiLoader2Line) in send/save buttons for reply operations
- Update loading state usage in handleCommentReply and handleCommentReplyUpdate
- Pass separated loading states from workflow index to CommentThread component

Benefits:
- UI clarity: different loading states have appropriate visual feedback
- Better UX: users can still navigate while sending replies
- Clear separation of concerns: each operation has its own loading state
2025-10-11 14:34:35 +08:00
lyzno1
d986659add chore: replace Chinese/Japanese comments with English translations 2025-10-11 14:20:37 +08:00
lyzno1
00dab7ca5f feat: improve mention input loading state and prevent button flash on submit 2025-10-11 14:20:37 +08:00
lyzno1
a4add403fb Fix MentionInput layout and improve comment hover styling 2025-10-11 14:20:37 +08:00
lyzno1
e9cdc96c74 feat: prevent duplicate @ insertion in mention input with visual feedback 2025-10-11 14:20:37 +08:00
lyzno1
6af1fea232 fix: update mention button icon color for better visibility in light mode 2025-10-11 14:20:37 +08:00
lyzno1
45d5d9e44f fix: mention input cannot scroll 2025-10-11 14:20:36 +08:00
lyzno1
376a084aca refactor: use PortalToFollowElem for dropdown with scroll handling
- Replace inline dropdown with PortalToFollowElem to prevent container overflow
- Use z-[100] for dropdown to ensure proper stacking
- Remove redundant outside click handler (handled by PortalToFollowElem)
- Add scroll event listener to auto-close dropdown when scrolling
- Dropdown now renders via portal outside message container
2025-10-11 14:20:36 +08:00
lyzno1
d1f42d47fe fix: improve dropdown menu hover and positioning 2025-10-11 14:20:36 +08:00
lyzno1
64b8fd87ad fix: improve dropdown menu positioning and z-index 2025-10-11 14:20:36 +08:00
lyzno1
364be48248 feat: add smooth scroll to comment thread 2025-10-11 14:20:36 +08:00
hjlarry
2bce046278 fix node error default value not sync 2025-10-11 14:17:58 +08:00
hjlarry
1120d552b6 fix knowledge node add/delete dataset not sync 2025-10-11 14:09:37 +08:00
hjlarry
69cab0817f fix comment input hovered by comment content 2025-10-11 10:41:28 +08:00
hjlarry
c4d03bf378 change event type name of websocket 2025-10-11 09:07:02 +08:00
hjlarry
6c039be2ca fix jump to other page not disconnect websocket 2025-10-10 16:51:57 +08:00
hjlarry
832dabc8a4 only author can move the comment position 2025-10-10 15:58:01 +08:00
hjlarry
1da2028d9d keep the previous private property when import node data 2025-10-10 13:26:55 +08:00
hjlarry
7c3f6dcc8d use cloneDeep instead of json.parse 2025-10-10 10:34:00 +08:00
hjlarry
1472884eb5 sync the create/delete app in the list page 2025-10-10 10:18:23 +08:00
hjlarry
ec22b1c706 fix user uploaded avatar display incorrect 2025-10-09 17:40:20 +08:00
hjlarry
a1712df7c2 comment author avatar is the first avatar 2025-10-09 17:12:37 +08:00
hjlarry
a40e11cb3e only can edit own replies 2025-10-09 17:02:39 +08:00
hjlarry
61c46bea40 fix missing i18n 2025-10-09 16:55:53 +08:00
hjlarry
1c5c28a82c fix switch to cursor mode comment input still exists 2025-10-09 16:36:20 +08:00
hjlarry
2310145937 comment reply auto scroll down to bottom 2025-10-09 15:50:23 +08:00
hjlarry
6a9c9cadd0 fix comment hover the variable panel 2025-10-09 15:44:56 +08:00
hjlarry
7774ff9944 fix version not display 2025-10-09 15:07:36 +08:00
hjlarry
33d4c95470 can update comment position 2025-10-05 10:17:04 +08:00
hjlarry
659cbc05a9 fix mention-input at the bottom of the browser 2025-10-04 21:24:27 +08:00
hjlarry
6ce65de2cd fix merged main issues 2025-10-04 21:11:59 +08:00
hjlarry
93b2eb3ff6 Merge remote-tracking branch 'myori/main' into p284 2025-10-04 15:28:29 +08:00
hjlarry
bf71300635 improve comment cursor move 2025-10-04 14:36:10 +08:00
hjlarry
37ecd4a0bc fix @ input problem 2025-10-04 13:39:00 +08:00
hjlarry
827a1b181b fix comment icon position 2025-10-04 13:25:59 +08:00
hjlarry
c4e7cb75cd cache the mentioned users 2025-10-04 11:22:02 +08:00
hjlarry
98e4bfcda8 click comment icon not switch to comment mode 2025-10-03 23:36:56 +08:00
hjlarry
ee48ca7671 fix default comment icon 2025-09-30 15:23:43 +08:00
hjlarry
4ba6de1116 add leader session more check 2025-09-29 14:01:42 +08:00
hjlarry
bfbe636555 fix docker file websocket mode 2025-09-29 13:35:10 +08:00
hjlarry
54ae43ef47 sync children node data 2025-09-26 14:07:34 +08:00
hjlarry
7a74b5ee3e fix add child node resize parent node size 2025-09-26 14:04:50 +08:00
hjlarry
0e9d43d605 http node data sync 2025-09-26 11:13:20 +08:00
hjlarry
cc54363c27 sync the prompt editor 2025-09-26 10:48:00 +08:00
hjlarry
89affe3139 fix opened panel be affected 2025-09-26 09:20:33 +08:00
hjlarry
2c4977dbb1 fix bug 2025-09-25 16:56:06 +08:00
hjlarry
e240175116 sync nodes 2025-09-25 16:31:46 +08:00
hjlarry
2398ed6fe8 fix update env api update time error 2025-09-25 16:28:33 +08:00
hjlarry
a8420ac33c add fragment to prevent list missing key 2025-09-25 09:52:08 +08:00
hjlarry
8470be6411 improve delete comment i18n 2025-09-25 09:41:59 +08:00
hjlarry
3d6295c622 refactor delete comment and reply 2025-09-25 09:35:46 +08:00
17hz
ff2f7206f3 bump nextjs to 15.5 and turbopack for development mode (#24346)
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: 非法操作 <hjlarry@163.com>
2025-09-25 09:10:09 +08:00
hjlarry
b937fc8978 app online user list 2025-09-24 17:03:33 +08:00
hjlarry
86a9a51952 add comment preview 2025-09-24 12:54:54 +08:00
hjlarry
4188c9a1dd fix dark theme 2025-09-24 10:08:33 +08:00
hjlarry
8c00f89e36 add icon to zoom2fit 2025-09-23 22:22:28 +08:00
hjlarry
9e8ac5c96b refactor cursor and add hide comment 2025-09-23 22:13:02 +08:00
hjlarry
05a67f4716 add display/hide collaborator cursors 2025-09-23 17:37:40 +08:00
hjlarry
f49476a206 add show/hide minimap 2025-09-23 17:20:41 +08:00
hjlarry
c1e9c56e25 fix style 2025-09-23 17:19:36 +08:00
hjlarry
d5dd73cacf add i18n for comment 2025-09-23 16:19:04 +08:00
hjlarry
21f7a49b4e fix restore page crash 2025-09-23 15:44:57 +08:00
hjlarry
716ac04e13 add comment shortcut 2025-09-23 15:40:53 +08:00
hjlarry
c28a32fc47 fix handleModeComment 2025-09-23 15:35:28 +08:00
hjlarry
31cba28e8a improve comment cursor icon 2025-09-23 15:28:22 +08:00
hjlarry
48cd7e6481 input comment should not cancel comment mode 2025-09-23 14:48:31 +08:00
hjlarry
47aba1c9f9 fix style 2025-09-23 14:41:34 +08:00
hjlarry
0f3f8bc0d9 make mention input can display name different color 2025-09-23 11:38:38 +08:00
hjlarry
e0df12c212 fix mentioned names color 2025-09-23 11:24:17 +08:00
hjlarry
eb448d9bb8 fix avatar background color 2025-09-23 11:09:02 +08:00
hjlarry
0ba77f13db fix avatar inset 2025-09-23 10:46:18 +08:00
hjlarry
f0a2eb843c fix user cursor should not over the panel 2025-09-23 10:35:16 +08:00
hjlarry
5cf3d9e4d9 fix nginx config 2025-09-22 14:21:07 +08:00
hjlarry
41958f55cd fix CSP 2025-09-22 14:20:11 +08:00
hjlarry
600ad232e1 fix config 2025-09-22 14:20:11 +08:00
hjlarry
7a3825cfce fix docker config 2025-09-22 14:20:11 +08:00
hjlarry
9519653422 change default ws url 2025-09-22 14:20:11 +08:00
hjlarry
efa2307c73 change default ws url 2025-09-22 14:20:11 +08:00
hjlarry
068fa3d0e3 fix CI 2025-09-22 14:20:11 +08:00
hjlarry
13d8dbd542 fix CI 2025-09-22 14:20:08 +08:00
hjlarry
b442ba8b2b fix UserAvatarList background color 2025-09-19 12:07:07 +08:00
hjlarry
10e36d2355 add avatar on canvas node 2025-09-19 10:43:28 +08:00
hjlarry
13c53fedad add avatar display on node 2025-09-19 10:07:01 +08:00
hjlarry
4bda1bd884 open node panel not affect others 2025-09-18 17:42:02 +08:00
hjlarry
3abe7850d6 fix migration file 2025-09-18 16:30:40 +08:00
hjlarry
b50284d864 fix merge problem 2025-09-18 15:45:53 +08:00
hjlarry
81c6e52401 Merge remote-tracking branch 'origin/p254' into p284 2025-09-18 15:14:55 +08:00
hjlarry
847d257366 Merge branch 'p254' into p284 2025-09-18 14:50:59 +08:00
hjlarry
687662cf1f comment sync 2025-09-18 13:27:27 +08:00
hjlarry
6432d98469 improve the icon display on canvas 2025-09-18 11:49:43 +08:00
hjlarry
088ccf8b8d add UserAvatarList component 2025-09-18 09:47:07 +08:00
hjlarry
e8683bf957 fix comment cursor position 2025-09-18 09:17:45 +08:00
hjlarry
4653981b6b not display more icon when in edit mode 2025-09-17 20:45:54 +08:00
hjlarry
e2547413d3 fix edit input mouse pos 2025-09-17 20:40:59 +08:00
hjlarry
ea17f41b5b refactor reply code 2025-09-17 20:29:23 +08:00
hjlarry
29178d8adf can edit and delete a reply 2025-09-17 17:44:09 +08:00
hjlarry
7e86ead574 upgrade style 2025-09-17 16:41:10 +08:00
hjlarry
72debcb228 refactor mention input 2025-09-17 16:28:47 +08:00
hjlarry
72737dabc7 fix at can't click bug 2025-09-17 14:50:05 +08:00
hjlarry
f6e5cb4381 improve comment detail 2025-09-17 14:34:36 +08:00
hjlarry
ffad3b5fb1 comment detail window fix height 2025-09-17 13:45:56 +08:00
hjlarry
cba9fc3020 add comment reply 2025-09-17 12:50:42 +08:00
hjlarry
e776accaf3 add top operation buttons of comment detail 2025-09-17 10:45:15 +08:00
hjlarry
3eac26929a sync the comment panel and canvas 2025-09-17 09:13:31 +08:00
hjlarry
4d3adec738 click canvas icon display the active comment detail 2025-09-17 09:01:16 +08:00
hjlarry
89bed479e4 improve comment panel 2025-09-16 17:25:51 +08:00
hjlarry
fdd673a3a9 improve comments panel 2025-09-16 13:39:31 +08:00
hjlarry
22f6d285c7 fix incorrect comment cursor in panel 2025-09-16 10:20:12 +08:00
hjlarry
10aa16b471 add workflow comment panel 2025-09-16 09:51:12 +08:00
hjlarry
b3838581fd improve mention 2025-09-15 17:13:46 +08:00
hjlarry
affbe7ccdb can mention user in the create comment 2025-09-15 16:42:31 +08:00
hjlarry
dd8577f832 comments display on canvas 2025-09-15 14:16:06 +08:00
hjlarry
d7f5da5df4 display comments avatar on the canvas 2025-09-15 11:41:06 +08:00
hjlarry
9fda130b3a fix click comment once more then esc not work 2025-09-15 11:11:07 +08:00
hjlarry
72cdbdba0f fix chat input style 2025-09-15 09:20:06 +08:00
hjlarry
b92a153902 refactor code 2025-09-14 13:03:08 +08:00
hjlarry
9f2927979b fix comment cursor icon 2025-09-14 12:50:18 +08:00
hjlarry
75257232c3 add create comment frontend 2025-09-14 12:10:37 +08:00
hjlarry
1721314c62 add frontend comment service 2025-09-13 17:57:19 +08:00
hjlarry
fc230bcc59 add force update workflow to support restore 2025-09-12 16:27:12 +08:00
hjlarry
b4636ddf44 add leader restore workflow 2025-09-12 15:34:41 +08:00
hjlarry
b1140301a4 sync import dsl 2025-09-12 14:46:40 +08:00
hjlarry
58cd785da6 use const for cursor move config 2025-09-11 09:36:22 +08:00
hjlarry
2035186cd2 click avatar to follow user cursor position 2025-09-11 09:26:05 +08:00
hjlarry
53ba6aadff cursor pos transform to canvas 2025-09-11 09:07:03 +08:00
hjlarry
f091868b7c use new get avatar api 2025-09-10 15:15:43 +08:00
hjlarry
89bedae0d3 remove the test code for develop collaboration 2025-09-10 14:27:20 +08:00
hjlarry
c8acc48976 ruff format 2025-09-10 14:25:37 +08:00
hjlarry
21fee59b22 use new features update api 2025-09-10 14:24:38 +08:00
hjlarry
957a8253f8 change user list to conversation var panel left 2025-09-10 09:26:38 +08:00
hjlarry
d5fc3e7bed add new conversation vars update api 2025-09-10 09:24:22 +08:00
hjlarry
ab438b42da use new env variables update api 2025-09-10 09:07:55 +08:00
hjlarry
3867fece4a mcp server update 2025-09-09 15:01:38 +08:00
hjlarry
2b908d4fbe add app state update 2025-09-09 14:24:37 +08:00
hjlarry
8ff062ec8b change user default color 2025-09-09 10:20:02 +08:00
hjlarry
294fc41aec add redo undo manager of CRDT 2025-09-09 09:58:55 +08:00
hjlarry
684f7df158 node data use crdt data 2025-09-08 14:46:28 +08:00
hjlarry
c3287755e3 add request leader to sync graph 2025-09-08 09:00:20 +08:00
hjlarry
9f97f4d79e fix cursor style 2025-09-06 15:54:19 +08:00
hjlarry
34eb421649 add currentUserId is me 2025-09-06 12:27:54 +08:00
hjlarry
850b05573e add dropdown users list 2025-09-06 12:01:49 +08:00
hjlarry
6ec8bfdfee add mouse over avatar display username 2025-09-06 11:29:45 +08:00
hjlarry
81638c248e use one getUserColor func 2025-09-06 11:22:59 +08:00
hjlarry
2e11b1298e add online users avatar 2025-09-06 11:19:47 +08:00
hjlarry
20320f3a27 show online users on the canvas 2025-09-06 00:08:17 +08:00
hjlarry
4019c12d26 fix missing import 2025-09-05 22:20:07 +08:00
hjlarry
cf72184ce4 each browser tab session a ws connected obj 2025-09-05 22:19:16 +08:00
hjlarry
ca8d15bc64 add mention user list api 2025-08-31 13:42:59 +08:00
hjlarry
a91c897fd3 improve code 2025-08-31 00:43:34 +08:00
hjlarry
816bdf0320 add delete comment and reply 2025-08-31 00:28:01 +08:00
hjlarry
d4a6acbd99 add update reply 2025-08-30 23:49:27 +08:00
hjlarry
e421db4005 add resolve comment 2025-08-30 22:37:01 +08:00
hjlarry
9067c2a9c1 add update comment 2025-08-22 17:48:14 +08:00
hjlarry
9f7321ca1a add create reply 2025-08-22 17:33:47 +08:00
hjlarry
5fa01132b9 add create and list comment api 2025-08-22 16:47:08 +08:00
hjlarry
e082b6d599 add workflow comment models 2025-08-22 11:28:26 +08:00
hjlarry
d44be2d835 add leader submit graph data 2025-08-21 17:53:39 +08:00
hjlarry
7dc8557033 add Leader election 2025-08-21 16:17:16 +08:00
hjlarry
72037a1865 improve cursors logic 2025-08-21 14:27:41 +08:00
hjlarry
2d1621c43d add leader but not review 2025-08-08 14:54:18 +08:00
hjlarry
d1a5db3310 rm useCollaborativeCursors component 2025-08-07 18:03:12 +08:00
hjlarry
ad8fd8fecc clone the node to avoid loro recursion 2025-08-07 17:45:38 +08:00
hjlarry
be74b76079 refactor websocket init 2025-08-07 17:31:12 +08:00
hjlarry
dd64af728f refactor the cursors component 2025-08-07 14:29:23 +08:00
hjlarry
e43b46786d refactor all the frontend code 2025-08-07 10:58:53 +08:00
hjlarry
3f3b37b843 refactor to support multi websocket connections 2025-08-06 17:05:39 +08:00
hjlarry
2ecf9f6ddf add features collaboration 2025-08-06 10:58:32 +08:00
hjlarry
48c069fe68 support env vars collaborate 2025-08-05 15:22:22 +08:00
hjlarry
9c5c597c85 support empty collaboration event data 2025-08-05 15:21:41 +08:00
hjlarry
c2eec8545d collaborate conversation vars 2025-08-05 14:24:51 +08:00
hjlarry
2395d4be26 fix imported updates also broadcast to other clients 2025-08-05 10:21:22 +08:00
hjlarry
9455476705 handle edge delete 2025-08-04 14:17:59 +08:00
hjlarry
494e223706 some operations don't need to broadcast 2025-08-03 14:18:48 +08:00
hjlarry
348fd18230 refactor collaboration 2025-08-03 13:34:07 +08:00
hjlarry
7233b4de55 the initial data to collaboration store 2025-07-31 16:27:01 +08:00
hjlarry
af6df05685 add setNodes and setEdges of collaboration store 2025-07-31 15:25:50 +08:00
hjlarry
965b65db6e use loro for crdt data 2025-07-31 14:02:53 +08:00
hjlarry
4cc01c8aa8 try a lot for yjs, but update data still not work... 2025-07-30 14:36:29 +08:00
hjlarry
41372168b6 refactor code 2025-07-23 10:04:16 +08:00
hjlarry
f4438b0a08 support mouse display 2025-07-22 18:08:35 +08:00
hjlarry
897c842637 ruff format 2025-07-21 16:13:04 +08:00
hjlarry
ee86ceb906 fix gunicorn gevent 2025-07-21 16:09:51 +08:00
hjlarry
e298732499 refactor code 2025-07-21 16:07:22 +08:00
hjlarry
4081937e22 migrate to python-socketio 2025-07-21 14:57:28 +08:00
hjlarry
f9aedb2118 add collaborate event 2025-07-21 11:10:23 +08:00
hjlarry
74b4719af8 support broadcast online users 2025-07-18 15:02:34 +08:00
hjlarry
2f35cc9188 add online users backend api and frontend submit cursor pos 2025-07-18 11:17:08 +08:00
hjlarry
2f966d8c38 fix websocket auth 2025-07-17 17:16:52 +08:00
hjlarry
b0868d9136 fix websocket auth 2025-07-17 17:16:38 +08:00
hjlarry
37440e9416 ruff format 2025-07-17 15:37:13 +08:00
hjlarry
0d7d27ec0b establish websocket connection 2025-07-17 15:36:50 +08:00
1639 changed files with 131098 additions and 9353 deletions

View File

@@ -8,6 +8,7 @@ on:
- "build/**"
- "release/e-*"
- "hotfix/**"
- "feat/hitl-backend"
tags:
- "*"
@@ -75,7 +76,9 @@ jobs:
with:
context: "{{defaultContext}}:${{ matrix.context }}"
platforms: ${{ matrix.platform }}
build-args: COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
build-args: |
COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
ENABLE_PROD_SOURCEMAP=${{ matrix.context == 'web' && github.ref_name == 'deploy/dev' }}
labels: ${{ steps.meta.outputs.labels }}
outputs: type=image,name=${{ env[matrix.image_name_env] }},push-by-digest=true,name-canonical=true,push=true
cache-from: type=gha,scope=${{ matrix.service_name }}

2
.gitignore vendored
View File

@@ -209,6 +209,7 @@ api/.vscode
.history
.idea/
web/migration/
# pnpm
/.pnpm-store
@@ -221,6 +222,7 @@ mise.toml
# AI Assistant
.sisyphus/
.roo/
api/.env.backup
/clickzetta

View File

@@ -37,7 +37,7 @@
"-c",
"1",
"-Q",
"dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention",
"dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution",
"--loglevel",
"INFO"
],

View File

@@ -33,6 +33,9 @@ TRIGGER_URL=http://localhost:5001
# The time in seconds after the signature is rejected
FILES_ACCESS_TIMEOUT=300
# Collaboration mode toggle
ENABLE_COLLABORATION_MODE=false
# Access token expiration time in minutes
ACCESS_TOKEN_EXPIRE_MINUTES=60
@@ -717,3 +720,47 @@ SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD=21
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE=1000
SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS=30
SANDBOX_EXPIRED_RECORDS_CLEAN_TASK_LOCK_TTL=90000
# Sandbox Dify CLI configuration
# Directory containing dify CLI binaries (dify-cli-<os>-<arch>). Defaults to api/bin when unset.
SANDBOX_DIFY_CLI_ROOT=
# CLI API URL for sandbox (dify-sandbox or e2b) to call back to Dify API.
# This URL must be accessible from the sandbox environment.
# For local development: use http://localhost:5001 or http://127.0.0.1:5001
# For middleware docker stack (api on host): keep localhost/127.0.0.1 and use agentbox via 127.0.0.1:2222
# For Docker deployment: use http://api:5001 (internal Docker network)
# For external sandbox (e.g., e2b): use a publicly accessible URL
CLI_API_URL=http://localhost:5001
# Optional defaults for SSH sandbox provider setup (for manual config/CLI usage).
# Middleware/local dev usually uses 127.0.0.1:2222; full docker deployment usually uses agentbox:22.
SSH_SANDBOX_HOST=127.0.0.1
SSH_SANDBOX_PORT=2222
SSH_SANDBOX_USERNAME=agentbox
SSH_SANDBOX_PASSWORD=agentbox
SSH_SANDBOX_BASE_WORKING_PATH=/workspace/sandboxes
# Redis URL used for PubSub between API and
# celery worker
# defaults to url constructed from `REDIS_*`
# configurations
PUBSUB_REDIS_URL=
# Pub/sub channel type for streaming events.
# valid options are:
#
# - pubsub: for normal Pub/Sub
# - sharded: for sharded Pub/Sub
#
# It's highly recommended to use sharded Pub/Sub AND redis cluster
# for large deployments.
PUBSUB_REDIS_CHANNEL_TYPE=pubsub
# Whether to use Redis cluster mode while running
# PubSub.
# It's highly recommended to enable this for large deployments.
PUBSUB_REDIS_USE_CLUSTERS=false
# Whether to Enable human input timeout check task
ENABLE_HUMAN_INPUT_TIMEOUT_TASK=true
# Human input timeout check interval in minutes
HUMAN_INPUT_TIMEOUT_TASK_INTERVAL=1
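The Pub/Sub settings above distinguish normal and sharded Redis Pub/Sub, plus an optional cluster mode. A minimal sketch of how a publisher might branch on these values, assuming redis-py with sharded Pub/Sub support; the REDIS_URL fallback, client construction, and function names are illustrative only, not Dify's actual implementation:

```python
# Illustrative sketch only: choose normal vs sharded Pub/Sub based on the
# PUBSUB_REDIS_* settings documented above. Assumes a redis-py version that
# provides spublish (sharded Pub/Sub) and Redis 7+ for sharded channels.
import os

import redis
from redis.cluster import RedisCluster


def build_pubsub_client() -> "redis.Redis | RedisCluster":
    # The real default is constructed from the REDIS_* settings; REDIS_URL here is a stand-in.
    url = os.environ.get("PUBSUB_REDIS_URL") or os.environ.get("REDIS_URL", "redis://localhost:6379/0")
    use_clusters = os.environ.get("PUBSUB_REDIS_USE_CLUSTERS", "false").lower() == "true"
    # Cluster mode is recommended together with sharded Pub/Sub for large deployments.
    return RedisCluster.from_url(url) if use_clusters else redis.Redis.from_url(url)


def publish_event(client: "redis.Redis | RedisCluster", channel: str, payload: bytes) -> None:
    channel_type = os.environ.get("PUBSUB_REDIS_CHANNEL_TYPE", "pubsub")
    if channel_type == "sharded":
        # SPUBLISH routes the message to the shard that owns the channel's slot.
        client.spublish(channel, payload)
    else:
        client.publish(channel, payload)
```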

View File

@@ -36,6 +36,8 @@ ignore_imports =
core.workflow.nodes.loop.loop_node -> core.workflow.graph_engine
core.workflow.nodes.loop.loop_node -> core.workflow.graph
core.workflow.nodes.loop.loop_node -> core.workflow.graph_engine.command_channels
# TODO(QuantumGhost): fix the import violation later
core.workflow.entities.pause_reason -> core.workflow.nodes.human_input.entities
[importlinter:contract:workflow-infrastructure-dependencies]
name = Workflow Infrastructure Dependencies
@@ -50,14 +52,14 @@ ignore_imports =
core.workflow.nodes.agent.agent_node -> extensions.ext_database
core.workflow.nodes.datasource.datasource_node -> extensions.ext_database
core.workflow.nodes.knowledge_index.knowledge_index_node -> extensions.ext_database
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_database
core.workflow.nodes.llm.file_saver -> extensions.ext_database
core.workflow.nodes.llm.llm_utils -> extensions.ext_database
core.workflow.nodes.llm.node -> extensions.ext_database
core.workflow.nodes.tool.tool_node -> extensions.ext_database
core.workflow.graph_engine.command_channels.redis_channel -> extensions.ext_redis
core.workflow.graph_engine.manager -> extensions.ext_redis
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_redis
# TODO(QuantumGhost): use DI to avoid depending on global DB.
core.workflow.nodes.human_input.human_input_node -> extensions.ext_database
[importlinter:contract:workflow-external-imports]
name = Workflow External Imports
@@ -122,11 +124,6 @@ ignore_imports =
core.workflow.nodes.http_request.node -> core.tools.tool_file_manager
core.workflow.nodes.iteration.iteration_node -> core.app.workflow.node_factory
core.workflow.nodes.knowledge_index.knowledge_index_node -> core.rag.index_processor.index_processor_factory
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.rag.datasource.retrieval_service
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.rag.retrieval.dataset_retrieval
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> models.dataset
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> services.feature_service
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.model_runtime.model_providers.__base.large_language_model
core.workflow.nodes.llm.llm_utils -> configs
core.workflow.nodes.llm.llm_utils -> core.app.entities.app_invoke_entities
core.workflow.nodes.llm.llm_utils -> core.file.models
@@ -144,9 +141,9 @@ ignore_imports =
core.workflow.nodes.agent.agent_node -> core.agent.entities
core.workflow.nodes.agent.agent_node -> core.agent.plugin_entities
core.workflow.nodes.base.node -> core.app.entities.app_invoke_entities
core.workflow.nodes.human_input.human_input_node -> core.app.entities.app_invoke_entities
core.workflow.nodes.knowledge_index.knowledge_index_node -> core.app.entities.app_invoke_entities
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.app.app_config.entities
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.app.entities.app_invoke_entities
core.workflow.nodes.llm.node -> core.app.entities.app_invoke_entities
core.workflow.nodes.parameter_extractor.parameter_extractor_node -> core.app.entities.app_invoke_entities
core.workflow.nodes.parameter_extractor.parameter_extractor_node -> core.prompt.advanced_prompt_transform
@@ -162,9 +159,6 @@ ignore_imports =
core.workflow.workflow_entry -> core.app.workflow.node_factory
core.workflow.nodes.datasource.datasource_node -> core.datasource.datasource_manager
core.workflow.nodes.datasource.datasource_node -> core.datasource.utils.message_transformer
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.entities.agent_entities
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.entities.model_entities
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.model_manager
core.workflow.nodes.llm.llm_utils -> core.entities.provider_entities
core.workflow.nodes.parameter_extractor.parameter_extractor_node -> core.model_manager
core.workflow.nodes.question_classifier.question_classifier_node -> core.model_manager
@@ -213,7 +207,6 @@ ignore_imports =
core.workflow.nodes.llm.node -> core.llm_generator.output_parser.structured_output
core.workflow.nodes.llm.node -> core.model_manager
core.workflow.nodes.agent.entities -> core.prompt.entities.advanced_prompt_entities
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.prompt.simple_prompt_transform
core.workflow.nodes.llm.entities -> core.prompt.entities.advanced_prompt_entities
core.workflow.nodes.llm.llm_utils -> core.prompt.entities.advanced_prompt_entities
core.workflow.nodes.llm.node -> core.prompt.entities.advanced_prompt_entities
@@ -229,7 +222,6 @@ ignore_imports =
core.workflow.nodes.knowledge_index.knowledge_index_node -> services.summary_index_service
core.workflow.nodes.knowledge_index.knowledge_index_node -> tasks.generate_summary_index_task
core.workflow.nodes.knowledge_index.knowledge_index_node -> core.rag.index_processor.processor.paragraph_index_processor
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> core.rag.retrieval.retrieval_methods
core.workflow.nodes.llm.node -> models.dataset
core.workflow.nodes.agent.agent_node -> core.tools.utils.message_transformer
core.workflow.nodes.llm.file_saver -> core.tools.signature
@@ -247,6 +239,7 @@ ignore_imports =
core.workflow.nodes.document_extractor.node -> core.variables.segments
core.workflow.nodes.http_request.executor -> core.variables.segments
core.workflow.nodes.http_request.node -> core.variables.segments
core.workflow.nodes.human_input.entities -> core.variables.consts
core.workflow.nodes.iteration.iteration_node -> core.variables
core.workflow.nodes.iteration.iteration_node -> core.variables.segments
core.workflow.nodes.iteration.iteration_node -> core.variables.variables
@@ -287,12 +280,12 @@ ignore_imports =
core.workflow.nodes.agent.agent_node -> extensions.ext_database
core.workflow.nodes.datasource.datasource_node -> extensions.ext_database
core.workflow.nodes.knowledge_index.knowledge_index_node -> extensions.ext_database
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_database
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_redis
core.workflow.nodes.llm.file_saver -> extensions.ext_database
core.workflow.nodes.llm.llm_utils -> extensions.ext_database
core.workflow.nodes.llm.node -> extensions.ext_database
core.workflow.nodes.tool.tool_node -> extensions.ext_database
core.workflow.nodes.human_input.human_input_node -> extensions.ext_database
core.workflow.nodes.human_input.human_input_node -> core.repositories.human_input_repository
core.workflow.workflow_entry -> extensions.otel.runtime
core.workflow.nodes.agent.agent_node -> models
core.workflow.nodes.base.node -> models.enums

View File

@@ -0,0 +1,9 @@
Summary:
- Application configuration definitions, including file access settings.
Invariants:
- File access settings drive signed URL expiration and base URLs.
Tests:
- Config parsing tests under tests/unit_tests/configs.

View File

@@ -0,0 +1,9 @@
Summary:
- Registers file-related API namespaces and routes for files service.
- Includes app-assets and sandbox archive proxy controllers.
Invariants:
- files_ns must include all file controller modules to register routes.
Tests:
- Coverage via controller unit tests and route registration smoke checks.

View File

@@ -0,0 +1,14 @@
Summary:
- App assets download proxy endpoint (signed URL verification, stream from storage).
Invariants:
- Validates AssetPath fields (UUIDs, asset_type allowlist).
- Verifies tenant-scoped signature and expiration before reading storage.
- URL uses expires_at/nonce/sign query params.
Edge Cases:
- Missing files return NotFound.
- Invalid signature or expired link returns Forbidden.
Tests:
- Verify signature validation and invalid/expired cases.
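The invariants above describe tenant-scoped signatures carried in expires_at/nonce/sign query params. A hedged sketch of what such verification can look like with a plain HMAC; Dify's implementation signs with tenant private keys, so the helper below and its parameters are illustrative only:

```python
# Hypothetical sketch of signed-URL verification using the expires_at/nonce/sign
# query params mentioned above. Not Dify's actual signing scheme.
import hashlib
import hmac
import time


def verify_signed_query(secret_key: bytes, path: str, expires_at: str, nonce: str, sign: str) -> bool:
    # Reject expired links before doing any storage work.
    if int(expires_at) < int(time.time()):
        return False
    # Recompute the signature over a canonical string and compare in constant time.
    message = f"{path}:{expires_at}:{nonce}".encode()
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sign)
```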

View File

@@ -0,0 +1,13 @@
Summary:
- App assets upload proxy endpoint (signed URL verification, upload to storage).
Invariants:
- Validates AssetPath fields (UUIDs, asset_type allowlist).
- Verifies tenant-scoped signature and expiration before writing storage.
- URL uses expires_at/nonce/sign query params.
Edge Cases:
- Invalid signature or expired link returns Forbidden.
Tests:
- Verify signature validation and invalid/expired cases.

View File

@@ -0,0 +1,14 @@
Summary:
- Sandbox archive upload/download proxy endpoints (signed URL verification, stream to storage).
Invariants:
- Validates tenant_id and sandbox_id UUIDs.
- Verifies tenant-scoped signature and expiration before storage access.
- URL uses expires_at/nonce/sign query params.
Edge Cases:
- Missing archive returns NotFound.
- Invalid signature or expired link returns Forbidden.
Tests:
- Add unit tests for signature validation if needed.

View File

@@ -0,0 +1,9 @@
Summary:
- Collects file assets and emits FileAsset entries with storage keys.
Invariants:
- Storage keys are derived via AppAssetStorage for draft files.
Tests:
- Covered by asset build pipeline tests.

View File

@@ -0,0 +1,14 @@
Summary:
- Builds skill artifacts from markdown assets and uploads resolved outputs.
Invariants:
- Reads draft asset content via AppAssetStorage refs.
- Writes resolved artifacts via AppAssetStorage refs.
- FileAsset storage keys are derived via AppAssetStorage.
Edge Cases:
- Missing or invalid JSON content yields empty skill content/metadata.
Tests:
- Build pipeline unit tests covering compile/upload paths.
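For the edge case noted above (missing or invalid JSON yields empty skill content), a tiny illustrative helper; the name and return shape are hypothetical:

```python
# Sketch of tolerant skill-content loading: missing or invalid JSON becomes an
# empty dict rather than failing the build. Illustrative only.
import json


def load_skill_content(raw: bytes | None) -> dict:
    if not raw:
        return {}
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return parsed if isinstance(parsed, dict) else {}
```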

View File

@@ -0,0 +1,9 @@
Summary:
- Converts AppAssetFileTree to FileAsset items for packaging.
Invariants:
- Storage keys for assets are derived via AppAssetStorage.
Tests:
- Used in packaging/service tests for asset bundles.

View File

@@ -0,0 +1,14 @@
# Zip Packager Notes
## Purpose
- Builds a ZIP archive of asset contents stored via the configured storage backend.
## Key Decisions
- Packaging writes assets into an in-memory zip buffer returned as bytes.
- Asset fetch + zip writing are executed via a thread pool with a lock guarding `ZipFile` writes.
## Edge Cases
- ZIP writes are serialized by the lock; storage reads still run in parallel.
## Tests/Verification
- None yet.
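A minimal sketch of the packaging pattern described above, assuming a hypothetical `storage.load(key) -> bytes` interface: storage reads run in parallel, while a lock serializes writes into the in-memory zip buffer.

```python
# Illustrative sketch of the zip packager pattern: parallel asset fetch,
# lock-guarded ZipFile writes, archive returned as bytes.
import io
import threading
import zipfile
from concurrent.futures import ThreadPoolExecutor


def package_assets(storage, items: dict[str, str]) -> bytes:
    buffer = io.BytesIO()
    lock = threading.Lock()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        def fetch_and_write(arcname: str, storage_key: str) -> None:
            data = storage.load(storage_key)  # storage reads happen in parallel
            with lock:                        # ZipFile writes are not thread-safe
                archive.writestr(arcname, data)

        with ThreadPoolExecutor(max_workers=8) as pool:
            futures = [pool.submit(fetch_and_write, name, key) for name, key in items.items()]
            for future in futures:
                future.result()  # surface any storage errors
    return buffer.getvalue()
```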

View File

@@ -0,0 +1,9 @@
Summary:
- Builds AssetItem entries for asset trees using AssetPath-derived storage keys.
Invariants:
- Uses AssetPath to compute draft storage keys.
Tests:
- Covered by asset parsing and packaging tests.

View File

@@ -0,0 +1,20 @@
Summary:
- Defines AssetPath facade + typed asset path classes for app-asset storage access.
- Maps asset paths to storage keys and generates presigned or signed-proxy URLs.
- Signs proxy URLs using tenant private keys and enforces expiration.
- Exposes app_asset_storage singleton for reuse.
Invariants:
- AssetPathBase fields (tenant_id/app_id/resource_id/node_id) must be UUIDs.
- AssetPath.from_components enforces valid types and resolved node_id presence.
- Storage keys are derived internally via AssetPathBase.get_storage_key; callers never supply raw paths.
- AppAssetStorage.storage returns the cached presign wrapper (not the raw storage).
Edge Cases:
- Storage backends without presign support must fall back to signed proxy URLs.
- Signed proxy verification enforces expiration and tenant-scoped signing keys.
- Upload URLs also fall back to signed proxy endpoints when presign is unsupported.
- load_or_none treats SilentStorage "File Not Found" bytes as missing.
Tests:
- Unit tests for ref validation, storage key mapping, and signed URL verification.
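As a rough illustration of the invariants above (UUID-validated path components, storage keys derived internally rather than supplied by callers), a pydantic-style sketch; the real AssetPath classes, fields, and key layout may differ:

```python
# Illustrative sketch only: a typed asset path whose components are validated
# as UUIDs and whose storage key is always derived internally.
from uuid import UUID

from pydantic import BaseModel


class DraftAssetPath(BaseModel):
    tenant_id: UUID
    app_id: UUID
    resource_id: UUID
    node_id: UUID

    def get_storage_key(self) -> str:
        # Callers never supply raw paths; the key is always derived here.
        return f"app_assets/{self.tenant_id}/{self.app_id}/draft/{self.resource_id}/{self.node_id}"
```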

View File

@@ -0,0 +1,10 @@
Summary:
- Extracts asset files from a zip and persists them into app asset storage.
Invariants:
- Rejects path traversal/absolute/backslash paths.
- Saves extracted files via AppAssetStorage draft refs.
Tests:
- Zip security edge cases and tree construction tests.

View File

@@ -0,0 +1,9 @@
Summary:
- Downloads published app asset zip into sandbox and extracts it.
Invariants:
- Uses AppAssetStorage to generate download URLs for build zips (internal URL).
Tests:
- Sandbox initialization integration tests.

View File

@@ -0,0 +1,12 @@
Summary:
- Downloads draft/resolved assets into sandbox for draft execution.
Invariants:
- Uses AppAssetStorage to generate download URLs for draft/resolved refs (internal URL).
Edge Cases:
- No nodes -> returns early.
Tests:
- Sandbox draft initialization tests.

View File

@@ -0,0 +1,9 @@
Summary:
- Sandbox lifecycle wrapper (ready/cancel/fail signals, mount/unmount, release).
Invariants:
- wait_ready raises with the original initialization error as the cause.
- release always attempts unmount and environment release, logging failures.
Tests:
- Covered by sandbox lifecycle/unit tests and workflow execution error handling.

View File

@@ -0,0 +1,2 @@
Summary:
- Sandbox security helper modules.

View File

@@ -0,0 +1,13 @@
Summary:
- Generates and verifies signed URLs for sandbox archive upload/download.
Invariants:
- tenant_id and sandbox_id must be UUIDs.
- Signatures are tenant-scoped and include operation, expiry, and nonce.
Edge Cases:
- Missing tenant private key raises ValueError.
- Expired or tampered signatures are rejected.
Tests:
- Add unit tests if sandbox archive signature behavior expands.

View File

@@ -0,0 +1,12 @@
Summary:
- Manages sandbox archive uploads/downloads for workspace persistence.
Invariants:
- Archive storage key is sandbox/<tenant_id>/<sandbox_id>.tar.gz.
- Signed URLs are tenant-scoped and use the external files URL.
Edge Cases:
- Missing archive skips mount.
Tests:
- Covered indirectly via sandbox integration tests.

View File

@@ -0,0 +1,9 @@
Summary:
- Loads/saves skill bundles to app asset storage.
Invariants:
- Skill bundles use AppAssetStorage refs and JSON serialization.
Tests:
- Covered by skill bundle build/load unit tests.

View File

@@ -0,0 +1,16 @@
# E2B Sandbox Provider Notes
## Purpose
- Implements the E2B-backed `VirtualEnvironment` provider and bootstraps sandbox metadata, file I/O, and command execution.
## Key Decisions
- Sandbox metadata is gathered during `_construct_environment` using the E2B SDK before returning `Metadata`.
- Architecture/OS detection uses a single `uname -m -s` call split by whitespace to reduce round-trips.
- Command execution streams stdout/stderr through `QueueTransportReadCloser`; stdin is unsupported.
## Edge Cases
- `release_environment` raises when sandbox termination fails.
- `execute_command` runs in a background thread; consumers must read stdout/stderr until EOF.
## Tests/Verification
- None yet. Add targeted service tests when behavior changes.
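The notes mention a single `uname -m -s` call for OS/architecture detection. A hedged sketch of mapping that output to the dify-cli-<os>-<arch> binaries added under api/bin in this diff; the alias table and token handling are assumptions, not the verified implementation:

```python
# Illustrative mapping from `uname -m -s` output to a dify-cli binary name.
# Token order is not assumed; OS and arch are detected by lookup.
_OS_ALIASES = {"linux": "linux", "darwin": "darwin"}
_ARCH_ALIASES = {"x86_64": "amd64", "amd64": "amd64", "aarch64": "arm64", "arm64": "arm64"}


def cli_binary_name(uname_output: str) -> str:
    os_name = arch = None
    for token in uname_output.strip().split():
        lowered = token.lower()
        if lowered in _OS_ALIASES:
            os_name = _OS_ALIASES[lowered]
        elif lowered in _ARCH_ALIASES:
            arch = _ARCH_ALIASES[lowered]
    if not os_name or not arch:
        raise ValueError(f"unsupported uname output: {uname_output!r}")
    return f"dify-cli-{os_name}-{arch}"
```

For example, `cli_binary_name("Linux x86_64")` would resolve to `dify-cli-linux-amd64`, matching the binary names shipped in this change.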

View File

@@ -0,0 +1,14 @@
Summary:
- App asset CRUD, publish/build pipeline, and presigned URL generation.
Invariants:
- Asset storage access goes through AppAssetStorage + AssetPath, using app_asset_storage singleton.
- Tree operations require tenant/app scoping and lock for mutation.
- Asset zips are packaged via raw storage with storage keys from AppAssetStorage.
Edge Cases:
- File nodes larger than preview limit are rejected.
- Deletion runs asynchronously; storage failures are logged.
Tests:
- Unit tests for storage URL generation and publish/build flows.

View File

@@ -0,0 +1,10 @@
Summary:
- Imports app bundles, including asset extraction into app asset storage.
Invariants:
- Asset imports respect zip security checks and tenant/app scoping.
- Draft asset packaging uses AppAssetStorage for key mapping.
Tests:
- Bundle import unit tests and zip validation coverage.

View File

@@ -0,0 +1,6 @@
Summary:
- Unit tests for AppAssetStorage ref validation, key mapping, and signing.
Tests:
- Covers valid/invalid refs, signature verify, expiration handling, and proxy URL generation.

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import os
import sys
from typing import TYPE_CHECKING, cast
@@ -16,10 +17,15 @@ def is_db_command() -> bool:
# create app
flask_app = None
socketio_app = None
if is_db_command():
from app_factory import create_migrations_app
app = create_migrations_app()
socketio_app = app
flask_app = app
else:
# Gunicorn and Celery handle monkey patching automatically in production by
# specifying the `gevent` worker class. Manual monkey patching is not required here.
@@ -30,8 +36,15 @@ else:
from app_factory import create_app
app = create_app()
celery = cast("Celery", app.extensions["celery"])
socketio_app, flask_app = create_app()
app = flask_app
celery = cast("Celery", flask_app.extensions["celery"])
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5001)
from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler # type: ignore[reportMissingTypeStubs]
host = os.environ.get("HOST", "0.0.0.0")
port = int(os.environ.get("PORT", 5001))
server = pywsgi.WSGIServer((host, port), socketio_app, handler_class=WebSocketHandler)
server.serve_forever()

View File

@@ -1,6 +1,7 @@
import logging
import time
import socketio # type: ignore[reportMissingTypeStubs]
from opentelemetry.trace import get_current_span
from opentelemetry.trace.span import INVALID_SPAN_ID, INVALID_TRACE_ID
@@ -8,6 +9,7 @@ from configs import dify_config
from contexts.wrapper import RecyclableContextVar
from core.logging.context import init_request_context
from dify_app import DifyApp
from extensions.ext_socketio import sio
logger = logging.getLogger(__name__)
@@ -60,14 +62,18 @@ def create_flask_app_with_configs() -> DifyApp:
return dify_app
def create_app() -> DifyApp:
def create_app() -> tuple[socketio.WSGIApp, DifyApp]:
start_time = time.perf_counter()
app = create_flask_app_with_configs()
initialize_extensions(app)
sio.app = app
socketio_app = socketio.WSGIApp(sio, app)
end_time = time.perf_counter()
if dify_config.DEBUG:
logger.info("Finished create_app (%s ms)", round((end_time - start_time) * 1000, 2))
return app
return socketio_app, app
def initialize_extensions(app: DifyApp):

BIN
api/bin/dify-cli-darwin-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-darwin-arm64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-arm64 Executable file

Binary file not shown.

View File

@@ -23,7 +23,8 @@ from core.rag.datasource.vdb.vector_factory import Vector
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.index_processor.constant.built_in_field import BuiltInField
from core.rag.models.document import ChildDocument, Document
from core.tools.utils.system_oauth_encryption import encrypt_system_oauth_params
from core.sandbox import SandboxBuilder, SandboxType
from core.tools.utils.system_encryption import encrypt_system_params
from events.app_event import app_was_created
from extensions.ext_database import db
from extensions.ext_redis import redis_client
@@ -1613,7 +1614,7 @@ def remove_orphaned_files_on_storage(force: bool):
click.echo(click.style(f"- Scanning files on storage path {storage_path}", fg="white"))
files = storage.scan(path=storage_path, files=True, directories=False)
all_files_on_storage.extend(files)
except FileNotFoundError as e:
except FileNotFoundError:
click.echo(click.style(f" -> Skipping path {storage_path} as it does not exist.", fg="yellow"))
continue
except Exception as e:
@@ -1864,6 +1865,59 @@ def file_usage(
click.echo(click.style(f"Use --offset {offset + limit} to see next page", fg="white"))
@click.command("setup-sandbox-system-config", help="Setup system-level sandbox provider configuration.")
@click.option(
"--provider-type", prompt=True, type=click.Choice(["e2b", "docker", "local", "ssh"]), help="Sandbox provider type"
)
@click.option("--config", prompt=True, help='Configuration JSON (e.g., {"api_key": "xxx"} for e2b)')
def setup_sandbox_system_config(provider_type: str, config: str):
"""
Setup system-level sandbox provider configuration.
Examples:
flask setup-sandbox-system-config --provider-type e2b --config '{"api_key": "e2b_xxx"}'
flask setup-sandbox-system-config --provider-type docker --config '{"docker_sock": "unix:///var/run/docker.sock"}'
flask setup-sandbox-system-config --provider-type local --config '{}'
flask setup-sandbox-system-config --provider-type ssh --config \
'{"ssh_host": "agentbox", "ssh_port": "22", "ssh_username": "agentbox", "ssh_password": "agentbox"}'
"""
from models.sandbox import SandboxProviderSystemConfig
try:
click.echo(click.style(f"Validating config: {config}", fg="yellow"))
config_dict = TypeAdapter(dict[str, Any]).validate_json(config)
click.echo(click.style("Config validated successfully.", fg="green"))
click.echo(click.style(f"Validating config schema for provider type: {provider_type}", fg="yellow"))
SandboxBuilder.validate(SandboxType(provider_type), config_dict)
click.echo(click.style("Config schema validated successfully.", fg="green"))
click.echo(click.style("Encrypting config...", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
encrypted_config = encrypt_system_params(config_dict)
click.echo(click.style("Config encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error validating/encrypting config: {str(e)}", fg="red"))
return
deleted_count = db.session.query(SandboxProviderSystemConfig).filter_by(provider_type=provider_type).delete()
if deleted_count > 0:
click.echo(
click.style(
f"Deleted {deleted_count} existing system config for provider type: {provider_type}", fg="yellow"
)
)
system_config = SandboxProviderSystemConfig(
provider_type=provider_type,
encrypted_config=encrypted_config,
)
db.session.add(system_config)
db.session.commit()
click.echo(click.style(f"Sandbox system config setup successfully. id: {system_config.id}", fg="green"))
click.echo(click.style(f"Provider type: {provider_type}", fg="green"))
@click.command("setup-system-tool-oauth-client", help="Setup system tool oauth client.")
@click.option("--provider", prompt=True, help="Provider name")
@click.option("--client-params", prompt=True, help="Client Params")
@@ -1883,7 +1937,7 @@ def setup_system_tool_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))
@@ -1932,7 +1986,7 @@ def setup_system_trigger_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))

View File

@@ -2,6 +2,7 @@ import logging
from pathlib import Path
from typing import Any
from pydantic import Field
from pydantic.fields import FieldInfo
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict, TomlConfigSettingsSource
@@ -82,6 +83,17 @@ class DifyConfig(
extra="ignore",
)
SANDBOX_DIFY_CLI_ROOT: str | None = Field(
default=None,
description=(
"Filesystem directory containing dify CLI binaries named dify-cli-<os>-<arch>. "
"Defaults to api/bin when unset."
),
)
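As a rough illustration of the binary naming convention described above, a caller could resolve the expected file name like this (the os/arch mapping here is an assumption, not the server's actual lookup code):
import platform

os_name = platform.system().lower()  # e.g. "linux", "darwin"
arch = {"x86_64": "amd64", "aarch64": "arm64"}.get(platform.machine(), platform.machine())
binary_name = f"dify-cli-{os_name}-{arch}"  # e.g. "dify-cli-linux-amd64"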
DIFY_PORT: int = Field(
default=5001,
description="Port used by Dify to communicate with the host machine.",
)
# Before adding any config,
# please consider to arrange it in the proper config group of existed or added
# for better readability and maintainability.

View File

@@ -1,3 +1,4 @@
from datetime import timedelta
from enum import StrEnum
from typing import Literal
@@ -48,6 +49,16 @@ class SecurityConfig(BaseSettings):
default=5,
)
WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS: PositiveInt = Field(
description="Maximum number of web form submissions allowed per IP within the rate limit window",
default=30,
)
WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS: PositiveInt = Field(
description="Time window in seconds for web form submission rate limiting",
default=60,
)
LOGIN_DISABLED: bool = Field(
description="Whether to disable login checks",
default=False,
@@ -82,6 +93,12 @@ class AppExecutionConfig(BaseSettings):
default=0,
)
HUMAN_INPUT_GLOBAL_TIMEOUT_SECONDS: PositiveInt = Field(
description="Maximum seconds a workflow run can stay paused waiting for human input before global timeout.",
default=int(timedelta(days=7).total_seconds()),
ge=1,
)
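For reference, the default above works out to seven days expressed in seconds:
from datetime import timedelta

assert int(timedelta(days=7).total_seconds()) == 604_800  # default HUMAN_INPUT_GLOBAL_TIMEOUT_SECONDS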
class CodeExecutionSandboxConfig(BaseSettings):
"""
@@ -249,6 +266,17 @@ class PluginConfig(BaseSettings):
)
class CliApiConfig(BaseSettings):
"""
Configuration for CLI API (for dify-cli to call back from external sandbox environments)
"""
CLI_API_URL: str = Field(
description="CLI API URL for external sandbox (e.g., e2b) to call back.",
default="http://localhost:5001",
)
class MarketplaceConfig(BaseSettings):
"""
Configuration for marketplace
@@ -1134,6 +1162,14 @@ class CeleryScheduleTasksConfig(BaseSettings):
description="Enable queue monitor task",
default=False,
)
ENABLE_HUMAN_INPUT_TIMEOUT_TASK: bool = Field(
description="Enable human input timeout check task",
default=True,
)
HUMAN_INPUT_TIMEOUT_TASK_INTERVAL: PositiveInt = Field(
description="Human input timeout check interval in minutes",
default=1,
)
ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK: bool = Field(
description="Enable check upgradable plugin task",
default=True,
@@ -1244,6 +1280,13 @@ class PositionConfig(BaseSettings):
return {item.strip() for item in self.POSITION_TOOL_EXCLUDES.split(",") if item.strip() != ""}
class CollaborationConfig(BaseSettings):
ENABLE_COLLABORATION_MODE: bool = Field(
description="Whether to enable collaboration mode features across the workspace",
default=False,
)
class LoginConfig(BaseSettings):
ENABLE_EMAIL_CODE_LOGIN: bool = Field(
description="whether to enable email code login",
@@ -1338,6 +1381,7 @@ class FeatureConfig(
TriggerConfig,
AsyncWorkflowConfig,
PluginConfig,
CliApiConfig,
MarketplaceConfig,
DataSetConfig,
EndpointConfig,
@@ -1362,6 +1406,7 @@ class FeatureConfig(
WorkflowConfig,
WorkflowNodeExecutionConfig,
WorkspaceConfig,
CollaborationConfig,
LoginConfig,
AccountConfig,
SwaggerUIConfig,

View File

@@ -6,6 +6,7 @@ from pydantic import Field, NonNegativeFloat, NonNegativeInt, PositiveFloat, Pos
from pydantic_settings import BaseSettings
from .cache.redis_config import RedisConfig
from .cache.redis_pubsub_config import RedisPubSubConfig
from .storage.aliyun_oss_storage_config import AliyunOSSStorageConfig
from .storage.amazon_s3_storage_config import S3StorageConfig
from .storage.azure_blob_storage_config import AzureBlobStorageConfig
@@ -317,6 +318,7 @@ class MiddlewareConfig(
CeleryConfig, # Note: CeleryConfig already inherits from DatabaseConfig
KeywordStoreConfig,
RedisConfig,
RedisPubSubConfig,
# configs of storage and storage providers
StorageConfig,
AliyunOSSStorageConfig,

View File

@@ -0,0 +1,96 @@
from typing import Literal, Protocol
from urllib.parse import quote_plus, urlunparse
from pydantic import Field
from pydantic_settings import BaseSettings
class RedisConfigDefaults(Protocol):
REDIS_HOST: str
REDIS_PORT: int
REDIS_USERNAME: str | None
REDIS_PASSWORD: str | None
REDIS_DB: int
REDIS_USE_SSL: bool
REDIS_USE_SENTINEL: bool | None
REDIS_USE_CLUSTERS: bool
class RedisConfigDefaultsMixin:
def _redis_defaults(self: RedisConfigDefaults) -> RedisConfigDefaults:
return self
class RedisPubSubConfig(BaseSettings, RedisConfigDefaultsMixin):
"""
Configuration settings for Redis pub/sub streaming.
"""
PUBSUB_REDIS_URL: str | None = Field(
alias="PUBSUB_REDIS_URL",
description=(
"Redis connection URL for pub/sub streaming events between API "
"and celery worker, defaults to url constructed from "
"`REDIS_*` configurations"
),
default=None,
)
PUBSUB_REDIS_USE_CLUSTERS: bool = Field(
description=(
"Enable Redis Cluster mode for pub/sub streaming. It's highly "
"recommended to enable this for large deployments."
),
default=False,
)
PUBSUB_REDIS_CHANNEL_TYPE: Literal["pubsub", "sharded"] = Field(
description=(
"Pub/sub channel type for streaming events. "
"Valid options are:\n"
"\n"
" - pubsub: for normal Pub/Sub\n"
" - sharded: for sharded Pub/Sub\n"
"\n"
"It's highly recommended to use sharded Pub/Sub AND redis cluster "
"for large deployments."
),
default="pubsub",
)
def _build_default_pubsub_url(self) -> str:
defaults = self._redis_defaults()
if not defaults.REDIS_HOST or not defaults.REDIS_PORT:
raise ValueError("PUBSUB_REDIS_URL must be set when default Redis URL cannot be constructed")
scheme = "rediss" if defaults.REDIS_USE_SSL else "redis"
username = defaults.REDIS_USERNAME or None
password = defaults.REDIS_PASSWORD or None
userinfo = ""
if username:
userinfo = quote_plus(username)
if password:
password_part = quote_plus(password)
userinfo = f"{userinfo}:{password_part}" if userinfo else f":{password_part}"
if userinfo:
userinfo = f"{userinfo}@"
host = defaults.REDIS_HOST
port = defaults.REDIS_PORT
db = defaults.REDIS_DB
netloc = f"{userinfo}{host}:{port}"
return urlunparse((scheme, netloc, f"/{db}", "", "", ""))
@property
def normalized_pubsub_redis_url(self) -> str:
pubsub_redis_url = self.PUBSUB_REDIS_URL
if pubsub_redis_url:
cleaned = pubsub_redis_url.strip()
pubsub_redis_url = cleaned or None
if pubsub_redis_url:
return pubsub_redis_url
return self._build_default_pubsub_url()
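A quick sketch of what the fallback URL construction yields for hypothetical REDIS_* values (mirrors _build_default_pubsub_url above):
from urllib.parse import quote_plus, urlunparse

userinfo = f"{quote_plus('default')}:{quote_plus('p@ss')}@"  # REDIS_USERNAME / REDIS_PASSWORD (hypothetical)
url = urlunparse(("rediss", f"{userinfo}redis.internal:6379", "/1", "", "", ""))
print(url)  # rediss://default:p%40ss@redis.internal:6379/1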

View File

@@ -0,0 +1,27 @@
from flask import Blueprint
from flask_restx import Namespace
from libs.external_api import ExternalApi
bp = Blueprint("cli_api", __name__, url_prefix="/cli/api")
api = ExternalApi(
bp,
version="1.0",
title="CLI API",
description="APIs for Dify CLI to call back from external sandbox environments (e.g., e2b)",
)
# Create namespace
cli_api_ns = Namespace("cli_api", description="CLI API operations", path="/")
from .dify_cli import cli_api as _plugin
api.add_namespace(cli_api_ns)
__all__ = [
"_plugin",
"api",
"bp",
"cli_api_ns",
]

View File

@@ -0,0 +1,192 @@
from flask import abort
from flask_restx import Resource
from pydantic import BaseModel
from controllers.cli_api import cli_api_ns
from controllers.cli_api.dify_cli.wraps import get_cli_user_tenant, plugin_data
from controllers.cli_api.wraps import cli_api_only
from controllers.console.wraps import setup_required
from core.app.entities.app_invoke_entities import InvokeFrom
from core.file.helpers import get_signed_file_url_for_plugin
from core.plugin.backwards_invocation.app import PluginAppBackwardsInvocation
from core.plugin.backwards_invocation.base import BaseBackwardsInvocationResponse
from core.plugin.backwards_invocation.model import PluginModelBackwardsInvocation
from core.plugin.backwards_invocation.tool import PluginToolBackwardsInvocation
from core.plugin.entities.request import (
RequestInvokeApp,
RequestInvokeLLM,
RequestInvokeTool,
RequestRequestUploadFile,
)
from core.sandbox.bash.dify_cli import DifyCliToolConfig
from core.session.cli_api import CliContext
from core.skill.entities import ToolInvocationRequest
from core.tools.entities.tool_entities import ToolProviderType
from core.tools.tool_manager import ToolManager
from libs.helper import length_prefixed_response
from models.account import Account
from models.model import EndUser, Tenant
class FetchToolItem(BaseModel):
tool_type: str
tool_provider: str
tool_name: str
credential_id: str | None = None
class FetchToolBatchRequest(BaseModel):
tools: list[FetchToolItem]
@cli_api_ns.route("/invoke/llm")
class CliInvokeLLMApi(Resource):
@cli_api_only
@get_cli_user_tenant
@setup_required
@plugin_data(payload_type=RequestInvokeLLM)
def post(
self,
user_model: Account | EndUser,
tenant_model: Tenant,
payload: RequestInvokeLLM,
cli_context: CliContext,
):
def generator():
response = PluginModelBackwardsInvocation.invoke_llm(user_model.id, tenant_model, payload)
return PluginModelBackwardsInvocation.convert_to_event_stream(response)
return length_prefixed_response(0xF, generator())
@cli_api_ns.route("/invoke/tool")
class CliInvokeToolApi(Resource):
@cli_api_only
@get_cli_user_tenant
@setup_required
@plugin_data(payload_type=RequestInvokeTool)
def post(
self,
user_model: Account | EndUser,
tenant_model: Tenant,
payload: RequestInvokeTool,
cli_context: CliContext,
):
tool_type = ToolProviderType.value_of(payload.tool_type)
request = ToolInvocationRequest(
tool_type=tool_type,
provider=payload.provider,
tool_name=payload.tool,
credential_id=payload.credential_id,
)
if cli_context.tool_access and not cli_context.tool_access.is_allowed(request):
abort(403, description=f"Access denied for tool: {payload.provider}/{payload.tool}")
def generator():
return PluginToolBackwardsInvocation.convert_to_event_stream(
PluginToolBackwardsInvocation.invoke_tool(
tenant_id=tenant_model.id,
user_id=user_model.id,
tool_type=tool_type,
provider=payload.provider,
tool_name=payload.tool,
tool_parameters=payload.tool_parameters,
credential_id=payload.credential_id,
),
)
return length_prefixed_response(0xF, generator())
@cli_api_ns.route("/invoke/app")
class CliInvokeAppApi(Resource):
@cli_api_only
@get_cli_user_tenant
@setup_required
@plugin_data(payload_type=RequestInvokeApp)
def post(
self,
user_model: Account | EndUser,
tenant_model: Tenant,
payload: RequestInvokeApp,
cli_context: CliContext,
):
response = PluginAppBackwardsInvocation.invoke_app(
app_id=payload.app_id,
user_id=user_model.id,
tenant_id=tenant_model.id,
conversation_id=payload.conversation_id,
query=payload.query,
stream=payload.response_mode == "streaming",
inputs=payload.inputs,
files=payload.files,
)
return length_prefixed_response(0xF, PluginAppBackwardsInvocation.convert_to_event_stream(response))
@cli_api_ns.route("/upload/file/request")
class CliUploadFileRequestApi(Resource):
@cli_api_only
@get_cli_user_tenant
@setup_required
@plugin_data(payload_type=RequestRequestUploadFile)
def post(
self,
user_model: Account | EndUser,
tenant_model: Tenant,
payload: RequestRequestUploadFile,
cli_context: CliContext,
):
url = get_signed_file_url_for_plugin(
filename=payload.filename,
mimetype=payload.mimetype,
tenant_id=tenant_model.id,
user_id=user_model.id,
)
return BaseBackwardsInvocationResponse(data={"url": url}).model_dump()
@cli_api_ns.route("/fetch/tools/batch")
class CliFetchToolsBatchApi(Resource):
@cli_api_only
@get_cli_user_tenant
@setup_required
@plugin_data(payload_type=FetchToolBatchRequest)
def post(
self,
user_model: Account | EndUser,
tenant_model: Tenant,
payload: FetchToolBatchRequest,
cli_context: CliContext,
):
tools: list[dict] = []
for item in payload.tools:
provider_type = ToolProviderType.value_of(item.tool_type)
request = ToolInvocationRequest(
tool_type=provider_type,
provider=item.tool_provider,
tool_name=item.tool_name,
credential_id=item.credential_id,
)
if cli_context.tool_access and not cli_context.tool_access.is_allowed(request):
abort(403, description=f"Access denied for tool: {item.tool_provider}/{item.tool_name}")
try:
tool_runtime = ToolManager.get_tool_runtime(
tenant_id=tenant_model.id,
provider_type=provider_type,
provider_id=item.tool_provider,
tool_name=item.tool_name,
invoke_from=InvokeFrom.AGENT,
credential_id=item.credential_id,
)
tool_config = DifyCliToolConfig.create_from_tool(tool_runtime)
tools.append(tool_config.model_dump())
except Exception:
continue
return BaseBackwardsInvocationResponse(data={"tools": tools}).model_dump()

View File

@@ -0,0 +1,137 @@
from collections.abc import Callable
from functools import wraps
from typing import ParamSpec, TypeVar
from flask import current_app, g, request
from flask_login import user_logged_in
from pydantic import BaseModel
from sqlalchemy.orm import Session
from core.session.cli_api import CliApiSession, CliContext
from extensions.ext_database import db
from libs.login import current_user
from models.account import Tenant
from models.model import DefaultEndUserSessionID, EndUser
P = ParamSpec("P")
R = TypeVar("R")
class TenantUserPayload(BaseModel):
tenant_id: str
user_id: str
def get_user(tenant_id: str, user_id: str | None) -> EndUser:
"""
Get the current user.
NOTE: user_id is not trusted; it could be maliciously set to any value.
As a result, it can only be treated as an end-user id.
"""
if not user_id:
user_id = DefaultEndUserSessionID.DEFAULT_SESSION_ID
is_anonymous = user_id == DefaultEndUserSessionID.DEFAULT_SESSION_ID
try:
with Session(db.engine) as session:
user_model = None
if is_anonymous:
user_model = (
session.query(EndUser)
.where(
EndUser.session_id == user_id,
EndUser.tenant_id == tenant_id,
)
.first()
)
else:
user_model = (
session.query(EndUser)
.where(
EndUser.id == user_id,
EndUser.tenant_id == tenant_id,
)
.first()
)
if not user_model:
user_model = EndUser(
tenant_id=tenant_id,
type="service_api",
is_anonymous=is_anonymous,
session_id=user_id,
)
session.add(user_model)
session.commit()
session.refresh(user_model)
except Exception:
raise ValueError("user not found")
return user_model
def get_cli_user_tenant(view_func: Callable[P, R]):
@wraps(view_func)
def decorated_view(*args: P.args, **kwargs: P.kwargs):
session: CliApiSession | None = getattr(g, "cli_api_session", None)
if session is None:
raise ValueError("session not found")
user_id = session.user_id
tenant_id = session.tenant_id
cli_context = CliContext.model_validate(session.context)
if not user_id:
user_id = DefaultEndUserSessionID.DEFAULT_SESSION_ID
try:
tenant_model = (
db.session.query(Tenant)
.where(
Tenant.id == tenant_id,
)
.first()
)
except Exception:
raise ValueError("tenant not found")
if not tenant_model:
raise ValueError("tenant not found")
kwargs["tenant_model"] = tenant_model
kwargs["user_model"] = get_user(tenant_id, user_id)
kwargs["cli_context"] = cli_context
current_app.login_manager._update_request_context_with_user(kwargs["user_model"]) # type: ignore
user_logged_in.send(current_app._get_current_object(), user=current_user) # type: ignore
return view_func(*args, **kwargs)
return decorated_view
def plugin_data(view: Callable[P, R] | None = None, *, payload_type: type[BaseModel]):
def decorator(view_func: Callable[P, R]):
@wraps(view_func)
def decorated_view(*args: P.args, **kwargs: P.kwargs):
try:
data = request.get_json()
except Exception:
raise ValueError("invalid json")
try:
payload = payload_type.model_validate(data)
except Exception as e:
raise ValueError(f"invalid payload: {str(e)}")
kwargs["payload"] = payload
return view_func(*args, **kwargs)
return decorated_view
if view is None:
return decorator
else:
return decorator(view)

View File

@@ -0,0 +1,56 @@
import hashlib
import hmac
import time
from collections.abc import Callable
from functools import wraps
from typing import ParamSpec, TypeVar
from flask import abort, g, request
from core.session.cli_api import CliApiSessionManager
P = ParamSpec("P")
R = TypeVar("R")
SIGNATURE_TTL_SECONDS = 300
def _verify_signature(session_secret: str, timestamp: str, body: bytes, signature: str) -> bool:
expected = hmac.new(
session_secret.encode(),
f"{timestamp}.".encode() + body,
hashlib.sha256,
).hexdigest()
return hmac.compare_digest(f"sha256={expected}", signature)
def cli_api_only(view: Callable[P, R]):
@wraps(view)
def decorated(*args: P.args, **kwargs: P.kwargs):
session_id = request.headers.get("X-Cli-Api-Session-Id")
timestamp = request.headers.get("X-Cli-Api-Timestamp")
signature = request.headers.get("X-Cli-Api-Signature")
if not session_id or not timestamp or not signature:
abort(401)
try:
ts = int(timestamp)
if abs(time.time() - ts) > SIGNATURE_TTL_SECONDS:
abort(401)
except ValueError:
abort(401)
session = CliApiSessionManager().get(session_id)
if not session:
abort(401)
body = request.get_data()
if not _verify_signature(session.secret, timestamp, body, signature):
abort(401)
g.cli_api_session = session
return view(*args, **kwargs)
return decorated
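For context, a caller such as dify-cli would sign its request body so that _verify_signature above accepts it; a minimal sketch, assuming the session id and secret were issued via CliApiSessionManager:
import hashlib
import hmac
import json
import time

session_id, session_secret = "sess_123", "secret_abc"  # hypothetical session credentials
body = json.dumps({"provider": "time", "tool": "current_time"}).encode()
timestamp = str(int(time.time()))
digest = hmac.new(session_secret.encode(), f"{timestamp}.".encode() + body, hashlib.sha256).hexdigest()
headers = {
    "X-Cli-Api-Session-Id": session_id,
    "X-Cli-Api-Timestamp": timestamp,
    "X-Cli-Api-Signature": f"sha256={digest}",
}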

View File

@@ -5,8 +5,6 @@ from enum import StrEnum
from flask_restx import Namespace
from pydantic import BaseModel, TypeAdapter
from controllers.console import console_ns
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
@@ -24,6 +22,9 @@ def register_schema_models(namespace: Namespace, *models: type[BaseModel]) -> No
def get_or_create_model(model_name: str, field_def):
# Import lazily to avoid circular imports between console controllers and schema helpers.
from controllers.console import console_ns
existing = console_ns.models.get(model_name)
if existing is None:
existing = console_ns.model(model_name, field_def)

View File

@@ -32,13 +32,16 @@ for module_name in RESOURCE_MODULES:
# Ensure resource modules are imported so route decorators are evaluated.
# Import other controllers
# Sandbox file browser
from . import (
admin,
apikey,
extension,
feature,
human_input_form,
init_validate,
ping,
sandbox_files,
setup,
spec,
version,
@@ -50,6 +53,7 @@ from .app import (
agent,
annotation,
app,
app_asset,
audio,
completion,
conversation,
@@ -60,9 +64,11 @@ from .app import (
model_config,
ops_trace,
site,
skills,
statistic,
workflow,
workflow_app_log,
workflow_comment,
workflow_draft_variable,
workflow_run,
workflow_statistic,
@@ -114,6 +120,7 @@ from .explore import (
saved_message,
trial,
)
from .socketio import workflow as socketio_workflow # pyright: ignore[reportUnusedImport]
# Import tag controllers
from .tag import tags
@@ -128,6 +135,7 @@ from .workspace import (
model_providers,
models,
plugin,
sandbox_providers,
tool_providers,
trigger_providers,
workspace,
@@ -146,6 +154,7 @@ __all__ = [
"api",
"apikey",
"app",
"app_asset",
"audio",
"banner",
"billing",
@@ -171,6 +180,7 @@ __all__ = [
"forgot_password",
"generator",
"hit_testing",
"human_input_form",
"init_validate",
"installed_app",
"load_balancing_config",
@@ -194,9 +204,12 @@ __all__ = [
"rag_pipeline_import",
"rag_pipeline_workflow",
"recommended_app",
"sandbox_files",
"sandbox_providers",
"saved_message",
"setup",
"site",
"skills",
"spec",
"statistic",
"tags",
@@ -207,6 +220,7 @@ __all__ = [
"website",
"workflow",
"workflow_app_log",
"workflow_comment",
"workflow_draft_variable",
"workflow_run",
"workflow_statistic",

View File

@@ -1,6 +1,7 @@
import logging
import uuid
from datetime import datetime
from enum import StrEnum
from typing import Any, Literal, TypeAlias
from flask import request
@@ -31,6 +32,7 @@ from extensions.ext_database import db
from libs.login import current_account_with_tenant, login_required
from models import App, DatasetPermissionEnum, Workflow
from models.model import IconType
from models.workflow_features import WorkflowFeatures
from services.app_dsl_service import AppDslService, ImportMode
from services.app_service import AppService
from services.enterprise.enterprise_service import EnterpriseService
@@ -58,6 +60,11 @@ register_enum_models(console_ns, IconType)
_logger = logging.getLogger(__name__)
class RuntimeType(StrEnum):
CLASSIC = "classic"
SANDBOXED = "sandboxed"
class AppListQuery(BaseModel):
page: int = Field(default=1, ge=1, le=99999, description="Page number (1-99999)")
limit: int = Field(default=20, ge=1, le=100, description="Page size (1-100)")
@@ -122,6 +129,11 @@ class AppExportQuery(BaseModel):
workflow_id: str | None = Field(default=None, description="Specific workflow ID to export")
class AppExportBundleQuery(BaseModel):
include_secret: bool = Field(default=False, description="Include secrets in export")
workflow_id: str | None = Field(default=None, description="Specific workflow ID to export")
class AppNamePayload(BaseModel):
name: str = Field(..., min_length=1, description="Name to check")
@@ -347,6 +359,7 @@ class AppPartial(ResponseModel):
create_user_name: str | None = None
author_name: str | None = None
has_draft_trigger: bool | None = None
runtime_type: RuntimeType = RuntimeType.CLASSIC
@computed_field(return_type=str | None) # type: ignore
@property
@@ -496,6 +509,7 @@ class AppListApi(Resource):
str(app.id) for app in app_pagination.items if app.mode in {"workflow", "advanced-chat"}
]
draft_trigger_app_ids: set[str] = set()
sandbox_app_ids: set[str] = set()
if workflow_capable_app_ids:
draft_workflows = (
db.session.execute(
@@ -514,6 +528,10 @@ class AppListApi(Resource):
NodeType.TRIGGER_PLUGIN,
}
for workflow in draft_workflows:
# Check sandbox feature
if workflow.get_feature(WorkflowFeatures.SANDBOX).enabled:
sandbox_app_ids.add(str(workflow.app_id))
node_id = None
try:
for node_id, node_data in workflow.walk_nodes():
@@ -526,6 +544,7 @@ class AppListApi(Resource):
for app in app_pagination.items:
app.has_draft_trigger = str(app.id) in draft_trigger_app_ids
app.runtime_type = RuntimeType.SANDBOXED if str(app.id) in sandbox_app_ids else RuntimeType.CLASSIC
pagination_model = AppPagination.model_validate(app_pagination, from_attributes=True)
return pagination_model.model_dump(mode="json"), 200
@@ -694,6 +713,29 @@ class AppExportApi(Resource):
return payload.model_dump(mode="json")
@console_ns.route("/apps/<uuid:app_id>/export-bundle")
class AppExportBundleApi(Resource):
@get_app_model
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
def get(self, app_model):
from services.app_bundle_service import AppBundleService
args = AppExportBundleQuery.model_validate(request.args.to_dict(flat=True))
current_user, _ = current_account_with_tenant()
result = AppBundleService.export_bundle(
app_model=app_model,
account_id=str(current_user.id),
include_secret=args.include_secret,
workflow_id=args.workflow_id,
)
return result.model_dump(mode="json")
@console_ns.route("/apps/<uuid:app_id>/name")
class AppNameApi(Resource):
@console_ns.doc("check_app_name")

View File

@@ -0,0 +1,321 @@
from flask import request
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.app.error import (
AppAssetNodeNotFoundError,
AppAssetPathConflictError,
)
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from core.app.entities.app_asset_entities import BatchUploadNode
from libs.login import current_account_with_tenant, login_required
from models import App
from models.model import AppMode
from services.app_asset_service import AppAssetService
from services.errors.app_asset import (
AppAssetNodeNotFoundError as ServiceNodeNotFoundError,
)
from services.errors.app_asset import (
AppAssetParentNotFoundError,
)
from services.errors.app_asset import (
AppAssetPathConflictError as ServicePathConflictError,
)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class CreateFolderPayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
parent_id: str | None = None
class CreateFilePayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
parent_id: str | None = None
@field_validator("name", mode="before")
@classmethod
def strip_name(cls, v: str) -> str:
return v.strip() if isinstance(v, str) else v
@field_validator("parent_id", mode="before")
@classmethod
def empty_to_none(cls, v: str | None) -> str | None:
return v or None
class GetUploadUrlPayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
size: int = Field(..., ge=0)
parent_id: str | None = None
@field_validator("name", mode="before")
@classmethod
def strip_name(cls, v: str) -> str:
return v.strip() if isinstance(v, str) else v
@field_validator("parent_id", mode="before")
@classmethod
def empty_to_none(cls, v: str | None) -> str | None:
return v or None
class BatchUploadPayload(BaseModel):
children: list[BatchUploadNode] = Field(..., min_length=1)
class UpdateFileContentPayload(BaseModel):
content: str
class RenameNodePayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
class MoveNodePayload(BaseModel):
parent_id: str | None = None
class ReorderNodePayload(BaseModel):
after_node_id: str | None = Field(default=None, description="Place after this node, None for first position")
def reg(cls: type[BaseModel]) -> None:
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(CreateFolderPayload)
reg(CreateFilePayload)
reg(GetUploadUrlPayload)
reg(BatchUploadNode)
reg(BatchUploadPayload)
reg(UpdateFileContentPayload)
reg(RenameNodePayload)
reg(MoveNodePayload)
reg(ReorderNodePayload)
@console_ns.route("/apps/<string:app_id>/assets/tree")
class AppAssetTreeResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App):
current_user, _ = current_account_with_tenant()
tree = AppAssetService.get_asset_tree(app_model, current_user.id)
return {"children": [view.model_dump() for view in tree.transform()]}
@console_ns.route("/apps/<string:app_id>/assets/folders")
class AppAssetFolderResource(Resource):
@console_ns.expect(console_ns.models[CreateFolderPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
payload = CreateFolderPayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.create_folder(app_model, current_user.id, payload.name, payload.parent_id)
return node.model_dump(), 201
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/files/<string:node_id>")
class AppAssetFileDetailResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
content = AppAssetService.get_file_content(app_model, current_user.id, node_id)
return {"content": content.decode("utf-8", errors="replace")}
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.expect(console_ns.models[UpdateFileContentPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def put(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
file = request.files.get("file")
if file:
content = file.read()
else:
payload = UpdateFileContentPayload.model_validate(console_ns.payload or {})
content = payload.content.encode("utf-8")
try:
node = AppAssetService.update_file_content(app_model, current_user.id, node_id, content)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
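A hedged usage sketch of the two update paths above; host, ids, token, and the console API prefix are all hypothetical. An uploaded file takes precedence over the JSON body:
import requests

url = "https://example.com/console/api/apps/<app_id>/assets/files/<node_id>"
headers = {"Authorization": "Bearer <console-token>"}
requests.put(url, json={"content": "print('hello')"}, headers=headers)               # JSON body path
requests.put(url, files={"file": ("main.py", b"print('hello')")}, headers=headers)   # multipart path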
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>")
class AppAssetNodeResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def delete(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
AppAssetService.delete_node(app_model, current_user.id, node_id)
return {"result": "success"}, 200
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/rename")
class AppAssetNodeRenameResource(Resource):
@console_ns.expect(console_ns.models[RenameNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = RenameNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.rename_node(app_model, current_user.id, node_id, payload.name)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/move")
class AppAssetNodeMoveResource(Resource):
@console_ns.expect(console_ns.models[MoveNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = MoveNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.move_node(app_model, current_user.id, node_id, payload.parent_id)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/reorder")
class AppAssetNodeReorderResource(Resource):
@console_ns.expect(console_ns.models[ReorderNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = ReorderNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.reorder_node(app_model, current_user.id, node_id, payload.after_node_id)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/files/<string:node_id>/download-url")
class AppAssetFileDownloadUrlResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
download_url = AppAssetService.get_file_download_url(app_model, current_user.id, node_id)
return {"download_url": download_url}
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/files/upload")
class AppAssetFileUploadUrlResource(Resource):
@console_ns.expect(console_ns.models[GetUploadUrlPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
payload = GetUploadUrlPayload.model_validate(console_ns.payload or {})
try:
node, upload_url = AppAssetService.get_file_upload_url(
app_model, current_user.id, payload.name, payload.size, payload.parent_id
)
return {"node": node.model_dump(), "upload_url": upload_url}, 201
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/batch-upload")
class AppAssetBatchUploadResource(Resource):
@console_ns.expect(console_ns.models[BatchUploadPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
"""
Create nodes from tree structure and return upload URLs.
Input:
{
"children": [
{"name": "folder1", "node_type": "folder", "children": [
{"name": "file1.txt", "node_type": "file", "size": 1024}
]},
{"name": "root.txt", "node_type": "file", "size": 512}
]
}
Output:
{
"children": [
{"id": "xxx", "name": "folder1", "node_type": "folder", "children": [
{"id": "yyy", "name": "file1.txt", "node_type": "file", "size": 1024, "upload_url": "..."}
]},
{"id": "zzz", "name": "root.txt", "node_type": "file", "size": 512, "upload_url": "..."}
]
}
"""
current_user, _ = current_account_with_tenant()
payload = BatchUploadPayload.model_validate(console_ns.payload or {})
try:
result_children = AppAssetService.batch_create_from_tree(app_model, current_user.id, payload.children)
return {"children": [child.model_dump() for child in result_children]}, 201
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()

View File

@@ -51,6 +51,14 @@ class AppImportPayload(BaseModel):
app_id: str | None = Field(None)
class AppImportBundleConfirmPayload(BaseModel):
name: str | None = None
description: str | None = None
icon_type: str | None = None
icon: str | None = None
icon_background: str | None = None
console_ns.schema_model(
AppImportPayload.__name__, AppImportPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
@@ -139,3 +147,68 @@ class AppImportCheckDependenciesApi(Resource):
result = import_service.check_dependencies(app_model=app_model)
return result.model_dump(mode="json"), 200
@console_ns.route("/apps/imports-bundle/prepare")
class AppImportBundlePrepareApi(Resource):
"""Step 1: Get upload URL for bundle import."""
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
def post(self):
from services.app_bundle_service import AppBundleService
current_user, current_tenant_id = current_account_with_tenant()
result = AppBundleService.prepare_import(
tenant_id=current_tenant_id,
account_id=current_user.id,
)
return {"import_id": result.import_id, "upload_url": result.upload_url}, 200
@console_ns.route("/apps/imports-bundle/<string:import_id>/confirm")
class AppImportBundleConfirmApi(Resource):
"""Step 2: Confirm bundle import after upload."""
@setup_required
@login_required
@account_initialization_required
@marshal_with(app_import_model)
@cloud_edition_billing_resource_check("apps")
@edit_permission_required
def post(self, import_id: str):
from flask import request
from core.app.entities.app_bundle_entities import BundleFormatError
from services.app_bundle_service import AppBundleService
current_user, _ = current_account_with_tenant()
args = AppImportBundleConfirmPayload.model_validate(request.get_json() or {})
try:
result = AppBundleService.confirm_import(
import_id=import_id,
account=current_user,
name=args.name,
description=args.description,
icon_type=args.icon_type,
icon=args.icon,
icon_background=args.icon_background,
)
except BundleFormatError as e:
return {"error": str(e)}, 400
if result.app_id and FeatureService.get_system_features().webapp_auth.enabled:
EnterpriseService.WebAppAuth.update_app_access_mode(result.app_id, "private")
status = result.status
if status == ImportStatus.FAILED:
return result.model_dump(mode="json"), 400
elif status == ImportStatus.PENDING:
return result.model_dump(mode="json"), 202
return result.model_dump(mode="json"), 200

View File

@@ -89,6 +89,7 @@ status_count_model = console_ns.model(
"success": fields.Integer,
"failed": fields.Integer,
"partial_success": fields.Integer,
"paused": fields.Integer,
},
)

View File

@@ -110,8 +110,6 @@ class TracingConfigCheckError(BaseHTTPException):
class InvokeRateLimitError(BaseHTTPException):
"""Raised when the Invoke returns rate limit error."""
error_code = "rate_limit_error"
description = "Rate Limit Error"
code = 429
@@ -121,3 +119,21 @@ class NeedAddIdsError(BaseHTTPException):
error_code = "need_add_ids"
description = "Need to add ids."
code = 400
class AppAssetNodeNotFoundError(BaseHTTPException):
error_code = "app_asset_node_not_found"
description = "App asset node not found."
code = 404
class AppAssetFileRequiredError(BaseHTTPException):
error_code = "app_asset_file_required"
description = "File is required."
code = 400
class AppAssetPathConflictError(BaseHTTPException):
error_code = "app_asset_path_conflict"
description = "Path already exists."
code = 409

View File

@@ -1,4 +1,5 @@
from collections.abc import Sequence
from typing import Any
from flask_restx import Resource
from pydantic import BaseModel, Field
@@ -16,6 +17,11 @@ from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotIni
from core.helper.code_executor.code_node_provider import CodeNodeProvider
from core.helper.code_executor.javascript.javascript_code_provider import JavascriptCodeProvider
from core.helper.code_executor.python3.python3_code_provider import Python3CodeProvider
from core.llm_generator.context_models import (
AvailableVarPayload,
CodeContextPayload,
ParameterInfoPayload,
)
from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload
from core.llm_generator.llm_generator import LLMGenerator
from core.model_runtime.errors.invoke import InvokeError
@@ -41,6 +47,34 @@ class InstructionTemplatePayload(BaseModel):
type: str = Field(..., description="Instruction template type")
class ContextGeneratePayload(BaseModel):
"""Payload for generating extractor code node."""
language: str = Field(default="python3", description="Code language (python3/javascript)")
prompt_messages: list[dict[str, Any]] = Field(
..., description="Multi-turn conversation history, last message is the current instruction"
)
model_config_data: dict[str, Any] = Field(..., alias="model_config", description="Model configuration")
available_vars: list[AvailableVarPayload] = Field(..., description="Available variables from upstream nodes")
parameter_info: ParameterInfoPayload = Field(..., description="Target parameter metadata from the frontend")
code_context: CodeContextPayload = Field(description="Existing code node context for incremental generation")
class SuggestedQuestionsPayload(BaseModel):
"""Payload for generating suggested questions."""
language: str = Field(
default="English", description="Language for generated questions (e.g. English, Chinese, Japanese)"
)
model_config_data: dict[str, Any] = Field(
default_factory=dict,
alias="model_config",
description="Model configuration (optional, uses system default if not provided)",
)
available_vars: list[AvailableVarPayload] = Field(..., description="Available variables from upstream nodes")
parameter_info: ParameterInfoPayload = Field(..., description="Target parameter metadata from the frontend")
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
@@ -50,6 +84,8 @@ reg(RuleCodeGeneratePayload)
reg(RuleStructuredOutputPayload)
reg(InstructionGeneratePayload)
reg(InstructionTemplatePayload)
reg(ContextGeneratePayload)
reg(SuggestedQuestionsPayload)
reg(ModelConfig)
@@ -263,3 +299,70 @@ class InstructionGenerationTemplateApi(Resource):
return {"data": INSTRUCTION_GENERATE_TEMPLATE_CODE}
case _:
raise ValueError(f"Invalid type: {args.type}")
@console_ns.route("/context-generate")
class ContextGenerateApi(Resource):
@console_ns.doc("generate_with_context")
@console_ns.doc(description="Generate with multi-turn conversation context")
@console_ns.expect(console_ns.models[ContextGeneratePayload.__name__])
@console_ns.response(200, "Content generated successfully")
@console_ns.response(400, "Invalid request parameters or workflow not found")
@console_ns.response(402, "Provider quota exceeded")
@setup_required
@login_required
@account_initialization_required
def post(self):
from core.llm_generator.utils import deserialize_prompt_messages
args = ContextGeneratePayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
try:
return LLMGenerator.generate_with_context(
tenant_id=current_tenant_id,
language=args.language,
prompt_messages=deserialize_prompt_messages(args.prompt_messages),
model_config=args.model_config_data,
available_vars=args.available_vars,
parameter_info=args.parameter_info,
code_context=args.code_context,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
raise ProviderQuotaExceededError()
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
except InvokeError as e:
raise CompletionRequestError(e.description)
@console_ns.route("/context-generate/suggested-questions")
class SuggestedQuestionsApi(Resource):
@console_ns.doc("generate_suggested_questions")
@console_ns.doc(description="Generate suggested questions for context generation")
@console_ns.expect(console_ns.models[SuggestedQuestionsPayload.__name__])
@console_ns.response(200, "Questions generated successfully")
@setup_required
@login_required
@account_initialization_required
def post(self):
args = SuggestedQuestionsPayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
try:
return LLMGenerator.generate_suggested_questions(
tenant_id=current_tenant_id,
language=args.language,
available_vars=args.available_vars,
parameter_info=args.parameter_info,
model_config=args.model_config_data,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
raise ProviderQuotaExceededError()
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
except InvokeError as e:
raise CompletionRequestError(e.description)

View File

@@ -33,7 +33,7 @@ from libs.login import current_account_with_tenant, login_required
from models.model import AppMode, Conversation, Message, MessageAnnotation, MessageFeedback
from services.errors.conversation import ConversationNotExistsError
from services.errors.message import MessageNotExistsError, SuggestedQuestionsAfterAnswerDisabledError
from services.message_service import MessageService
from services.message_service import MessageService, attach_message_extra_contents
logger = logging.getLogger(__name__)
@@ -207,10 +207,12 @@ message_detail_model = console_ns.model(
"created_at": TimestampField,
"agent_thoughts": fields.List(fields.Nested(agent_thought_model)),
"message_files": fields.List(fields.Nested(message_file_model)),
"extra_contents": fields.List(fields.Raw),
"metadata": fields.Raw(attribute="message_metadata_dict"),
"status": fields.String,
"error": fields.String,
"parent_message_id": fields.String,
"generation_detail": fields.Raw,
},
)
@@ -299,6 +301,7 @@ class ChatMessageListApi(Resource):
has_more = False
history_messages = list(reversed(history_messages))
attach_message_extra_contents(history_messages)
return InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more)
@@ -481,4 +484,5 @@ class MessageApi(Resource):
if not message:
raise NotFound("Message Not Exists.")
attach_message_extra_contents([message])
return message

View File

@@ -0,0 +1,83 @@
from flask_restx import Resource
from controllers.console import console_ns
from controllers.console.app.error import DraftWorkflowNotExist
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, current_account_with_tenant, setup_required
from libs.login import login_required
from models import App
from models.model import AppMode
from services.skill_service import SkillService
from services.workflow_service import WorkflowService
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/skills")
class NodeSkillsApi(Resource):
"""API for retrieving skill references for a specific workflow node."""
@console_ns.doc("get_node_skills")
@console_ns.doc(description="Get skill references for a specific node in the draft workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.response(200, "Node skills retrieved successfully")
@console_ns.response(404, "Workflow or node not found")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App, node_id: str):
"""
Get skill information for a specific node in the draft workflow.
Returns information about skill references in the node, including:
- skill_references: List of prompt messages marked as skills
- tool_references: Aggregated tool references from all skill prompts
- file_references: Aggregated file references from all skill prompts
"""
current_user, _ = current_account_with_tenant()
workflow_service = WorkflowService()
workflow = workflow_service.get_draft_workflow(app_model=app_model)
if not workflow:
raise DraftWorkflowNotExist()
skill_info = SkillService.get_node_skill_info(
app=app_model,
workflow=workflow,
node_id=node_id,
user_id=current_user.id,
)
return skill_info.model_dump()
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/skills")
class WorkflowSkillsApi(Resource):
"""API for retrieving all skill references in a workflow."""
@console_ns.doc("get_workflow_skills")
@console_ns.doc(description="Get all skill references in the draft workflow")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Workflow skills retrieved successfully")
@console_ns.response(404, "Workflow not found")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App):
"""
Get skill information for all nodes in the draft workflow that have skill references.
Returns a list of nodes with their skill information.
"""
current_user, _ = current_account_with_tenant()
workflow_service = WorkflowService()
workflow = workflow_service.get_draft_workflow(app_model=app_model)
if not workflow:
raise DraftWorkflowNotExist()
skills_info = SkillService.get_workflow_skills(
app=app_model,
workflow=workflow,
user_id=current_user.id,
)
return {"nodes": [info.model_dump() for info in skills_info]}

View File

@@ -33,8 +33,10 @@ from core.trigger.debug.event_selectors import (
from core.workflow.enums import NodeType
from core.workflow.graph_engine.manager import GraphEngineManager
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from factories import file_factory, variable_factory
from fields.member_fields import simple_account_fields
from fields.online_user_fields import online_user_list_fields
from fields.workflow_fields import workflow_fields, workflow_pagination_fields
from libs import helper
from libs.datetime_utils import naive_utc_now
@@ -43,9 +45,12 @@ from libs.login import current_account_with_tenant, login_required
from models import App
from models.model import AppMode
from models.workflow import Workflow
from repositories.workflow_collaboration_repository import WORKFLOW_ONLINE_USERS_PREFIX
from services.app_generate_service import AppGenerateService
from services.errors.app import WorkflowHashNotEqualError
from services.errors.llm import InvokeRateLimitError
from services.workflow.entities import NestedNodeGraphRequest, NestedNodeParameterSchema
from services.workflow.nested_node_graph_service import NestedNodeGraphService
from services.workflow_service import DraftWorkflowDeletionError, WorkflowInUseError, WorkflowService
logger = logging.getLogger(__name__)
@@ -160,6 +165,14 @@ class WorkflowUpdatePayload(BaseModel):
marked_comment: str | None = Field(default=None, max_length=100)
class WorkflowFeaturesPayload(BaseModel):
features: dict[str, Any] = Field(..., description="Workflow feature configuration")
class WorkflowOnlineUsersQuery(BaseModel):
workflow_ids: str = Field(..., description="Comma-separated workflow IDs")
class DraftWorkflowTriggerRunPayload(BaseModel):
node_id: str
@@ -168,6 +181,15 @@ class DraftWorkflowTriggerRunAllPayload(BaseModel):
node_ids: list[str]
class NestedNodeGraphPayload(BaseModel):
"""Request payload for generating nested node graph."""
parent_node_id: str = Field(description="ID of the parent node that uses the extracted value")
parameter_key: str = Field(description="Key of the parameter being extracted")
context_source: list[str] = Field(description="Variable selector for the context source")
parameter_schema: dict[str, Any] = Field(description="Schema of the parameter to extract")
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
@@ -183,8 +205,11 @@ reg(DefaultBlockConfigQuery)
reg(ConvertToWorkflowPayload)
reg(WorkflowListQuery)
reg(WorkflowUpdatePayload)
reg(WorkflowFeaturesPayload)
reg(WorkflowOnlineUsersQuery)
reg(DraftWorkflowTriggerRunPayload)
reg(DraftWorkflowTriggerRunAllPayload)
reg(NestedNodeGraphPayload)
# TODO(QuantumGhost): Refactor existing node run API to handle file parameter parsing
@@ -507,6 +532,179 @@ class WorkflowDraftRunLoopNodeApi(Resource):
raise InternalServerError()
class HumanInputFormPreviewPayload(BaseModel):
inputs: dict[str, Any] = Field(
default_factory=dict,
description="Values used to fill missing upstream variables referenced in form_content",
)
class HumanInputFormSubmitPayload(BaseModel):
form_inputs: dict[str, Any] = Field(..., description="Values the user provides for the form's own fields")
inputs: dict[str, Any] = Field(
...,
description="Values used to fill missing upstream variables referenced in form_content",
)
action: str = Field(..., description="Selected action ID")
class HumanInputDeliveryTestPayload(BaseModel):
delivery_method_id: str = Field(..., description="Delivery method ID")
inputs: dict[str, Any] = Field(
default_factory=dict,
description="Values used to fill missing upstream variables referenced in form_content",
)
reg(HumanInputFormPreviewPayload)
reg(HumanInputFormSubmitPayload)
reg(HumanInputDeliveryTestPayload)
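Illustrative request bodies for the human-input endpoints below; field names come from the three payload models above, values are hypothetical:
preview_body = {"inputs": {"upstream_text": "hello"}}
submit_body = {
    "form_inputs": {"comment": "Looks good"},
    "inputs": {"upstream_text": "hello"},
    "action": "approve",  # an action ID defined on the node (hypothetical here)
}
delivery_test_body = {"delivery_method_id": "method-1", "inputs": {}}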
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflows/draft/human-input/nodes/<string:node_id>/form/preview")
class AdvancedChatDraftHumanInputFormPreviewApi(Resource):
@console_ns.doc("get_advanced_chat_draft_human_input_form")
@console_ns.doc(description="Get human input form preview for advanced chat workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[HumanInputFormPreviewPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
Preview human input form content and placeholders
"""
current_user, _ = current_account_with_tenant()
args = HumanInputFormPreviewPayload.model_validate(console_ns.payload or {})
inputs = args.inputs
workflow_service = WorkflowService()
preview = workflow_service.get_human_input_form_preview(
app_model=app_model,
account=current_user,
node_id=node_id,
inputs=inputs,
)
return jsonable_encoder(preview)
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflows/draft/human-input/nodes/<string:node_id>/form/run")
class AdvancedChatDraftHumanInputFormRunApi(Resource):
@console_ns.doc("submit_advanced_chat_draft_human_input_form")
@console_ns.doc(description="Submit human input form preview for advanced chat workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[HumanInputFormSubmitPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
Submit human input form preview
"""
current_user, _ = current_account_with_tenant()
args = HumanInputFormSubmitPayload.model_validate(console_ns.payload or {})
workflow_service = WorkflowService()
result = workflow_service.submit_human_input_form_preview(
app_model=app_model,
account=current_user,
node_id=node_id,
form_inputs=args.form_inputs,
inputs=args.inputs,
action=args.action,
)
return jsonable_encoder(result)
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/human-input/nodes/<string:node_id>/form/preview")
class WorkflowDraftHumanInputFormPreviewApi(Resource):
@console_ns.doc("get_workflow_draft_human_input_form")
@console_ns.doc(description="Get human input form preview for workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[HumanInputFormPreviewPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.WORKFLOW])
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
Preview human input form content and placeholders
"""
current_user, _ = current_account_with_tenant()
args = HumanInputFormPreviewPayload.model_validate(console_ns.payload or {})
inputs = args.inputs
workflow_service = WorkflowService()
preview = workflow_service.get_human_input_form_preview(
app_model=app_model,
account=current_user,
node_id=node_id,
inputs=inputs,
)
return jsonable_encoder(preview)
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/human-input/nodes/<string:node_id>/form/run")
class WorkflowDraftHumanInputFormRunApi(Resource):
@console_ns.doc("submit_workflow_draft_human_input_form")
@console_ns.doc(description="Submit human input form preview for workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[HumanInputFormSubmitPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.WORKFLOW])
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
Submit human input form preview
"""
current_user, _ = current_account_with_tenant()
workflow_service = WorkflowService()
args = HumanInputFormSubmitPayload.model_validate(console_ns.payload or {})
result = workflow_service.submit_human_input_form_preview(
app_model=app_model,
account=current_user,
node_id=node_id,
form_inputs=args.form_inputs,
inputs=args.inputs,
action=args.action,
)
return jsonable_encoder(result)
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/human-input/nodes/<string:node_id>/delivery-test")
class WorkflowDraftHumanInputDeliveryTestApi(Resource):
@console_ns.doc("test_workflow_draft_human_input_delivery")
@console_ns.doc(description="Test human input delivery for workflow")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[HumanInputDeliveryTestPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.WORKFLOW, AppMode.ADVANCED_CHAT])
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
Test human input delivery
"""
current_user, _ = current_account_with_tenant()
workflow_service = WorkflowService()
args = HumanInputDeliveryTestPayload.model_validate(console_ns.payload or {})
workflow_service.test_human_input_delivery(
app_model=app_model,
account=current_user,
node_id=node_id,
delivery_method_id=args.delivery_method_id,
inputs=args.inputs,
)
return jsonable_encoder({})
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/run")
class DraftWorkflowRunApi(Resource):
@console_ns.doc("run_draft_workflow")
@@ -654,13 +852,14 @@ class PublishedWorkflowApi(Resource):
"""
Publish workflow
"""
from services.app_bundle_service import AppBundleService
current_user, _ = current_account_with_tenant()
args = PublishWorkflowPayload.model_validate(console_ns.payload or {})
workflow_service = WorkflowService()
with Session(db.engine) as session:
workflow = workflow_service.publish_workflow(
workflow = AppBundleService.publish(
session=session,
app_model=app_model,
account=current_user,
@@ -771,6 +970,31 @@ class ConvertToWorkflowApi(Resource):
}
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/features")
class WorkflowFeaturesApi(Resource):
"""Update draft workflow features."""
@console_ns.expect(console_ns.models[WorkflowFeaturesPayload.__name__])
@console_ns.doc("update_workflow_features")
@console_ns.doc(description="Update draft workflow features")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Workflow features updated successfully")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
args = WorkflowFeaturesPayload.model_validate(console_ns.payload or {})
features = args.features
workflow_service = WorkflowService()
workflow_service.update_draft_workflow_features(app_model=app_model, features=features, account=current_user)
return {"result": "success"}
@console_ns.route("/apps/<uuid:app_id>/workflows")
class PublishedAllWorkflowApi(Resource):
@console_ns.expect(console_ns.models[WorkflowListQuery.__name__])
@@ -1148,3 +1372,83 @@ class DraftWorkflowTriggerRunAllApi(Resource):
"status": "error",
}
), 400
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/nested-node-graph")
class NestedNodeGraphApi(Resource):
"""
API for generating Nested Node LLM graph structures.
This endpoint creates a complete graph structure containing an LLM node
configured to extract values from list[PromptMessage] variables.
"""
@console_ns.doc("generate_nested_node_graph")
@console_ns.doc(description="Generate a Nested Node LLM graph structure")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(console_ns.models[NestedNodeGraphPayload.__name__])
@console_ns.response(200, "Nested node graph generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(403, "Permission denied")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@edit_permission_required
def post(self, app_model: App):
"""
Generate a Nested Node LLM graph structure.
Returns a complete graph structure containing a single LLM node
configured for extracting values from list[PromptMessage] context.
"""
payload = NestedNodeGraphPayload.model_validate(console_ns.payload or {})
parameter_schema = NestedNodeParameterSchema(
name=payload.parameter_schema.get("name", payload.parameter_key),
type=payload.parameter_schema.get("type", "string"),
description=payload.parameter_schema.get("description", ""),
)
request = NestedNodeGraphRequest(
parent_node_id=payload.parent_node_id,
parameter_key=payload.parameter_key,
context_source=payload.context_source,
parameter_schema=parameter_schema,
)
with Session(db.engine) as session:
service = NestedNodeGraphService(session)
response = service.generate_nested_node_graph(tenant_id=app_model.tenant_id, request=request)
return response.model_dump()
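
For illustration, a request body this endpoint accepts could look like the following sketch; the node IDs, context source, and schema values are placeholders, not prescribed values.

# Hypothetical POST body for /apps/<app_id>/workflows/draft/nested-node-graph (placeholder values):
{
    "parent_node_id": "llm_parent_1",
    "parameter_key": "messages",
    "context_source": "<context-source>",      # accepted sources are defined by NestedNodeGraphRequest
    "parameter_schema": {
        "name": "messages",                     # defaults to parameter_key when omitted
        "type": "string",                       # default when omitted
        "description": "Prompt messages to extract values from",
    },
}
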
@console_ns.route("/apps/workflows/online-users")
class WorkflowOnlineUsersApi(Resource):
@console_ns.expect(console_ns.models[WorkflowOnlineUsersQuery.__name__])
@console_ns.doc("get_workflow_online_users")
@console_ns.doc(description="Get workflow online users")
@setup_required
@login_required
@account_initialization_required
@marshal_with(online_user_list_fields)
def get(self):
args = WorkflowOnlineUsersQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
workflow_ids = [workflow_id.strip() for workflow_id in args.workflow_ids.split(",") if workflow_id.strip()]
results = []
for workflow_id in workflow_ids:
users_json = redis_client.hgetall(f"{WORKFLOW_ONLINE_USERS_PREFIX}{workflow_id}")
users = []
for _, user_info_json in users_json.items():
try:
users.append(json.loads(user_info_json))
except Exception:
continue
results.append({"workflow_id": workflow_id, "users": users})
return {"data": results}

View File

@@ -0,0 +1,322 @@
import logging
from flask_restx import Resource, marshal_with
from pydantic import BaseModel, Field, TypeAdapter
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from fields.member_fields import AccountWithRole
from fields.workflow_comment_fields import (
workflow_comment_basic_fields,
workflow_comment_create_fields,
workflow_comment_detail_fields,
workflow_comment_reply_create_fields,
workflow_comment_reply_update_fields,
workflow_comment_resolve_fields,
workflow_comment_update_fields,
)
from libs.login import current_user, login_required
from models import App
from services.account_service import TenantService
from services.workflow_comment_service import WorkflowCommentService
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowCommentCreatePayload(BaseModel):
position_x: float = Field(..., description="Comment X position")
position_y: float = Field(..., description="Comment Y position")
content: str = Field(..., description="Comment content")
mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs")
class WorkflowCommentUpdatePayload(BaseModel):
content: str = Field(..., description="Comment content")
position_x: float | None = Field(default=None, description="Comment X position")
position_y: float | None = Field(default=None, description="Comment Y position")
mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs")
class WorkflowCommentReplyCreatePayload(BaseModel):
content: str = Field(..., description="Reply content")
mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs")
class WorkflowCommentReplyUpdatePayload(BaseModel):
content: str = Field(..., description="Reply content")
mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs")
class WorkflowCommentMentionUsersResponse(BaseModel):
users: list[AccountWithRole] = Field(description="Mentionable users")
for model in (
WorkflowCommentCreatePayload,
WorkflowCommentUpdatePayload,
WorkflowCommentReplyCreatePayload,
WorkflowCommentReplyUpdatePayload,
):
console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
for model in (AccountWithRole, WorkflowCommentMentionUsersResponse):
console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
workflow_comment_basic_model = console_ns.model("WorkflowCommentBasic", workflow_comment_basic_fields)
workflow_comment_detail_model = console_ns.model("WorkflowCommentDetail", workflow_comment_detail_fields)
workflow_comment_create_model = console_ns.model("WorkflowCommentCreate", workflow_comment_create_fields)
workflow_comment_update_model = console_ns.model("WorkflowCommentUpdate", workflow_comment_update_fields)
workflow_comment_resolve_model = console_ns.model("WorkflowCommentResolve", workflow_comment_resolve_fields)
workflow_comment_reply_create_model = console_ns.model(
"WorkflowCommentReplyCreate", workflow_comment_reply_create_fields
)
workflow_comment_reply_update_model = console_ns.model(
"WorkflowCommentReplyUpdate", workflow_comment_reply_update_fields
)
workflow_comment_mention_users_model = console_ns.models[WorkflowCommentMentionUsersResponse.__name__]
@console_ns.route("/apps/<uuid:app_id>/workflow/comments")
class WorkflowCommentListApi(Resource):
"""API for listing and creating workflow comments."""
@console_ns.doc("list_workflow_comments")
@console_ns.doc(description="Get all comments for a workflow")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Comments retrieved successfully", workflow_comment_basic_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_basic_model, envelope="data")
def get(self, app_model: App):
"""Get all comments for a workflow."""
comments = WorkflowCommentService.get_comments(tenant_id=current_user.current_tenant_id, app_id=app_model.id)
return comments
@console_ns.doc("create_workflow_comment")
@console_ns.doc(description="Create a new workflow comment")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(console_ns.models[WorkflowCommentCreatePayload.__name__])
@console_ns.response(201, "Comment created successfully", workflow_comment_create_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_create_model)
def post(self, app_model: App):
"""Create a new workflow comment."""
payload = WorkflowCommentCreatePayload.model_validate(console_ns.payload or {})
result = WorkflowCommentService.create_comment(
tenant_id=current_user.current_tenant_id,
app_id=app_model.id,
created_by=current_user.id,
content=payload.content,
position_x=payload.position_x,
position_y=payload.position_y,
mentioned_user_ids=payload.mentioned_user_ids,
)
return result, 201
@console_ns.route("/apps/<uuid:app_id>/workflow/comments/<string:comment_id>")
class WorkflowCommentDetailApi(Resource):
"""API for managing individual workflow comments."""
@console_ns.doc("get_workflow_comment")
@console_ns.doc(description="Get a specific workflow comment")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"})
@console_ns.response(200, "Comment retrieved successfully", workflow_comment_detail_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_detail_model)
def get(self, app_model: App, comment_id: str):
"""Get a specific workflow comment."""
comment = WorkflowCommentService.get_comment(
tenant_id=current_user.current_tenant_id, app_id=app_model.id, comment_id=comment_id
)
return comment
@console_ns.doc("update_workflow_comment")
@console_ns.doc(description="Update a workflow comment")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"})
@console_ns.expect(console_ns.models[WorkflowCommentUpdatePayload.__name__])
@console_ns.response(200, "Comment updated successfully", workflow_comment_update_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_update_model)
def put(self, app_model: App, comment_id: str):
"""Update a workflow comment."""
payload = WorkflowCommentUpdatePayload.model_validate(console_ns.payload or {})
result = WorkflowCommentService.update_comment(
tenant_id=current_user.current_tenant_id,
app_id=app_model.id,
comment_id=comment_id,
user_id=current_user.id,
content=payload.content,
position_x=payload.position_x,
position_y=payload.position_y,
mentioned_user_ids=payload.mentioned_user_ids,
)
return result
@console_ns.doc("delete_workflow_comment")
@console_ns.doc(description="Delete a workflow comment")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"})
@console_ns.response(204, "Comment deleted successfully")
@login_required
@setup_required
@account_initialization_required
@get_app_model()
def delete(self, app_model: App, comment_id: str):
"""Delete a workflow comment."""
WorkflowCommentService.delete_comment(
tenant_id=current_user.current_tenant_id,
app_id=app_model.id,
comment_id=comment_id,
user_id=current_user.id,
)
return {"result": "success"}, 204
@console_ns.route("/apps/<uuid:app_id>/workflow/comments/<string:comment_id>/resolve")
class WorkflowCommentResolveApi(Resource):
"""API for resolving and reopening workflow comments."""
@console_ns.doc("resolve_workflow_comment")
@console_ns.doc(description="Resolve a workflow comment")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"})
@console_ns.response(200, "Comment resolved successfully", workflow_comment_resolve_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_resolve_model)
def post(self, app_model: App, comment_id: str):
"""Resolve a workflow comment."""
comment = WorkflowCommentService.resolve_comment(
tenant_id=current_user.current_tenant_id,
app_id=app_model.id,
comment_id=comment_id,
user_id=current_user.id,
)
return comment
@console_ns.route("/apps/<uuid:app_id>/workflow/comments/<string:comment_id>/replies")
class WorkflowCommentReplyApi(Resource):
"""API for managing comment replies."""
@console_ns.doc("create_workflow_comment_reply")
@console_ns.doc(description="Add a reply to a workflow comment")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"})
@console_ns.expect(console_ns.models[WorkflowCommentReplyCreatePayload.__name__])
@console_ns.response(201, "Reply created successfully", workflow_comment_reply_create_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_reply_create_model)
def post(self, app_model: App, comment_id: str):
"""Add a reply to a workflow comment."""
# Validate comment access first
WorkflowCommentService.validate_comment_access(
comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id
)
payload = WorkflowCommentReplyCreatePayload.model_validate(console_ns.payload or {})
result = WorkflowCommentService.create_reply(
comment_id=comment_id,
content=payload.content,
created_by=current_user.id,
mentioned_user_ids=payload.mentioned_user_ids,
)
return result, 201
@console_ns.route("/apps/<uuid:app_id>/workflow/comments/<string:comment_id>/replies/<string:reply_id>")
class WorkflowCommentReplyDetailApi(Resource):
"""API for managing individual comment replies."""
@console_ns.doc("update_workflow_comment_reply")
@console_ns.doc(description="Update a comment reply")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"})
@console_ns.expect(console_ns.models[WorkflowCommentReplyUpdatePayload.__name__])
@console_ns.response(200, "Reply updated successfully", workflow_comment_reply_update_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
@marshal_with(workflow_comment_reply_update_model)
def put(self, app_model: App, comment_id: str, reply_id: str):
"""Update a comment reply."""
# Validate comment access first
WorkflowCommentService.validate_comment_access(
comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id
)
payload = WorkflowCommentReplyUpdatePayload.model_validate(console_ns.payload or {})
reply = WorkflowCommentService.update_reply(
reply_id=reply_id,
user_id=current_user.id,
content=payload.content,
mentioned_user_ids=payload.mentioned_user_ids,
)
return reply
@console_ns.doc("delete_workflow_comment_reply")
@console_ns.doc(description="Delete a comment reply")
@console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"})
@console_ns.response(204, "Reply deleted successfully")
@login_required
@setup_required
@account_initialization_required
@get_app_model()
def delete(self, app_model: App, comment_id: str, reply_id: str):
"""Delete a comment reply."""
# Validate comment access first
WorkflowCommentService.validate_comment_access(
comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id
)
WorkflowCommentService.delete_reply(reply_id=reply_id, user_id=current_user.id)
return {"result": "success"}, 204
@console_ns.route("/apps/<uuid:app_id>/workflow/comments/mention-users")
class WorkflowCommentMentionUsersApi(Resource):
"""API for getting mentionable users for workflow comments."""
@console_ns.doc("workflow_comment_mention_users")
@console_ns.doc(description="Get all users in current tenant for mentions")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Mentionable users retrieved successfully", workflow_comment_mention_users_model)
@login_required
@setup_required
@account_initialization_required
@get_app_model()
def get(self, app_model: App):
"""Get all users in current tenant for mentions."""
members = TenantService.get_tenant_members(current_user.current_tenant)
member_models = TypeAdapter(list[AccountWithRole]).validate_python(members, from_attributes=True)
response = WorkflowCommentMentionUsersResponse(users=member_models)
return response.model_dump(mode="json"), 200
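
A minimal client sketch for the comment endpoints above, assuming the standard console API prefix, bearer-token auth, and that the create response includes the new comment's id:

import httpx

BASE = "https://cloud.example.com/console/api"                 # assumed deployment URL and prefix
headers = {"Authorization": "Bearer <console-access-token>"}   # assumed auth scheme
app_id = "<app-uuid>"

# Create a comment pinned to a canvas position, mentioning a teammate.
comment = httpx.post(
    f"{BASE}/apps/{app_id}/workflow/comments",
    headers=headers,
    json={
        "position_x": 120.5,
        "position_y": 64.0,
        "content": "Should this branch also handle timeouts?",
        "mentioned_user_ids": ["<account-uuid>"],
    },
).json()

# Reply to the thread, then resolve it (assumes the create response exposes the comment id).
httpx.post(f"{BASE}/apps/{app_id}/workflow/comments/{comment['id']}/replies",
           headers=headers, json={"content": "Good catch, fixed.", "mentioned_user_ids": []})
httpx.post(f"{BASE}/apps/{app_id}/workflow/comments/{comment['id']}/resolve", headers=headers)
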

View File

@@ -17,15 +17,16 @@ from controllers.console.wraps import account_initialization_required, edit_perm
from controllers.web.error import InvalidArgumentError, NotFoundError
from core.file import helpers as file_helpers
from core.variables.segment_group import SegmentGroup
from core.variables.segments import ArrayFileSegment, FileSegment, Segment
from core.variables.segments import ArrayFileSegment, ArrayPromptMessageSegment, FileSegment, Segment
from core.variables.types import SegmentType
from core.workflow.constants import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID
from extensions.ext_database import db
from factories import variable_factory
from factories.file_factory import build_from_mapping, build_from_mappings
from factories.variable_factory import build_segment_with_type
from libs.login import login_required
from libs.login import current_account_with_tenant, login_required
from models import App, AppMode
from models.workflow import WorkflowDraftVariable
from services.sandbox.sandbox_service import SandboxService
from services.workflow_draft_variable_service import WorkflowDraftVariableList, WorkflowDraftVariableService
from services.workflow_service import WorkflowService
@@ -43,6 +44,16 @@ class WorkflowDraftVariableUpdatePayload(BaseModel):
value: Any | None = Field(default=None, description="Variable value")
class ConversationVariableUpdatePayload(BaseModel):
conversation_variables: list[dict[str, Any]] = Field(
..., description="Conversation variables for the draft workflow"
)
class EnvironmentVariableUpdatePayload(BaseModel):
environment_variables: list[dict[str, Any]] = Field(..., description="Environment variables for the draft workflow")
console_ns.schema_model(
WorkflowDraftVariableListQuery.__name__,
WorkflowDraftVariableListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
@@ -51,6 +62,14 @@ console_ns.schema_model(
WorkflowDraftVariableUpdatePayload.__name__,
WorkflowDraftVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ConversationVariableUpdatePayload.__name__,
ConversationVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
EnvironmentVariableUpdatePayload.__name__,
EnvironmentVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
def _convert_values_to_json_serializable_object(value: Segment):
@@ -58,6 +77,8 @@ def _convert_values_to_json_serializable_object(value: Segment):
return value.value.model_dump()
elif isinstance(value, ArrayFileSegment):
return [i.model_dump() for i in value.value]
elif isinstance(value, ArrayPromptMessageSegment):
return value.to_object()
elif isinstance(value, SegmentGroup):
return [_convert_values_to_json_serializable_object(i) for i in value.value]
else:
@@ -247,6 +268,8 @@ class WorkflowVariableCollectionApi(Resource):
@console_ns.response(204, "Workflow variables deleted successfully")
@_api_prerequisite
def delete(self, app_model: App):
current_user, _ = current_account_with_tenant()
SandboxService.delete_draft_storage(app_model.tenant_id, app_model.id, current_user.id)
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
@@ -383,7 +406,7 @@ class VariableApi(Resource):
if len(raw_value) > 0 and not isinstance(raw_value[0], dict):
raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}")
raw_value = build_from_mappings(mappings=raw_value, tenant_id=app_model.tenant_id)
new_value = build_segment_with_type(variable.value_type, raw_value)
new_value = variable_factory.build_segment_with_type(variable.value_type, raw_value)
draft_var_srv.update_variable(variable, name=new_name, value=new_value)
db.session.commit()
return variable
@@ -476,6 +499,35 @@ class ConversationVariableCollectionApi(Resource):
db.session.commit()
return _get_variable_list(app_model, CONVERSATION_VARIABLE_NODE_ID)
@console_ns.expect(console_ns.models[ConversationVariableUpdatePayload.__name__])
@console_ns.doc("update_conversation_variables")
@console_ns.doc(description="Update conversation variables for workflow draft")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Conversation variables updated successfully")
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
@get_app_model(mode=AppMode.ADVANCED_CHAT)
def post(self, app_model: App):
payload = ConversationVariableUpdatePayload.model_validate(console_ns.payload or {})
workflow_service = WorkflowService()
conversation_variables_list = payload.conversation_variables
conversation_variables = [
variable_factory.build_conversation_variable_from_mapping(obj) for obj in conversation_variables_list
]
current_user, _ = current_account_with_tenant()
workflow_service.update_draft_workflow_conversation_variables(
app_model=app_model,
account=current_user,
conversation_variables=conversation_variables,
)
return {"result": "success"}
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/system-variables")
class SystemVariableCollectionApi(Resource):
@@ -527,3 +579,32 @@ class EnvironmentVariableCollectionApi(Resource):
)
return {"items": env_vars_list}
@console_ns.expect(console_ns.models[EnvironmentVariableUpdatePayload.__name__])
@console_ns.doc("update_environment_variables")
@console_ns.doc(description="Update environment variables for workflow draft")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.response(200, "Environment variables updated successfully")
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
payload = EnvironmentVariableUpdatePayload.model_validate(console_ns.payload or {})
current_user, _ = current_account_with_tenant()
workflow_service = WorkflowService()
environment_variables_list = payload.environment_variables
environment_variables = [
variable_factory.build_environment_variable_from_mapping(obj) for obj in environment_variables_list
]
workflow_service.update_draft_workflow_environment_variables(
app_model=app_model,
account=current_user,
environment_variables=environment_variables,
)
return {"result": "success"}

View File

@@ -5,10 +5,15 @@ from flask import request
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy import select
from sqlalchemy.orm import sessionmaker
from configs import dify_config
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.web.error import NotFoundError
from core.workflow.entities.pause_reason import HumanInputRequired
from core.workflow.enums import WorkflowExecutionStatus
from extensions.ext_database import db
from fields.end_user_fields import simple_end_user_fields
from fields.member_fields import simple_account_fields
@@ -27,9 +32,21 @@ from libs.custom_inputs import time_duration
from libs.helper import uuid_value
from libs.login import current_user, login_required
from models import Account, App, AppMode, EndUser, WorkflowArchiveLog, WorkflowRunTriggeredFrom
from models.workflow import WorkflowRun
from repositories.factory import DifyAPIRepositoryFactory
from services.retention.workflow_run.constants import ARCHIVE_BUNDLE_NAME
from services.workflow_run_service import WorkflowRunService
def _build_backstage_input_url(form_token: str | None) -> str | None:
if not form_token:
return None
base_url = dify_config.APP_WEB_URL
if not base_url:
return None
return f"{base_url.rstrip('/')}/form/{form_token}"
# Workflow run status choices for filtering
WORKFLOW_RUN_STATUS_CHOICES = ["running", "succeeded", "failed", "stopped", "partial-succeeded"]
EXPORT_SIGNED_URL_EXPIRE_SECONDS = 3600
@@ -440,3 +457,68 @@ class WorkflowRunNodeExecutionListApi(Resource):
)
return {"data": node_executions}
@console_ns.route("/workflow/<string:workflow_run_id>/pause-details")
class ConsoleWorkflowPauseDetailsApi(Resource):
"""Console API for getting workflow pause details."""
@setup_required
@login_required
@account_initialization_required
def get(self, workflow_run_id: str):
"""
Get workflow pause details.
GET /console/api/workflow/<workflow_run_id>/pause-details
Returns information about why and where the workflow is paused.
"""
# Query WorkflowRun to determine if workflow is suspended
session_maker = sessionmaker(bind=db.engine)
workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker=session_maker)
workflow_run = db.session.get(WorkflowRun, workflow_run_id)
if not workflow_run:
raise NotFoundError("Workflow run not found")
if workflow_run.tenant_id != current_user.current_tenant_id:
raise NotFoundError("Workflow run not found")
# Check if workflow is suspended
is_paused = workflow_run.status == WorkflowExecutionStatus.PAUSED
if not is_paused:
return {
"paused_at": None,
"paused_nodes": [],
}, 200
pause_entity = workflow_run_repo.get_workflow_pause(workflow_run_id)
pause_reasons = pause_entity.get_pause_reasons() if pause_entity else []
# Build response
paused_at = pause_entity.paused_at if pause_entity else None
paused_nodes = []
response = {
"paused_at": paused_at.isoformat() + "Z" if paused_at else None,
"paused_nodes": paused_nodes,
}
for reason in pause_reasons:
if isinstance(reason, HumanInputRequired):
paused_nodes.append(
{
"node_id": reason.node_id,
"node_title": reason.node_title,
"pause_type": {
"type": "human_input",
"form_id": reason.form_id,
"backstage_input_url": _build_backstage_input_url(reason.form_token),
},
}
)
else:
raise AssertionError("unimplemented.")
return response, 200
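
For reference, a paused run produces a response of roughly the following shape (all values below are placeholders); a run that is not paused returns a null paused_at and an empty paused_nodes list with HTTP 200.

# Sketch of the pause-details response for a run waiting on one human-input node:
{
    "paused_at": "2026-02-09T09:12:01Z",
    "paused_nodes": [
        {
            "node_id": "node_1",
            "node_title": "Approval",
            "pause_type": {
                "type": "human_input",
                "form_id": "<form-id>",
                "backstage_input_url": "https://app.example.com/form/<form-token>",
            },
        }
    ],
}
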

View File

@@ -0,0 +1,217 @@
"""
Console/Studio Human Input Form APIs.
"""
import json
import logging
from collections.abc import Generator
from flask import Response, jsonify, request
from flask_restx import Resource, reqparse
from sqlalchemy import select
from sqlalchemy.orm import Session, sessionmaker
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.web.error import InvalidArgumentError, NotFoundError
from core.app.apps.advanced_chat.app_generator import AdvancedChatAppGenerator
from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter
from core.app.apps.message_generator import MessageGenerator
from core.app.apps.workflow.app_generator import WorkflowAppGenerator
from extensions.ext_database import db
from libs.login import current_account_with_tenant, login_required
from models import App
from models.enums import CreatorUserRole
from models.human_input import RecipientType
from models.model import AppMode
from models.workflow import WorkflowRun
from repositories.factory import DifyAPIRepositoryFactory
from services.human_input_service import Form, HumanInputService
from services.workflow_event_snapshot_service import build_workflow_event_stream
logger = logging.getLogger(__name__)
def _jsonify_form_definition(form: Form) -> Response:
payload = form.get_definition().model_dump()
payload["expiration_time"] = int(form.expiration_time.timestamp())
return Response(json.dumps(payload, ensure_ascii=False), mimetype="application/json")
@console_ns.route("/form/human_input/<string:form_token>")
class ConsoleHumanInputFormApi(Resource):
"""Console API for getting human input form definition."""
@staticmethod
def _ensure_console_access(form: Form):
_, current_tenant_id = current_account_with_tenant()
if form.tenant_id != current_tenant_id:
raise NotFoundError("App not found")
@setup_required
@login_required
@account_initialization_required
def get(self, form_token: str):
"""
Get human input form definition by form token.
GET /console/api/form/human_input/<form_token>
"""
service = HumanInputService(db.engine)
form = service.get_form_definition_by_token_for_console(form_token)
if form is None:
raise NotFoundError(f"form not found, token={form_token}")
self._ensure_console_access(form)
return _jsonify_form_definition(form)
@account_initialization_required
@login_required
def post(self, form_token: str):
"""
Submit human input form by form token.
POST /console/api/form/human_input/<form_token>
Request body:
{
"inputs": {
"content": "User input content"
},
"action": "Approve"
}
"""
parser = reqparse.RequestParser()
parser.add_argument("inputs", type=dict, required=True, location="json")
parser.add_argument("action", type=str, required=True, location="json")
args = parser.parse_args()
current_user, _ = current_account_with_tenant()
service = HumanInputService(db.engine)
form = service.get_form_by_token(form_token)
if form is None:
raise NotFoundError(f"form not found, token={form_token}")
self._ensure_console_access(form)
recipient_type = form.recipient_type
if recipient_type not in {RecipientType.CONSOLE, RecipientType.BACKSTAGE}:
raise NotFoundError(f"form not found, token={form_token}")
# The type checker is not smart enough to validate the following invariant.
# So we need to assert it manually.
assert recipient_type is not None, "recipient_type cannot be None here."
service.submit_form_by_token(
recipient_type=recipient_type,
form_token=form_token,
selected_action_id=args["action"],
form_data=args["inputs"],
submission_user_id=current_user.id,
)
return jsonify({})
@console_ns.route("/workflow/<string:workflow_run_id>/events")
class ConsoleWorkflowEventsApi(Resource):
"""Console API for getting workflow execution events after resume."""
@account_initialization_required
@login_required
def get(self, workflow_run_id: str):
"""
Get workflow execution events stream after resume.
GET /console/api/workflow/<workflow_run_id>/events
Returns Server-Sent Events stream.
"""
user, tenant_id = current_account_with_tenant()
session_maker = sessionmaker(db.engine)
repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker)
workflow_run = repo.get_workflow_run_by_id_and_tenant_id(
tenant_id=tenant_id,
run_id=workflow_run_id,
)
if workflow_run is None:
raise NotFoundError(f"WorkflowRun not found, id={workflow_run_id}")
if workflow_run.created_by_role != CreatorUserRole.ACCOUNT:
raise NotFoundError(f"WorkflowRun not created by account, id={workflow_run_id}")
if workflow_run.created_by != user.id:
raise NotFoundError(f"WorkflowRun not created by the current account, id={workflow_run_id}")
with Session(expire_on_commit=False, bind=db.engine) as session:
app = _retrieve_app_for_workflow_run(session, workflow_run)
if workflow_run.finished_at is not None:
# TODO(QuantumGhost): should we modify the handling for finished workflow run here?
response = WorkflowResponseConverter.workflow_run_result_to_finish_response(
task_id=workflow_run.id,
workflow_run=workflow_run,
creator_user=user,
)
payload = response.model_dump(mode="json")
payload["event"] = response.event.value
def _generate_finished_events() -> Generator[str, None, None]:
yield f"data: {json.dumps(payload)}\n\n"
event_generator = _generate_finished_events
else:
msg_generator = MessageGenerator()
if app.mode == AppMode.ADVANCED_CHAT:
generator = AdvancedChatAppGenerator()
elif app.mode == AppMode.WORKFLOW:
generator = WorkflowAppGenerator()
else:
raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}")
include_state_snapshot = request.args.get("include_state_snapshot", "false").lower() == "true"
def _generate_stream_events():
if include_state_snapshot:
return generator.convert_to_event_stream(
build_workflow_event_stream(
app_mode=AppMode(app.mode),
workflow_run=workflow_run,
tenant_id=workflow_run.tenant_id,
app_id=workflow_run.app_id,
session_maker=session_maker,
)
)
return generator.convert_to_event_stream(
msg_generator.retrieve_events(AppMode(app.mode), workflow_run.id),
)
event_generator = _generate_stream_events
return Response(
event_generator(),
mimetype="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
},
)
def _retrieve_app_for_workflow_run(session: Session, workflow_run: WorkflowRun):
query = select(App).where(
App.id == workflow_run.app_id,
App.tenant_id == workflow_run.tenant_id,
)
app = session.scalars(query).first()
if app is None:
raise AssertionError(
f"App not found for WorkflowRun, workflow_run_id={workflow_run.id}, "
f"app_id={workflow_run.app_id}, tenant_id={workflow_run.tenant_id}"
)
return app

View File

@@ -0,0 +1,103 @@
from __future__ import annotations
from fastapi.encoders import jsonable_encoder
from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from libs.login import current_account_with_tenant, login_required
from services.sandbox.sandbox_file_service import SandboxFileService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class SandboxFileListQuery(BaseModel):
path: str | None = Field(default=None, description="Workspace relative path")
recursive: bool = Field(default=False, description="List recursively")
class SandboxFileDownloadRequest(BaseModel):
path: str = Field(..., description="Workspace relative file path")
console_ns.schema_model(
SandboxFileListQuery.__name__,
SandboxFileListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
SandboxFileDownloadRequest.__name__,
SandboxFileDownloadRequest.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
SANDBOX_FILE_NODE_FIELDS = {
"path": fields.String,
"is_dir": fields.Boolean,
"size": fields.Raw,
"mtime": fields.Raw,
"extension": fields.String,
}
SANDBOX_FILE_DOWNLOAD_TICKET_FIELDS = {
"download_url": fields.String,
"expires_in": fields.Integer,
"export_id": fields.String,
}
sandbox_file_node_model = console_ns.model("SandboxFileNode", SANDBOX_FILE_NODE_FIELDS)
sandbox_file_download_ticket_model = console_ns.model("SandboxFileDownloadTicket", SANDBOX_FILE_DOWNLOAD_TICKET_FIELDS)
@console_ns.route("/apps/<string:app_id>/sandbox/files")
class SandboxFilesApi(Resource):
"""List sandbox files for the current user.
The sandbox_id is derived from the current user's ID, as each user has
their own sandbox workspace per app.
"""
@setup_required
@login_required
@account_initialization_required
@console_ns.expect(console_ns.models[SandboxFileListQuery.__name__])
@console_ns.marshal_list_with(sandbox_file_node_model)
def get(self, app_id: str):
args = SandboxFileListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore[arg-type]
account, tenant_id = current_account_with_tenant()
sandbox_id = account.id
return jsonable_encoder(
SandboxFileService.list_files(
tenant_id=tenant_id,
app_id=app_id,
sandbox_id=sandbox_id,
path=args.path,
recursive=args.recursive,
)
)
@console_ns.route("/apps/<string:app_id>/sandbox/files/download")
class SandboxFileDownloadApi(Resource):
"""Download a sandbox file for the current user.
The sandbox_id is derived from the current user's ID, as each user has
their own sandbox workspace per app.
"""
@setup_required
@login_required
@account_initialization_required
@console_ns.expect(console_ns.models[SandboxFileDownloadRequest.__name__])
@console_ns.marshal_with(sandbox_file_download_ticket_model)
def post(self, app_id: str):
payload = SandboxFileDownloadRequest.model_validate(console_ns.payload or {})
account, tenant_id = current_account_with_tenant()
sandbox_id = account.id
res = SandboxFileService.download_file(
tenant_id=tenant_id, app_id=app_id, sandbox_id=sandbox_id, path=payload.path
)
return jsonable_encoder(res)
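
A hedged sketch of the intended two-step flow for the sandbox file endpoints above; the base URL, auth scheme, and the exact form of download_url are assumptions:

import httpx

BASE = "https://cloud.example.com/console/api"                 # assumed
headers = {"Authorization": "Bearer <console-access-token>"}   # assumed auth scheme
app_id = "<app-uuid>"

# 1. List files in the caller's per-app sandbox workspace.
files = httpx.get(f"{BASE}/apps/{app_id}/sandbox/files",
                  params={"path": "outputs", "recursive": "true"}, headers=headers).json()

# 2. Request a short-lived download ticket for one of them and fetch the returned URL.
ticket = httpx.post(f"{BASE}/apps/{app_id}/sandbox/files/download",
                    headers=headers, json={"path": files[0]["path"]}).json()
content = httpx.get(ticket["download_url"]).content   # usable for ticket["expires_in"] seconds
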

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,119 @@
import logging
from collections.abc import Callable
from typing import cast
from flask import Request as FlaskRequest
from extensions.ext_socketio import sio
from libs.passport import PassportService
from libs.token import extract_access_token
from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository
from services.account_service import AccountService
from services.workflow_collaboration_service import WorkflowCollaborationService
repository = WorkflowCollaborationRepository()
collaboration_service = WorkflowCollaborationService(repository, sio)
def _sio_on(event: str) -> Callable[[Callable[..., object]], Callable[..., object]]:
return cast(Callable[[Callable[..., object]], Callable[..., object]], sio.on(event))
@_sio_on("connect")
def socket_connect(sid, environ, auth):
"""
WebSocket connect event; authentication is performed here.
"""
try:
request_environ = FlaskRequest(environ)
token = extract_access_token(request_environ)
except Exception:
logging.exception("Failed to extract token")
token = None
if not token:
logging.warning("Socket connect rejected: missing token (sid=%s)", sid)
return False
try:
decoded = PassportService().verify(token)
user_id = decoded.get("user_id")
if not user_id:
logging.warning("Socket connect rejected: missing user_id (sid=%s)", sid)
return False
with sio.app.app_context():
user = AccountService.load_logged_in_account(account_id=user_id)
if not user:
logging.warning("Socket connect rejected: user not found (user_id=%s, sid=%s)", user_id, sid)
return False
if not user.has_edit_permission:
logging.warning("Socket connect rejected: no edit permission (user_id=%s, sid=%s)", user_id, sid)
return False
collaboration_service.save_session(sid, user)
return True
except Exception:
logging.exception("Socket authentication failed")
return False
@_sio_on("user_connect")
def handle_user_connect(sid, data):
"""
Handle user connect event. Each session (tab) is treated as an independent collaborator.
"""
workflow_id = data.get("workflow_id")
if not workflow_id:
return {"msg": "workflow_id is required"}, 400
result = collaboration_service.register_session(workflow_id, sid)
if not result:
return {"msg": "unauthorized"}, 401
user_id, is_leader = result
return {"msg": "connected", "user_id": user_id, "sid": sid, "isLeader": is_leader}
@_sio_on("disconnect")
def handle_disconnect(sid):
"""
Handle session disconnect event. Remove the specific session from online users.
"""
collaboration_service.disconnect_session(sid)
@_sio_on("collaboration_event")
def handle_collaboration_event(sid, data):
"""
Handle general collaboration events, including:
1. mouse_move
2. vars_and_features_update
3. sync_request (ask leader to update graph)
4. app_state_update
5. mcp_server_update
6. workflow_update
7. comments_update
8. node_panel_presence
9. skill_file_active
10. skill_sync_request
11. skill_resync_request
"""
return collaboration_service.relay_collaboration_event(sid, data)
@_sio_on("graph_event")
def handle_graph_event(sid, data):
"""
Handle graph events - simple broadcast relay.
"""
return collaboration_service.relay_graph_event(sid, data)
@_sio_on("skill_event")
def handle_skill_event(sid, data):
"""
Handle skill events - simple broadcast relay.
"""
return collaboration_service.relay_skill_event(sid, data)
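
A minimal python-socketio client sketch for the handlers above; the server URL, the way the access token is transported, and the collaboration_event payload shape are all assumptions:

import socketio

sio = socketio.Client()
sio.connect(
    "https://cloud.example.com",                         # assumed server URL
    headers={"Authorization": "Bearer <access-token>"},  # assumed token transport; checked in socket_connect
    transports=["websocket"],
)

# Register this tab as a collaborator; the ack echoes user_id, sid, and whether this session leads.
ack = sio.call("user_connect", {"workflow_id": "<workflow-uuid>"})

# Relay a collaboration event (one of the kinds listed in the handler docstring above); payload is assumed.
sio.emit("collaboration_event", {"workflow_id": "<workflow-uuid>", "type": "mouse_move", "x": 12, "y": 34})
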

View File

@@ -37,6 +37,7 @@ from controllers.console.wraps import (
only_edition_cloud,
setup_required,
)
from core.file import helpers as file_helpers
from extensions.ext_database import db
from fields.member_fields import Account as AccountResponse
from libs.datetime_utils import naive_utc_now
@@ -74,6 +75,10 @@ class AccountAvatarPayload(BaseModel):
avatar: str
class AccountAvatarQuery(BaseModel):
avatar: str = Field(..., description="Avatar file ID")
class AccountInterfaceLanguagePayload(BaseModel):
interface_language: str
@@ -159,6 +164,7 @@ def reg(cls: type[BaseModel]):
reg(AccountInitPayload)
reg(AccountNamePayload)
reg(AccountAvatarPayload)
reg(AccountAvatarQuery)
reg(AccountInterfaceLanguagePayload)
reg(AccountInterfaceThemePayload)
reg(AccountTimezonePayload)
@@ -268,6 +274,18 @@ class AccountNameApi(Resource):
@console_ns.route("/account/avatar")
class AccountAvatarApi(Resource):
@console_ns.expect(console_ns.models[AccountAvatarQuery.__name__])
@console_ns.doc("get_account_avatar")
@console_ns.doc(description="Get account avatar url")
@setup_required
@login_required
@account_initialization_required
def get(self):
args = AccountAvatarQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
avatar_url = file_helpers.get_signed_file_url(args.avatar)
return {"avatar_url": avatar_url}
@console_ns.expect(console_ns.models[AccountAvatarPayload.__name__])
@setup_required
@login_required

View File

@@ -0,0 +1,67 @@
import json
import httpx
import yaml
from flask import request
from flask_restx import Resource
from pydantic import BaseModel
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from core.plugin.impl.exc import PluginPermissionDeniedError
from extensions.ext_database import db
from libs.login import current_account_with_tenant, login_required
from models.model import App
from models.workflow import Workflow
from services.app_dsl_service import AppDslService
class DSLPredictRequest(BaseModel):
app_id: str
current_node_id: str
@console_ns.route("/workspaces/current/dsl/predict")
class DSLPredictApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
user, _ = current_account_with_tenant()
if not user.is_admin_or_owner:
raise Forbidden()
args = DSLPredictRequest.model_validate(request.get_json())
app_id: str = args.app_id
current_node_id: str = args.current_node_id
with Session(db.engine) as session:
app = session.query(App).filter_by(id=app_id).first()
workflow = session.query(Workflow).filter_by(app_id=app_id, version=Workflow.VERSION_DRAFT).first()
if not app:
raise ValueError("App not found")
if not workflow:
raise ValueError("Workflow not found")
try:
i = 0
for node_id, _ in workflow.walk_nodes():
if node_id == current_node_id:
break
i += 1
dsl = yaml.safe_load(AppDslService.export_dsl(app_model=app))
response = httpx.post(
"http://spark-832c:8000/predict",
json={"graph_data": dsl, "source_node_index": i},
)
return {
"nodes": json.loads(response.json()),
}
except PluginPermissionDeniedError as e:
raise ValueError(e.description) from e

View File

@@ -0,0 +1,104 @@
import logging
from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.login import current_account_with_tenant, login_required
from services.sandbox.sandbox_provider_service import SandboxProviderService
logger = logging.getLogger(__name__)
class SandboxProviderConfigRequest(BaseModel):
config: dict
activate: bool = False
class SandboxProviderActivateRequest(BaseModel):
type: str
@console_ns.route("/workspaces/current/sandbox-providers")
class SandboxProviderListApi(Resource):
@console_ns.doc("list_sandbox_providers")
@console_ns.doc(description="Get list of available sandbox providers with configuration status")
@console_ns.response(200, "Success", fields.List(fields.Raw(description="Sandbox provider information")))
@setup_required
@login_required
@account_initialization_required
def get(self):
_, current_tenant_id = current_account_with_tenant()
providers = SandboxProviderService.list_providers(current_tenant_id)
return jsonable_encoder([p.model_dump() for p in providers])
@console_ns.route("/workspaces/current/sandbox-provider/<string:provider_type>/config")
class SandboxProviderConfigApi(Resource):
@console_ns.doc("save_sandbox_provider_config")
@console_ns.doc(description="Save or update configuration for a sandbox provider")
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def post(self, provider_type: str):
_, current_tenant_id = current_account_with_tenant()
args = SandboxProviderConfigRequest.model_validate(request.get_json())
try:
result = SandboxProviderService.save_config(
tenant_id=current_tenant_id,
provider_type=provider_type,
config=args.config,
activate=args.activate,
)
return result
except ValueError as e:
return {"message": str(e)}, 400
@console_ns.doc("delete_sandbox_provider_config")
@console_ns.doc(description="Delete configuration for a sandbox provider")
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def delete(self, provider_type: str):
_, current_tenant_id = current_account_with_tenant()
try:
result = SandboxProviderService.delete_config(
tenant_id=current_tenant_id,
provider_type=provider_type,
)
return result
except ValueError as e:
return {"message": str(e)}, 400
@console_ns.route("/workspaces/current/sandbox-provider/<string:provider_type>/activate")
class SandboxProviderActivateApi(Resource):
"""Activate a sandbox provider."""
@console_ns.doc("activate_sandbox_provider")
@console_ns.doc(description="Activate a sandbox provider for the current workspace")
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def post(self, provider_type: str):
"""Activate a sandbox provider."""
_, current_tenant_id = current_account_with_tenant()
try:
args = SandboxProviderActivateRequest.model_validate(request.get_json())
result = SandboxProviderService.activate_provider(
tenant_id=current_tenant_id,
provider_type=provider_type,
type=args.type,
)
return result
except ValueError as e:
return {"message": str(e)}, 400

View File

@@ -14,7 +14,12 @@ api = ExternalApi(
files_ns = Namespace("files", description="File operations", path="/")
from . import image_preview, tool_files, upload
from . import (
image_preview,
storage_files,
tool_files,
upload,
)
api.add_namespace(files_ns)
@@ -23,6 +28,7 @@ __all__ = [
"bp",
"files_ns",
"image_preview",
"storage_files",
"tool_files",
"upload",
]

View File

@@ -0,0 +1,80 @@
"""Token-based file proxy controller for storage operations.
This controller handles file download and upload operations using opaque UUID tokens.
The token maps to the real storage key in Redis, so the actual storage path is never
exposed in the URL.
Routes:
GET /files/storage-files/{token} - Download a file
PUT /files/storage-files/{token} - Upload a file
The operation type (download/upload) is determined by the ticket stored in Redis,
not by the HTTP method. This ensures a download ticket cannot be used for upload
and vice versa.
"""
from urllib.parse import quote
from flask import Response, request
from flask_restx import Resource
from werkzeug.exceptions import Forbidden, NotFound, RequestEntityTooLarge
from controllers.files import files_ns
from extensions.ext_storage import storage
from services.storage_ticket_service import StorageTicketService
@files_ns.route("/storage-files/<string:token>")
class StorageFilesApi(Resource):
"""Handle file operations through token-based URLs."""
def get(self, token: str):
"""Download a file using a token.
The ticket must have op="download", otherwise returns 403.
"""
ticket = StorageTicketService.get_ticket(token)
if ticket is None:
raise Forbidden("Invalid or expired token")
if ticket.op != "download":
raise Forbidden("This token is not valid for download")
try:
generator = storage.load_stream(ticket.storage_key)
except FileNotFoundError:
raise NotFound("File not found")
filename = ticket.filename or ticket.storage_key.rsplit("/", 1)[-1]
encoded_filename = quote(filename)
return Response(
generator,
mimetype="application/octet-stream",
direct_passthrough=True,
headers={
"Content-Disposition": f"attachment; filename*=UTF-8''{encoded_filename}",
},
)
def put(self, token: str):
"""Upload a file using a token.
The ticket must have op="upload", otherwise returns 403.
If the request body exceeds max_bytes, returns 413.
"""
ticket = StorageTicketService.get_ticket(token)
if ticket is None:
raise Forbidden("Invalid or expired token")
if ticket.op != "upload":
raise Forbidden("This token is not valid for upload")
content = request.get_data()
if ticket.max_bytes is not None and len(content) > ticket.max_bytes:
raise RequestEntityTooLarge(f"Upload exceeds maximum size of {ticket.max_bytes} bytes")
storage.save(ticket.storage_key, content)
return Response(status=204)
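
A client-side sketch of the token proxy above; minting tickets happens in StorageTicketService elsewhere, and the base URL is an assumption:

import httpx

FILES_BASE = "https://cloud.example.com/files"   # assumed deployment URL for the files blueprint

# Download: the ticket behind this token must have op="download", otherwise the proxy answers 403.
resp = httpx.get(f"{FILES_BASE}/storage-files/<download-token>")
resp.raise_for_status()
with open("export.zip", "wb") as f:
    f.write(resp.content)

# Upload: the ticket must have op="upload"; bodies larger than the ticket's max_bytes get HTTP 413.
with open("bundle.zip", "rb") as f:
    httpx.put(f"{FILES_BASE}/storage-files/<upload-token>", content=f.read())
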

View File

@@ -448,3 +448,53 @@ class PluginFetchAppInfoApi(Resource):
return BaseBackwardsInvocationResponse(
data=PluginAppBackwardsInvocation.fetch_app_info(payload.app_id, tenant_model.id)
).model_dump()
@inner_api_ns.route("/fetch/tools/list")
class PluginFetchToolsListApi(Resource):
@get_user_tenant
@setup_required
@plugin_inner_api_only
@inner_api_ns.doc("plugin_fetch_tools_list")
@inner_api_ns.doc(description="Fetch all available tools through plugin interface")
@inner_api_ns.doc(
responses={
200: "Tools list retrieved successfully",
401: "Unauthorized - invalid API key",
404: "Service not available",
}
)
def post(self, user_model: Account | EndUser, tenant_model: Tenant):
from sqlalchemy.orm import Session
from extensions.ext_database import db
from services.tools.api_tools_manage_service import ApiToolManageService
from services.tools.builtin_tools_manage_service import BuiltinToolManageService
from services.tools.mcp_tools_manage_service import MCPToolManageService
from services.tools.workflow_tools_manage_service import WorkflowToolManageService
providers = []
# Get builtin tools
builtin_providers = BuiltinToolManageService.list_builtin_tools(user_model.id, tenant_model.id)
for provider in builtin_providers:
providers.append(provider.to_dict())
# Get API tools
api_providers = ApiToolManageService.list_api_tools(tenant_model.id)
for provider in api_providers:
providers.append(provider.to_dict())
# Get workflow tools
workflow_providers = WorkflowToolManageService.list_tenant_workflow_tools(user_model.id, tenant_model.id)
for provider in workflow_providers:
providers.append(provider.to_dict())
# Get MCP tools
with Session(db.engine) as session:
mcp_service = MCPToolManageService(session)
mcp_providers = mcp_service.list_providers(tenant_id=tenant_model.id, for_list=True)
for provider in mcp_providers:
providers.append(provider.to_dict())
return BaseBackwardsInvocationResponse(data={"providers": providers}).model_dump()

View File

@@ -75,7 +75,6 @@ def get_user_tenant(view_func: Callable[P, R]):
@wraps(view_func)
def decorated_view(*args: P.args, **kwargs: P.kwargs):
payload = TenantUserPayload.model_validate(request.get_json(silent=True) or {})
user_id = payload.user_id
tenant_id = payload.tenant_id

View File

@@ -5,14 +5,15 @@ from hashlib import sha1
from hmac import new as hmac_new
from typing import ParamSpec, TypeVar
P = ParamSpec("P")
R = TypeVar("R")
from flask import abort, request
from configs import dify_config
from extensions.ext_database import db
from models.model import EndUser
P = ParamSpec("P")
R = TypeVar("R")
def billing_inner_api_only(view: Callable[P, R]):
@wraps(view)
@@ -88,11 +89,11 @@ def plugin_inner_api_only(view: Callable[P, R]):
if not dify_config.PLUGIN_DAEMON_KEY:
abort(404)
# get header 'X-Inner-Api-Key'
# validate using inner api key
inner_api_key = request.headers.get("X-Inner-Api-Key")
if not inner_api_key or inner_api_key != dify_config.INNER_API_KEY_FOR_PLUGIN:
abort(404)
if inner_api_key and inner_api_key == dify_config.INNER_API_KEY_FOR_PLUGIN:
return view(*args, **kwargs)
return view(*args, **kwargs)
abort(401)
return decorated

View File

@@ -34,6 +34,7 @@ from .dataset import (
metadata,
segment,
)
from .end_user import end_user
from .workspace import models
__all__ = [
@@ -44,6 +45,7 @@ __all__ = [
"conversation",
"dataset",
"document",
"end_user",
"file",
"file_preview",
"hit_testing",

View File

@@ -33,8 +33,9 @@ from core.workflow.graph_engine.manager import GraphEngineManager
from extensions.ext_database import db
from fields.workflow_app_log_fields import build_workflow_app_log_pagination_model
from libs import helper
from libs.helper import TimestampField
from libs.helper import OptionalTimestampField, TimestampField
from models.model import App, AppMode, EndUser
from models.workflow import WorkflowRun
from repositories.factory import DifyAPIRepositoryFactory
from services.app_generate_service import AppGenerateService
from services.errors.app import IsDraftWorkflowError, WorkflowIdFormatError, WorkflowNotFoundError
@@ -63,17 +64,32 @@ class WorkflowLogQuery(BaseModel):
register_schema_models(service_api_ns, WorkflowRunPayload, WorkflowLogQuery)
class WorkflowRunStatusField(fields.Raw):
def output(self, key, obj: WorkflowRun, **kwargs):
return obj.status.value
class WorkflowRunOutputsField(fields.Raw):
def output(self, key, obj: WorkflowRun, **kwargs):
if obj.status == WorkflowExecutionStatus.PAUSED:
return {}
outputs = obj.outputs_dict
return outputs or {}
workflow_run_fields = {
"id": fields.String,
"workflow_id": fields.String,
"status": fields.String,
"status": WorkflowRunStatusField,
"inputs": fields.Raw,
"outputs": fields.Raw,
"outputs": WorkflowRunOutputsField,
"error": fields.String,
"total_steps": fields.Integer,
"total_tokens": fields.Integer,
"created_at": TimestampField,
"finished_at": TimestampField,
"finished_at": OptionalTimestampField,
"elapsed_time": fields.Float,
}

View File

@@ -0,0 +1,3 @@
from . import end_user
__all__ = ["end_user"]

View File

@@ -0,0 +1,41 @@
from uuid import UUID
from flask_restx import Resource
from controllers.service_api import service_api_ns
from controllers.service_api.end_user.error import EndUserNotFoundError
from controllers.service_api.wraps import validate_app_token
from fields.end_user_fields import EndUserDetail
from models.model import App
from services.end_user_service import EndUserService
@service_api_ns.route("/end-users/<uuid:end_user_id>")
class EndUserApi(Resource):
"""Resource for retrieving end user details by ID."""
@service_api_ns.doc("get_end_user")
@service_api_ns.doc(description="Get an end user by ID")
@service_api_ns.doc(
params={"end_user_id": "End user ID"},
responses={
200: "End user retrieved successfully",
401: "Unauthorized - invalid API token",
404: "End user not found",
},
)
@validate_app_token
def get(self, app_model: App, end_user_id: UUID):
"""Get end user detail.
This endpoint is scoped to the current app token's tenant/app to prevent
cross-tenant/app access when an end-user ID is known.
"""
end_user = EndUserService.get_end_user_by_id(
tenant_id=app_model.tenant_id, app_id=app_model.id, end_user_id=str(end_user_id)
)
if end_user is None:
raise EndUserNotFoundError()
return EndUserDetail.model_validate(end_user).model_dump(mode="json")
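
A minimal request sketch for the end-user lookup above, assuming the usual Service API conventions (a /v1 prefix and a Bearer app API key):

import httpx

resp = httpx.get(
    "https://api.example.com/v1/end-users/<end-user-uuid>",   # assumed base URL and prefix
    headers={"Authorization": "Bearer <app-api-key>"},
)
if resp.status_code == 404:
    print("end_user_not_found")        # EndUserNotFoundError above
else:
    print(resp.json())                 # EndUserDetail serialized via model_dump(mode="json")
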

View File

@@ -0,0 +1,7 @@
from libs.exception import BaseHTTPException
class EndUserNotFoundError(BaseHTTPException):
error_code = "end_user_not_found"
description = "End user not found."
code = 404

View File

@@ -23,6 +23,7 @@ from . import (
feature,
files,
forgot_password,
human_input_form,
login,
message,
passport,
@@ -30,6 +31,7 @@ from . import (
saved_message,
site,
workflow,
workflow_events,
)
api.add_namespace(web_ns)
@@ -44,6 +46,7 @@ __all__ = [
"feature",
"files",
"forgot_password",
"human_input_form",
"login",
"message",
"passport",
@@ -52,4 +55,5 @@ __all__ = [
"site",
"web_ns",
"workflow",
"workflow_events",
]

View File

@@ -117,6 +117,12 @@ class InvokeRateLimitError(BaseHTTPException):
code = 429
class WebFormRateLimitExceededError(BaseHTTPException):
error_code = "web_form_rate_limit_exceeded"
description = "Too many form requests. Please try again later."
code = 429
class NotFoundError(BaseHTTPException):
error_code = "not_found"
code = 404

View File

@@ -0,0 +1,161 @@
"""
Web App Human Input Form APIs.
"""
import json
import logging
from datetime import datetime
from flask import Response, request
from flask_restx import Resource, reqparse
from werkzeug.exceptions import Forbidden
from configs import dify_config
from controllers.web import web_ns
from controllers.web.error import NotFoundError, WebFormRateLimitExceededError
from controllers.web.site import serialize_app_site_payload
from extensions.ext_database import db
from libs.helper import RateLimiter, extract_remote_ip
from models.account import TenantStatus
from models.model import App, Site
from services.human_input_service import Form, FormNotFoundError, HumanInputService
logger = logging.getLogger(__name__)
_FORM_SUBMIT_RATE_LIMITER = RateLimiter(
prefix="web_form_submit_rate_limit",
max_attempts=dify_config.WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS,
time_window=dify_config.WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS,
)
_FORM_ACCESS_RATE_LIMITER = RateLimiter(
prefix="web_form_access_rate_limit",
max_attempts=dify_config.WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS,
time_window=dify_config.WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS,
)
def _stringify_default_values(values: dict[str, object]) -> dict[str, str]:
result: dict[str, str] = {}
for key, value in values.items():
if value is None:
result[key] = ""
elif isinstance(value, (dict, list)):
result[key] = json.dumps(value, ensure_ascii=False)
else:
result[key] = str(value)
return result
def _to_timestamp(value: datetime) -> int:
return int(value.timestamp())
def _jsonify_form_definition(form: Form, site_payload: dict | None = None) -> Response:
"""Return the form payload (optionally with site) as a JSON response."""
definition_payload = form.get_definition().model_dump()
payload = {
"form_content": definition_payload["rendered_content"],
"inputs": definition_payload["inputs"],
"resolved_default_values": _stringify_default_values(definition_payload["default_values"]),
"user_actions": definition_payload["user_actions"],
"expiration_time": _to_timestamp(form.expiration_time),
}
if site_payload is not None:
payload["site"] = site_payload
return Response(json.dumps(payload, ensure_ascii=False), mimetype="application/json")
@web_ns.route("/form/human_input/<string:form_token>")
class HumanInputFormApi(Resource):
"""API for getting and submitting human input forms via the web app."""
# NOTE(QuantumGhost): this endpoint is unauthenticated on purpose for now.
# def get(self, _app_model: App, _end_user: EndUser, form_token: str):
def get(self, form_token: str):
"""
Get human input form definition by token.
GET /api/form/human_input/<form_token>
"""
ip_address = extract_remote_ip(request)
if _FORM_ACCESS_RATE_LIMITER.is_rate_limited(ip_address):
raise WebFormRateLimitExceededError()
_FORM_ACCESS_RATE_LIMITER.increment_rate_limit(ip_address)
service = HumanInputService(db.engine)
# TODO(QuantumGhost): forbid submission for form tokens
# that are only for console.
form = service.get_form_by_token(form_token)
if form is None:
raise NotFoundError("Form not found")
service.ensure_form_active(form)
app_model, site = _get_app_site_from_form(form)
return _jsonify_form_definition(form, site_payload=serialize_app_site_payload(app_model, site, None))
# def post(self, _app_model: App, _end_user: EndUser, form_token: str):
def post(self, form_token: str):
"""
Submit human input form by token.
POST /api/form/human_input/<form_token>
Request body:
{
"inputs": {
"content": "User input content"
},
"action": "Approve"
}
"""
parser = reqparse.RequestParser()
parser.add_argument("inputs", type=dict, required=True, location="json")
parser.add_argument("action", type=str, required=True, location="json")
args = parser.parse_args()
ip_address = extract_remote_ip(request)
if _FORM_SUBMIT_RATE_LIMITER.is_rate_limited(ip_address):
raise WebFormRateLimitExceededError()
_FORM_SUBMIT_RATE_LIMITER.increment_rate_limit(ip_address)
service = HumanInputService(db.engine)
form = service.get_form_by_token(form_token)
if form is None:
raise NotFoundError("Form not found")
if (recipient_type := form.recipient_type) is None:
logger.warning("Recipient type is None for form, form_id=%s", form.id)
raise AssertionError("Recipient type is None")
try:
service.submit_form_by_token(
recipient_type=recipient_type,
form_token=form_token,
selected_action_id=args["action"],
form_data=args["inputs"],
submission_end_user_id=None,
# submission_end_user_id=_end_user.id,
)
except FormNotFoundError:
raise NotFoundError("Form not found")
return {}, 200
def _get_app_site_from_form(form: Form) -> tuple[App, Site]:
"""Resolve App/Site for the form's app and validate tenant status."""
app_model = db.session.query(App).where(App.id == form.app_id).first()
if app_model is None or app_model.tenant_id != form.tenant_id:
raise NotFoundError("Form not found")
site = db.session.query(Site).where(Site.app_id == app_model.id).first()
if site is None:
raise Forbidden()
if app_model.tenant and app_model.tenant.status == TenantStatus.ARCHIVE:
raise Forbidden()
return app_model, site
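As a rough client-side sketch of the two endpoints above (host and form token are placeholders, and the action value must match one of the form's `user_actions`):

```python
# Hypothetical values: web API base and form token are placeholders.
import requests

WEB_API_BASE = "https://example.com/api"
form_token = "form-token-placeholder"

# Fetch the form definition (unauthenticated by design, rate limited per IP).
definition = requests.get(f"{WEB_API_BASE}/form/human_input/{form_token}", timeout=10).json()
print(definition["form_content"], definition["user_actions"])

# Submit the form with one of the advertised user actions.
submit = requests.post(
    f"{WEB_API_BASE}/form/human_input/{form_token}",
    json={"inputs": {"content": "Looks good to me"}, "action": "Approve"},
    timeout=10,
)
submit.raise_for_status()  # 404: unknown/expired token, 429: rate limited
```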

View File

@@ -1,4 +1,6 @@
from flask_restx import fields, marshal_with
from typing import cast
from flask_restx import fields, marshal, marshal_with
from werkzeug.exceptions import Forbidden
from configs import dify_config
@@ -7,7 +9,7 @@ from controllers.web.wraps import WebApiResource
from extensions.ext_database import db
from libs.helper import AppIconUrlField
from models.account import TenantStatus
from models.model import Site
from models.model import App, Site
from services.feature_service import FeatureService
@@ -108,3 +110,14 @@ class AppSiteInfo:
"remove_webapp_brand": remove_webapp_brand,
"replace_webapp_logo": replace_webapp_logo,
}
def serialize_site(site: Site) -> dict:
"""Serialize Site model using the same schema as AppSiteApi."""
return cast(dict, marshal(site, AppSiteApi.site_fields))
def serialize_app_site_payload(app_model: App, site: Site, end_user_id: str | None) -> dict:
can_replace_logo = FeatureService.get_features(app_model.tenant_id).can_replace_logo
app_site_info = AppSiteInfo(app_model.tenant, app_model, site, end_user_id, can_replace_logo)
return cast(dict, marshal(app_site_info, AppSiteApi.app_fields))

View File

@@ -0,0 +1,112 @@
"""
Web App Workflow Resume APIs.
"""
import json
from collections.abc import Generator
from flask import Response, request
from sqlalchemy.orm import sessionmaker
from controllers.web import api
from controllers.web.error import InvalidArgumentError, NotFoundError
from controllers.web.wraps import WebApiResource
from core.app.apps.advanced_chat.app_generator import AdvancedChatAppGenerator
from core.app.apps.base_app_generator import BaseAppGenerator
from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter
from core.app.apps.message_generator import MessageGenerator
from core.app.apps.workflow.app_generator import WorkflowAppGenerator
from extensions.ext_database import db
from models.enums import CreatorUserRole
from models.model import App, AppMode, EndUser
from repositories.factory import DifyAPIRepositoryFactory
from services.workflow_event_snapshot_service import build_workflow_event_stream
class WorkflowEventsApi(WebApiResource):
"""API for getting workflow execution events after resume."""
def get(self, app_model: App, end_user: EndUser, task_id: str):
"""
Get workflow execution events stream after resume.
GET /api/workflow/<task_id>/events
Returns Server-Sent Events stream.
"""
workflow_run_id = task_id
session_maker = sessionmaker(db.engine)
repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker)
workflow_run = repo.get_workflow_run_by_id_and_tenant_id(
tenant_id=app_model.tenant_id,
run_id=workflow_run_id,
)
if workflow_run is None:
raise NotFoundError(f"WorkflowRun not found, id={workflow_run_id}")
if workflow_run.app_id != app_model.id:
raise NotFoundError(f"WorkflowRun not found, id={workflow_run_id}")
if workflow_run.created_by_role != CreatorUserRole.END_USER:
raise NotFoundError(f"WorkflowRun not created by end user, id={workflow_run_id}")
if workflow_run.created_by != end_user.id:
raise NotFoundError(f"WorkflowRun not created by the current end user, id={workflow_run_id}")
if workflow_run.finished_at is not None:
response = WorkflowResponseConverter.workflow_run_result_to_finish_response(
task_id=workflow_run.id,
workflow_run=workflow_run,
creator_user=end_user,
)
payload = response.model_dump(mode="json")
payload["event"] = response.event.value
def _generate_finished_events() -> Generator[str, None, None]:
yield f"data: {json.dumps(payload)}\n\n"
event_generator = _generate_finished_events
else:
app_mode = AppMode.value_of(app_model.mode)
msg_generator = MessageGenerator()
generator: BaseAppGenerator
if app_mode == AppMode.ADVANCED_CHAT:
generator = AdvancedChatAppGenerator()
elif app_mode == AppMode.WORKFLOW:
generator = WorkflowAppGenerator()
else:
raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}")
include_state_snapshot = request.args.get("include_state_snapshot", "false").lower() == "true"
def _generate_stream_events():
if include_state_snapshot:
return generator.convert_to_event_stream(
build_workflow_event_stream(
app_mode=app_mode,
workflow_run=workflow_run,
tenant_id=app_model.tenant_id,
app_id=app_model.id,
session_maker=session_maker,
)
)
return generator.convert_to_event_stream(
msg_generator.retrieve_events(app_mode, workflow_run.id),
)
event_generator = _generate_stream_events
return Response(
event_generator(),
mimetype="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
},
)
# Register the APIs
api.add_resource(WorkflowEventsApi, "/workflow/<string:task_id>/events")
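A consumer-side sketch of reading this SSE stream, assuming the usual web-app bearer token and a placeholder run ID (none of these values come from this diff):

```python
# Hypothetical values: base URL, token, and run ID are placeholders.
import json
import requests

WEB_API_BASE = "https://example.com/api"
TOKEN = "web-app-token-placeholder"            # assumed web session bearer token
run_id = "00000000-0000-0000-0000-000000000000"

with requests.get(
    f"{WEB_API_BASE}/workflow/{run_id}/events",
    params={"include_state_snapshot": "true"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            event = json.loads(line[len("data: "):])
            print(event.get("event"))
```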

View File

@@ -0,0 +1,380 @@
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any
from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentEntity, AgentLog, AgentResult
from core.agent.patterns.strategy_factory import StrategyFactory
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMUsage,
PromptMessage,
PromptMessageContentType,
SystemPromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.__base.tool import Tool
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from models.model import Message
logger = logging.getLogger(__name__)
class AgentAppRunner(BaseAgentRunner):
def _create_tool_invoke_hook(self, message: Message):
"""
Create a tool invoke hook that uses ToolEngine.agent_invoke.
This hook handles file creation and returns proper meta information.
"""
# Get trace manager from app generate entity
trace_manager = self.application_generate_entity.trace_manager
def tool_invoke_hook(
tool: Tool, tool_args: dict[str, Any], tool_name: str
) -> tuple[str, list[str], ToolInvokeMeta]:
"""Hook that uses agent_invoke for proper file and meta handling."""
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool,
tool_parameters=tool_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
app_id=self.application_generate_entity.app_config.app_id,
message_id=message.id,
conversation_id=self.conversation.id,
)
# Publish files and track IDs
for message_file_id in message_files:
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id),
PublishFrom.APPLICATION_MANAGER,
)
self._current_message_file_ids.append(message_file_id)
return tool_invoke_response, message_files, tool_invoke_meta
return tool_invoke_hook
def run(self, message: Message, query: str, **kwargs: Any) -> Generator[LLMResultChunk, None, None]:
"""
Run Agent application
"""
self.query = query
app_generate_entity = self.application_generate_entity
app_config = self.app_config
assert app_config is not None, "app_config is required"
assert app_config.agent is not None, "app_config.agent is required"
# convert tools into ModelRuntime Tool format
tool_instances, _ = self._init_prompt_tools()
assert app_config.agent
# Create tool invoke hook for agent_invoke
tool_invoke_hook = self._create_tool_invoke_hook(message)
# Get instruction for ReAct strategy
instruction = self.app_config.prompt_template.simple_prompt_template or ""
# Use factory to create appropriate strategy
strategy = StrategyFactory.create_strategy(
model_features=self.model_features,
model_instance=self.model_instance,
tools=list(tool_instances.values()),
files=list(self.files),
max_iterations=app_config.agent.max_iteration,
context=self.build_execution_context(),
agent_strategy=self.config.strategy,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)
# Initialize state variables
current_agent_thought_id = None
has_published_thought = False
current_tool_name: str | None = None
self._current_message_file_ids: list[str] = []
# organize prompt messages
prompt_messages = self._organize_prompt_messages()
# Run strategy
generator = strategy.run(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
stop=app_generate_entity.model_conf.stop,
stream=True,
)
# Consume generator and collect result
result: AgentResult | None = None
try:
while True:
try:
output = next(generator)
except StopIteration as e:
# Generator finished, get the return value
result = e.value
break
if isinstance(output, LLMResultChunk):
# Handle LLM chunk
if current_agent_thought_id and not has_published_thought:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
has_published_thought = True
yield output
elif isinstance(output, AgentLog):
# Handle Agent Log using log_type for type-safe dispatch
if output.status == AgentLog.LogStatus.START:
if output.log_type == AgentLog.LogType.ROUND:
# Start of a new round
message_file_ids: list[str] = []
current_agent_thought_id = self.create_agent_thought(
message_id=message.id,
message="",
tool_name="",
tool_input="",
messages_ids=message_file_ids,
)
has_published_thought = False
elif output.log_type == AgentLog.LogType.TOOL_CALL:
if current_agent_thought_id is None:
continue
# Tool call start - extract data from structured fields
current_tool_name = output.data.get("tool_name", "")
tool_input = output.data.get("tool_args", {})
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=current_tool_name,
tool_input=tool_input,
thought=None,
observation=None,
tool_invoke_meta=None,
answer=None,
messages_ids=[],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.status == AgentLog.LogStatus.SUCCESS:
if output.log_type == AgentLog.LogType.THOUGHT:
if current_agent_thought_id is None:
continue
thought_text = output.data.get("thought")
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=thought_text,
observation=None,
tool_invoke_meta=None,
answer=None,
messages_ids=[],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.log_type == AgentLog.LogType.TOOL_CALL:
if current_agent_thought_id is None:
continue
# Tool call finished
tool_output = output.data.get("output")
# Get meta from strategy output (now properly populated)
tool_meta = output.data.get("meta")
# Wrap tool_meta with tool_name as key (required by agent_service)
if tool_meta and current_tool_name:
tool_meta = {current_tool_name: tool_meta}
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=None,
observation=tool_output,
tool_invoke_meta=tool_meta,
answer=None,
messages_ids=self._current_message_file_ids,
)
# Clear message file ids after saving
self._current_message_file_ids = []
current_tool_name = None
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.log_type == AgentLog.LogType.ROUND:
if current_agent_thought_id is None:
continue
# Round finished - save LLM usage and answer
llm_usage = output.metadata.get(AgentLog.LogMetadata.LLM_USAGE)
llm_result = output.data.get("llm_result")
final_answer = output.data.get("final_answer")
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=llm_result,
observation=None,
tool_invoke_meta=None,
answer=final_answer,
messages_ids=[],
llm_usage=llm_usage,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
except Exception:
# Re-raise any other exceptions
raise
# Process final result
if isinstance(result, AgentResult):
final_answer = result.text
usage = result.usage or LLMUsage.empty_usage()
# Publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=self.model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=usage,
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def _init_system_message(self, prompt_template: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Initialize system message
"""
if not prompt_template:
return prompt_messages or []
prompt_messages = prompt_messages or []
if prompt_messages and isinstance(prompt_messages[0], SystemPromptMessage):
prompt_messages[0] = SystemPromptMessage(content=prompt_template)
return prompt_messages
if not prompt_messages:
return [SystemPromptMessage(content=prompt_template)]
prompt_messages.insert(0, SystemPromptMessage(content=prompt_template))
return prompt_messages
def _organize_user_query(self, query: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
For now, GPT supports both function calling and vision only at the first iteration.
We therefore strip image content from the prompt messages after the first iteration.
"""
prompt_messages = deepcopy(prompt_messages)
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
content.data
if content.type == PromptMessageContentType.TEXT
else "[image]"
if content.type == PromptMessageContentType.IMAGE
else "[file]"
for content in prompt_message.content
]
)
return prompt_messages
def _organize_prompt_messages(self):
# For ReAct strategy, use the agent prompt template
if self.config.strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT and self.config.prompt:
prompt_template = self.config.prompt.first_prompt
else:
prompt_template = self.app_config.prompt_template.simple_prompt_template or ""
self.history_prompt_messages = self._init_system_message(prompt_template, self.history_prompt_messages)
query_prompt_messages = self._organize_user_query(self.query or "", [])
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=[*query_prompt_messages, *self._current_thoughts],
history_messages=self.history_prompt_messages,
memory=self.memory,
).get_prompt()
prompt_messages = [*self.history_prompt_messages, *query_prompt_messages, *self._current_thoughts]
if len(self._current_thoughts) != 0:
# clear messages after the first iteration
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
return prompt_messages

View File

@@ -6,7 +6,7 @@ from typing import Union, cast
from sqlalchemy import select
from core.agent.entities import AgentEntity, AgentToolEntity
from core.agent.entities import AgentEntity, AgentToolEntity, ExecutionContext
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager
@@ -116,9 +116,20 @@ class BaseAgentRunner(AppRunner):
features = model_schema.features if model_schema and model_schema.features else []
self.stream_tool_call = ModelFeature.STREAM_TOOL_CALL in features
self.files = application_generate_entity.files if ModelFeature.VISION in features else []
self.model_features = features
self.query: str | None = ""
self._current_thoughts: list[PromptMessage] = []
def build_execution_context(self) -> ExecutionContext:
"""Build execution context."""
return ExecutionContext(
user_id=self.user_id,
app_id=self.app_config.app_id,
conversation_id=self.conversation.id,
message_id=self.message.id,
tenant_id=self.tenant_id,
)
def _repack_app_generate_entity(
self, app_generate_entity: AgentChatAppGenerateEntity
) -> AgentChatAppGenerateEntity:

View File

@@ -1,437 +0,0 @@
import json
import logging
from abc import ABC, abstractmethod
from collections.abc import Generator, Mapping, Sequence
from typing import Any
from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentScratchpadUnit
from core.agent.output_parser.cot_output_parser import CotAgentOutputParser
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
PromptMessageTool,
ToolPromptMessage,
UserPromptMessage,
)
from core.ops.ops_trace_manager import TraceQueueManager
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.__base.tool import Tool
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from core.workflow.nodes.agent.exc import AgentMaxIterationError
from models.model import Message
logger = logging.getLogger(__name__)
class CotAgentRunner(BaseAgentRunner, ABC):
_is_first_iteration = True
_ignore_observation_providers = ["wenxin"]
_historic_prompt_messages: list[PromptMessage]
_agent_scratchpad: list[AgentScratchpadUnit]
_instruction: str
_query: str
_prompt_messages_tools: Sequence[PromptMessageTool]
def run(
self,
message: Message,
query: str,
inputs: Mapping[str, str],
) -> Generator:
"""
Run Cot agent application
"""
app_generate_entity = self.application_generate_entity
self._repack_app_generate_entity(app_generate_entity)
self._init_react_state(query)
trace_manager = app_generate_entity.trace_manager
# check model mode
if "Observation" not in app_generate_entity.model_conf.stop:
if app_generate_entity.model_conf.provider not in self._ignore_observation_providers:
app_generate_entity.model_conf.stop.append("Observation")
app_config = self.app_config
assert app_config.agent
# init instruction
inputs = inputs or {}
instruction = app_config.prompt_template.simple_prompt_template or ""
self._instruction = self._fill_in_inputs_from_external_data_tools(instruction, inputs)
iteration_step = 1
max_iteration_steps = min(app_config.agent.max_iteration, 99) + 1
# convert tools into ModelRuntime Tool format
tool_instances, prompt_messages_tools = self._init_prompt_tools()
self._prompt_messages_tools = prompt_messages_tools
function_call_state = True
llm_usage: dict[str, LLMUsage | None] = {"usage": None}
final_answer = ""
prompt_messages: list = [] # Initialize prompt_messages
agent_thought_id = "" # Initialize agent_thought_id
def increase_usage(final_llm_usage_dict: dict[str, LLMUsage | None], usage: LLMUsage):
if not final_llm_usage_dict["usage"]:
final_llm_usage_dict["usage"] = usage
else:
llm_usage = final_llm_usage_dict["usage"]
llm_usage.prompt_tokens += usage.prompt_tokens
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.total_tokens += usage.total_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance
while function_call_state and iteration_step <= max_iteration_steps:
# continue to run until there is not any tool call
function_call_state = False
if iteration_step == max_iteration_steps:
# the last iteration, remove all tools
self._prompt_messages_tools = []
message_file_ids: list[str] = []
agent_thought_id = self.create_agent_thought(
message_id=message.id, message="", tool_name="", tool_input="", messages_ids=message_file_ids
)
if iteration_step > 1:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
tools=[],
stop=app_generate_entity.model_conf.stop,
stream=True,
user=self.user_id,
callbacks=[],
)
usage_dict: dict[str, LLMUsage | None] = {}
react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks, usage_dict)
scratchpad = AgentScratchpadUnit(
agent_response="",
thought="",
action_str="",
observation="",
action=None,
)
# publish agent thought if it's first iteration
if iteration_step == 1:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
for chunk in react_chunks:
if isinstance(chunk, AgentScratchpadUnit.Action):
action = chunk
# detect action
assert scratchpad.agent_response is not None
scratchpad.agent_response += json.dumps(chunk.model_dump())
scratchpad.action_str = json.dumps(chunk.model_dump())
scratchpad.action = action
else:
assert scratchpad.agent_response is not None
scratchpad.agent_response += chunk
assert scratchpad.thought is not None
scratchpad.thought += chunk
yield LLMResultChunk(
model=self.model_config.model,
prompt_messages=prompt_messages,
system_fingerprint="",
delta=LLMResultChunkDelta(index=0, message=AssistantPromptMessage(content=chunk), usage=None),
)
assert scratchpad.thought is not None
scratchpad.thought = scratchpad.thought.strip() or "I am thinking about how to help you"
self._agent_scratchpad.append(scratchpad)
# Check if max iteration is reached and model still wants to call tools
if iteration_step == max_iteration_steps and scratchpad.action:
if scratchpad.action.action_name.lower() != "final answer":
raise AgentMaxIterationError(app_config.agent.max_iteration)
# get llm usage
if "usage" in usage_dict:
if usage_dict["usage"] is not None:
increase_usage(llm_usage, usage_dict["usage"])
else:
usage_dict["usage"] = LLMUsage.empty_usage()
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=(scratchpad.action.action_name if scratchpad.action and not scratchpad.is_final() else ""),
tool_input={scratchpad.action.action_name: scratchpad.action.action_input} if scratchpad.action else {},
tool_invoke_meta={},
thought=scratchpad.thought or "",
observation="",
answer=scratchpad.agent_response or "",
messages_ids=[],
llm_usage=usage_dict["usage"],
)
if not scratchpad.is_final():
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
if not scratchpad.action:
# failed to extract action, return final answer directly
final_answer = ""
else:
if scratchpad.action.action_name.lower() == "final answer":
# action is final answer, return final answer directly
try:
if isinstance(scratchpad.action.action_input, dict):
final_answer = json.dumps(scratchpad.action.action_input, ensure_ascii=False)
elif isinstance(scratchpad.action.action_input, str):
final_answer = scratchpad.action.action_input
else:
final_answer = f"{scratchpad.action.action_input}"
except TypeError:
final_answer = f"{scratchpad.action.action_input}"
else:
function_call_state = True
# action is tool call, invoke tool
tool_invoke_response, tool_invoke_meta = self._handle_invoke_action(
action=scratchpad.action,
tool_instances=tool_instances,
message_file_ids=message_file_ids,
trace_manager=trace_manager,
)
scratchpad.observation = tool_invoke_response
scratchpad.agent_response = tool_invoke_response
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=scratchpad.action.action_name,
tool_input={scratchpad.action.action_name: scratchpad.action.action_input},
thought=scratchpad.thought or "",
observation={scratchpad.action.action_name: tool_invoke_response},
tool_invoke_meta={scratchpad.action.action_name: tool_invoke_meta.to_dict()},
answer=scratchpad.agent_response,
messages_ids=message_file_ids,
llm_usage=usage_dict["usage"],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# update prompt tool message
for prompt_tool in self._prompt_messages_tools:
self.update_prompt_message_tool(tool_instances[prompt_tool.name], prompt_tool)
iteration_step += 1
yield LLMResultChunk(
model=model_instance.model,
prompt_messages=prompt_messages,
delta=LLMResultChunkDelta(
index=0, message=AssistantPromptMessage(content=final_answer), usage=llm_usage["usage"]
),
system_fingerprint="",
)
# save agent thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name="",
tool_input={},
tool_invoke_meta={},
thought=final_answer,
observation={},
answer=final_answer,
messages_ids=[],
)
# publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=llm_usage["usage"] or LLMUsage.empty_usage(),
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def _handle_invoke_action(
self,
action: AgentScratchpadUnit.Action,
tool_instances: Mapping[str, Tool],
message_file_ids: list[str],
trace_manager: TraceQueueManager | None = None,
) -> tuple[str, ToolInvokeMeta]:
"""
handle invoke action
:param action: action
:param tool_instances: tool instances
:param message_file_ids: message file ids
:param trace_manager: trace manager
:return: observation, meta
"""
# action is tool call, invoke tool
tool_call_name = action.action_name
tool_call_args = action.action_input
tool_instance = tool_instances.get(tool_call_name)
if not tool_instance:
answer = f"there is not a tool named {tool_call_name}"
return answer, ToolInvokeMeta.error_instance(answer)
if isinstance(tool_call_args, str):
try:
tool_call_args = json.loads(tool_call_args)
except json.JSONDecodeError:
pass
# invoke tool
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool_instance,
tool_parameters=tool_call_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=self.message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
)
# publish files
for message_file_id in message_files:
# publish message file
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id), PublishFrom.APPLICATION_MANAGER
)
# add message file ids
message_file_ids.append(message_file_id)
return tool_invoke_response, tool_invoke_meta
def _convert_dict_to_action(self, action: dict) -> AgentScratchpadUnit.Action:
"""
convert dict to action
"""
return AgentScratchpadUnit.Action(action_name=action["action"], action_input=action["action_input"])
def _fill_in_inputs_from_external_data_tools(self, instruction: str, inputs: Mapping[str, Any]) -> str:
"""
fill in inputs from external data tools
"""
for key, value in inputs.items():
try:
instruction = instruction.replace(f"{{{{{key}}}}}", str(value))
except Exception:
continue
return instruction
def _init_react_state(self, query):
"""
init agent scratchpad
"""
self._query = query
self._agent_scratchpad = []
self._historic_prompt_messages = self._organize_historic_prompt_messages()
@abstractmethod
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
organize prompt messages
"""
def _format_assistant_message(self, agent_scratchpad: list[AgentScratchpadUnit]) -> str:
"""
format assistant message
"""
message = ""
for scratchpad in agent_scratchpad:
if scratchpad.is_final():
message += f"Final Answer: {scratchpad.agent_response}"
else:
message += f"Thought: {scratchpad.thought}\n\n"
if scratchpad.action_str:
message += f"Action: {scratchpad.action_str}\n\n"
if scratchpad.observation:
message += f"Observation: {scratchpad.observation}\n\n"
return message
def _organize_historic_prompt_messages(
self, current_session_messages: list[PromptMessage] | None = None
) -> list[PromptMessage]:
"""
organize historic prompt messages
"""
result: list[PromptMessage] = []
scratchpads: list[AgentScratchpadUnit] = []
current_scratchpad: AgentScratchpadUnit | None = None
for message in self.history_prompt_messages:
if isinstance(message, AssistantPromptMessage):
if not current_scratchpad:
assert isinstance(message.content, str)
current_scratchpad = AgentScratchpadUnit(
agent_response=message.content,
thought=message.content or "I am thinking about how to help you",
action_str="",
action=None,
observation=None,
)
scratchpads.append(current_scratchpad)
if message.tool_calls:
try:
current_scratchpad.action = AgentScratchpadUnit.Action(
action_name=message.tool_calls[0].function.name,
action_input=json.loads(message.tool_calls[0].function.arguments),
)
current_scratchpad.action_str = json.dumps(current_scratchpad.action.to_dict())
except Exception:
logger.exception("Failed to parse tool call from assistant message")
elif isinstance(message, ToolPromptMessage):
if current_scratchpad:
assert isinstance(message.content, str)
current_scratchpad.observation = message.content
else:
raise NotImplementedError("expected str type")
elif isinstance(message, UserPromptMessage):
if scratchpads:
result.append(AssistantPromptMessage(content=self._format_assistant_message(scratchpads)))
scratchpads = []
current_scratchpad = None
result.append(message)
if scratchpads:
result.append(AssistantPromptMessage(content=self._format_assistant_message(scratchpads)))
historic_prompts = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=current_session_messages or [],
history_messages=result,
memory=self.memory,
).get_prompt()
return historic_prompts

View File

@@ -1,118 +0,0 @@
import json
from core.agent.cot_agent_runner import CotAgentRunner
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
PromptMessage,
SystemPromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.model_runtime.utils.encoders import jsonable_encoder
class CotChatAgentRunner(CotAgentRunner):
def _organize_system_prompt(self) -> SystemPromptMessage:
"""
Organize system prompt
"""
assert self.app_config.agent
assert self.app_config.agent.prompt
prompt_entity = self.app_config.agent.prompt
if not prompt_entity:
raise ValueError("Agent prompt configuration is not set")
first_prompt = prompt_entity.first_prompt
system_prompt = (
first_prompt.replace("{{instruction}}", self._instruction)
.replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools)))
.replace("{{tool_names}}", ", ".join([tool.name for tool in self._prompt_messages_tools]))
)
return SystemPromptMessage(content=system_prompt)
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
Organize
"""
# organize system prompt
system_message = self._organize_system_prompt()
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
if not agent_scratchpad:
assistant_messages = []
else:
assistant_message = AssistantPromptMessage(content="")
assistant_message.content = "" # FIXME: type check tell mypy that assistant_message.content is str
for unit in agent_scratchpad:
if unit.is_final():
assert isinstance(assistant_message.content, str)
assistant_message.content += f"Final Answer: {unit.agent_response}"
else:
assert isinstance(assistant_message.content, str)
assistant_message.content += f"Thought: {unit.thought}\n\n"
if unit.action_str:
assistant_message.content += f"Action: {unit.action_str}\n\n"
if unit.observation:
assistant_message.content += f"Observation: {unit.observation}\n\n"
assistant_messages = [assistant_message]
# query messages
query_messages = self._organize_user_query(self._query, [])
if assistant_messages:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages(
[system_message, *query_messages, *assistant_messages, UserPromptMessage(content="continue")]
)
messages = [
system_message,
*historic_messages,
*query_messages,
*assistant_messages,
UserPromptMessage(content="continue"),
]
else:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages([system_message, *query_messages])
messages = [system_message, *historic_messages, *query_messages]
# join all messages
return messages

View File

@@ -1,87 +0,0 @@
import json
from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.utils.encoders import jsonable_encoder
class CotCompletionAgentRunner(CotAgentRunner):
def _organize_instruction_prompt(self) -> str:
"""
Organize instruction prompt
"""
if self.app_config.agent is None:
raise ValueError("Agent configuration is not set")
prompt_entity = self.app_config.agent.prompt
if prompt_entity is None:
raise ValueError("prompt entity is not set")
first_prompt = prompt_entity.first_prompt
system_prompt = (
first_prompt.replace("{{instruction}}", self._instruction)
.replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools)))
.replace("{{tool_names}}", ", ".join([tool.name for tool in self._prompt_messages_tools]))
)
return system_prompt
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] | None = None) -> str:
"""
Organize historic prompt
"""
historic_prompt_messages = self._organize_historic_prompt_messages(current_session_messages)
historic_prompt = ""
for message in historic_prompt_messages:
if isinstance(message, UserPromptMessage):
historic_prompt += f"Question: {message.content}\n\n"
elif isinstance(message, AssistantPromptMessage):
if isinstance(message.content, str):
historic_prompt += message.content + "\n\n"
elif isinstance(message.content, list):
for content in message.content:
if not isinstance(content, TextPromptMessageContent):
continue
historic_prompt += content.data
return historic_prompt
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
Organize prompt messages
"""
# organize system prompt
system_prompt = self._organize_instruction_prompt()
# organize historic prompt messages
historic_prompt = self._organize_historic_prompt()
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
assistant_prompt = ""
for unit in agent_scratchpad or []:
if unit.is_final():
assistant_prompt += f"Final Answer: {unit.agent_response}"
else:
assistant_prompt += f"Thought: {unit.thought}\n\n"
if unit.action_str:
assistant_prompt += f"Action: {unit.action_str}\n\n"
if unit.observation:
assistant_prompt += f"Observation: {unit.observation}\n\n"
# query messages
query_prompt = f"Question: {self._query}"
# join all messages
prompt = (
system_prompt.replace("{{historic_messages}}", historic_prompt)
.replace("{{agent_scratchpad}}", assistant_prompt)
.replace("{{query}}", query_prompt)
)
return [UserPromptMessage(content=prompt)]

View File

@@ -1,3 +1,5 @@
import uuid
from collections.abc import Mapping
from enum import StrEnum
from typing import Any, Union
@@ -92,3 +94,96 @@ class AgentInvokeMessage(ToolInvokeMessage):
"""
pass
class ExecutionContext(BaseModel):
"""Execution context containing trace and audit information.
This context carries all the IDs and metadata that are not part of
the core business logic but needed for tracing, auditing, and
correlation purposes.
"""
user_id: str | None = None
app_id: str | None = None
conversation_id: str | None = None
message_id: str | None = None
tenant_id: str | None = None
@classmethod
def create_minimal(cls, user_id: str | None = None) -> "ExecutionContext":
"""Create a minimal context with only essential fields."""
return cls(user_id=user_id)
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for passing to legacy code."""
return {
"user_id": self.user_id,
"app_id": self.app_id,
"conversation_id": self.conversation_id,
"message_id": self.message_id,
"tenant_id": self.tenant_id,
}
def with_updates(self, **kwargs) -> "ExecutionContext":
"""Create a new context with updated fields."""
data = self.to_dict()
data.update(kwargs)
return ExecutionContext(
user_id=data.get("user_id"),
app_id=data.get("app_id"),
conversation_id=data.get("conversation_id"),
message_id=data.get("message_id"),
tenant_id=data.get("tenant_id"),
)
class AgentLog(BaseModel):
"""
Agent Log.
"""
class LogType(StrEnum):
"""Type of agent log entry."""
ROUND = "round" # A complete iteration round
THOUGHT = "thought" # LLM thinking/reasoning
TOOL_CALL = "tool_call" # Tool invocation
class LogMetadata(StrEnum):
STARTED_AT = "started_at"
FINISHED_AT = "finished_at"
ELAPSED_TIME = "elapsed_time"
TOTAL_PRICE = "total_price"
TOTAL_TOKENS = "total_tokens"
PROVIDER = "provider"
CURRENCY = "currency"
LLM_USAGE = "llm_usage"
ICON = "icon"
ICON_DARK = "icon_dark"
class LogStatus(StrEnum):
START = "start"
ERROR = "error"
SUCCESS = "success"
id: str = Field(default_factory=lambda: str(uuid.uuid4()), description="The id of the log")
label: str = Field(..., description="The label of the log")
log_type: LogType = Field(..., description="The type of the log")
parent_id: str | None = Field(default=None, description="Leave empty for root log")
error: str | None = Field(default=None, description="The error message")
status: LogStatus = Field(..., description="The status of the log")
data: Mapping[str, Any] = Field(..., description="Detailed log data")
metadata: Mapping[LogMetadata, Any] = Field(default={}, description="The metadata of the log")
class AgentResult(BaseModel):
"""
Agent execution result.
"""
text: str = Field(default="", description="The generated text")
files: list[Any] = Field(default_factory=list, description="Files produced during execution")
usage: Any | None = Field(default=None, description="LLM usage statistics")
finish_reason: str | None = Field(default=None, description="Reason for completion")

View File

@@ -1,468 +0,0 @@
import json
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any, Union
from core.agent.base_agent_runner import BaseAgentRunner
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
LLMUsage,
PromptMessage,
PromptMessageContentType,
SystemPromptMessage,
TextPromptMessageContent,
ToolPromptMessage,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from core.workflow.nodes.agent.exc import AgentMaxIterationError
from models.model import Message
logger = logging.getLogger(__name__)
class FunctionCallAgentRunner(BaseAgentRunner):
def run(self, message: Message, query: str, **kwargs: Any) -> Generator[LLMResultChunk, None, None]:
"""
Run FunctionCall agent application
"""
self.query = query
app_generate_entity = self.application_generate_entity
app_config = self.app_config
assert app_config is not None, "app_config is required"
assert app_config.agent is not None, "app_config.agent is required"
# convert tools into ModelRuntime Tool format
tool_instances, prompt_messages_tools = self._init_prompt_tools()
assert app_config.agent
iteration_step = 1
max_iteration_steps = min(app_config.agent.max_iteration, 99) + 1
# continue to run until there is not any tool call
function_call_state = True
llm_usage: dict[str, LLMUsage | None] = {"usage": None}
final_answer = ""
prompt_messages: list = [] # Initialize prompt_messages
# get tracing instance
trace_manager = app_generate_entity.trace_manager
def increase_usage(final_llm_usage_dict: dict[str, LLMUsage | None], usage: LLMUsage):
if not final_llm_usage_dict["usage"]:
final_llm_usage_dict["usage"] = usage
else:
llm_usage = final_llm_usage_dict["usage"]
llm_usage.prompt_tokens += usage.prompt_tokens
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.total_tokens += usage.total_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance
while function_call_state and iteration_step <= max_iteration_steps:
function_call_state = False
if iteration_step == max_iteration_steps:
# the last iteration, remove all tools
prompt_messages_tools = []
message_file_ids: list[str] = []
agent_thought_id = self.create_agent_thought(
message_id=message.id, message="", tool_name="", tool_input="", messages_ids=message_file_ids
)
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
tools=prompt_messages_tools,
stop=app_generate_entity.model_conf.stop,
stream=self.stream_tool_call,
user=self.user_id,
callbacks=[],
)
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
# save full response
response = ""
# save tool call names and inputs
tool_call_names = ""
tool_call_inputs = ""
current_llm_usage = None
if isinstance(chunks, Generator):
is_first_chunk = True
for chunk in chunks:
if is_first_chunk:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
is_first_chunk = False
# check if there is any tool call
if self.check_tool_calls(chunk):
function_call_state = True
tool_calls.extend(self.extract_tool_calls(chunk) or [])
tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls])
try:
tool_call_inputs = json.dumps(
{tool_call[1]: tool_call[2] for tool_call in tool_calls}, ensure_ascii=False
)
except TypeError:
# fallback: force ASCII to handle non-serializable objects
tool_call_inputs = json.dumps({tool_call[1]: tool_call[2] for tool_call in tool_calls})
if chunk.delta.message and chunk.delta.message.content:
if isinstance(chunk.delta.message.content, list):
for content in chunk.delta.message.content:
response += content.data
else:
response += str(chunk.delta.message.content)
if chunk.delta.usage:
increase_usage(llm_usage, chunk.delta.usage)
current_llm_usage = chunk.delta.usage
yield chunk
else:
result = chunks
# check if there is any tool call
if self.check_blocking_tool_calls(result):
function_call_state = True
tool_calls.extend(self.extract_blocking_tool_calls(result) or [])
tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls])
try:
tool_call_inputs = json.dumps(
{tool_call[1]: tool_call[2] for tool_call in tool_calls}, ensure_ascii=False
)
except TypeError:
# fallback: force ASCII to handle non-serializable objects
tool_call_inputs = json.dumps({tool_call[1]: tool_call[2] for tool_call in tool_calls})
if result.usage:
increase_usage(llm_usage, result.usage)
current_llm_usage = result.usage
if result.message and result.message.content:
if isinstance(result.message.content, list):
for content in result.message.content:
response += content.data
else:
response += str(result.message.content)
if not result.message.content:
result.message.content = ""
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
yield LLMResultChunk(
model=model_instance.model,
prompt_messages=result.prompt_messages,
system_fingerprint=result.system_fingerprint,
delta=LLMResultChunkDelta(
index=0,
message=result.message,
usage=result.usage,
),
)
assistant_message = AssistantPromptMessage(content=response, tool_calls=[])
if tool_calls:
assistant_message.tool_calls = [
AssistantPromptMessage.ToolCall(
id=tool_call[0],
type="function",
function=AssistantPromptMessage.ToolCall.ToolCallFunction(
name=tool_call[1], arguments=json.dumps(tool_call[2], ensure_ascii=False)
),
)
for tool_call in tool_calls
]
self._current_thoughts.append(assistant_message)
# save thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=tool_call_names,
tool_input=tool_call_inputs,
thought=response,
tool_invoke_meta=None,
observation=None,
answer=response,
messages_ids=[],
llm_usage=current_llm_usage,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
final_answer += response + "\n"
# Check if max iteration is reached and model still wants to call tools
if iteration_step == max_iteration_steps and tool_calls:
raise AgentMaxIterationError(app_config.agent.max_iteration)
# call tools
tool_responses = []
for tool_call_id, tool_call_name, tool_call_args in tool_calls:
tool_instance = tool_instances.get(tool_call_name)
if not tool_instance:
tool_response = {
"tool_call_id": tool_call_id,
"tool_call_name": tool_call_name,
"tool_response": f"there is not a tool named {tool_call_name}",
"meta": ToolInvokeMeta.error_instance(f"there is not a tool named {tool_call_name}").to_dict(),
}
else:
# invoke tool
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool_instance,
tool_parameters=tool_call_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=self.message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
app_id=self.application_generate_entity.app_config.app_id,
message_id=self.message.id,
conversation_id=self.conversation.id,
)
# publish files
for message_file_id in message_files:
# publish message file
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id), PublishFrom.APPLICATION_MANAGER
)
# add message file ids
message_file_ids.append(message_file_id)
tool_response = {
"tool_call_id": tool_call_id,
"tool_call_name": tool_call_name,
"tool_response": tool_invoke_response,
"meta": tool_invoke_meta.to_dict(),
}
tool_responses.append(tool_response)
if tool_response["tool_response"] is not None:
self._current_thoughts.append(
ToolPromptMessage(
content=str(tool_response["tool_response"]),
tool_call_id=tool_call_id,
name=tool_call_name,
)
)
if len(tool_responses) > 0:
# save agent thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name="",
tool_input="",
thought="",
tool_invoke_meta={
tool_response["tool_call_name"]: tool_response["meta"] for tool_response in tool_responses
},
observation={
tool_response["tool_call_name"]: tool_response["tool_response"]
for tool_response in tool_responses
},
answer="",
messages_ids=message_file_ids,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# update prompt tool
for prompt_tool in prompt_messages_tools:
self.update_prompt_message_tool(tool_instances[prompt_tool.name], prompt_tool)
iteration_step += 1
# publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=llm_usage["usage"] or LLMUsage.empty_usage(),
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def check_tool_calls(self, llm_result_chunk: LLMResultChunk) -> bool:
"""
Check if there is any tool call in llm result chunk
"""
if llm_result_chunk.delta.message.tool_calls:
return True
return False
def check_blocking_tool_calls(self, llm_result: LLMResult) -> bool:
"""
Check if there is any blocking tool call in llm result
"""
if llm_result.message.tool_calls:
return True
return False
def extract_tool_calls(self, llm_result_chunk: LLMResultChunk) -> list[tuple[str, str, dict[str, Any]]]:
"""
Extract tool calls from llm result chunk
Returns:
List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
"""
tool_calls = []
for prompt_message in llm_result_chunk.delta.message.tool_calls:
args = {}
if prompt_message.function.arguments != "":
args = json.loads(prompt_message.function.arguments)
tool_calls.append(
(
prompt_message.id,
prompt_message.function.name,
args,
)
)
return tool_calls
def extract_blocking_tool_calls(self, llm_result: LLMResult) -> list[tuple[str, str, dict[str, Any]]]:
"""
Extract blocking tool calls from llm result
Returns:
List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
"""
tool_calls = []
for prompt_message in llm_result.message.tool_calls:
args = {}
if prompt_message.function.arguments != "":
args = json.loads(prompt_message.function.arguments)
tool_calls.append(
(
prompt_message.id,
prompt_message.function.name,
args,
)
)
return tool_calls
def _init_system_message(self, prompt_template: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Initialize system message
"""
if not prompt_messages and prompt_template:
return [
SystemPromptMessage(content=prompt_template),
]
if prompt_messages and not isinstance(prompt_messages[0], SystemPromptMessage) and prompt_template:
prompt_messages.insert(0, SystemPromptMessage(content=prompt_template))
return prompt_messages or []
def _organize_user_query(self, query: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
As for now, gpt supports both fc and vision at the first iteration.
We need to remove the image messages from the prompt messages at the first iteration.
"""
prompt_messages = deepcopy(prompt_messages)
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
content.data
if content.type == PromptMessageContentType.TEXT
else "[image]"
if content.type == PromptMessageContentType.IMAGE
else "[file]"
for content in prompt_message.content
]
)
return prompt_messages
def _organize_prompt_messages(self):
prompt_template = self.app_config.prompt_template.simple_prompt_template or ""
self.history_prompt_messages = self._init_system_message(prompt_template, self.history_prompt_messages)
query_prompt_messages = self._organize_user_query(self.query or "", [])
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=[*query_prompt_messages, *self._current_thoughts],
history_messages=self.history_prompt_messages,
memory=self.memory,
).get_prompt()
prompt_messages = [*self.history_prompt_messages, *query_prompt_messages, *self._current_thoughts]
if len(self._current_thoughts) != 0:
# clear messages after the first iteration
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
return prompt_messages

View File

@@ -0,0 +1,55 @@
# Agent Patterns
A unified agent pattern module that powers both Agent V2 workflow nodes and agent applications. Strategies share a common execution contract while adapting to model capabilities and tool availability.
## Overview
The module applies a strategy pattern around LLM/tool orchestration. `StrategyFactory` auto-selects the best implementation based on model features or an explicit agent strategy, and each strategy streams logs and usage consistently.
## Key Features
- **Dual strategies**
- `FunctionCallStrategy`: uses native LLM function/tool calling when the model exposes `TOOL_CALL`, `MULTI_TOOL_CALL`, or `STREAM_TOOL_CALL`.
- `ReActStrategy`: ReAct (reasoning + acting) flow driven by `CotAgentOutputParser`, used when function calling is unavailable or explicitly requested.
- **Explicit or auto selection**
- `StrategyFactory.create_strategy` prefers an explicit `AgentEntity.Strategy` (FUNCTION_CALLING or CHAIN_OF_THOUGHT).
- Otherwise it falls back to function calling when tool-call features exist, or ReAct when they do not.
- **Unified execution contract**
- `AgentPattern.run` yields streaming `AgentLog` entries and `LLMResultChunk` data, returning an `AgentResult` with text, files, usage, and `finish_reason`.
- Iterations are configurable and hard-capped at 99 rounds; the last round forces a final answer by withholding tools.
- **Tool handling and hooks**
- Tools convert to `PromptMessageTool` objects before invocation.
- Optional `tool_invoke_hook` lets callers override tool execution (e.g., agent apps) while workflow runs use `ToolEngine.generic_invoke`.
- Tool outputs support text, links, JSON, variables, blobs, retriever resources, and file attachments; files marked `target == "self"` are loaded back into the model context, while the rest are returned as outputs.
- **File-aware arguments**
- Tool args accept `[File: <id>]` or `[Files: <id1, id2>]` placeholders that resolve to `File` objects before invocation, enabling models to reference uploaded files safely (see the sketch after this list).
- **ReAct prompt shaping**
- System prompts replace `{{instruction}}`, `{{tools}}`, and `{{tool_names}}` placeholders.
- Adds `Observation` to stop sequences and appends scratchpad text so the model sees prior Thought/Action/Observation history.
- **Observability and accounting**
- Standardized `AgentLog` entries for rounds, model thoughts, and tool calls, including usage aggregation (`LLMUsage`) across streaming and non-streaming paths.
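The file-reference format above can be shown with a tiny sketch; the ids are made up, and anything that does not match a provided `File` simply passes through unchanged:
```python
# Illustrative only: tool arguments as a model might emit them.
# Before invocation, the pattern resolves the placeholders against the files
# passed to the strategy; unmatched ids are left as plain strings.
tool_args = {
    "document": "[File: 9f8d2c1a]",                # -> a single File object
    "attachments": "[Files: 9f8d2c1a, 4b7e6f3d]",  # -> a list of File objects
    "query": "summarize the attached report",      # unchanged
}
```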
## Architecture
```
agent/patterns/
├── base.py # Shared utilities: logging, usage, tool invocation, file handling
├── function_call.py # Native function-calling loop with tool execution
├── react.py # ReAct loop with CoT parsing and scratchpad wiring
└── strategy_factory.py # Strategy selection by model features or explicit override
```
## Usage
- For auto-selection:
- Call `StrategyFactory.create_strategy(model_features, model_instance, context, tools, files, ...)` and run the returned strategy with prompt messages and model params.
- For explicit behavior:
- Pass `agent_strategy=AgentEntity.Strategy.FUNCTION_CALLING` to force native calls (falls back to ReAct if unsupported), or `CHAIN_OF_THOUGHT` to force ReAct.
- Both strategies stream chunks and logs; collect the generator output until it returns an `AgentResult`, as in the sketch below.
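A minimal, illustrative sketch of this flow. It assumes `model_features`, `model_instance`, `context`, `tools`, `files`, and `prompt_messages` have already been prepared by the caller, and that the package is importable as `core.agent.patterns`:
```python
from core.agent.entities import AgentLog
from core.agent.patterns import StrategyFactory
from core.model_runtime.entities import LLMResultChunk

strategy = StrategyFactory.create_strategy(
    model_features=model_features,  # list[ModelFeature] gathered from the model schema (assumed prepared)
    model_instance=model_instance,
    context=context,
    tools=tools,
    files=files,
    max_iterations=5,
)

gen = strategy.run(prompt_messages=prompt_messages, model_parameters={"temperature": 0.2})
while True:
    try:
        item = next(gen)
    except StopIteration as stop:
        result = stop.value  # AgentResult: text, files, usage, finish_reason
        break
    if isinstance(item, AgentLog):
        pass  # forward to logging / observability
    elif isinstance(item, LLMResultChunk):
        pass  # stream the chunk to the client
```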
## Integration Points
- **Model runtime**: delegates to `ModelInstance.invoke_llm` for both streaming and non-streaming calls.
- **Tool system**: defaults to `ToolEngine.generic_invoke`, with `tool_invoke_hook` for custom callers.
- **Files**: flows through `File` objects for tool inputs/outputs and model-context attachments.
- **Execution context**: `ExecutionContext` fields (user/app/conversation/message) propagate to tool invocations and logging.

View File

@@ -0,0 +1,19 @@
"""Agent patterns module.
This module provides different strategies for agent execution:
- FunctionCallStrategy: Uses native function/tool calling
- ReActStrategy: Uses ReAct (Reasoning + Acting) approach
- StrategyFactory: Factory for creating strategies based on model features
"""
from .base import AgentPattern
from .function_call import FunctionCallStrategy
from .react import ReActStrategy
from .strategy_factory import StrategyFactory
__all__ = [
"AgentPattern",
"FunctionCallStrategy",
"ReActStrategy",
"StrategyFactory",
]

View File

@@ -0,0 +1,471 @@
"""Base class for agent strategies."""
from __future__ import annotations
import json
import re
import time
from abc import ABC, abstractmethod
from collections.abc import Callable, Generator
from typing import TYPE_CHECKING, Any
from core.agent.entities import AgentLog, AgentResult, ExecutionContext
from core.file import File
from core.model_manager import ModelInstance
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
PromptMessage,
PromptMessageTool,
)
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.entities.message_entities import TextPromptMessageContent
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolInvokeMeta
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
# Type alias for tool invoke hook
# Returns: (response_content, message_file_ids, tool_invoke_meta)
ToolInvokeHook = Callable[["Tool", dict[str, Any], str], tuple[str, list[str], ToolInvokeMeta]]
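# Illustrative sketch (not part of this module): a conforming hook could look like
#   def agent_app_hook(tool: "Tool", tool_args: dict[str, Any], tool_name: str) -> tuple[str, list[str], ToolInvokeMeta]:
#       response_text, message_file_ids, meta = ...  # caller-specific invocation (e.g. agent apps)
#       return response_text, message_file_ids, meta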
class AgentPattern(ABC):
"""Base class for agent execution strategies."""
def __init__(
self,
model_instance: ModelInstance,
tools: list[Tool],
context: ExecutionContext,
max_iterations: int = 10,
workflow_call_depth: int = 0,
files: list[File] | None = None,
tool_invoke_hook: ToolInvokeHook | None = None,
):
"""Initialize the agent strategy."""
self.model_instance = model_instance
self.tools = tools
self.context = context
self.max_iterations = min(max_iterations, 99) # Cap at 99 iterations
self.workflow_call_depth = workflow_call_depth
self.files: list[File] = list(files) if files else []  # copy so a shared default list is never mutated across instances
self.tool_invoke_hook = tool_invoke_hook
@abstractmethod
def run(
self, prompt_messages: list[PromptMessage], model_parameters: dict[str, Any], stop: list[str] = [],
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the agent strategy."""
pass
def _accumulate_usage(self, total_usage: dict[str, Any], delta_usage: LLMUsage) -> None:
"""Accumulate LLM usage statistics."""
if not total_usage.get("usage"):
# Create a copy to avoid modifying the original
total_usage["usage"] = LLMUsage(
prompt_tokens=delta_usage.prompt_tokens,
prompt_unit_price=delta_usage.prompt_unit_price,
prompt_price_unit=delta_usage.prompt_price_unit,
prompt_price=delta_usage.prompt_price,
completion_tokens=delta_usage.completion_tokens,
completion_unit_price=delta_usage.completion_unit_price,
completion_price_unit=delta_usage.completion_price_unit,
completion_price=delta_usage.completion_price,
total_tokens=delta_usage.total_tokens,
total_price=delta_usage.total_price,
currency=delta_usage.currency,
latency=delta_usage.latency,
)
else:
current: LLMUsage = total_usage["usage"]
current.prompt_tokens += delta_usage.prompt_tokens
current.completion_tokens += delta_usage.completion_tokens
current.total_tokens += delta_usage.total_tokens
current.prompt_price += delta_usage.prompt_price
current.completion_price += delta_usage.completion_price
current.total_price += delta_usage.total_price
def _extract_content(self, content: Any) -> str:
"""Extract text content from message content."""
if isinstance(content, list):
# Content items are PromptMessageContentUnionTypes
text_parts = []
for c in content:
# Check if it's a TextPromptMessageContent (which has data attribute)
if isinstance(c, TextPromptMessageContent):
text_parts.append(c.data)
return "".join(text_parts)
return str(content)
def _has_tool_calls(self, chunk: LLMResultChunk) -> bool:
"""Check if chunk contains tool calls."""
# LLMResultChunk always has delta attribute
return bool(chunk.delta.message and chunk.delta.message.tool_calls)
def _has_tool_calls_result(self, result: LLMResult) -> bool:
"""Check if result contains tool calls (non-streaming)."""
# LLMResult always has message attribute
return bool(result.message and result.message.tool_calls)
def _extract_tool_calls(self, chunk: LLMResultChunk) -> list[tuple[str, str, dict[str, Any]]]:
"""Extract tool calls from streaming chunk."""
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
if chunk.delta.message and chunk.delta.message.tool_calls:
for tool_call in chunk.delta.message.tool_calls:
if tool_call.function:
try:
args = json.loads(tool_call.function.arguments) if tool_call.function.arguments else {}
except json.JSONDecodeError:
args = {}
tool_calls.append((tool_call.id or "", tool_call.function.name, args))
return tool_calls
def _extract_tool_calls_result(self, result: LLMResult) -> list[tuple[str, str, dict[str, Any]]]:
"""Extract tool calls from non-streaming result."""
tool_calls = []
if result.message and result.message.tool_calls:
for tool_call in result.message.tool_calls:
if tool_call.function:
try:
args = json.loads(tool_call.function.arguments) if tool_call.function.arguments else {}
except json.JSONDecodeError:
args = {}
tool_calls.append((tool_call.id or "", tool_call.function.name, args))
return tool_calls
def _extract_text_from_message(self, message: PromptMessage) -> str:
"""Extract text content from a prompt message."""
# PromptMessage always has content attribute
content = message.content
if isinstance(content, str):
return content
elif isinstance(content, list):
# Extract text from content list
text_parts = []
for item in content:
if isinstance(item, TextPromptMessageContent):
text_parts.append(item.data)
return " ".join(text_parts)
return ""
def _get_tool_metadata(self, tool_instance: Tool) -> dict[AgentLog.LogMetadata, Any]:
"""Get metadata for a tool including provider and icon info."""
from core.tools.tool_manager import ToolManager
metadata: dict[AgentLog.LogMetadata, Any] = {}
if tool_instance.entity and tool_instance.entity.identity:
identity = tool_instance.entity.identity
if identity.provider:
metadata[AgentLog.LogMetadata.PROVIDER] = identity.provider
# Get icon using ToolManager for proper URL generation
tenant_id = self.context.tenant_id
if tenant_id and identity.provider:
try:
provider_type = tool_instance.tool_provider_type()
icon = ToolManager.get_tool_icon(tenant_id, provider_type, identity.provider)
if isinstance(icon, str):
metadata[AgentLog.LogMetadata.ICON] = icon
elif isinstance(icon, dict):
# Handle icon dict with background/content or light/dark variants
metadata[AgentLog.LogMetadata.ICON] = icon
except Exception:
# Fallback to identity.icon if ToolManager fails
if identity.icon:
metadata[AgentLog.LogMetadata.ICON] = identity.icon
elif identity.icon:
metadata[AgentLog.LogMetadata.ICON] = identity.icon
return metadata
def _create_log(
self,
label: str,
log_type: AgentLog.LogType,
status: AgentLog.LogStatus,
data: dict[str, Any] | None = None,
parent_id: str | None = None,
extra_metadata: dict[AgentLog.LogMetadata, Any] | None = None,
) -> AgentLog:
"""Create a new AgentLog with standard metadata."""
metadata: dict[AgentLog.LogMetadata, Any] = {
AgentLog.LogMetadata.STARTED_AT: time.perf_counter(),
}
if extra_metadata:
metadata.update(extra_metadata)
return AgentLog(
label=label,
log_type=log_type,
status=status,
data=data or {},
parent_id=parent_id,
metadata=metadata,
)
def _finish_log(
self,
log: AgentLog,
data: dict[str, Any] | None = None,
usage: LLMUsage | None = None,
) -> AgentLog:
"""Finish an AgentLog by updating its status and metadata."""
log.status = AgentLog.LogStatus.SUCCESS
if data is not None:
log.data = data
# Calculate elapsed time
started_at = log.metadata.get(AgentLog.LogMetadata.STARTED_AT, time.perf_counter())
finished_at = time.perf_counter()
# Update metadata
log.metadata = {
**log.metadata,
AgentLog.LogMetadata.FINISHED_AT: finished_at,
# Calculate elapsed time in seconds
AgentLog.LogMetadata.ELAPSED_TIME: round(finished_at - started_at, 4),
}
# Add usage information if provided
if usage:
log.metadata.update(
{
AgentLog.LogMetadata.TOTAL_PRICE: usage.total_price,
AgentLog.LogMetadata.CURRENCY: usage.currency,
AgentLog.LogMetadata.TOTAL_TOKENS: usage.total_tokens,
AgentLog.LogMetadata.LLM_USAGE: usage,
}
)
return log
def _replace_file_references(self, tool_args: dict[str, Any]) -> dict[str, Any]:
"""
Replace file references in tool arguments with actual File objects.
Args:
tool_args: Dictionary of tool arguments
Returns:
Updated tool arguments with file references replaced
"""
# Process each argument in the dictionary
processed_args: dict[str, Any] = {}
for key, value in tool_args.items():
processed_args[key] = self._process_file_reference(value)
return processed_args
def _process_file_reference(self, data: Any) -> Any:
"""
Recursively process data to replace file references.
Supports both single file [File: file_id] and multiple files [Files: file_id1, file_id2, ...].
Args:
data: The data to process (can be dict, list, str, or other types)
Returns:
Processed data with file references replaced
"""
single_file_pattern = re.compile(r"^\[File:\s*([^\]]+)\]$")
multiple_files_pattern = re.compile(r"^\[Files:\s*([^\]]+)\]$")
if isinstance(data, dict):
# Process dictionary recursively
return {key: self._process_file_reference(value) for key, value in data.items()}
elif isinstance(data, list):
# Process list recursively
return [self._process_file_reference(item) for item in data]
elif isinstance(data, str):
# Check for single file pattern [File: file_id]
single_match = single_file_pattern.match(data.strip())
if single_match:
file_id = single_match.group(1).strip()
# Find the file in self.files
for file in self.files:
if file.id and str(file.id) == file_id:
return file
# If file not found, return original value
return data
# Check for multiple files pattern [Files: file_id1, file_id2, ...]
multiple_match = multiple_files_pattern.match(data.strip())
if multiple_match:
file_ids_str = multiple_match.group(1).strip()
# Split by comma and strip whitespace
file_ids = [fid.strip() for fid in file_ids_str.split(",")]
# Find all matching files
matched_files: list[File] = []
for file_id in file_ids:
for file in self.files:
if file.id and str(file.id) == file_id:
matched_files.append(file)
break
# Return list of files if any were found, otherwise return original
return matched_files or data
return data
else:
# Return other types as-is
return data
def _create_text_chunk(self, text: str, prompt_messages: list[PromptMessage]) -> LLMResultChunk:
"""Create a text chunk for streaming."""
return LLMResultChunk(
model=self.model_instance.model,
prompt_messages=prompt_messages,
delta=LLMResultChunkDelta(
index=0,
message=AssistantPromptMessage(content=text),
usage=None,
),
system_fingerprint="",
)
def _invoke_tool(
self,
tool_instance: Tool,
tool_args: dict[str, Any],
tool_name: str,
) -> tuple[str, list[File], ToolInvokeMeta | None]:
"""
Invoke a tool and collect its response.
Args:
tool_instance: The tool instance to invoke
tool_args: Tool arguments
tool_name: Name of the tool
Returns:
Tuple of (response_content, tool_files, tool_invoke_meta)
"""
# Process tool_args to replace file references with actual File objects
tool_args = self._replace_file_references(tool_args)
# If a tool invoke hook is set, use it instead of generic_invoke
if self.tool_invoke_hook:
response_content, _, tool_invoke_meta = self.tool_invoke_hook(tool_instance, tool_args, tool_name)
# Note: message_file_ids are stored in DB, we don't convert them to File objects here
# The caller (AgentAppRunner) handles file publishing
return response_content, [], tool_invoke_meta
# Default: use generic_invoke for workflow scenarios
# Import here to avoid circular import
from core.tools.tool_engine import DifyWorkflowCallbackHandler, ToolEngine
tool_response = ToolEngine().generic_invoke(
tool=tool_instance,
tool_parameters=tool_args,
user_id=self.context.user_id or "",
workflow_tool_callback=DifyWorkflowCallbackHandler(),
workflow_call_depth=self.workflow_call_depth,
app_id=self.context.app_id,
conversation_id=self.context.conversation_id,
message_id=self.context.message_id,
)
# Collect response and files
response_content = ""
tool_files: list[File] = []
for response in tool_response:
if response.type == ToolInvokeMessage.MessageType.TEXT:
assert isinstance(response.message, ToolInvokeMessage.TextMessage)
response_content += response.message.text
elif response.type == ToolInvokeMessage.MessageType.LINK:
# Handle link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Link: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.IMAGE:
# Handle image URL messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Image: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.IMAGE_LINK:
# Handle image link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Image: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.BINARY_LINK:
# Handle binary file link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
filename = response.meta.get("filename", "file") if response.meta else "file"
response_content += f"[File: {filename} - {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.JSON:
# Handle JSON messages
if isinstance(response.message, ToolInvokeMessage.JsonMessage):
response_content += json.dumps(response.message.json_object, ensure_ascii=False, indent=2)
elif response.type == ToolInvokeMessage.MessageType.BLOB:
# Handle blob messages - convert to text representation
if isinstance(response.message, ToolInvokeMessage.BlobMessage):
mime_type = (
response.meta.get("mime_type", "application/octet-stream")
if response.meta
else "application/octet-stream"
)
size = len(response.message.blob)
response_content += f"[Binary data: {mime_type}, size: {size} bytes]"
elif response.type == ToolInvokeMessage.MessageType.VARIABLE:
# Handle variable messages
if isinstance(response.message, ToolInvokeMessage.VariableMessage):
var_name = response.message.variable_name
var_value = response.message.variable_value
if isinstance(var_value, str):
response_content += var_value
else:
response_content += f"[Variable {var_name}: {json.dumps(var_value, ensure_ascii=False)}]"
elif response.type == ToolInvokeMessage.MessageType.BLOB_CHUNK:
# Handle blob chunk messages - these are parts of a larger blob
if isinstance(response.message, ToolInvokeMessage.BlobChunkMessage):
response_content += f"[Blob chunk {response.message.sequence}: {len(response.message.blob)} bytes]"
elif response.type == ToolInvokeMessage.MessageType.RETRIEVER_RESOURCES:
# Handle retriever resources messages
if isinstance(response.message, ToolInvokeMessage.RetrieverResourceMessage):
response_content += response.message.context
elif response.type == ToolInvokeMessage.MessageType.FILE:
# Extract file from meta
if response.meta and "file" in response.meta:
file = response.meta["file"]
if isinstance(file, File):
# Check if file is for model or tool output
if response.meta.get("target") == "self":
# File is for model - add to files for next prompt
self.files.append(file)
response_content += f"File '{file.filename}' has been loaded into your context."
else:
# File is tool output
tool_files.append(file)
return response_content, tool_files, None
def _find_tool_by_name(self, tool_name: str) -> Tool | None:
"""Find a tool instance by its name."""
for tool in self.tools:
if tool.entity.identity.name == tool_name:
return tool
return None
def _convert_tools_to_prompt_format(self) -> list[PromptMessageTool]:
"""Convert tools to prompt message format."""
prompt_tools: list[PromptMessageTool] = []
for tool in self.tools:
prompt_tools.append(tool.to_prompt_message_tool())
return prompt_tools
def _update_usage_with_empty(self, llm_usage: dict[str, Any]) -> None:
"""Initialize usage tracking with empty usage if not set."""
if "usage" not in llm_usage or llm_usage["usage"] is None:
llm_usage["usage"] = LLMUsage.empty_usage()

View File

@@ -0,0 +1,296 @@
"""Function Call strategy implementation."""
import json
from collections.abc import Generator
from typing import Any, Union
from core.agent.entities import AgentLog, AgentResult
from core.file import File
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
LLMUsage,
PromptMessage,
PromptMessageTool,
ToolPromptMessage,
)
from core.tools.entities.tool_entities import ToolInvokeMeta
from .base import AgentPattern
class FunctionCallStrategy(AgentPattern):
"""Function Call strategy using model's native tool calling capability."""
def run(
self, prompt_messages: list[PromptMessage], model_parameters: dict[str, Any], stop: list[str] = [],
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the function call agent strategy."""
# Convert tools to prompt format
prompt_tools: list[PromptMessageTool] = self._convert_tools_to_prompt_format()
# Initialize tracking
iteration_step: int = 1
max_iterations: int = self.max_iterations + 1
function_call_state: bool = True
total_usage: dict[str, LLMUsage | None] = {"usage": None}
messages: list[PromptMessage] = list(prompt_messages) # Create mutable copy
final_text: str = ""
finish_reason: str | None = None
output_files: list[File] = [] # Track files produced by tools
while function_call_state and iteration_step <= max_iterations:
function_call_state = False
round_log = self._create_log(
label=f"ROUND {iteration_step}",
log_type=AgentLog.LogType.ROUND,
status=AgentLog.LogStatus.START,
data={},
)
yield round_log
# On last iteration, remove tools to force final answer
current_tools: list[PromptMessageTool] = [] if iteration_step == max_iterations else prompt_tools
model_log = self._create_log(
label=f"{self.model_instance.model} Thought",
log_type=AgentLog.LogType.THOUGHT,
status=AgentLog.LogStatus.START,
data={},
parent_id=round_log.id,
extra_metadata={
AgentLog.LogMetadata.PROVIDER: self.model_instance.provider,
},
)
yield model_log
# Track usage for this round only
round_usage: dict[str, LLMUsage | None] = {"usage": None}
# Invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = self.model_instance.invoke_llm(
prompt_messages=messages,
model_parameters=model_parameters,
tools=current_tools,
stop=stop,
stream=stream,
user=self.context.user_id,
callbacks=[],
)
# Process response
tool_calls, response_content, chunk_finish_reason = yield from self._handle_chunks(
chunks, round_usage, model_log
)
messages.append(self._create_assistant_message(response_content, tool_calls))
# Accumulate to total usage
round_usage_value = round_usage.get("usage")
if round_usage_value:
self._accumulate_usage(total_usage, round_usage_value)
# Update final text if no tool calls (this is likely the final answer)
if not tool_calls:
final_text = response_content
# Update finish reason
if chunk_finish_reason:
finish_reason = chunk_finish_reason
# Process tool calls
tool_outputs: dict[str, str] = {}
if tool_calls:
function_call_state = True
# Execute tools
for tool_call_id, tool_name, tool_args in tool_calls:
tool_response, tool_files, _ = yield from self._handle_tool_call(
tool_name, tool_args, tool_call_id, messages, round_log
)
tool_outputs[tool_name] = tool_response
# Track files produced by tools
output_files.extend(tool_files)
yield self._finish_log(
round_log,
data={
"llm_result": response_content,
"tool_calls": [
{"name": tc[1], "args": tc[2], "output": tool_outputs.get(tc[1], "")} for tc in tool_calls
]
if tool_calls
else [],
"final_answer": final_text if not function_call_state else None,
},
usage=round_usage.get("usage"),
)
iteration_step += 1
# Return final result
from core.agent.entities import AgentResult
return AgentResult(
text=final_text,
files=output_files,
usage=total_usage.get("usage") or LLMUsage.empty_usage(),
finish_reason=finish_reason,
)
def _handle_chunks(
self,
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult],
llm_usage: dict[str, LLMUsage | None],
start_log: AgentLog,
) -> Generator[
LLMResultChunk | AgentLog,
None,
tuple[list[tuple[str, str, dict[str, Any]]], str, str | None],
]:
"""Handle LLM response chunks and extract tool calls and content.
Returns a tuple of (tool_calls, response_content, finish_reason).
"""
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
response_content: str = ""
finish_reason: str | None = None
if isinstance(chunks, Generator):
# Streaming response
for chunk in chunks:
# Extract tool calls
if self._has_tool_calls(chunk):
tool_calls.extend(self._extract_tool_calls(chunk))
# Extract content
if chunk.delta.message and chunk.delta.message.content:
response_content += self._extract_content(chunk.delta.message.content)
# Track usage
if chunk.delta.usage:
self._accumulate_usage(llm_usage, chunk.delta.usage)
# Capture finish reason
if chunk.delta.finish_reason:
finish_reason = chunk.delta.finish_reason
yield chunk
else:
# Non-streaming response
result: LLMResult = chunks
if self._has_tool_calls_result(result):
tool_calls.extend(self._extract_tool_calls_result(result))
if result.message and result.message.content:
response_content += self._extract_content(result.message.content)
if result.usage:
self._accumulate_usage(llm_usage, result.usage)
# Convert to streaming format
yield LLMResultChunk(
model=result.model,
prompt_messages=result.prompt_messages,
delta=LLMResultChunkDelta(index=0, message=result.message, usage=result.usage),
)
yield self._finish_log(
start_log,
data={
"result": response_content,
},
usage=llm_usage.get("usage"),
)
return tool_calls, response_content, finish_reason
def _create_assistant_message(
self, content: str, tool_calls: list[tuple[str, str, dict[str, Any]]] | None = None
) -> AssistantPromptMessage:
"""Create assistant message with tool calls."""
if tool_calls is None:
return AssistantPromptMessage(content=content)
return AssistantPromptMessage(
content=content or "",
tool_calls=[
AssistantPromptMessage.ToolCall(
id=tc[0],
type="function",
function=AssistantPromptMessage.ToolCall.ToolCallFunction(name=tc[1], arguments=json.dumps(tc[2])),
)
for tc in tool_calls
],
)
def _handle_tool_call(
self,
tool_name: str,
tool_args: dict[str, Any],
tool_call_id: str,
messages: list[PromptMessage],
round_log: AgentLog,
) -> Generator[AgentLog, None, tuple[str, list[File], ToolInvokeMeta | None]]:
"""Handle a single tool call and return response with files and meta."""
# Find tool
tool_instance = self._find_tool_by_name(tool_name)
if not tool_instance:
raise ValueError(f"Tool {tool_name} not found")
# Get tool metadata (provider, icon, etc.)
tool_metadata = self._get_tool_metadata(tool_instance)
# Create tool call log
tool_call_log = self._create_log(
label=f"CALL {tool_name}",
log_type=AgentLog.LogType.TOOL_CALL,
status=AgentLog.LogStatus.START,
data={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_args": tool_args,
},
parent_id=round_log.id,
extra_metadata=tool_metadata,
)
yield tool_call_log
# Invoke tool using base class method with error handling
try:
response_content, tool_files, tool_invoke_meta = self._invoke_tool(tool_instance, tool_args, tool_name)
yield self._finish_log(
tool_call_log,
data={
**tool_call_log.data,
"output": response_content,
"files": len(tool_files),
"meta": tool_invoke_meta.to_dict() if tool_invoke_meta else None,
},
)
final_content = response_content or "Tool executed successfully"
# Add tool response to messages
messages.append(
ToolPromptMessage(
content=final_content,
tool_call_id=tool_call_id,
name=tool_name,
)
)
return response_content, tool_files, tool_invoke_meta
except Exception as e:
# Tool invocation failed, yield error log
error_message = str(e)
tool_call_log.status = AgentLog.LogStatus.ERROR
tool_call_log.error = error_message
tool_call_log.data = {
**tool_call_log.data,
"error": error_message,
}
yield tool_call_log
# Add error message to conversation
error_content = f"Tool execution failed: {error_message}"
messages.append(
ToolPromptMessage(
content=error_content,
tool_call_id=tool_call_id,
name=tool_name,
)
)
return error_content, [], None

View File

@@ -0,0 +1,415 @@
"""ReAct strategy implementation."""
from __future__ import annotations
import json
from collections.abc import Generator
from typing import TYPE_CHECKING, Any, Union
from core.agent.entities import AgentLog, AgentResult, AgentScratchpadUnit, ExecutionContext
from core.agent.output_parser.cot_output_parser import CotAgentOutputParser
from core.file import File
from core.model_manager import ModelInstance
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
PromptMessage,
SystemPromptMessage,
)
from .base import AgentPattern, ToolInvokeHook
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
class ReActStrategy(AgentPattern):
"""ReAct strategy using reasoning and acting approach."""
def __init__(
self,
model_instance: ModelInstance,
tools: list[Tool],
context: ExecutionContext,
max_iterations: int = 10,
workflow_call_depth: int = 0,
files: list[File] | None = None,
tool_invoke_hook: ToolInvokeHook | None = None,
instruction: str = "",
):
"""Initialize the ReAct strategy with instruction support."""
super().__init__(
model_instance=model_instance,
tools=tools,
context=context,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
files=files,
tool_invoke_hook=tool_invoke_hook,
)
self.instruction = instruction
def run(
self, prompt_messages: list[PromptMessage], model_parameters: dict[str, Any], stop: list[str] = [],
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the ReAct agent strategy."""
# Initialize tracking
agent_scratchpad: list[AgentScratchpadUnit] = []
iteration_step: int = 1
max_iterations: int = self.max_iterations + 1
react_state: bool = True
total_usage: dict[str, Any] = {"usage": None}
output_files: list[File] = [] # Track files produced by tools
final_text: str = ""
finish_reason: str | None = None
# Add "Observation" to stop sequences
if "Observation" not in stop:
stop = stop.copy()
stop.append("Observation")
while react_state and iteration_step <= max_iterations:
react_state = False
round_log = self._create_log(
label=f"ROUND {iteration_step}",
log_type=AgentLog.LogType.ROUND,
status=AgentLog.LogStatus.START,
data={},
)
yield round_log
# Build prompt with/without tools based on iteration
include_tools = iteration_step < max_iterations
current_messages = self._build_prompt_with_react_format(
prompt_messages, agent_scratchpad, include_tools, self.instruction
)
model_log = self._create_log(
label=f"{self.model_instance.model} Thought",
log_type=AgentLog.LogType.THOUGHT,
status=AgentLog.LogStatus.START,
data={},
parent_id=round_log.id,
extra_metadata={
AgentLog.LogMetadata.PROVIDER: self.model_instance.provider,
},
)
yield model_log
# Track usage for this round only
round_usage: dict[str, Any] = {"usage": None}
# Use current messages directly (files are handled by base class if needed)
messages_to_use = current_messages
# Invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = self.model_instance.invoke_llm(
prompt_messages=messages_to_use,
model_parameters=model_parameters,
stop=stop,
stream=stream,
user=self.context.user_id or "",
callbacks=[],
)
# Process response
scratchpad, chunk_finish_reason = yield from self._handle_chunks(
chunks, round_usage, model_log, current_messages
)
agent_scratchpad.append(scratchpad)
# Accumulate to total usage
round_usage_value = round_usage.get("usage")
if round_usage_value:
self._accumulate_usage(total_usage, round_usage_value)
# Update finish reason
if chunk_finish_reason:
finish_reason = chunk_finish_reason
# Check if we have an action to execute
if scratchpad.action and scratchpad.action.action_name.lower() != "final answer":
react_state = True
# Execute tool
observation, tool_files = yield from self._handle_tool_call(
scratchpad.action, current_messages, round_log
)
scratchpad.observation = observation
# Track files produced by tools
output_files.extend(tool_files)
# Add observation to scratchpad for display
yield self._create_text_chunk(f"\nObservation: {observation}\n", current_messages)
else:
# Extract final answer
if scratchpad.action and scratchpad.action.action_input:
final_answer = scratchpad.action.action_input
if isinstance(final_answer, dict):
final_answer = json.dumps(final_answer, ensure_ascii=False)
final_text = str(final_answer)
elif scratchpad.thought:
# If no action but we have thought, use thought as final answer
final_text = scratchpad.thought
yield self._finish_log(
round_log,
data={
"thought": scratchpad.thought,
"action": scratchpad.action_str if scratchpad.action else None,
"observation": scratchpad.observation or None,
"final_answer": final_text if not react_state else None,
},
usage=round_usage.get("usage"),
)
iteration_step += 1
# Return final result
from core.agent.entities import AgentResult
return AgentResult(
text=final_text, files=output_files, usage=total_usage.get("usage"), finish_reason=finish_reason
)
def _build_prompt_with_react_format(
self,
original_messages: list[PromptMessage],
agent_scratchpad: list[AgentScratchpadUnit],
include_tools: bool = True,
instruction: str = "",
) -> list[PromptMessage]:
"""Build prompt messages with ReAct format."""
# Copy messages to avoid modifying original
messages = list(original_messages)
# Find and update the system prompt that should already exist
system_prompt_found = False
for i, msg in enumerate(messages):
if isinstance(msg, SystemPromptMessage):
system_prompt_found = True
# The system prompt from frontend already has the template, just replace placeholders
# Format tools
tools_str = ""
tool_names = []
if include_tools and self.tools:
# Convert tools to prompt message tools format
prompt_tools = [tool.to_prompt_message_tool() for tool in self.tools]
tool_names = [tool.name for tool in prompt_tools]
# Format tools as JSON for comprehensive information
from core.model_runtime.utils.encoders import jsonable_encoder
tools_str = json.dumps(jsonable_encoder(prompt_tools), indent=2)
tool_names_str = ", ".join(f'"{name}"' for name in tool_names)
else:
tools_str = "No tools available"
tool_names_str = ""
# Replace placeholders in the existing system prompt
updated_content = msg.content
assert isinstance(updated_content, str)
updated_content = updated_content.replace("{{instruction}}", instruction)
updated_content = updated_content.replace("{{tools}}", tools_str)
updated_content = updated_content.replace("{{tool_names}}", tool_names_str)
# Create new SystemPromptMessage with updated content
messages[i] = SystemPromptMessage(content=updated_content)
break
# If no system prompt found, that's unexpected but add scratchpad anyway
if not system_prompt_found:
# This shouldn't happen if frontend is working correctly
pass
# Format agent scratchpad
scratchpad_str = ""
if agent_scratchpad:
scratchpad_parts: list[str] = []
for unit in agent_scratchpad:
if unit.thought:
scratchpad_parts.append(f"Thought: {unit.thought}")
if unit.action_str:
scratchpad_parts.append(f"Action:\n```\n{unit.action_str}\n```")
if unit.observation:
scratchpad_parts.append(f"Observation: {unit.observation}")
scratchpad_str = "\n".join(scratchpad_parts)
# If there's a scratchpad, append it to the last message
if scratchpad_str:
messages.append(AssistantPromptMessage(content=scratchpad_str))
return messages
def _handle_chunks(
self,
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult],
llm_usage: dict[str, Any],
model_log: AgentLog,
current_messages: list[PromptMessage],
) -> Generator[
LLMResultChunk | AgentLog,
None,
tuple[AgentScratchpadUnit, str | None],
]:
"""Handle LLM response chunks and extract action/thought.
Returns a tuple of (scratchpad_unit, finish_reason).
"""
usage_dict: dict[str, Any] = {}
# Convert non-streaming to streaming format if needed
if isinstance(chunks, LLMResult):
# Create a generator from the LLMResult
def result_to_chunks() -> Generator[LLMResultChunk, None, None]:
yield LLMResultChunk(
model=chunks.model,
prompt_messages=chunks.prompt_messages,
delta=LLMResultChunkDelta(
index=0,
message=chunks.message,
usage=chunks.usage,
finish_reason=None, # LLMResult doesn't have finish_reason, only streaming chunks do
),
system_fingerprint=chunks.system_fingerprint or "",
)
streaming_chunks = result_to_chunks()
else:
streaming_chunks = chunks
react_chunks = CotAgentOutputParser.handle_react_stream_output(streaming_chunks, usage_dict)
# Initialize scratchpad unit
scratchpad = AgentScratchpadUnit(
agent_response="",
thought="",
action_str="",
observation="",
action=None,
)
finish_reason: str | None = None
# Process chunks
for chunk in react_chunks:
if isinstance(chunk, AgentScratchpadUnit.Action):
# Action detected
action_str = json.dumps(chunk.model_dump())
scratchpad.agent_response = (scratchpad.agent_response or "") + action_str
scratchpad.action_str = action_str
scratchpad.action = chunk
yield self._create_text_chunk(json.dumps(chunk.model_dump()), current_messages)
else:
# Text chunk
chunk_text = str(chunk)
scratchpad.agent_response = (scratchpad.agent_response or "") + chunk_text
scratchpad.thought = (scratchpad.thought or "") + chunk_text
yield self._create_text_chunk(chunk_text, current_messages)
# Update usage
if usage_dict.get("usage"):
if llm_usage.get("usage"):
self._accumulate_usage(llm_usage, usage_dict["usage"])
else:
llm_usage["usage"] = usage_dict["usage"]
# Clean up thought
scratchpad.thought = (scratchpad.thought or "").strip() or "I am thinking about how to help you"
# Finish model log
yield self._finish_log(
model_log,
data={
"thought": scratchpad.thought,
"action": scratchpad.action_str if scratchpad.action else None,
},
usage=llm_usage.get("usage"),
)
return scratchpad, finish_reason
def _handle_tool_call(
self,
action: AgentScratchpadUnit.Action,
prompt_messages: list[PromptMessage],
round_log: AgentLog,
) -> Generator[AgentLog, None, tuple[str, list[File]]]:
"""Handle tool call and return observation with files."""
tool_name = action.action_name
tool_args: dict[str, Any] | str = action.action_input
# Find tool instance first to get metadata
tool_instance = self._find_tool_by_name(tool_name)
tool_metadata = self._get_tool_metadata(tool_instance) if tool_instance else {}
# Start tool log with tool metadata
tool_log = self._create_log(
label=f"CALL {tool_name}",
log_type=AgentLog.LogType.TOOL_CALL,
status=AgentLog.LogStatus.START,
data={
"tool_name": tool_name,
"tool_args": tool_args,
},
parent_id=round_log.id,
extra_metadata=tool_metadata,
)
yield tool_log
if not tool_instance:
# Finish tool log with error
yield self._finish_log(
tool_log,
data={
**tool_log.data,
"error": f"Tool {tool_name} not found",
},
)
return f"Tool {tool_name} not found", []
# Ensure tool_args is a dict
tool_args_dict: dict[str, Any]
if isinstance(tool_args, str):
try:
tool_args_dict = json.loads(tool_args)
except json.JSONDecodeError:
tool_args_dict = {"input": tool_args}
elif not isinstance(tool_args, dict):
tool_args_dict = {"input": str(tool_args)}
else:
tool_args_dict = tool_args
# Invoke tool using base class method with error handling
try:
response_content, tool_files, tool_invoke_meta = self._invoke_tool(tool_instance, tool_args_dict, tool_name)
# Finish tool log
yield self._finish_log(
tool_log,
data={
**tool_log.data,
"output": response_content,
"files": len(tool_files),
"meta": tool_invoke_meta.to_dict() if tool_invoke_meta else None,
},
)
return response_content or "Tool executed successfully", tool_files
except Exception as e:
# Tool invocation failed, yield error log
error_message = str(e)
tool_log.status = AgentLog.LogStatus.ERROR
tool_log.error = error_message
tool_log.data = {
**tool_log.data,
"error": error_message,
}
yield tool_log
return f"Tool execution failed: {error_message}", []

View File

@@ -0,0 +1,108 @@
"""Strategy factory for creating agent strategies."""
from __future__ import annotations
from typing import TYPE_CHECKING
from core.agent.entities import AgentEntity, ExecutionContext
from core.file.models import File
from core.model_manager import ModelInstance
from core.model_runtime.entities.model_entities import ModelFeature
from .base import AgentPattern, ToolInvokeHook
from .function_call import FunctionCallStrategy
from .react import ReActStrategy
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
class StrategyFactory:
"""Factory for creating agent strategies based on model features."""
# Tool calling related features
TOOL_CALL_FEATURES = {ModelFeature.TOOL_CALL, ModelFeature.MULTI_TOOL_CALL, ModelFeature.STREAM_TOOL_CALL}
@staticmethod
def create_strategy(
model_features: list[ModelFeature],
model_instance: ModelInstance,
context: ExecutionContext,
tools: list[Tool],
files: list[File],
max_iterations: int = 10,
workflow_call_depth: int = 0,
agent_strategy: AgentEntity.Strategy | None = None,
tool_invoke_hook: ToolInvokeHook | None = None,
instruction: str = "",
) -> AgentPattern:
"""
Create an appropriate strategy based on model features.
Args:
model_features: List of model features/capabilities
model_instance: Model instance to use
context: Execution context containing trace/audit information
tools: Available tools
files: Available files
max_iterations: Maximum iterations for the strategy
workflow_call_depth: Depth of workflow calls
agent_strategy: Optional explicit strategy override
tool_invoke_hook: Optional hook for custom tool invocation (e.g., agent_invoke)
instruction: Optional instruction for ReAct strategy
Returns:
AgentPattern instance (a `FunctionCallStrategy` or `ReActStrategy`)
"""
# If explicit strategy is provided and it's Function Calling, try to use it if supported
if agent_strategy == AgentEntity.Strategy.FUNCTION_CALLING:
if set(model_features) & StrategyFactory.TOOL_CALL_FEATURES:
return FunctionCallStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
)
# Fallback to ReAct if FC is requested but not supported
# If explicit strategy is Chain of Thought (ReAct)
if agent_strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT:
return ReActStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)
# Default auto-selection logic
if set(model_features) & StrategyFactory.TOOL_CALL_FEATURES:
# Model supports native function calling
return FunctionCallStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
)
else:
# Use ReAct strategy for models without function calling
return ReActStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)

View File

@@ -4,8 +4,8 @@ import contextvars
import logging
import threading
import uuid
from collections.abc import Generator, Mapping
from typing import TYPE_CHECKING, Any, Literal, Union, overload
from collections.abc import Generator, Mapping, Sequence
from typing import TYPE_CHECKING, Any, Literal, TypeVar, Union, overload
from flask import Flask, current_app
from pydantic import ValidationError
@@ -29,23 +29,32 @@ from core.app.apps.message_based_app_generator import MessageBasedAppGenerator
from core.app.apps.message_based_app_queue_manager import MessageBasedAppQueueManager
from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom
from core.app.entities.task_entities import ChatbotAppBlockingResponse, ChatbotAppStreamResponse
from core.app.layers.pause_state_persist_layer import PauseStateLayerConfig, PauseStatePersistenceLayer
from core.app.layers.sandbox_layer import SandboxLayer
from core.helper.trace_id_helper import extract_external_trace_id_from_args
from core.model_runtime.errors.invoke import InvokeAuthorizationError
from core.ops.ops_trace_manager import TraceQueueManager
from core.prompt.utils.get_thread_messages_length import get_thread_messages_length
from core.repositories import DifyCoreRepositoryFactory
from core.sandbox import Sandbox
from core.workflow.graph_engine.layers.base import GraphEngineLayer
from core.workflow.repositories.draft_variable_repository import (
DraftVariableSaverFactory,
)
from core.workflow.repositories.workflow_execution_repository import WorkflowExecutionRepository
from core.workflow.repositories.workflow_node_execution_repository import WorkflowNodeExecutionRepository
from core.workflow.runtime import GraphRuntimeState
from core.workflow.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader
from extensions.ext_database import db
from factories import file_factory
from libs.flask_utils import preserve_flask_contexts
from models import Account, App, Conversation, EndUser, Message, Workflow, WorkflowNodeExecutionTriggeredFrom
from models.base import Base
from models.enums import WorkflowRunTriggeredFrom
from models.workflow_features import WorkflowFeatures
from services.conversation_service import ConversationService
from services.sandbox.sandbox_provider_service import SandboxProviderService
from services.sandbox.sandbox_service import SandboxService
from services.workflow_draft_variable_service import (
DraftVarLoader,
WorkflowDraftVariableService,
@@ -65,7 +74,9 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
user: Union[Account, EndUser],
args: Mapping[str, Any],
invoke_from: InvokeFrom,
workflow_run_id: str,
streaming: Literal[False],
pause_state_config: PauseStateLayerConfig | None = None,
) -> Mapping[str, Any]: ...
@overload
@@ -74,9 +85,11 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
args: Mapping,
args: Mapping[str, Any],
invoke_from: InvokeFrom,
workflow_run_id: str,
streaming: Literal[True],
pause_state_config: PauseStateLayerConfig | None = None,
) -> Generator[Mapping | str, None, None]: ...
@overload
@@ -85,9 +98,11 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
args: Mapping,
args: Mapping[str, Any],
invoke_from: InvokeFrom,
workflow_run_id: str,
streaming: bool,
pause_state_config: PauseStateLayerConfig | None = None,
) -> Mapping[str, Any] | Generator[str | Mapping, None, None]: ...
def generate(
@@ -95,9 +110,11 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
args: Mapping,
args: Mapping[str, Any],
invoke_from: InvokeFrom,
workflow_run_id: str,
streaming: bool = True,
pause_state_config: PauseStateLayerConfig | None = None,
) -> Mapping[str, Any] | Generator[str | Mapping, None, None]:
"""
Generate App response.
@@ -161,7 +178,6 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
# always enable retriever resource in debugger mode
app_config.additional_features.show_retrieve_source = True # type: ignore
workflow_run_id = str(uuid.uuid4())
# init application generate entity
application_generate_entity = AdvancedChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
@@ -179,7 +195,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
invoke_from=invoke_from,
extras=extras,
trace_manager=trace_manager,
workflow_run_id=workflow_run_id,
workflow_run_id=str(workflow_run_id),
)
contexts.plugin_tool_providers.set({})
contexts.plugin_tool_providers_lock.set(threading.Lock())
@@ -216,6 +232,38 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
workflow_node_execution_repository=workflow_node_execution_repository,
conversation=conversation,
stream=streaming,
pause_state_config=pause_state_config,
)
def resume(
self,
*,
app_model: App,
workflow: Workflow,
user: Union[Account, EndUser],
conversation: Conversation,
message: Message,
application_generate_entity: AdvancedChatAppGenerateEntity,
workflow_execution_repository: WorkflowExecutionRepository,
workflow_node_execution_repository: WorkflowNodeExecutionRepository,
graph_runtime_state: GraphRuntimeState,
pause_state_config: PauseStateLayerConfig | None = None,
):
"""
Resume a paused advanced chat execution.
"""
return self._generate(
workflow=workflow,
user=user,
invoke_from=application_generate_entity.invoke_from,
application_generate_entity=application_generate_entity,
workflow_execution_repository=workflow_execution_repository,
workflow_node_execution_repository=workflow_node_execution_repository,
conversation=conversation,
message=message,
stream=application_generate_entity.stream,
pause_state_config=pause_state_config,
graph_runtime_state=graph_runtime_state,
)
def single_iteration_generate(
@@ -396,8 +444,12 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
workflow_execution_repository: WorkflowExecutionRepository,
workflow_node_execution_repository: WorkflowNodeExecutionRepository,
conversation: Conversation | None = None,
message: Message | None = None,
stream: bool = True,
variable_loader: VariableLoader = DUMMY_VARIABLE_LOADER,
pause_state_config: PauseStateLayerConfig | None = None,
graph_runtime_state: GraphRuntimeState | None = None,
graph_engine_layers: Sequence[GraphEngineLayer] = (),
) -> Mapping[str, Any] | Generator[str | Mapping[str, Any], Any, None]:
"""
Generate App response.
@@ -411,12 +463,12 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
:param conversation: conversation
:param stream: is stream
"""
is_first_conversation = False
if not conversation:
is_first_conversation = True
is_first_conversation = conversation is None
# init generate records
(conversation, message) = self._init_generate_records(application_generate_entity, conversation)
if conversation is not None and message is not None:
pass
else:
conversation, message = self._init_generate_records(application_generate_entity, conversation)
if is_first_conversation:
# update conversation features
@@ -439,6 +491,16 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
message_id=message.id,
)
graph_layers: list[GraphEngineLayer] = list(graph_engine_layers)
if pause_state_config is not None:
graph_layers.append(
PauseStatePersistenceLayer(
session_factory=pause_state_config.session_factory,
generate_entity=application_generate_entity,
state_owner_user_id=pause_state_config.state_owner_user_id,
)
)
# new thread with request context and contextvars
context = contextvars.copy_context()
@@ -454,14 +516,25 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
"variable_loader": variable_loader,
"workflow_execution_repository": workflow_execution_repository,
"workflow_node_execution_repository": workflow_node_execution_repository,
"graph_engine_layers": tuple(graph_layers),
"graph_runtime_state": graph_runtime_state,
},
)
worker_thread.start()
# release database connection, because the following new thread operations may take a long time
db.session.refresh(workflow)
db.session.refresh(message)
with Session(bind=db.engine, expire_on_commit=False) as session:
workflow = _refresh_model(session, workflow)
message = _refresh_model(session, message)
db.session.close()
@@ -490,6 +563,8 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
variable_loader: VariableLoader,
workflow_execution_repository: WorkflowExecutionRepository,
workflow_node_execution_repository: WorkflowNodeExecutionRepository,
graph_engine_layers: Sequence[GraphEngineLayer] = (),
graph_runtime_state: GraphRuntimeState | None = None,
):
"""
Generate worker in a new thread.
@@ -517,6 +592,29 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
if workflow is None:
raise ValueError("Workflow not found")
sandbox: Sandbox | None = None
graph_engine_layers: tuple = ()
if workflow.get_feature(WorkflowFeatures.SANDBOX).enabled:
sandbox_provider = SandboxProviderService.get_sandbox_provider(
application_generate_entity.app_config.tenant_id
)
if workflow.version == Workflow.VERSION_DRAFT:
sandbox = SandboxService.create_draft(
tenant_id=application_generate_entity.app_config.tenant_id,
app_id=application_generate_entity.app_config.app_id,
user_id=application_generate_entity.user_id,
sandbox_provider=sandbox_provider,
)
else:
sandbox = SandboxService.create(
tenant_id=application_generate_entity.app_config.tenant_id,
app_id=application_generate_entity.app_config.app_id,
user_id=application_generate_entity.user_id,
sandbox_id=conversation_id,
sandbox_provider=sandbox_provider,
)
graph_engine_layers = (SandboxLayer(sandbox=sandbox),)
# Determine system_user_id based on invocation source
is_external_api_call = application_generate_entity.invoke_from in {
InvokeFrom.WEB_APP,
@@ -547,6 +645,9 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app=app,
workflow_execution_repository=workflow_execution_repository,
workflow_node_execution_repository=workflow_node_execution_repository,
graph_engine_layers=graph_engine_layers,
sandbox=sandbox,
graph_runtime_state=graph_runtime_state,
)
try:
@@ -614,3 +715,13 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
else:
logger.exception("Failed to process generate task pipeline, conversation_id: %s", conversation.id)
raise e
_T = TypeVar("_T", bound=Base)
def _refresh_model(session: Session, model: _T) -> _T:
# Re-fetch a detached copy via the caller's session instead of shadowing it with a new one
detached_model = session.get(type(model), model.id)
assert detached_model is not None
return detached_model

View File

@@ -25,6 +25,7 @@ from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, Workfl
from core.db.session_factory import session_factory
from core.moderation.base import ModerationError
from core.moderation.input_moderation import InputModeration
from core.sandbox import Sandbox
from core.variables.variables import Variable
from core.workflow.enums import WorkflowType
from core.workflow.graph_engine.command_channels.redis_channel import RedisChannel
@@ -66,6 +67,8 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
workflow_execution_repository: WorkflowExecutionRepository,
workflow_node_execution_repository: WorkflowNodeExecutionRepository,
graph_engine_layers: Sequence[GraphEngineLayer] = (),
sandbox: Sandbox | None = None,
graph_runtime_state: GraphRuntimeState | None = None,
):
super().__init__(
queue_manager=queue_manager,
@@ -82,6 +85,8 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
self._app = app
self._workflow_execution_repository = workflow_execution_repository
self._workflow_node_execution_repository = workflow_node_execution_repository
self._sandbox = sandbox
self._resume_graph_runtime_state = graph_runtime_state
@trace_span(WorkflowAppRunnerHandler)
def run(self):
@@ -110,7 +115,21 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
invoke_from = InvokeFrom.DEBUGGER
user_from = self._resolve_user_from(invoke_from)
if self.application_generate_entity.single_iteration_run or self.application_generate_entity.single_loop_run:
resume_state = self._resume_graph_runtime_state
if resume_state is not None:
graph_runtime_state = resume_state
variable_pool = graph_runtime_state.variable_pool
graph = self._init_graph(
graph_config=self._workflow.graph_dict,
graph_runtime_state=graph_runtime_state,
workflow_id=self._workflow.id,
tenant_id=self._workflow.tenant_id,
user_id=self.application_generate_entity.user_id,
invoke_from=invoke_from,
user_from=user_from,
)
elif self.application_generate_entity.single_iteration_run or self.application_generate_entity.single_loop_run:
# Handle single iteration or single loop run
graph, variable_pool, graph_runtime_state = self._prepare_single_node_execution(
workflow=self._workflow,
@@ -156,6 +175,10 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
# init graph
graph_runtime_state = GraphRuntimeState(variable_pool=variable_pool, start_at=time.time())
if self._sandbox:
graph_runtime_state.set_sandbox(self._sandbox)
graph = self._init_graph(
graph_config=self._workflow.graph_dict,
graph_runtime_state=graph_runtime_state,

View File

@@ -82,7 +82,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk
@classmethod
@@ -110,7 +110,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
}
if isinstance(sub_stream_response, MessageEndStreamResponse):
sub_stream_response_dict = sub_stream_response.model_dump(mode="json")
sub_stream_response_dict = sub_stream_response.model_dump(mode="json", exclude_none=True)
metadata = sub_stream_response_dict.get("metadata", {})
sub_stream_response_dict["metadata"] = cls._get_simple_metadata(metadata)
response_chunk.update(sub_stream_response_dict)
@@ -120,6 +120,6 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
elif isinstance(sub_stream_response, NodeStartStreamResponse | NodeFinishStreamResponse):
response_chunk.update(sub_stream_response.to_ignore_detail_dict())
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk
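The switch to exclude_none=True above drops null-valued fields from each streamed chunk before it is serialized. A minimal Pydantic sketch of the effect (the Chunk model is illustrative, not the real stream response class):

from pydantic import BaseModel


class Chunk(BaseModel):
    event: str
    answer: str | None = None
    tool_call_id: str | None = None


chunk = Chunk(event="message", answer="hi")
print(chunk.model_dump(mode="json"))
# {'event': 'message', 'answer': 'hi', 'tool_call_id': None}
print(chunk.model_dump(mode="json", exclude_none=True))
# {'event': 'message', 'answer': 'hi'}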

View File

@@ -4,6 +4,7 @@ import re
import time
from collections.abc import Callable, Generator, Mapping
from contextlib import contextmanager
from dataclasses import dataclass, field
from threading import Thread
from typing import Any, Union
@@ -19,11 +20,14 @@ from core.app.entities.app_invoke_entities import (
InvokeFrom,
)
from core.app.entities.queue_entities import (
ChunkType,
MessageQueueMessage,
QueueAdvancedChatMessageEndEvent,
QueueAgentLogEvent,
QueueAnnotationReplyEvent,
QueueErrorEvent,
QueueHumanInputFormFilledEvent,
QueueHumanInputFormTimeoutEvent,
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
@@ -42,6 +46,7 @@ from core.app.entities.queue_entities import (
QueueTextChunkEvent,
QueueWorkflowFailedEvent,
QueueWorkflowPartialSuccessEvent,
QueueWorkflowPausedEvent,
QueueWorkflowStartedEvent,
QueueWorkflowSucceededEvent,
WorkflowQueueMessage,
@@ -63,6 +68,8 @@ from core.base.tts import AppGeneratorTTSPublisher, AudioTrunk
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
from core.ops.ops_trace_manager import TraceQueueManager
from core.repositories.human_input_repository import HumanInputFormRepositoryImpl
from core.workflow.entities.pause_reason import HumanInputRequired
from core.workflow.enums import WorkflowExecutionStatus
from core.workflow.nodes import NodeType
from core.workflow.repositories.draft_variable_repository import DraftVariableSaverFactory
@@ -70,13 +77,135 @@ from core.workflow.runtime import GraphRuntimeState
from core.workflow.system_variable import SystemVariable
from extensions.ext_database import db
from libs.datetime_utils import naive_utc_now
from models import Account, Conversation, EndUser, Message, MessageFile
from models.enums import CreatorUserRole
from models import Account, Conversation, EndUser, LLMGenerationDetail, Message, MessageFile
from models.enums import CreatorUserRole, MessageStatus
from models.execution_extra_content import HumanInputContent
from models.workflow import Workflow
logger = logging.getLogger(__name__)
@dataclass
class StreamEventBuffer:
"""
Buffer for recording stream events in order to reconstruct the generation sequence.
Records the exact order of text chunks, thoughts, and tool calls as they stream.
"""
# Accumulated reasoning content (each thought block is a separate element)
reasoning_content: list[str] = field(default_factory=list)
# Current reasoning buffer (accumulates until we see a different event type)
_current_reasoning: str = ""
# Tool calls with their details
tool_calls: list[dict] = field(default_factory=list)
# Tool call ID to index mapping for updating results
_tool_call_id_map: dict[str, int] = field(default_factory=dict)
# Sequence of events in stream order
sequence: list[dict] = field(default_factory=list)
# Current position in answer text
_content_position: int = 0
# Track last event type to detect transitions
_last_event_type: str | None = None
def _flush_current_reasoning(self) -> None:
"""Flush accumulated reasoning to the list and add to sequence."""
if self._current_reasoning.strip():
self.reasoning_content.append(self._current_reasoning.strip())
self.sequence.append({"type": "reasoning", "index": len(self.reasoning_content) - 1})
self._current_reasoning = ""
def record_text_chunk(self, text: str) -> None:
"""Record a text chunk event."""
if not text:
return
# Flush any pending reasoning first
if self._last_event_type == "thought":
self._flush_current_reasoning()
text_len = len(text)
start_pos = self._content_position
# If last event was also content, extend it; otherwise create new
if self.sequence and self.sequence[-1].get("type") == "content":
self.sequence[-1]["end"] = start_pos + text_len
else:
self.sequence.append({"type": "content", "start": start_pos, "end": start_pos + text_len})
self._content_position += text_len
self._last_event_type = "content"
def record_thought_chunk(self, text: str) -> None:
"""Record a thought/reasoning chunk event."""
if not text:
return
# Accumulate thought content
self._current_reasoning += text
self._last_event_type = "thought"
def record_tool_call(
self,
tool_call_id: str,
tool_name: str,
tool_arguments: str,
tool_icon: str | dict | None = None,
tool_icon_dark: str | dict | None = None,
) -> None:
"""Record a tool call event."""
if not tool_call_id:
return
# Flush any pending reasoning first
if self._last_event_type == "thought":
self._flush_current_reasoning()
# Check if this tool call already exists (we might get multiple chunks)
if tool_call_id in self._tool_call_id_map:
idx = self._tool_call_id_map[tool_call_id]
# Update arguments if provided
if tool_arguments:
self.tool_calls[idx]["arguments"] = tool_arguments
else:
# New tool call
tool_call = {
"id": tool_call_id or "",
"name": tool_name or "",
"arguments": tool_arguments or "",
"result": "",
"elapsed_time": None,
"icon": tool_icon,
"icon_dark": tool_icon_dark,
}
self.tool_calls.append(tool_call)
idx = len(self.tool_calls) - 1
self._tool_call_id_map[tool_call_id] = idx
self.sequence.append({"type": "tool_call", "index": idx})
self._last_event_type = "tool_call"
def record_tool_result(self, tool_call_id: str, result: str, tool_elapsed_time: float | None = None) -> None:
"""Record a tool result event (update existing tool call)."""
if not tool_call_id:
return
if tool_call_id in self._tool_call_id_map:
idx = self._tool_call_id_map[tool_call_id]
self.tool_calls[idx]["result"] = result
self.tool_calls[idx]["elapsed_time"] = tool_elapsed_time
# Remove from map after result is recorded, so that subsequent calls
# with the same tool_call_id are treated as new tool calls
del self._tool_call_id_map[tool_call_id]
def finalize(self) -> None:
"""Finalize the buffer, flushing any pending data."""
if self._last_event_type == "thought":
self._flush_current_reasoning()
def has_data(self) -> bool:
"""Check if there's any meaningful data recorded."""
return bool(self.reasoning_content or self.tool_calls or self.sequence)
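A usage sketch of StreamEventBuffer with made-up chunks, showing how interleaved thought, text, tool-call, and tool-result events collapse into an ordered sequence:

buf = StreamEventBuffer()
buf.record_thought_chunk("Let me check the weather. ")  # buffered as reasoning
buf.record_text_chunk("Sure, ")                          # flushes reasoning, starts a content span
buf.record_tool_call(tool_call_id="call_1", tool_name="weather",
                     tool_arguments='{"city": "Paris"}')
buf.record_tool_result(tool_call_id="call_1", result="18°C, cloudy",
                       tool_elapsed_time=0.4)
buf.record_text_chunk("it is 18°C in Paris.")            # new content span after the tool call
buf.finalize()

# buf.reasoning_content == ["Let me check the weather."]
# buf.tool_calls[0]["result"] == "18°C, cloudy"
# buf.sequence == [{"type": "reasoning", "index": 0},
#                  {"type": "content", "start": 0, "end": 6},
#                  {"type": "tool_call", "index": 0},
#                  {"type": "content", "start": 6, "end": 26}]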
class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
"""
AdvancedChatAppGenerateTaskPipeline generates streaming output and manages state for the application.
@@ -128,6 +257,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
)
self._task_state = WorkflowTaskState()
self._seed_task_state_from_message(message)
self._message_cycle_manager = MessageCycleManager(
application_generate_entity=application_generate_entity, task_state=self._task_state
)
@@ -135,6 +265,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
self._application_generate_entity = application_generate_entity
self._workflow_id = workflow.id
self._workflow_features_dict = workflow.features_dict
self._workflow_tenant_id = workflow.tenant_id
self._conversation_id = conversation.id
self._conversation_mode = conversation.mode
self._message_id = message.id
@@ -144,8 +275,15 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
self._workflow_run_id: str = ""
self._draft_var_saver_factory = draft_var_saver_factory
self._graph_runtime_state: GraphRuntimeState | None = None
# Stream event buffer for recording generation sequence
self._stream_buffer = StreamEventBuffer()
self._message_saved_on_pause = False
self._seed_graph_runtime_state_from_queue_manager()
def _seed_task_state_from_message(self, message: Message) -> None:
if message.status == MessageStatus.PAUSED and message.answer:
self._task_state.answer = message.answer
def process(self) -> Union[ChatbotAppBlockingResponse, Generator[ChatbotAppStreamResponse, None, None]]:
"""
Process generate task pipeline.
@@ -308,6 +446,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
task_id=self._application_generate_entity.task_id,
workflow_run_id=run_id,
workflow_id=self._workflow_id,
reason=event.reason,
)
yield workflow_start_resp
@@ -383,7 +522,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
queue_message: Union[WorkflowQueueMessage, MessageQueueMessage] | None = None,
**kwargs,
) -> Generator[StreamResponse, None, None]:
"""Handle text chunk events."""
"""Handle text chunk events and record to stream buffer for sequence reconstruction."""
delta_text = event.text
if delta_text is None:
return
@@ -405,9 +544,53 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
if tts_publisher and queue_message:
tts_publisher.publish(queue_message)
self._task_state.answer += delta_text
tool_call = event.tool_call
tool_result = event.tool_result
tool_payload = tool_call or tool_result
tool_call_id = tool_payload.id if tool_payload and tool_payload.id else ""
tool_name = tool_payload.name if tool_payload and tool_payload.name else ""
tool_arguments = tool_call.arguments if tool_call and tool_call.arguments else ""
tool_files = tool_result.files if tool_result else []
tool_elapsed_time = tool_result.elapsed_time if tool_result else None
tool_icon = tool_payload.icon if tool_payload else None
tool_icon_dark = tool_payload.icon_dark if tool_payload else None
# Record stream event based on chunk type
chunk_type = event.chunk_type or ChunkType.TEXT
match chunk_type:
case ChunkType.TEXT:
self._stream_buffer.record_text_chunk(delta_text)
self._task_state.answer += delta_text
case ChunkType.THOUGHT:
# Reasoning should not be part of final answer text
self._stream_buffer.record_thought_chunk(delta_text)
case ChunkType.TOOL_CALL:
self._stream_buffer.record_tool_call(
tool_call_id=tool_call_id,
tool_name=tool_name,
tool_arguments=tool_arguments,
tool_icon=tool_icon,
tool_icon_dark=tool_icon_dark,
)
case ChunkType.TOOL_RESULT:
self._stream_buffer.record_tool_result(
tool_call_id=tool_call_id,
result=delta_text,
tool_elapsed_time=tool_elapsed_time,
)
case _:
pass
yield self._message_cycle_manager.message_to_stream_response(
answer=delta_text, message_id=self._message_id, from_variable_selector=event.from_variable_selector
answer=delta_text,
message_id=self._message_id,
from_variable_selector=event.from_variable_selector,
chunk_type=event.chunk_type.value if event.chunk_type else None,
tool_call_id=tool_call_id or None,
tool_name=tool_name or None,
tool_arguments=tool_arguments or None,
tool_files=tool_files,
tool_elapsed_time=tool_elapsed_time,
tool_icon=tool_icon,
tool_icon_dark=tool_icon_dark,
)
def _handle_iteration_start_event(
@@ -525,6 +708,35 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
)
yield workflow_finish_resp
def _handle_workflow_paused_event(
self,
event: QueueWorkflowPausedEvent,
**kwargs,
) -> Generator[StreamResponse, None, None]:
"""Handle workflow paused events."""
validated_state = self._ensure_graph_runtime_initialized()
responses = self._workflow_response_converter.workflow_pause_to_stream_response(
event=event,
task_id=self._application_generate_entity.task_id,
graph_runtime_state=validated_state,
)
for reason in event.reasons:
if isinstance(reason, HumanInputRequired):
self._persist_human_input_extra_content(form_id=reason.form_id, node_id=reason.node_id)
yield from responses
resolved_state: GraphRuntimeState | None = None
try:
resolved_state = self._ensure_graph_runtime_initialized()
except ValueError:
resolved_state = None
with self._database_session() as session:
self._save_message(session=session, graph_runtime_state=resolved_state)
message = self._get_message(session=session)
if message is not None:
message.status = MessageStatus.PAUSED
self._message_saved_on_pause = True
self._base_task_pipeline.queue_manager.publish(QueueAdvancedChatMessageEndEvent(), PublishFrom.TASK_PIPELINE)
def _handle_workflow_failed_event(
@@ -614,9 +826,10 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
reason=QueueMessageReplaceEvent.MessageReplaceReason.OUTPUT_MODERATION,
)
# Save message
with self._database_session() as session:
self._save_message(session=session, graph_runtime_state=resolved_state)
# Save message unless it has already been persisted on pause.
if not self._message_saved_on_pause:
with self._database_session() as session:
self._save_message(session=session, graph_runtime_state=resolved_state)
yield self._message_end_to_stream_response()
@@ -642,6 +855,65 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
"""Handle message replace events."""
yield self._message_cycle_manager.message_replace_to_stream_response(answer=event.text, reason=event.reason)
def _handle_human_input_form_filled_event(
self, event: QueueHumanInputFormFilledEvent, **kwargs
) -> Generator[StreamResponse, None, None]:
"""Handle human input form filled events."""
self._persist_human_input_extra_content(node_id=event.node_id)
yield self._workflow_response_converter.human_input_form_filled_to_stream_response(
event=event, task_id=self._application_generate_entity.task_id
)
def _handle_human_input_form_timeout_event(
self, event: QueueHumanInputFormTimeoutEvent, **kwargs
) -> Generator[StreamResponse, None, None]:
"""Handle human input form timeout events."""
yield self._workflow_response_converter.human_input_form_timeout_to_stream_response(
event=event, task_id=self._application_generate_entity.task_id
)
def _persist_human_input_extra_content(self, *, node_id: str | None = None, form_id: str | None = None) -> None:
if not self._workflow_run_id or not self._message_id:
return
if form_id is None:
if node_id is None:
return
form_id = self._load_human_input_form_id(node_id=node_id)
if form_id is None:
logger.warning(
"HumanInput form not found for workflow run %s node %s",
self._workflow_run_id,
node_id,
)
return
with self._database_session() as session:
exists_stmt = select(HumanInputContent).where(
HumanInputContent.workflow_run_id == self._workflow_run_id,
HumanInputContent.message_id == self._message_id,
HumanInputContent.form_id == form_id,
)
if session.scalar(exists_stmt) is not None:
return
content = HumanInputContent(
workflow_run_id=self._workflow_run_id,
message_id=self._message_id,
form_id=form_id,
)
session.add(content)
def _load_human_input_form_id(self, *, node_id: str) -> str | None:
form_repository = HumanInputFormRepositoryImpl(
session_factory=db.engine,
tenant_id=self._workflow_tenant_id,
)
form = form_repository.get_form(self._workflow_run_id, node_id)
if form is None:
return None
return form.id
def _handle_agent_log_event(self, event: QueueAgentLogEvent, **kwargs) -> Generator[StreamResponse, None, None]:
"""Handle agent log events."""
yield self._workflow_response_converter.handle_agent_log(
@@ -659,6 +931,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
QueueWorkflowStartedEvent: self._handle_workflow_started_event,
QueueWorkflowSucceededEvent: self._handle_workflow_succeeded_event,
QueueWorkflowPartialSuccessEvent: self._handle_workflow_partial_success_event,
QueueWorkflowPausedEvent: self._handle_workflow_paused_event,
QueueWorkflowFailedEvent: self._handle_workflow_failed_event,
# Node events
QueueNodeRetryEvent: self._handle_node_retry_event,
@@ -680,6 +953,8 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
QueueMessageReplaceEvent: self._handle_message_replace_event,
QueueAdvancedChatMessageEndEvent: self._handle_advanced_chat_message_end_event,
QueueAgentLogEvent: self._handle_agent_log_event,
QueueHumanInputFormFilledEvent: self._handle_human_input_form_filled_event,
QueueHumanInputFormTimeoutEvent: self._handle_human_input_form_timeout_event,
}
def _dispatch_event(
@@ -747,6 +1022,9 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
case QueueWorkflowFailedEvent():
yield from self._handle_workflow_failed_event(event, trace_manager=trace_manager)
break
case QueueWorkflowPausedEvent():
yield from self._handle_workflow_paused_event(event)
break
case QueueStopEvent():
yield from self._handle_stop_event(event, graph_runtime_state=None, trace_manager=trace_manager)
@@ -772,9 +1050,15 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
def _save_message(self, *, session: Session, graph_runtime_state: GraphRuntimeState | None = None):
message = self._get_message(session=session)
if message is None:
return
if message.status == MessageStatus.PAUSED:
message.status = MessageStatus.NORMAL
# If there are assistant files, remove markdown image links from answer
answer_text = self._task_state.answer
answer_text = self._strip_think_blocks(answer_text)
if self._recorded_files:
# Remove markdown image links since we're storing files separately
answer_text = re.sub(r"!\[.*?\]\(.*?\)", "", answer_text).strip()
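For reference, a quick sketch of the markdown-image stripping performed above (the input string is made up):

import re

answer = "Here is the chart ![sales](https://example.com/chart.png) for Q3."
print(re.sub(r"!\[.*?\]\(.*?\)", "", answer).strip())
# "Here is the chart  for Q3."  (only leading/trailing whitespace is stripped)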
@@ -826,6 +1110,54 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
]
session.add_all(message_files)
# Save generation detail (reasoning/tool calls/sequence) from stream buffer
self._save_generation_detail(session=session, message=message)
@staticmethod
def _strip_think_blocks(text: str) -> str:
"""Remove <think>...</think> blocks (including their content) from text."""
if not text or "<think" not in text.lower():
return text
clean_text = re.sub(r"<think[^>]*>.*?</think>", "", text, flags=re.IGNORECASE | re.DOTALL)
clean_text = re.sub(r"\n\s*\n", "\n\n", clean_text).strip()
return clean_text
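A small illustration of _strip_think_blocks (the sample text is made up):

raw = "<think>Check the docs first.</think>\n\nThe answer is 42."
print(AdvancedChatAppGenerateTaskPipeline._strip_think_blocks(raw))
# "The answer is 42."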
def _save_generation_detail(self, *, session: Session, message: Message) -> None:
"""
Save LLM generation detail for Chatflow using stream event buffer.
The buffer records the exact order of events as they streamed,
allowing accurate reconstruction of the generation sequence.
"""
# Finalize the stream buffer to flush any pending data
self._stream_buffer.finalize()
# Only save if there's meaningful data
if not self._stream_buffer.has_data():
return
reasoning_content = self._stream_buffer.reasoning_content
tool_calls = self._stream_buffer.tool_calls
sequence = self._stream_buffer.sequence
# Check if generation detail already exists for this message
existing = session.query(LLMGenerationDetail).filter_by(message_id=message.id).first()
if existing:
existing.reasoning_content = json.dumps(reasoning_content) if reasoning_content else None
existing.tool_calls = json.dumps(tool_calls) if tool_calls else None
existing.sequence = json.dumps(sequence) if sequence else None
else:
generation_detail = LLMGenerationDetail(
tenant_id=self._application_generate_entity.app_config.tenant_id,
app_id=self._application_generate_entity.app_config.app_id,
message_id=message.id,
reasoning_content=json.dumps(reasoning_content) if reasoning_content else None,
tool_calls=json.dumps(tool_calls) if tool_calls else None,
sequence=json.dumps(sequence) if sequence else None,
)
session.add(generation_detail)
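For illustration, the JSON strings persisted here have roughly this shape (values mirror the StreamEventBuffer sketch earlier and are not real data):

import json

reasoning_content = ["Let me check the weather."]
tool_calls = [{"id": "call_1", "name": "weather",
               "arguments": '{"city": "Paris"}',
               "result": "18°C, cloudy", "elapsed_time": 0.4,
               "icon": None, "icon_dark": None}]
sequence = [{"type": "reasoning", "index": 0},
            {"type": "content", "start": 0, "end": 6},
            {"type": "tool_call", "index": 0},
            {"type": "content", "start": 6, "end": 26}]

# These are the strings stored in LLMGenerationDetail.reasoning_content,
# .tool_calls and .sequence respectively.
print(json.dumps(reasoning_content))
print(json.dumps(tool_calls))
print(json.dumps(sequence))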
def _seed_graph_runtime_state_from_queue_manager(self) -> None:
"""Bootstrap the cached runtime state from the queue manager when present."""
candidate = self._base_task_pipeline.queue_manager.graph_runtime_state

View File

@@ -3,10 +3,8 @@ from typing import cast
from sqlalchemy import select
from core.agent.cot_chat_agent_runner import CotChatAgentRunner
from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner
from core.agent.agent_app_runner import AgentAppRunner
from core.agent.entities import AgentEntity
from core.agent.fc_agent_runner import FunctionCallAgentRunner
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.apps.base_app_runner import AppRunner
@@ -14,8 +12,7 @@ from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity
from core.app.entities.queue_entities import QueueAnnotationReplyEvent
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMMode
from core.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey
from core.model_runtime.entities.model_entities import ModelFeature
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.moderation.base import ModerationError
from extensions.ext_database import db
@@ -194,22 +191,7 @@ class AgentChatAppRunner(AppRunner):
raise ValueError("Message not found")
db.session.close()
runner_cls: type[FunctionCallAgentRunner] | type[CotChatAgentRunner] | type[CotCompletionAgentRunner]
# start agent runner
if agent_entity.strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT:
# check LLM mode
if model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.CHAT:
runner_cls = CotChatAgentRunner
elif model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.COMPLETION:
runner_cls = CotCompletionAgentRunner
else:
raise ValueError(f"Invalid LLM mode: {model_schema.model_properties.get(ModelPropertyKey.MODE)}")
elif agent_entity.strategy == AgentEntity.Strategy.FUNCTION_CALLING:
runner_cls = FunctionCallAgentRunner
else:
raise ValueError(f"Invalid agent strategy: {agent_entity.strategy}")
runner = runner_cls(
runner = AgentAppRunner(
tenant_id=app_config.tenant_id,
application_generate_entity=application_generate_entity,
conversation=conversation_result,

View File

@@ -81,7 +81,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk
@classmethod
@@ -109,7 +109,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter):
}
if isinstance(sub_stream_response, MessageEndStreamResponse):
sub_stream_response_dict = sub_stream_response.model_dump(mode="json")
sub_stream_response_dict = sub_stream_response.model_dump(mode="json", exclude_none=True)
metadata = sub_stream_response_dict.get("metadata", {})
sub_stream_response_dict["metadata"] = cls._get_simple_metadata(metadata)
response_chunk.update(sub_stream_response_dict)
@@ -117,6 +117,6 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk

View File

@@ -81,7 +81,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk
@classmethod
@@ -109,7 +109,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter):
}
if isinstance(sub_stream_response, MessageEndStreamResponse):
sub_stream_response_dict = sub_stream_response.model_dump(mode="json")
sub_stream_response_dict = sub_stream_response.model_dump(mode="json", exclude_none=True)
metadata = sub_stream_response_dict.get("metadata", {})
sub_stream_response_dict["metadata"] = cls._get_simple_metadata(metadata)
response_chunk.update(sub_stream_response_dict)
@@ -117,6 +117,6 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter):
data = cls._error_to_stream_response(sub_stream_response.err)
response_chunk.update(data)
else:
response_chunk.update(sub_stream_response.model_dump(mode="json"))
response_chunk.update(sub_stream_response.model_dump(mode="json", exclude_none=True))
yield response_chunk

Some files were not shown because too many files have changed in this diff.