Compare commits

...

192 Commits
0.8.1 ... 0.9.1

Author SHA1 Message Date
-LAN-
1f5cc071f8 chore(version): bump to 0.9.1 (#8945) 2024-09-30 23:22:21 +08:00
Jyong
625e4c4c72 fix multiple retrieval in knowledge node (#8942) 2024-09-30 23:07:04 +08:00
-LAN-
7850a28ec8 Revert "chore(version): bump to 0.9.1" (#8944) 2024-09-30 22:53:32 +08:00
-LAN-
730d3a6d7c chore(version): bump to 0.9.1 (#8938) 2024-09-30 22:13:38 +08:00
Yi Xiao
d6a44e9990 fix: request params for internal dataset (#8940) 2024-09-30 22:10:27 +08:00
Jyong
3069b5cf57 original dataset update remove unuseful parameters (#8939) 2024-09-30 22:01:32 +08:00
NFish
7873e455bb fix: Fix the error when importing web pages using jina (#8937) 2024-09-30 21:27:11 +08:00
Jyong
a651b73db0 original dataset update issue (#8935) 2024-09-30 21:17:12 +08:00
-LAN-
d2ce4960f1 chore(versioning): bump version to 0.9.0 (#8911) 2024-09-30 18:33:20 +08:00
KVOJJJin
1af4ca344e Feat: add debounce for search in logs (#8924) 2024-09-30 17:18:47 +08:00
zhuhao
fa837b2dfd fix: fix the issue with the system model configuration update (#8923) 2024-09-30 17:14:13 +08:00
github-actions[bot]
824a71388a chore: translate i18n files (#8917)
Co-authored-by: JohnJyong <76649700+JohnJyong@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-09-30 16:35:00 +08:00
Aurelius Huang
4585cffce1 fix: Compatible with special characters in pg full-text search. (#8921)
Co-authored-by: Aurelius Huang <cm.huang@aftership.com>
2024-09-30 16:32:23 +08:00
Yi Xiao
13046709a9 fix: line in iteration node is not straight (#8918) 2024-09-30 16:04:51 +08:00
Jyong
9d221a5e19 external knowledge api (#8913)
Co-authored-by: Yi <yxiaoisme@gmail.com>
2024-09-30 15:38:43 +08:00
zhuhao
77aef9ff1d refactor: optimize the calculation of rerank threshold and the logic for forbidden characters in model_uid (#8879) 2024-09-30 12:55:01 +08:00
zhuhao
503561f464 fix: fix the data validation consistency issue in keyword content review (#8908) 2024-09-30 12:52:18 +08:00
-LAN-
ada9d408ac refactor(api/variables): VariableError as a ValueError. (#8554) 2024-09-30 12:48:58 +08:00
-LAN-
3af65b2f45 feat(api): add version comparison logic (#8902) 2024-09-30 11:12:26 +08:00
Zhaofeng Miao
369e1e6f58 feat(website-crawl): add jina reader as additional alternative for website crawling (#8761) 2024-09-30 09:57:19 +08:00
zhuhao
fb49413a41 feat: add voyage ai as a new model provider (#8747) 2024-09-29 16:55:59 +08:00
zhuhao
42dfde6546 docs: add english versions for the files customizable_model_scale_out and predefined_model_scale_out (#8871) 2024-09-29 16:16:56 +08:00
chenxu9741
c531b4a911 fix: #8843 event: tts_message_end always return in api streaming resp… (#8846) 2024-09-29 16:13:20 +08:00
longzhihun
e4ed916baa Add Jamba and Llama3.2 model support (#8878) 2024-09-29 16:12:56 +08:00
-LAN-
4ec977eaba fix(workflow): update tagging logic in GitHub Actions (#8882) 2024-09-29 16:12:42 +08:00
Bowen Liang
74f58f29f9 chore: bump ruff to 0.6.8 for fixing violation in SIM910 (#8869) 2024-09-29 00:29:59 +08:00
zhuhao
f97607370a refactor: update Callback to an abstract class (#8868) 2024-09-28 21:41:02 +08:00
zhuhao
850492dafa feat: deprecate gte-Qwen2-7B-instruct embedding model (#8866) 2024-09-28 21:40:27 +08:00
zhuhao
61c89a9168 feat: add internlm2.5-20b and qwen2.5-coder-7b model (#8862) 2024-09-28 16:31:02 +08:00
takatost
49af18fbd6 fix: customize model credentials were invalid despite the provider credentials being active (#8864) 2024-09-28 15:54:26 +08:00
zhuhao
6cd22f3bca fix: update qwen2.5-coder-7b model name (#8861) 2024-09-28 15:01:27 +08:00
Kevin9703
a2e2f8a8c9 fix(workflow/nodes/knowledge-retrieval/use-config): Preserve rerankin… (#8842) 2024-09-28 10:54:50 +08:00
ice yao
27e33fb15c chore: fix wrong VectorType match case (#8857) 2024-09-28 10:54:04 +08:00
zhuhao
55e6123db9 feat: add min-connection and max-connection for pgvector (#8841) 2024-09-27 18:16:20 +08:00
走在修行的大街上
c828a5dfdf feat(Tools): add feishu tools (#8800)
Co-authored-by: 黎斌 <libin.23@bytedance.com>
2024-09-27 17:31:45 +08:00
CXwudi
0603359e2d fix: delete harm catalog settings for gemini (#8829) 2024-09-27 13:49:03 +08:00
HowardChan
bb781764b8 Add Llama3.2 models in Groq provider (#8831) 2024-09-27 12:13:00 +08:00
zhuhao
29275c7447 feat: deprecate mistral model for siliconflow (#8828) 2024-09-27 12:11:56 +08:00
8bitpd
4c1063e1c5 fix: AnalyticdbVector retrieval scores (#8803) 2024-09-27 12:05:21 +08:00
非法操作
d6b9587a97 fix: close log status option raise error (#8826) 2024-09-27 11:13:40 +08:00
zhuhao
6fbaabc1bc feat: add pgvecto-rs and analyticdb in docker/.env.example (#8823) 2024-09-27 11:13:29 +08:00
Shai Perednik
a36117e12d Updated the YouTube channel to Dify's (#8817) 2024-09-27 09:15:33 +08:00
CXwudi
e5efd09ebb chore: massive update of the Gemini models based on latest documentation (#8822) 2024-09-27 09:14:33 +08:00
wenmeng zhou
ecc951609d add more detailed doc for models of qwen series (#8799)
Co-authored-by: crazywoola <427733928@qq.com>
2024-09-26 22:32:33 +08:00
ice yao
063474f408 Add llama3.2 model in fireworks provider (#8809) 2024-09-26 22:21:01 +08:00
Hash Brown
3dfbc348e3 feat: improved SVG output UX (#8765) 2024-09-26 19:41:59 +08:00
AAEE86
9a4b53a212 feat: add stream for Gemini (#8678) 2024-09-26 19:08:59 +08:00
AAEE86
03edfbe6f5 feat: add qwen to add custom model parameters (#8759) 2024-09-26 19:04:25 +08:00
Joel
3d2cb25a67 fix: change wrong company name (#8801) 2024-09-26 17:53:11 +08:00
非法操作
6df14e50b2 fix: workflow as tool always outdated (#8798) 2024-09-26 17:50:36 +08:00
zhuhao
008e0efeb0 refactor: update delete method as an abstract method (#8794) 2024-09-26 16:36:21 +08:00
cx
128a66f7fe fix: Ollama modelfeature set vision, and an exception occurred at the… (#8783) 2024-09-26 16:34:40 +08:00
非法操作
62406991df fix: start node input config modal raise 'variable name is required' (#8793) 2024-09-26 16:28:20 +08:00
非法操作
d1173a69f8 fix: the Image-1X tool (#8787) 2024-09-26 13:48:06 +08:00
Shenghang Tsai
a0b0809b1c Add more models for SiliconFlow (#8779) 2024-09-26 11:29:53 +08:00
Aaron Ji
4c9ef6e830 fix: update usage for Jina Embeddings v3 (#8771) 2024-09-26 11:29:35 +08:00
非法操作
0c96f0aa51 fix: credential *** should be string (#8785) 2024-09-26 11:24:03 +08:00
zhuhao
ac73763726 chore: add input_type param desc for the _invoke method of text_embedding (#8778) 2024-09-26 11:23:09 +08:00
非法操作
5ba19d64e9 fix: TavilySearch tool get api link (#8780) 2024-09-26 11:22:18 +08:00
Qun
fefbc43fb0 chore: fix comfyui tool doc url (#8775) 2024-09-26 08:18:13 +08:00
Bowen Liang
a8b837c4a9 dep: bump ElasticSearch from 8.14.x to 8.15.x (#8197) 2024-09-25 22:55:24 +08:00
Pan, Wen-Ming
02ff6cca70 feat: add support for Vertex AI Gemini 1.5 002 and experimental models (#8767) 2024-09-25 21:27:26 +08:00
NFish
ef47f68e4a fix: the translation result may cause a different meaning (#8763) 2024-09-25 18:25:06 +08:00
Hash Brown
2ef8b187fa Add GitHub Actions Workflow for Web Tests (#8753) 2024-09-25 15:50:51 +08:00
zhuiyue132
b0927c39fb fix: expose the configuration of HTTP request node to Docker (#8716)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-09-25 15:06:54 +08:00
cherryhuahua
d0e0111f88 fix:Spark's large language model token calculation error #7911 (#8755) 2024-09-25 14:51:42 +08:00
zhuhao
2328944987 chore: apply ruff reformat for python-client sdk (#8752) 2024-09-25 14:48:06 +08:00
非法操作
cb1942c242 chore: make url display in the middle of http node (#8741) 2024-09-25 11:27:17 +08:00
crazywoola
bf64ff215b fix: . is missing in file_extension (#8736) 2024-09-25 10:09:20 +08:00
ybalbert001
68c7e68a8a Fix Issue: switch LLM of SageMaker endpoint doesn't take effect (#8737)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
2024-09-25 09:12:35 +08:00
ice yao
91f70d0bd9 Add embedding models in fireworks provider (#8728) 2024-09-25 08:47:11 +08:00
Jyong
4669eb24be add embedding input type parameter (#8724) 2024-09-24 21:53:50 +08:00
Sa Zhang
debe5953a8 Fix/update jina ai products labels and descriptions (#8730)
Co-authored-by: sa zhang <sa.zhang@jina.ai>
2024-09-24 21:19:49 +08:00
Shota Totsuka
1c7877b048 fix: remove harm category setting from vertex ai (#8721) 2024-09-24 20:53:26 +08:00
非法操作
9ca2e2c968 chore: remove windows platform timezone set (#8712) 2024-09-24 17:33:29 +08:00
zxhlyh
f42ef0624d fix: embedded chat on ios (#8718) 2024-09-24 17:23:11 +08:00
ice yao
64baedb484 fix: update nomic model provider token calculation (#8705) 2024-09-24 14:04:07 +08:00
Benjamin
4638f99aaa fix: change model provider name issue Ref #8691 (#8710) 2024-09-24 13:26:58 +08:00
AAEE86
aebe5fc68c fix: Remove unsupported parameters in qwen model (#8699) 2024-09-24 13:06:21 +08:00
zhuhao
1ecf70dca0 feat: add mixedbread as a new model provider (#8523) 2024-09-24 11:20:15 +08:00
ybalbert001
7c485f8bb8 fix llm integration problem: It doesn't work on docker env (#8701)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
2024-09-24 10:33:30 +08:00
themanforfree
21e9608b23 feat: add xinference sd web ui api tool (#8385)
Signed-off-by: themanforfree <themanforfree@gmail.com>
2024-09-24 10:20:06 +08:00
Sa Zhang
7f1b028840 fix: change the brand name to Jina AI (#8691)
Co-authored-by: sa zhang <sa.zhang@jina.ai>
2024-09-23 21:39:26 +08:00
Nam Vu
bef83a4d2e fix: typos and improve naming conventions: (#8687) 2024-09-23 21:32:58 +08:00
crazywoola
8cc9e68363 fix: prompt for the follow-up suggestions (#8685) 2024-09-23 20:00:34 +08:00
ice yao
d7aada38a1 Add nomic embedding model provider (#8640) 2024-09-23 19:57:21 +08:00
Vikey Chen
4f69adc8ab fix: document_create_args_validate (#8569) 2024-09-23 18:45:10 +08:00
Likename Haojie
52da5b16e7 fixbug tts(stream) not work on ios safari(17.1+) (#8645)
Co-authored-by: crazywoola <427733928@qq.com>
2024-09-23 18:44:24 +08:00
Hash Brown
11d09a92d0 fix: send message error when last sent message not succeeded (#8682) 2024-09-23 18:44:09 +08:00
Nam Vu
c7eacd1aac chore: Optimize I18nObject class for better performance and readability (#8681) 2024-09-23 18:40:40 +08:00
AAEE86
a126d535cf add Spark Max-32K (#8676) 2024-09-23 16:39:46 +08:00
AAEE86
3554a803e7 add zhipuai web search (#8668) 2024-09-23 16:19:42 +08:00
AAEE86
c66cecaa55 add Qwen model translate (#8674) 2024-09-23 16:18:55 +08:00
非法操作
b37954b966 fix: png avatar upload as jpeg (#8665) 2024-09-23 15:33:06 +08:00
Bowen Liang
86f90fd9ff chore: skip PLR6201 linter rule (#8666) 2024-09-23 15:28:57 +08:00
haike-1213
4c7beb9d7b fix: Assignment exception (#8663)
Co-authored-by: fum <fum@investoday.com.cn>
2024-09-23 15:23:52 +08:00
Aaron Ji
3618a97c20 feat: extend api params for Jina Embeddings V3 (#8657) 2024-09-23 13:45:09 +08:00
Shota Totsuka
03fdf5e7f8 chore: Enable Japanese descriptions for Tools (#8646) 2024-09-23 09:06:01 +08:00
Hiroshi Fujita
cae73b9a32 Make WORKFLOW_* configurable as environment variables. (#8644) 2024-09-23 09:05:02 +08:00
zhuhao
e34f04380d feat: add deepseek-v2.5 for model provider siliconflow (#8639) 2024-09-22 21:44:06 +08:00
zhuhao
6df77038a2 docs: fix predefined_model_scale_out.md redirect error (#8633) 2024-09-22 16:45:45 +08:00
zhuhao
45c0a44411 feat: add qwen2.5 for model provider siliconflow (#8630) 2024-09-22 16:42:34 +08:00
Hash Brown
2d869d6831 fix: send message error when chatting with opening statement (#8627) 2024-09-22 16:41:40 +08:00
Nam Vu
eaa7e9b1f0 fix: llm_generator.py JSONDecodeError (#8504) 2024-09-22 14:02:12 +08:00
Nam Vu
6e37750fbd fix: commands.py (#8483) 2024-09-22 13:41:09 +08:00
omr
8fd297f8b4 fix: redundant check for available_document_count (#8491) 2024-09-22 13:39:41 +08:00
Nam Vu
ddf6569dc5 chore: enhance configuration descriptions (#8624) 2024-09-22 13:38:41 +08:00
CXwudi
97895ec41a chore: add Gemini newest experimental models (close #7121) (#8621) 2024-09-22 13:38:08 +08:00
sino
6d56d5c1f6 feat: support o1 series models for openrouter (#8358) 2024-09-22 10:23:50 +08:00
HJY
6c2fa8defc fix: form input add tabIndex (#8478) 2024-09-22 10:14:43 +08:00
AAEE86
c9f1e18df1 Add model parameter translation (#8509)
Co-authored-by: swingchen01 <swings@126.com>
Co-authored-by: 陈长君 <chenchangjun@shuwen.com>
2024-09-22 10:14:33 +08:00
Waffle
740fad06c1 feat(tools/cogview): Updated cogview tool to support cogview-3 and the latest cogview-3-plus (#8382) 2024-09-22 10:14:14 +08:00
ice yao
0665268578 Add Fireworks AI as new model provider (#8428) 2024-09-22 10:13:00 +08:00
呆萌闷油瓶
c8b9bdebfe feat:use xinference tts stream mode (#8616) 2024-09-22 10:08:35 +08:00
Shota Totsuka
a587f0d3f1 docs: Add Japanese documentation for tools (#8469) 2024-09-22 09:04:00 +08:00
Hash Brown
8c51d06222 feat: regenerate in Chat, agent and Chatflow app (#7661) 2024-09-22 03:15:11 +08:00
Joe
b32a7713e0 feat: update pyproject.toml (#8368) 2024-09-21 23:59:50 +08:00
zhuhao
831c5a93af refactor(ops): Optimize the iteration for filter_none_values and use logging.error to record logs when an exception occurs (#8461) 2024-09-21 22:56:37 +08:00
AAEE86
1a8dcae10e add Qwen custom add model interface (#8565) 2024-09-21 22:52:10 +08:00
Nam Vu
8219f9e090 fix: api/core/ops/ops_trace_manager.py (#8501) 2024-09-21 20:49:01 +08:00
AAEE86
5ddb601e43 add MixtralAI Model (#8517) 2024-09-21 18:08:07 +08:00
Hongbin
5541248264 Update the PerfXCloud provider model list,Update PerfXCloudProvider validate_provider_credentials method. (#8587)
Co-authored-by: xhb <466010723@qq.com>
2024-09-21 17:33:15 +08:00
WalterMitty
b3cb97f0ad docs: Update ssrf_proxy related doc link in docker-compose file (#8516) 2024-09-21 17:31:49 +08:00
方程
e75c33a561 Enhance Readme Documentation to Clarify the Importance of Celery Service (#8558) 2024-09-21 17:30:58 +08:00
github-actions[bot]
483ead55d5 chore: translate i18n files (#8557)
Co-authored-by: iamjoel <2120155+iamjoel@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-09-21 17:30:43 +08:00
非法操作
d63a5a1c3c fix: a helper link error (#8508) 2024-09-21 17:30:30 +08:00
takatost
e0a3307563 fix(workflow): "Max submit count reached" error occurred when executing workflow as tool in iteration (#8595) 2024-09-20 19:47:25 +08:00
-LAN-
7f3282ec04 Update version to 0.8.3 in packaging and docker-compose files (#8590) 2024-09-20 18:24:03 +08:00
非法操作
b773ebdab1 chore: fix webpack dependencies order (#8542) 2024-09-20 18:09:35 +08:00
Qun
1583283635 ComfyUI tool use the new internal enumeration class "VariableKey" (#8533) 2024-09-20 17:42:47 +08:00
Su Yang
c87f710d58 Fix: update qwen model and model config (#8584)
Co-authored-by: -LAN- <laipz8200@outlook.com>
2024-09-20 17:05:57 +08:00
Su Yang
1568c5cae9 fix: fix qwen series model type (#8580) 2024-09-20 15:29:33 +08:00
MuYu
a03919c3b3 feat: add hunyuan-vision (#8529) 2024-09-19 18:08:01 +08:00
Joel
7411bcf167 chore: improve delimiter (#8552) 2024-09-19 17:40:20 +08:00
Jyong
d96f5ba1ca add storage error log (#8556) 2024-09-19 17:34:12 +08:00
Su Yang
d6de96c4b4 feat: sync Qwen API with Aliyun Bailian (#8538) 2024-09-19 17:08:59 +08:00
takatost
ffd2f61dd9 fix: thread_pool submit count in parallel workflow not releasing (#8549) 2024-09-19 15:34:56 +08:00
takatost
54b9e1f6d1 fix: ci issues(missing duckduckgo-search==6.2.11, ruff lint issue) (#8543) 2024-09-19 11:43:00 +08:00
HJY
2721cb8dee feat: add format util unit and add pre-commit unit check (#8427) 2024-09-19 10:39:27 +08:00
NFish
41bea4cafa validate user permission before enter app detail page (#8527) 2024-09-18 16:54:04 +08:00
Wang Bo
6f222b49f2 refactor: rename task_type to task for jina embeddings v3 (#8488) 2024-09-18 14:53:15 +08:00
-LAN-
8dfe8c773a chore: Deprecate gpt-3.5-turbo-0613 and gpt-3.5-turbo-16k-0613 models (#8500) 2024-09-18 14:38:09 +08:00
Qun
cf645c3ba1 feat: Add ComfyUI tool for Stable Diffusion (#8160) 2024-09-18 10:56:29 +08:00
zhuhao
e896d1e9d7 chore: update the .gitignore file to include opensearch,pgvector,and myscale (#8470) 2024-09-17 22:54:22 +08:00
Xiao Ley
6dba68f62d feat: Add base URL settings and secure_ascii options to the Brave search tool (#8463)
Co-authored-by: crazywoola <427733928@qq.com>
2024-09-15 17:38:43 +08:00
非法操作
3d083b758f feat: add flux dev of siliconflow image-gen tool (#8450) 2024-09-15 17:14:12 +08:00
Nam Vu
aa5b2db10a chore: workflow BRANCH, PARALLEL i18n (#8452) 2024-09-15 17:13:39 +08:00
Hirotaka Miyagi
b73faae0d0 fix(RunOnce): change to form submission instead of onKeyDown and onClick (#8460) 2024-09-15 17:09:47 +08:00
Ying Wang
4788e1c8c8 [Python SDK] Add KnowledgeBaseClient and the corresponding test cases. (#8465)
Co-authored-by: Wang Ying <wangying@xkool.org>
2024-09-15 17:08:52 +08:00
takatost
bf16de50fe fix: internal error when tool authorization (#8449) 2024-09-14 21:50:02 +08:00
Jyong
7e611ffbf3 multi-retrival use dataset's top-k (#8416) 2024-09-14 21:48:44 +08:00
zhuhao
65162a87b6 fix:docker-compose.middleware.yaml start the Weaviate container by default (#8446) (#8447) 2024-09-14 21:48:24 +08:00
Charlie.Wei
445497cf89 add svg render & Image preview optimization (#8387)
Co-authored-by: crazywoola <427733928@qq.com>
2024-09-14 19:24:53 +08:00
Ying Wang
fa1af8e47b add WorkflowClient.get_result, increase version number (#8435)
Co-authored-by: wangying <wangying@xkool.org>
2024-09-14 19:06:37 +08:00
Nam Vu
624331472a fix: Improve scrolling behavior for Conversation Opener (#8437)
Co-authored-by: crazywoola <427733928@qq.com>
2024-09-14 19:05:19 +08:00
走在修行的大街上
72b7f8a949 Bugfix/fix feishu plugins (#8443)
Co-authored-by: 黎斌 <libin.23@bytedance.com>
2024-09-14 18:59:06 +08:00
takatost
88c9834ef2 chore(workflow): Optimize the iteration when selecting a variable from a branch in the output variable causes iteration index err (#8440) 2024-09-14 18:02:43 +08:00
Yi Xiao
d882348f39 fix: delete the delay for the tooltips inside the add tool panel (#8436) 2024-09-14 17:24:31 +08:00
ybalbert001
b6ad7a1e06 Fix: https://github.com/langgenius/dify/issues/8190 (Update Model nam… (#8426)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
2024-09-14 17:14:18 +08:00
Aaron Ji
6f7625fa47 chore: update Jina embedding model (#8376) 2024-09-14 16:21:17 +08:00
yanxiyue
de7bc22649 fix: sys_var startwith 'sys.' not 'sys' #8421 (#8422)
Co-authored-by: wuling <wuling@ke.com>
2024-09-14 15:16:12 +08:00
kurokobo
52857dc0a6 feat: allow users to specify timeout for text generations and workflows by environment variable (#8395) 2024-09-14 14:11:45 +08:00
KVOJJJin
032dd93b2f Fix: operation postion of answer in logs (#8411)
Co-authored-by: Yi <yxiaoisme@gmail.com>
2024-09-14 14:08:31 +08:00
Pika
5b18e851d2 fix: when the variable does not exist, an error should be prompted (#8413)
Co-authored-by: Chen(MAC) <chenchen404@outlook.com>
2024-09-14 14:08:10 +08:00
takatost
f01602b570 fix(workflow): the answer node after the iteration node containing the answer was output prematurely (#8419) 2024-09-14 14:02:09 +08:00
HowardChan
0123498452 fix:logs and rm unused codes in CacheEmbedding (#8409) 2024-09-14 12:56:45 +08:00
swingchen01
f55e06d8bf fix: resolve runtime error when self.folder is None (#8401)
Co-authored-by: 陈长君 <chenchangjun@shuwen.com>
2024-09-14 11:07:16 +08:00
ybalbert001
b613b11422 Fix: Support Bedrock cross region inference #8190 (Update Model name to distinguish between different region groups) (#8402)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
2024-09-14 11:06:20 +08:00
Incca
8efae1cba2 fix(docker): aliyun oss path env key (#8394) 2024-09-14 09:52:59 +08:00
Nam Vu
bf55b1910f fix: pyproject.toml typo (#8396) 2024-09-14 09:45:49 +08:00
crazywoola
71b4480c4a fix: o1-mini 65563 -> 65536 (#8388) 2024-09-14 02:39:58 +08:00
Yeuoly
b6b1057a18 fix: sandbox issue related httpx and requests (#8397) 2024-09-14 02:02:55 +08:00
Bowen Liang
5b98acde2f chore: improve usage of striping prefix or suffix of string with Ruff 0.6.5 (#8392) 2024-09-13 23:34:39 +08:00
Bowen Liang
aad6f340b3 fix (#8322 followup): resolve the violation of pylint rules (#8391) 2024-09-13 23:19:36 +08:00
Bowen Liang
a1104ab97e chore: refurish python code by applying Pylint linter rules (#8322) 2024-09-13 22:42:08 +08:00
xiandan-erizo
1ab81b4972 support hunyuan-turbo (#8372)
Co-authored-by: sunkesi <sunkesi@hosecloud.com>
2024-09-13 20:21:48 +08:00
非法操作
06b66216d7 chore: update firecrawl scrape to V1 api (#8367) 2024-09-13 20:02:00 +08:00
takatost
cd3eaed335 fix(workflow): both parallel and single branch errors occur in if-else (#8378) 2024-09-13 19:55:54 +08:00
Joel
9d80d7def7 fix: edit load balancing not pass id (#8370) 2024-09-13 17:15:03 +08:00
Joe
84ac5ccc8f fix: add before send to remove langfuse defaultErrorResponse (#8361) 2024-09-13 16:08:08 +08:00
Joel
5dfd7abb2b fix: when edit load balancing config not pass the empty filed value hidden (#8366) 2024-09-13 16:05:26 +08:00
takatost
24af4b9313 fix: o1-series model encounters an error when the generate mode is blocking (#8363) 2024-09-13 15:37:54 +08:00
Bowen Liang
6613b8f2e0 chore: fix unnecessary string concatation in single line (#8311) 2024-09-13 14:24:49 +08:00
-LAN-
08c486452f fix: score_threshold handling in vector search methods (#8356) 2024-09-13 14:24:35 +08:00
sino
a45ac6ab98 fix: ark token usage is none (#8351) 2024-09-13 14:19:24 +08:00
-LAN-
80a322aaa2 chore: update version to 0.8.2 in packaging and docker-compose files (#8352) 2024-09-13 13:45:13 +08:00
Joe
82f7875a52 feat: add langfuse sentry ignore error (#8353) 2024-09-13 13:44:19 +08:00
takatost
4637ddaa7f feat: add o1-series models support in Agent App (ReACT only) (#8350) 2024-09-13 13:08:27 +08:00
Yi Xiao
8d2269f762 fix: copy and paste shortcut in the textarea of the workflow run panel (#8345) 2024-09-13 12:20:56 +08:00
fanlia
5f03e66489 Feature/service api workflow logs (#8323) 2024-09-13 11:03:57 +08:00
Pika
a9c1f1a041 fix(workflow): fix var-selector not update when edges change (#8259)
Co-authored-by: Chen(MAC) <chenchen404@outlook.com>
2024-09-13 11:03:39 +08:00
Jyong
49cee773c5 fixed score threshold is none (#8342) 2024-09-13 10:21:58 +08:00
1078 changed files with 32930 additions and 4364 deletions


@@ -125,7 +125,7 @@ jobs:
with:
images: ${{ env[matrix.image_name_env] }}
tags: |
type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/') }}
type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/') && !contains(github.ref, '-') }}
type=ref,event=branch
type=sha,enable=true,priority=100,prefix=,suffix=,format=long
type=raw,value=${{ github.ref_name }},enable=${{ startsWith(github.ref, 'refs/tags/') }}
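The one-line change above gates the `latest` Docker tag: it is now applied only on tag builds whose ref contains no hyphen, so pre-release tags (for example `0.9.1-beta.1`) no longer overwrite `latest`. A minimal Python sketch of that predicate, with hypothetical refs:

```python
# Illustrative sketch of the tagging predicate above; the example refs
# are hypothetical.
def should_tag_latest(github_ref: str) -> bool:
    return github_ref.startswith("refs/tags/") and "-" not in github_ref

assert should_tag_latest("refs/tags/0.9.1")             # release tag -> latest
assert not should_tag_latest("refs/tags/0.9.1-beta.1")  # pre-release skipped
assert not should_tag_latest("refs/heads/main")         # branch build skipped
```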

.github/workflows/web-tests.yml (new file)

@@ -0,0 +1,46 @@
name: Web Tests
on:
pull_request:
branches:
- main
paths:
- web/**
concurrency:
group: web-tests-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
name: Web Tests
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./web
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v45
with:
files: web/**
- name: Setup Node.js
uses: actions/setup-node@v4
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 20
cache: yarn
cache-dependency-path: ./web/package.json
- name: Install dependencies
if: steps.changed-files.outputs.any_changed == 'true'
run: yarn install --frozen-lockfile
- name: Run tests
if: steps.changed-files.outputs.any_changed == 'true'
run: yarn test
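The workflow above runs the web test suite only when the pull request actually touches files under `web/**`, using `tj-actions/changed-files` to gate each step. The gating idea, sketched in Python with hypothetical paths:

```python
# Sketch of the changed-files gate used above: run web tests only when at
# least one changed path falls under web/. The paths are hypothetical.
def any_web_changes(changed_files: list[str]) -> bool:
    return any(path.startswith("web/") for path in changed_files)

assert any_web_changes(["web/app/components/button.tsx", "README.md"])
assert not any_web_changes(["api/app.py", "docker/.env.example"])
```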

.gitignore

@@ -153,6 +153,9 @@ docker-legacy/volumes/etcd/*
docker-legacy/volumes/minio/*
docker-legacy/volumes/milvus/*
docker-legacy/volumes/chroma/*
docker-legacy/volumes/opensearch/data/*
docker-legacy/volumes/pgvectors/data/*
docker-legacy/volumes/pgvector/data/*
docker/volumes/app/storage/*
docker/volumes/certbot/*
@@ -164,6 +167,12 @@ docker/volumes/etcd/*
docker/volumes/minio/*
docker/volumes/milvus/*
docker/volumes/chroma/*
docker/volumes/opensearch/data/*
docker/volumes/myscale/data/*
docker/volumes/myscale/log/*
docker/volumes/unstructured/*
docker/volumes/pgvector/data/*
docker/volumes/pgvecto_rs/data/*
docker/nginx/conf.d/default.conf
docker/middleware.env


@@ -36,7 +36,7 @@
| 被团队成员标记为高优先级的功能 | 高优先级 |
| 在 [community feedback board](https://github.com/langgenius/dify/discussions/categories/feedbacks) 内反馈的常见功能请求 | 中等优先级 |
| 非核心功能和小幅改进 | 低优先级 |
| 有价值不紧急 | 未来功能 |
| 有价值不紧急 | 未来功能 |
### 其他任何事情(例如 bug 报告、性能优化、拼写错误更正):
* 立即开始编码。
@@ -138,7 +138,7 @@ Dify 的后端使用 Python 编写,使用 [Flask](https://flask.palletsproject
├── models // 描述数据模型和 API 响应的形状
├── public // 如 favicon 等元资源
├── service // 定义 API 操作的形状
├── test
├── test
├── types // 函数参数和返回值的描述
└── utils // 共享的实用函数
```


@@ -162,6 +162,8 @@ PGVECTOR_PORT=5433
PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=postgres
PGVECTOR_DATABASE=postgres
PGVECTOR_MIN_CONNECTION=1
PGVECTOR_MAX_CONNECTION=5
# Tidb Vector configuration
TIDB_VECTOR_HOST=xxx.eu-central-1.xxx.aws.tidbcloud.com
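The new `PGVECTOR_MIN_CONNECTION` / `PGVECTOR_MAX_CONNECTION` variables above bound the size of the pgvector connection pool. As a hedged illustration of what such settings typically control (not Dify's actual wiring), here is a psycopg2 pool sized from the same environment variables:

```python
# Minimal sketch, assuming a psycopg2-style bounded connection pool; the
# wiring is illustrative, not Dify's actual implementation.
import os
from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(
    minconn=int(os.getenv("PGVECTOR_MIN_CONNECTION", "1")),
    maxconn=int(os.getenv("PGVECTOR_MAX_CONNECTION", "5")),
    host=os.getenv("PGVECTOR_HOST", "pgvector"),
    port=int(os.getenv("PGVECTOR_PORT", "5433")),
    user=os.getenv("PGVECTOR_USER", "postgres"),
    password=os.getenv("PGVECTOR_PASSWORD", "postgres"),
    dbname=os.getenv("PGVECTOR_DATABASE", "postgres"),
)

conn = pool.getconn()          # take a connection from the pool
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
finally:
    pool.putconn(conn)         # hand the connection back to the pool
```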


@@ -65,14 +65,12 @@
8. Start Dify [web](../web) service.
9. Setup your application by visiting `http://localhost:3000`...
10. If you need to debug local async processing, please start the worker service.
10. If you need to handle and debug the async tasks (e.g. dataset importing and documents indexing), please start the worker service.
```bash
poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion
```
The started celery app handles the async tasks, e.g. dataset importing and documents indexing.
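The worker command above consumes the `dataset`, `generation`, `mail`, `ops_trace` and `app_deletion` queues. A minimal, self-contained Celery sketch of that producer/worker pattern (the broker URL and task are assumptions, not Dify's actual code):

```python
# Illustrative Celery sketch; broker URL and task body are assumptions.
from celery import Celery

celery_app = Celery("app", broker="redis://localhost:6379/0")

@celery_app.task
def index_document(document_id: str) -> None:
    # Placeholder body; the real worker parses, embeds and indexes documents.
    print(f"indexing document {document_id}")

# Enqueue onto the "dataset" queue; a worker started with `-Q dataset,...`
# (as in the command above) picks the task up asynchronously.
index_document.apply_async(args=["doc-123"], queue="dataset")
```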
## Testing
1. Install dependencies for both the backend and the test environment


@@ -53,11 +53,9 @@ from services.account_service import AccountService
warnings.simplefilter("ignore", ResourceWarning)
# fix windows platform
if os.name == "nt":
os.system('tzutil /s "UTC"')
else:
os.environ["TZ"] = "UTC"
os.environ["TZ"] = "UTC"
# windows platform not support tzset
if hasattr(time, "tzset"):
time.tzset()
@@ -164,7 +162,7 @@ def initialize_extensions(app):
@login_manager.request_loader
def load_user_from_request(request_from_flask_login):
"""Load user based on the request."""
if request.blueprint not in ["console", "inner_api"]:
if request.blueprint not in {"console", "inner_api"}:
return None
# Check if the user_id contains a dot, indicating the old format
auth_header = request.headers.get("Authorization", "")


@@ -28,28 +28,28 @@ from services.account_service import RegisterService, TenantService
@click.command("reset-password", help="Reset the account password.")
@click.option("--email", prompt=True, help="The email address of the account whose password you need to reset")
@click.option("--new-password", prompt=True, help="the new password.")
@click.option("--password-confirm", prompt=True, help="the new password confirm.")
@click.option("--email", prompt=True, help="Account email to reset password for")
@click.option("--new-password", prompt=True, help="New password")
@click.option("--password-confirm", prompt=True, help="Confirm new password")
def reset_password(email, new_password, password_confirm):
"""
Reset password of owner account
Only available in SELF_HOSTED mode
"""
if str(new_password).strip() != str(password_confirm).strip():
click.echo(click.style("sorry. The two passwords do not match.", fg="red"))
click.echo(click.style("Passwords do not match.", fg="red"))
return
account = db.session.query(Account).filter(Account.email == email).one_or_none()
if not account:
click.echo(click.style("sorry. the account: [{}] not exist .".format(email), fg="red"))
click.echo(click.style("Account not found for email: {}".format(email), fg="red"))
return
try:
valid_password(new_password)
except:
click.echo(click.style("sorry. The passwords must match {} ".format(password_pattern), fg="red"))
click.echo(click.style("Invalid password. Must match {}".format(password_pattern), fg="red"))
return
# generate password salt
@@ -62,37 +62,37 @@ def reset_password(email, new_password, password_confirm):
account.password = base64_password_hashed
account.password_salt = base64_salt
db.session.commit()
click.echo(click.style("Congratulations! Password has been reset.", fg="green"))
click.echo(click.style("Password reset successfully.", fg="green"))
@click.command("reset-email", help="Reset the account email.")
@click.option("--email", prompt=True, help="The old email address of the account whose email you need to reset")
@click.option("--new-email", prompt=True, help="the new email.")
@click.option("--email-confirm", prompt=True, help="the new email confirm.")
@click.option("--email", prompt=True, help="Current account email")
@click.option("--new-email", prompt=True, help="New email")
@click.option("--email-confirm", prompt=True, help="Confirm new email")
def reset_email(email, new_email, email_confirm):
"""
Replace account email
:return:
"""
if str(new_email).strip() != str(email_confirm).strip():
click.echo(click.style("Sorry, new email and confirm email do not match.", fg="red"))
click.echo(click.style("New emails do not match.", fg="red"))
return
account = db.session.query(Account).filter(Account.email == email).one_or_none()
if not account:
click.echo(click.style("sorry. the account: [{}] not exist .".format(email), fg="red"))
click.echo(click.style("Account not found for email: {}".format(email), fg="red"))
return
try:
email_validate(new_email)
except:
click.echo(click.style("sorry. {} is not a valid email. ".format(email), fg="red"))
click.echo(click.style("Invalid email: {}".format(new_email), fg="red"))
return
account.email = new_email
db.session.commit()
click.echo(click.style("Congratulations!, email has been reset.", fg="green"))
click.echo(click.style("Email updated successfully.", fg="green"))
@click.command(
@@ -104,7 +104,7 @@ def reset_email(email, new_email, email_confirm):
)
@click.confirmation_option(
prompt=click.style(
"Are you sure you want to reset encrypt key pair?" " this operation cannot be rolled back!", fg="red"
"Are you sure you want to reset encrypt key pair? This operation cannot be rolled back!", fg="red"
)
)
def reset_encrypt_key_pair():
@@ -114,13 +114,13 @@ def reset_encrypt_key_pair():
Only support SELF_HOSTED mode.
"""
if dify_config.EDITION != "SELF_HOSTED":
click.echo(click.style("Sorry, only support SELF_HOSTED mode.", fg="red"))
click.echo(click.style("This command is only for SELF_HOSTED installations.", fg="red"))
return
tenants = db.session.query(Tenant).all()
for tenant in tenants:
if not tenant:
click.echo(click.style("Sorry, no workspace found. Please enter /install to initialize.", fg="red"))
click.echo(click.style("No workspaces found. Run /install first.", fg="red"))
return
tenant.encrypt_public_key = generate_key_pair(tenant.id)
@@ -131,18 +131,18 @@ def reset_encrypt_key_pair():
click.echo(
click.style(
"Congratulations! " "the asymmetric key pair of workspace {} has been reset.".format(tenant.id),
"Congratulations! The asymmetric key pair of workspace {} has been reset.".format(tenant.id),
fg="green",
)
)
@click.command("vdb-migrate", help="migrate vector db.")
@click.command("vdb-migrate", help="Migrate vector db.")
@click.option("--scope", default="all", prompt=False, help="The scope of vector database to migrate, Default is All.")
def vdb_migrate(scope: str):
if scope in ["knowledge", "all"]:
if scope in {"knowledge", "all"}:
migrate_knowledge_vector_database()
if scope in ["annotation", "all"]:
if scope in {"annotation", "all"}:
migrate_annotation_vector_database()
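The reworded commands in this file share one Click pattern: prompted `@click.option`s plus colored status output via `click.echo(click.style(...))`. A minimal self-contained sketch of that pattern (the command itself is hypothetical):

```python
# Hypothetical command demonstrating the Click pattern used in commands.py.
import click

@click.command("greet", help="Print a greeting.")
@click.option("--name", prompt=True, help="Name to greet")
def greet(name: str):
    # prompt=True asks interactively when --name is not passed on the CLI.
    click.echo(click.style(f"Hello, {name}!", fg="green"))

if __name__ == "__main__":
    greet()
```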
@@ -150,7 +150,7 @@ def migrate_annotation_vector_database():
"""
Migrate annotation datas to target vector database .
"""
click.echo(click.style("Start migrate annotation data.", fg="green"))
click.echo(click.style("Starting annotation data migration.", fg="green"))
create_count = 0
skipped_count = 0
total_count = 0
@@ -174,14 +174,14 @@ def migrate_annotation_vector_database():
f"Processing the {total_count} app {app.id}. " + f"{create_count} created, {skipped_count} skipped."
)
try:
click.echo("Create app annotation index: {}".format(app.id))
click.echo("Creating app annotation index: {}".format(app.id))
app_annotation_setting = (
db.session.query(AppAnnotationSetting).filter(AppAnnotationSetting.app_id == app.id).first()
)
if not app_annotation_setting:
skipped_count = skipped_count + 1
click.echo("App annotation setting is disabled: {}".format(app.id))
click.echo("App annotation setting disabled: {}".format(app.id))
continue
# get dataset_collection_binding info
dataset_collection_binding = (
@@ -190,7 +190,7 @@ def migrate_annotation_vector_database():
.first()
)
if not dataset_collection_binding:
click.echo("App annotation collection binding is not exist: {}".format(app.id))
click.echo("App annotation collection binding not found: {}".format(app.id))
continue
annotations = db.session.query(MessageAnnotation).filter(MessageAnnotation.app_id == app.id).all()
dataset = Dataset(
@@ -211,11 +211,11 @@ def migrate_annotation_vector_database():
documents.append(document)
vector = Vector(dataset, attributes=["doc_id", "annotation_id", "app_id"])
click.echo(f"Start to migrate annotation, app_id: {app.id}.")
click.echo(f"Migrating annotations for app: {app.id}.")
try:
vector.delete()
click.echo(click.style(f"Successfully delete vector index for app: {app.id}.", fg="green"))
click.echo(click.style(f"Deleted vector index for app {app.id}.", fg="green"))
except Exception as e:
click.echo(click.style(f"Failed to delete vector index for app {app.id}.", fg="red"))
raise e
@@ -223,12 +223,12 @@ def migrate_annotation_vector_database():
try:
click.echo(
click.style(
f"Start to created vector index with {len(documents)} annotations for app {app.id}.",
f"Creating vector index with {len(documents)} annotations for app {app.id}.",
fg="green",
)
)
vector.create(documents)
click.echo(click.style(f"Successfully created vector index for app {app.id}.", fg="green"))
click.echo(click.style(f"Created vector index for app {app.id}.", fg="green"))
except Exception as e:
click.echo(click.style(f"Failed to created vector index for app {app.id}.", fg="red"))
raise e
@@ -237,14 +237,14 @@ def migrate_annotation_vector_database():
except Exception as e:
click.echo(
click.style(
"Create app annotation index error: {} {}".format(e.__class__.__name__, str(e)), fg="red"
"Error creating app annotation index: {} {}".format(e.__class__.__name__, str(e)), fg="red"
)
)
continue
click.echo(
click.style(
f"Congratulations! Create {create_count} app annotation indexes, and skipped {skipped_count} apps.",
f"Migration complete. Created {create_count} app annotation indexes. Skipped {skipped_count} apps.",
fg="green",
)
)
@@ -254,7 +254,7 @@ def migrate_knowledge_vector_database():
"""
Migrate vector database datas to target vector database .
"""
click.echo(click.style("Start migrate vector db.", fg="green"))
click.echo(click.style("Starting vector database migration.", fg="green"))
create_count = 0
skipped_count = 0
total_count = 0
@@ -275,11 +275,10 @@ def migrate_knowledge_vector_database():
for dataset in datasets:
total_count = total_count + 1
click.echo(
f"Processing the {total_count} dataset {dataset.id}. "
+ f"{create_count} created, {skipped_count} skipped."
f"Processing the {total_count} dataset {dataset.id}. {create_count} created, {skipped_count} skipped."
)
try:
click.echo("Create dataset vdb index: {}".format(dataset.id))
click.echo("Creating dataset vector database index: {}".format(dataset.id))
if dataset.index_struct_dict:
if dataset.index_struct_dict["type"] == vector_type:
skipped_count = skipped_count + 1
@@ -300,7 +299,7 @@ def migrate_knowledge_vector_database():
if dataset_collection_binding:
collection_name = dataset_collection_binding.collection_name
else:
raise ValueError("Dataset Collection Bindings is not exist!")
raise ValueError("Dataset Collection Binding not found")
else:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
@@ -352,14 +351,12 @@ def migrate_knowledge_vector_database():
raise ValueError(f"Vector store {vector_type} is not supported.")
vector = Vector(dataset)
click.echo(f"Start to migrate dataset {dataset.id}.")
click.echo(f"Migrating dataset {dataset.id}.")
try:
vector.delete()
click.echo(
click.style(
f"Successfully delete vector index {collection_name} for dataset {dataset.id}.", fg="green"
)
click.style(f"Deleted vector index {collection_name} for dataset {dataset.id}.", fg="green")
)
except Exception as e:
click.echo(
@@ -411,15 +408,13 @@ def migrate_knowledge_vector_database():
try:
click.echo(
click.style(
f"Start to created vector index with {len(documents)} documents of {segments_count}"
f"Creating vector index with {len(documents)} documents of {segments_count}"
f" segments for dataset {dataset.id}.",
fg="green",
)
)
vector.create(documents)
click.echo(
click.style(f"Successfully created vector index for dataset {dataset.id}.", fg="green")
)
click.echo(click.style(f"Created vector index for dataset {dataset.id}.", fg="green"))
except Exception as e:
click.echo(click.style(f"Failed to created vector index for dataset {dataset.id}.", fg="red"))
raise e
@@ -430,13 +425,13 @@ def migrate_knowledge_vector_database():
except Exception as e:
db.session.rollback()
click.echo(
click.style("Create dataset index error: {} {}".format(e.__class__.__name__, str(e)), fg="red")
click.style("Error creating dataset index: {} {}".format(e.__class__.__name__, str(e)), fg="red")
)
continue
click.echo(
click.style(
f"Congratulations! Create {create_count} dataset indexes, and skipped {skipped_count} datasets.", fg="green"
f"Migration complete. Created {create_count} dataset indexes. Skipped {skipped_count} datasets.", fg="green"
)
)
@@ -446,7 +441,7 @@ def convert_to_agent_apps():
"""
Convert Agent Assistant to Agent App.
"""
click.echo(click.style("Start convert to agent apps.", fg="green"))
click.echo(click.style("Starting convert to agent apps.", fg="green"))
proceeded_app_ids = []
@@ -497,23 +492,23 @@ def convert_to_agent_apps():
except Exception as e:
click.echo(click.style("Convert app error: {} {}".format(e.__class__.__name__, str(e)), fg="red"))
click.echo(click.style("Congratulations! Converted {} agent apps.".format(len(proceeded_app_ids)), fg="green"))
click.echo(click.style("Conversion complete. Converted {} agent apps.".format(len(proceeded_app_ids)), fg="green"))
@click.command("add-qdrant-doc-id-index", help="add qdrant doc_id index.")
@click.option("--field", default="metadata.doc_id", prompt=False, help="index field , default is metadata.doc_id.")
@click.command("add-qdrant-doc-id-index", help="Add Qdrant doc_id index.")
@click.option("--field", default="metadata.doc_id", prompt=False, help="Index field , default is metadata.doc_id.")
def add_qdrant_doc_id_index(field: str):
click.echo(click.style("Start add qdrant doc_id index.", fg="green"))
click.echo(click.style("Starting Qdrant doc_id index creation.", fg="green"))
vector_type = dify_config.VECTOR_STORE
if vector_type != "qdrant":
click.echo(click.style("Sorry, only support qdrant vector store.", fg="red"))
click.echo(click.style("This command only supports Qdrant vector store.", fg="red"))
return
create_count = 0
try:
bindings = db.session.query(DatasetCollectionBinding).all()
if not bindings:
click.echo(click.style("Sorry, no dataset collection bindings found.", fg="red"))
click.echo(click.style("No dataset collection bindings found.", fg="red"))
return
import qdrant_client
from qdrant_client.http.exceptions import UnexpectedResponse
@@ -523,7 +518,7 @@ def add_qdrant_doc_id_index(field: str):
for binding in bindings:
if dify_config.QDRANT_URL is None:
raise ValueError("Qdrant url is required.")
raise ValueError("Qdrant URL is required.")
qdrant_config = QdrantConfig(
endpoint=dify_config.QDRANT_URL,
api_key=dify_config.QDRANT_API_KEY,
@@ -540,41 +535,39 @@ def add_qdrant_doc_id_index(field: str):
except UnexpectedResponse as e:
# Collection does not exist, so return
if e.status_code == 404:
click.echo(
click.style(f"Collection not found, collection_name:{binding.collection_name}.", fg="red")
)
click.echo(click.style(f"Collection not found: {binding.collection_name}.", fg="red"))
continue
# Some other error occurred, so re-raise the exception
else:
click.echo(
click.style(
f"Failed to create qdrant index, collection_name:{binding.collection_name}.", fg="red"
f"Failed to create Qdrant index for collection: {binding.collection_name}.", fg="red"
)
)
except Exception as e:
click.echo(click.style("Failed to create qdrant client.", fg="red"))
click.echo(click.style("Failed to create Qdrant client.", fg="red"))
click.echo(click.style(f"Congratulations! Create {create_count} collection indexes.", fg="green"))
click.echo(click.style(f"Index creation complete. Created {create_count} collection indexes.", fg="green"))
@click.command("create-tenant", help="Create account and tenant.")
@click.option("--email", prompt=True, help="The email address of the tenant account.")
@click.option("--name", prompt=True, help="The workspace name of the tenant account.")
@click.option("--email", prompt=True, help="Tenant account email.")
@click.option("--name", prompt=True, help="Workspace name.")
@click.option("--language", prompt=True, help="Account language, default: en-US.")
def create_tenant(email: str, language: Optional[str] = None, name: Optional[str] = None):
"""
Create tenant account
"""
if not email:
click.echo(click.style("Sorry, email is required.", fg="red"))
click.echo(click.style("Email is required.", fg="red"))
return
# Create account
email = email.strip()
if "@" not in email:
click.echo(click.style("Sorry, invalid email address.", fg="red"))
click.echo(click.style("Invalid email address.", fg="red"))
return
account_name = email.split("@")[0]
@@ -594,19 +587,19 @@ def create_tenant(email: str, language: Optional[str] = None, name: Optional[str
click.echo(
click.style(
"Congratulations! Account and tenant created.\n" "Account: {}\nPassword: {}".format(email, new_password),
"Account and tenant created.\nAccount: {}\nPassword: {}".format(email, new_password),
fg="green",
)
)
@click.command("upgrade-db", help="upgrade the database")
@click.command("upgrade-db", help="Upgrade the database")
def upgrade_db():
click.echo("Preparing database migration...")
lock = redis_client.lock(name="db_upgrade_lock", timeout=60)
if lock.acquire(blocking=False):
try:
click.echo(click.style("Start database migration.", fg="green"))
click.echo(click.style("Starting database migration.", fg="green"))
# run db migration
import flask_migrate
@@ -616,7 +609,7 @@ def upgrade_db():
click.echo(click.style("Database migration successful!", fg="green"))
except Exception as e:
logging.exception(f"Database migration failed, error: {e}")
logging.exception(f"Database migration failed: {e}")
finally:
lock.release()
else:
@@ -628,7 +621,7 @@ def fix_app_site_missing():
"""
Fix app related site missing issue.
"""
click.echo(click.style("Start fix app related site missing issue.", fg="green"))
click.echo(click.style("Starting fix for missing app-related sites.", fg="green"))
failed_app_ids = []
while True:
@@ -651,22 +644,22 @@ where sites.id is null limit 1000"""
if tenant:
accounts = tenant.get_accounts()
if not accounts:
print("Fix app {} failed.".format(app.id))
print("Fix failed for app {}".format(app.id))
continue
account = accounts[0]
print("Fix app {} related site missing issue.".format(app.id))
print("Fixing missing site for app {}".format(app.id))
app_was_created.send(app, account=account)
except Exception as e:
failed_app_ids.append(app_id)
click.echo(click.style("Fix app {} related site missing issue failed!".format(app_id), fg="red"))
click.echo(click.style("Failed to fix missing site for app {}".format(app_id), fg="red"))
logging.exception(f"Fix app related site missing issue failed, error: {e}")
continue
if not processed_count:
break
click.echo(click.style("Congratulations! Fix app related site missing issue successful!", fg="green"))
click.echo(click.style("Fix for missing app-related sites completed successfully!", fg="green"))
def register_commands(app):


@@ -4,30 +4,30 @@ from pydantic_settings import BaseSettings
class DeploymentConfig(BaseSettings):
"""
Deployment configs
Configuration settings for application deployment
"""
APPLICATION_NAME: str = Field(
description="application name",
description="Name of the application, used for identification and logging purposes",
default="langgenius/dify",
)
DEBUG: bool = Field(
description="whether to enable debug mode.",
description="Enable debug mode for additional logging and development features",
default=False,
)
TESTING: bool = Field(
description="",
description="Enable testing mode for running automated tests",
default=False,
)
EDITION: str = Field(
description="deployment edition",
description="Deployment edition of the application (e.g., 'SELF_HOSTED', 'CLOUD')",
default="SELF_HOSTED",
)
DEPLOY_ENV: str = Field(
description="deployment environment, default to PRODUCTION.",
description="Deployment environment (e.g., 'PRODUCTION', 'DEVELOPMENT'), default to PRODUCTION",
default="PRODUCTION",
)
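The classes in these config diffs are pydantic-settings `BaseSettings`, so each field resolves from the environment variable of the same name and falls back to the declared default; the reworded `description` strings are documentation only. A small sketch under that assumption (the demo class is hypothetical):

```python
# Sketch of standard pydantic-settings resolution, not Dify's exact wiring.
import os

from pydantic import Field
from pydantic_settings import BaseSettings

class DemoDeploymentConfig(BaseSettings):
    APPLICATION_NAME: str = Field(
        description="Name of the application, used for identification and logging purposes",
        default="langgenius/dify",
    )
    DEBUG: bool = Field(
        description="Enable debug mode for additional logging and development features",
        default=False,
    )

os.environ["DEBUG"] = "true"
print(DemoDeploymentConfig().DEBUG)  # True, read from the environment
```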


@@ -4,17 +4,17 @@ from pydantic_settings import BaseSettings
class EnterpriseFeatureConfig(BaseSettings):
"""
Enterprise feature configs.
Configuration for enterprise-level features.
**Before using, please contact business@dify.ai by email to inquire about licensing matters.**
"""
ENTERPRISE_ENABLED: bool = Field(
description="whether to enable enterprise features."
description="Enable or disable enterprise-level features."
"Before using, please contact business@dify.ai by email to inquire about licensing matters.",
default=False,
)
CAN_REPLACE_LOGO: bool = Field(
description="whether to allow replacing enterprise logo.",
description="Allow customization of the enterprise logo.",
default=False,
)


@@ -6,30 +6,31 @@ from pydantic_settings import BaseSettings
class NotionConfig(BaseSettings):
"""
Notion integration configs
Configuration settings for Notion integration
"""
NOTION_CLIENT_ID: Optional[str] = Field(
description="Notion client ID",
description="Client ID for Notion API authentication. Required for OAuth 2.0 flow.",
default=None,
)
NOTION_CLIENT_SECRET: Optional[str] = Field(
description="Notion client secret key",
description="Client secret for Notion API authentication. Required for OAuth 2.0 flow.",
default=None,
)
NOTION_INTEGRATION_TYPE: Optional[str] = Field(
description="Notion integration type, default to None, available values: internal.",
description="Type of Notion integration."
" Set to 'internal' for internal integrations, or None for public integrations.",
default=None,
)
NOTION_INTERNAL_SECRET: Optional[str] = Field(
description="Notion internal secret key",
description="Secret key for internal Notion integrations. Required when NOTION_INTEGRATION_TYPE is 'internal'.",
default=None,
)
NOTION_INTEGRATION_TOKEN: Optional[str] = Field(
description="Notion integration token",
description="Integration token for Notion API access. Used for direct API calls without OAuth flow.",
default=None,
)


@@ -6,20 +6,23 @@ from pydantic_settings import BaseSettings
class SentryConfig(BaseSettings):
"""
Sentry configs
Configuration settings for Sentry error tracking and performance monitoring
"""
SENTRY_DSN: Optional[str] = Field(
description="Sentry DSN",
description="Sentry Data Source Name (DSN)."
" This is the unique identifier of your Sentry project, used to send events to the correct project.",
default=None,
)
SENTRY_TRACES_SAMPLE_RATE: NonNegativeFloat = Field(
description="Sentry trace sample rate",
description="Sample rate for Sentry performance monitoring traces."
" Value between 0.0 and 1.0, where 1.0 means 100% of traces are sent to Sentry.",
default=1.0,
)
SENTRY_PROFILES_SAMPLE_RATE: NonNegativeFloat = Field(
description="Sentry profiles sample rate",
description="Sample rate for Sentry profiling."
" Value between 0.0 and 1.0, where 1.0 means 100% of profiles are sent to Sentry.",
default=1.0,
)


@@ -8,145 +8,143 @@ from configs.feature.hosted_service import HostedServiceConfig
class SecurityConfig(BaseSettings):
"""
Secret Key configs
Security-related configurations for the application
"""
SECRET_KEY: Optional[str] = Field(
description="Your App secret key will be used for securely signing the session cookie"
description="Secret key for secure session cookie signing."
"Make sure you are changing this key for your deployment with a strong key."
"You can generate a strong key using `openssl rand -base64 42`."
"Alternatively you can set it with `SECRET_KEY` environment variable.",
"Generate a strong key using `openssl rand -base64 42` or set via the `SECRET_KEY` environment variable.",
default=None,
)
RESET_PASSWORD_TOKEN_EXPIRY_HOURS: PositiveInt = Field(
description="Expiry time in hours for reset token",
description="Duration in hours for which a password reset token remains valid",
default=24,
)
class AppExecutionConfig(BaseSettings):
"""
App Execution configs
Configuration parameters for application execution
"""
APP_MAX_EXECUTION_TIME: PositiveInt = Field(
description="execution timeout in seconds for app execution",
description="Maximum allowed execution time for the application in seconds",
default=1200,
)
APP_MAX_ACTIVE_REQUESTS: NonNegativeInt = Field(
description="max active request per app, 0 means unlimited",
description="Maximum number of concurrent active requests per app (0 for unlimited)",
default=0,
)
class CodeExecutionSandboxConfig(BaseSettings):
"""
Code Execution Sandbox configs
Configuration for the code execution sandbox environment
"""
CODE_EXECUTION_ENDPOINT: HttpUrl = Field(
description="endpoint URL of code execution service",
description="URL endpoint for the code execution service",
default="http://sandbox:8194",
)
CODE_EXECUTION_API_KEY: str = Field(
description="API key for code execution service",
description="API key for accessing the code execution service",
default="dify-sandbox",
)
CODE_EXECUTION_CONNECT_TIMEOUT: Optional[float] = Field(
description="connect timeout in seconds for code execution request",
description="Connection timeout in seconds for code execution requests",
default=10.0,
)
CODE_EXECUTION_READ_TIMEOUT: Optional[float] = Field(
description="read timeout in seconds for code execution request",
description="Read timeout in seconds for code execution requests",
default=60.0,
)
CODE_EXECUTION_WRITE_TIMEOUT: Optional[float] = Field(
description="write timeout in seconds for code execution request",
description="Write timeout in seconds for code execution request",
default=10.0,
)
CODE_MAX_NUMBER: PositiveInt = Field(
description="max depth for code execution",
description="Maximum allowed numeric value in code execution",
default=9223372036854775807,
)
CODE_MIN_NUMBER: NegativeInt = Field(
description="",
description="Minimum allowed numeric value in code execution",
default=-9223372036854775807,
)
CODE_MAX_DEPTH: PositiveInt = Field(
description="max depth for code execution",
description="Maximum allowed depth for nested structures in code execution",
default=5,
)
CODE_MAX_PRECISION: PositiveInt = Field(
description="max precision digits for float type in code execution",
description="mMaximum number of decimal places for floating-point numbers in code execution",
default=20,
)
CODE_MAX_STRING_LENGTH: PositiveInt = Field(
description="max string length for code execution",
description="Maximum allowed length for strings in code execution",
default=80000,
)
CODE_MAX_STRING_ARRAY_LENGTH: PositiveInt = Field(
description="",
description="Maximum allowed length for string arrays in code execution",
default=30,
)
CODE_MAX_OBJECT_ARRAY_LENGTH: PositiveInt = Field(
description="",
description="Maximum allowed length for object arrays in code execution",
default=30,
)
CODE_MAX_NUMBER_ARRAY_LENGTH: PositiveInt = Field(
description="",
description="Maximum allowed length for numeric arrays in code execution",
default=1000,
)
class EndpointConfig(BaseSettings):
"""
Module URL configs
Configuration for various application endpoints and URLs
"""
CONSOLE_API_URL: str = Field(
description="The backend URL prefix of the console API."
"used to concatenate the login authorization callback or notion integration callback.",
description="Base URL for the console API,"
"used for login authentication callback or notion integration callbacks",
default="",
)
CONSOLE_WEB_URL: str = Field(
description="The front-end URL prefix of the console web."
"used to concatenate some front-end addresses and for CORS configuration use.",
description="Base URL for the console web interface," "used for frontend references and CORS configuration",
default="",
)
SERVICE_API_URL: str = Field(
description="Service API Url prefix." "used to display Service API Base Url to the front-end.",
description="Base URL for the service API, displayed to users for API access",
default="",
)
APP_WEB_URL: str = Field(
description="WebApp Url prefix." "used to display WebAPP API Base Url to the front-end.",
description="Base URL for the web application, used for frontend references",
default="",
)
class FileAccessConfig(BaseSettings):
"""
File Access configs
Configuration for file access and handling
"""
FILES_URL: str = Field(
description="File preview or download Url prefix."
" used to display File preview or download Url to the front-end or as Multi-model inputs;"
description="Base URL for file preview or download,"
" used for frontend display and multi-model inputs"
"Url is signed and has expiration time.",
validation_alias=AliasChoices("FILES_URL", "CONSOLE_API_URL"),
alias_priority=1,
@@ -154,49 +152,49 @@ class FileAccessConfig(BaseSettings):
)
FILES_ACCESS_TIMEOUT: int = Field(
description="timeout in seconds for file accessing",
description="Expiration time in seconds for file access URLs",
default=300,
)
class FileUploadConfig(BaseSettings):
"""
File Uploading configs
Configuration for file upload limitations
"""
UPLOAD_FILE_SIZE_LIMIT: NonNegativeInt = Field(
description="size limit in Megabytes for uploading files",
description="Maximum allowed file size for uploads in megabytes",
default=15,
)
UPLOAD_FILE_BATCH_LIMIT: NonNegativeInt = Field(
description="batch size limit for uploading files",
description="Maximum number of files allowed in a single upload batch",
default=5,
)
UPLOAD_IMAGE_FILE_SIZE_LIMIT: NonNegativeInt = Field(
description="image file size limit in Megabytes for uploading files",
description="Maximum allowed image file size for uploads in megabytes",
default=10,
)
BATCH_UPLOAD_LIMIT: NonNegativeInt = Field(
description="", # todo: to be clarified
description="Maximum number of files allowed in a batch upload operation",
default=20,
)
class HttpConfig(BaseSettings):
"""
HTTP configs
HTTP-related configurations for the application
"""
API_COMPRESSION_ENABLED: bool = Field(
description="whether to enable HTTP response compression of gzip",
description="Enable or disable gzip compression for HTTP responses",
default=False,
)
inner_CONSOLE_CORS_ALLOW_ORIGINS: str = Field(
description="",
description="Comma-separated list of allowed origins for CORS in the console",
validation_alias=AliasChoices("CONSOLE_CORS_ALLOW_ORIGINS", "CONSOLE_WEB_URL"),
default="",
)
@@ -218,359 +216,360 @@ class HttpConfig(BaseSettings):
return self.inner_WEB_API_CORS_ALLOW_ORIGINS.split(",")
HTTP_REQUEST_MAX_CONNECT_TIMEOUT: Annotated[
PositiveInt, Field(ge=10, description="connect timeout in seconds for HTTP request")
PositiveInt, Field(ge=10, description="Maximum connection timeout in seconds for HTTP requests")
] = 10
HTTP_REQUEST_MAX_READ_TIMEOUT: Annotated[
PositiveInt, Field(ge=60, description="read timeout in seconds for HTTP request")
PositiveInt, Field(ge=60, description="Maximum read timeout in seconds for HTTP requests")
] = 60
HTTP_REQUEST_MAX_WRITE_TIMEOUT: Annotated[
PositiveInt, Field(ge=10, description="read timeout in seconds for HTTP request")
PositiveInt, Field(ge=10, description="Maximum write timeout in seconds for HTTP requests")
] = 20
HTTP_REQUEST_NODE_MAX_BINARY_SIZE: PositiveInt = Field(
description="",
description="Maximum allowed size in bytes for binary data in HTTP requests",
default=10 * 1024 * 1024,
)
HTTP_REQUEST_NODE_MAX_TEXT_SIZE: PositiveInt = Field(
description="",
description="Maximum allowed size in bytes for text data in HTTP requests",
default=1 * 1024 * 1024,
)
SSRF_PROXY_HTTP_URL: Optional[str] = Field(
description="HTTP URL for SSRF proxy",
description="Proxy URL for HTTP requests to prevent Server-Side Request Forgery (SSRF)",
default=None,
)
SSRF_PROXY_HTTPS_URL: Optional[str] = Field(
description="HTTPS URL for SSRF proxy",
description="Proxy URL for HTTPS requests to prevent Server-Side Request Forgery (SSRF)",
default=None,
)
class InnerAPIConfig(BaseSettings):
"""
Inner API configs
Configuration for internal API functionality
"""
INNER_API: bool = Field(
description="whether to enable the inner API",
description="Enable or disable the internal API",
default=False,
)
INNER_API_KEY: Optional[str] = Field(
description="The inner API key is used to authenticate the inner API",
description="API key for accessing the internal API",
default=None,
)
class LoggingConfig(BaseSettings):
"""
Logging configs
Configuration for application logging
"""
LOG_LEVEL: str = Field(
description="Log output level, default to INFO." "It is recommended to set it to ERROR for production.",
description="Logging level, default to INFO. Set to ERROR for production environments.",
default="INFO",
)
LOG_FILE: Optional[str] = Field(
description="logging output file path",
description="File path for log output.",
default=None,
)
LOG_FORMAT: str = Field(
description="log format",
description="Format string for log messages",
default="%(asctime)s.%(msecs)03d %(levelname)s [%(threadName)s] [%(filename)s:%(lineno)d] - %(message)s",
)
LOG_DATEFORMAT: Optional[str] = Field(
description="log date format",
description="Date format string for log timestamps",
default=None,
)
LOG_TZ: Optional[str] = Field(
description="specify log timezone, eg: America/New_York",
description="Timezone for log timestamps (e.g., 'America/New_York')",
default=None,
)
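A minimal sketch, not taken from the diff, of wiring these fields into the standard library logger:

import logging

def setup_logging(level: str, fmt: str, datefmt: str | None = None) -> None:
    # Apply LOG_LEVEL / LOG_FORMAT / LOG_DATEFORMAT to the root logger.
    logging.basicConfig(level=getattr(logging, level.upper(), logging.INFO), format=fmt, datefmt=datefmt)

setup_logging("INFO", "%(asctime)s.%(msecs)03d %(levelname)s [%(threadName)s] [%(filename)s:%(lineno)d] - %(message)s")
logging.getLogger(__name__).info("logging configured")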
class ModelLoadBalanceConfig(BaseSettings):
"""
Model load balance configs
Configuration for model load balancing
"""
MODEL_LB_ENABLED: bool = Field(
description="whether to enable model load balancing",
description="Enable or disable load balancing for models",
default=False,
)
class BillingConfig(BaseSettings):
"""
Platform Billing Configurations
Configuration for platform billing features
"""
BILLING_ENABLED: bool = Field(
description="whether to enable billing",
description="Enable or disable billing functionality",
default=False,
)
class UpdateConfig(BaseSettings):
"""
Update configs
Configuration for application update checks
"""
CHECK_UPDATE_URL: str = Field(
description="url for checking updates",
description="URL to check for application updates",
default="https://updates.dify.ai",
)
class WorkflowConfig(BaseSettings):
"""
Workflow feature configs
Configuration for workflow execution
"""
WORKFLOW_MAX_EXECUTION_STEPS: PositiveInt = Field(
description="max execution steps in single workflow execution",
description="Maximum number of steps allowed in a single workflow execution",
default=500,
)
WORKFLOW_MAX_EXECUTION_TIME: PositiveInt = Field(
description="max execution time in seconds in single workflow execution",
description="Maximum execution time in seconds for a single workflow",
default=1200,
)
WORKFLOW_CALL_MAX_DEPTH: PositiveInt = Field(
description="max depth of calling in single workflow execution",
description="Maximum allowed depth for nested workflow calls",
default=5,
)
MAX_VARIABLE_SIZE: PositiveInt = Field(
description="The maximum size in bytes of a variable. default to 5KB.",
description="Maximum size in bytes for a single variable in workflows. Default to 5KB.",
default=5 * 1024,
)
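A hypothetical helper, only to pin down the unit of MAX_VARIABLE_SIZE (bytes of the serialized value); this is not the project's actual check.

import json

MAX_VARIABLE_SIZE = 5 * 1024  # bytes, matching the default above

def check_variable_size(value) -> None:
    size = len(json.dumps(value).encode("utf-8"))
    if size > MAX_VARIABLE_SIZE:
        raise ValueError(f"variable is {size} bytes; limit is {MAX_VARIABLE_SIZE}")

check_variable_size({"answer": "ok"})  # passes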
class OAuthConfig(BaseSettings):
"""
oauth configs
Configuration for OAuth authentication
"""
OAUTH_REDIRECT_PATH: str = Field(
description="redirect path for OAuth",
description="Redirect path for OAuth authentication callbacks",
default="/console/api/oauth/authorize",
)
GITHUB_CLIENT_ID: Optional[str] = Field(
description="GitHub client id for OAuth",
description="GitHub OAuth client secret",
default=None,
)
GITHUB_CLIENT_SECRET: Optional[str] = Field(
description="GitHub client secret key for OAuth",
description="GitHub OAuth client secret",
default=None,
)
GOOGLE_CLIENT_ID: Optional[str] = Field(
description="Google client id for OAuth",
description="Google OAuth client ID",
default=None,
)
GOOGLE_CLIENT_SECRET: Optional[str] = Field(
description="Google client secret key for OAuth",
description="Google OAuth client secret",
default=None,
)
class ModerationConfig(BaseSettings):
"""
Moderation in app configs.
Configuration for content moderation
"""
MODERATION_BUFFER_SIZE: PositiveInt = Field(
description="buffer size for moderation",
description="Size of the buffer for content moderation processing",
default=300,
)
class ToolConfig(BaseSettings):
"""
Tool configs
Configuration for tool management
"""
TOOL_ICON_CACHE_MAX_AGE: PositiveInt = Field(
description="max age in seconds for tool icon caching",
description="Maximum age in seconds for caching tool icons",
default=3600,
)
class MailConfig(BaseSettings):
"""
Mail Configurations
Configuration for email services
"""
MAIL_TYPE: Optional[str] = Field(
description="Mail provider type name, default to None, available values are `smtp` and `resend`.",
description="Email service provider type ('smtp' or 'resend'), default to None.",
default=None,
)
MAIL_DEFAULT_SEND_FROM: Optional[str] = Field(
description="default email address for sending from ",
description="Default email address to use as the sender",
default=None,
)
RESEND_API_KEY: Optional[str] = Field(
description="API key for Resend",
description="API key for Resend email service",
default=None,
)
RESEND_API_URL: Optional[str] = Field(
description="API URL for Resend",
description="API URL for Resend email service",
default=None,
)
SMTP_SERVER: Optional[str] = Field(
description="smtp server host",
description="SMTP server hostname",
default=None,
)
SMTP_PORT: Optional[int] = Field(
description="smtp server port",
description="SMTP server port number",
default=465,
)
SMTP_USERNAME: Optional[str] = Field(
description="smtp server username",
description="Username for SMTP authentication",
default=None,
)
SMTP_PASSWORD: Optional[str] = Field(
description="smtp server password",
description="Password for SMTP authentication",
default=None,
)
SMTP_USE_TLS: bool = Field(
description="whether to use TLS connection to smtp server",
description="Enable TLS encryption for SMTP connections",
default=False,
)
SMTP_OPPORTUNISTIC_TLS: bool = Field(
description="whether to use opportunistic TLS connection to smtp server",
description="Enable opportunistic TLS for SMTP connections",
default=False,
)
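The two TLS flags map onto smtplib's two modes: implicit TLS from the first byte versus a plain connection upgraded with STARTTLS. A minimal sketch under that assumption, not the project's mail client:

import smtplib

def open_smtp(server: str, port: int, use_tls: bool, opportunistic_tls: bool) -> smtplib.SMTP:
    if use_tls and not opportunistic_tls:
        # Implicit TLS: encrypted from the start (typically port 465).
        return smtplib.SMTP_SSL(server, port)
    conn = smtplib.SMTP(server, port)
    if use_tls and opportunistic_tls:
        # Opportunistic TLS: plain connect, then upgrade (typically port 587).
        conn.starttls()
    return conn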
class RagEtlConfig(BaseSettings):
"""
RAG ETL Configurations.
Configuration for RAG ETL processes
"""
ETL_TYPE: str = Field(
description="RAG ETL type name, default to `dify`, available values are `dify` and `Unstructured`. ",
description="RAG ETL type ('dify' or 'Unstructured'), default to 'dify'",
default="dify",
)
KEYWORD_DATA_SOURCE_TYPE: str = Field(
description="source type for keyword data, default to `database`, available values are `database` .",
description="Data source type for keyword extraction"
" ('database' or other supported types), default to 'database'",
default="database",
)
UNSTRUCTURED_API_URL: Optional[str] = Field(
description="API URL for Unstructured",
description="API URL for Unstructured.io service",
default=None,
)
UNSTRUCTURED_API_KEY: Optional[str] = Field(
description="API key for Unstructured",
description="API key for Unstructured.io service",
default=None,
)
class DataSetConfig(BaseSettings):
"""
Dataset configs
Configuration for dataset management
"""
CLEAN_DAY_SETTING: PositiveInt = Field(
description="interval in days for cleaning up dataset",
description="Interval in days for dataset cleanup operations",
default=30,
)
DATASET_OPERATOR_ENABLED: bool = Field(
description="whether to enable dataset operator",
description="Enable or disable dataset operator functionality",
default=False,
)
class WorkspaceConfig(BaseSettings):
"""
Workspace configs
Configuration for workspace management
"""
INVITE_EXPIRY_HOURS: PositiveInt = Field(
description="workspaces invitation expiration in hours",
description="Expiration time in hours for workspace invitation links",
default=72,
)
class IndexingConfig(BaseSettings):
"""
Indexing configs.
Configuration for indexing operations
"""
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: PositiveInt = Field(
description="max segmentation token length for indexing",
description="Maximum token length for text segmentation during indexing",
default=1000,
)
class ImageFormatConfig(BaseSettings):
MULTIMODAL_SEND_IMAGE_FORMAT: str = Field(
description="multi model send image format, support base64, url, default is base64",
description="Format for sending images in multimodal contexts ('base64' or 'url'), default is base64",
default="base64",
)
class CeleryBeatConfig(BaseSettings):
CELERY_BEAT_SCHEDULER_TIME: int = Field(
description="the time of the celery scheduler, default to 1 day",
description="Interval in days for Celery Beat scheduler execution, default to 1 day",
default=1,
)
class PositionConfig(BaseSettings):
POSITION_PROVIDER_PINS: str = Field(
description="The heads of model providers",
description="Comma-separated list of pinned model providers",
default="",
)
POSITION_PROVIDER_INCLUDES: str = Field(
description="The included model providers",
description="Comma-separated list of included model providers",
default="",
)
POSITION_PROVIDER_EXCLUDES: str = Field(
description="The excluded model providers",
description="Comma-separated list of excluded model providers",
default="",
)
POSITION_TOOL_PINS: str = Field(
description="The heads of tools",
description="Comma-separated list of pinned tools",
default="",
)
POSITION_TOOL_INCLUDES: str = Field(
description="The included tools",
description="Comma-separated list of included tools",
default="",
)
POSITION_TOOL_EXCLUDES: str = Field(
description="The excluded tools",
description="Comma-separated list of excluded tools",
default="",
)
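All six POSITION_* fields share one comma-separated convention; a small sketch of the parsing the descriptions imply (helper name hypothetical):

def parse_position_list(raw: str) -> list[str]:
    # "openai, anthropic ,," -> ["openai", "anthropic"]; "" -> [].
    return [item.strip() for item in raw.split(",") if item.strip()]

assert parse_position_list("") == []
assert parse_position_list("openai, anthropic") == ["openai", "anthropic"]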

View File

@@ -6,31 +6,31 @@ from pydantic_settings import BaseSettings
class HostedOpenAiConfig(BaseSettings):
"""
Hosted OpenAI service config
Configuration for hosted OpenAI service
"""
HOSTED_OPENAI_API_KEY: Optional[str] = Field(
description="",
description="API key for hosted OpenAI service",
default=None,
)
HOSTED_OPENAI_API_BASE: Optional[str] = Field(
description="",
description="Base URL for hosted OpenAI API",
default=None,
)
HOSTED_OPENAI_API_ORGANIZATION: Optional[str] = Field(
description="",
description="Organization ID for hosted OpenAI service",
default=None,
)
HOSTED_OPENAI_TRIAL_ENABLED: bool = Field(
description="",
description="Enable trial access to hosted OpenAI service",
default=False,
)
HOSTED_OPENAI_TRIAL_MODELS: str = Field(
description="",
description="Comma-separated list of available models for trial access",
default="gpt-3.5-turbo,"
"gpt-3.5-turbo-1106,"
"gpt-3.5-turbo-instruct,"
@@ -42,17 +42,17 @@ class HostedOpenAiConfig(BaseSettings):
)
HOSTED_OPENAI_QUOTA_LIMIT: NonNegativeInt = Field(
description="",
description="Quota limit for hosted OpenAI service usage",
default=200,
)
HOSTED_OPENAI_PAID_ENABLED: bool = Field(
description="",
description="Enable paid access to hosted OpenAI service",
default=False,
)
HOSTED_OPENAI_PAID_MODELS: str = Field(
description="",
description="Comma-separated list of available models for paid access",
default="gpt-4,"
"gpt-4-turbo-preview,"
"gpt-4-turbo-2024-04-09,"
@@ -71,124 +71,122 @@ class HostedOpenAiConfig(BaseSettings):
class HostedAzureOpenAiConfig(BaseSettings):
"""
Hosted OpenAI service config
Configuration for hosted Azure OpenAI service
"""
HOSTED_AZURE_OPENAI_ENABLED: bool = Field(
description="",
description="Enable hosted Azure OpenAI service",
default=False,
)
HOSTED_AZURE_OPENAI_API_KEY: Optional[str] = Field(
description="",
description="API key for hosted Azure OpenAI service",
default=None,
)
HOSTED_AZURE_OPENAI_API_BASE: Optional[str] = Field(
description="",
description="Base URL for hosted Azure OpenAI API",
default=None,
)
HOSTED_AZURE_OPENAI_QUOTA_LIMIT: NonNegativeInt = Field(
description="",
description="Quota limit for hosted Azure OpenAI service usage",
default=200,
)
class HostedAnthropicConfig(BaseSettings):
"""
Hosted Azure OpenAI service config
Configuration for hosted Anthropic service
"""
HOSTED_ANTHROPIC_API_BASE: Optional[str] = Field(
description="",
description="Base URL for hosted Anthropic API",
default=None,
)
HOSTED_ANTHROPIC_API_KEY: Optional[str] = Field(
description="",
description="API key for hosted Anthropic service",
default=None,
)
HOSTED_ANTHROPIC_TRIAL_ENABLED: bool = Field(
description="",
description="Enable trial access to hosted Anthropic service",
default=False,
)
HOSTED_ANTHROPIC_QUOTA_LIMIT: NonNegativeInt = Field(
description="",
description="Quota limit for hosted Anthropic service usage",
default=600000,
)
HOSTED_ANTHROPIC_PAID_ENABLED: bool = Field(
description="",
description="Enable paid access to hosted Anthropic service",
default=False,
)
class HostedMinmaxConfig(BaseSettings):
"""
Hosted Minmax service config
Configuration for hosted Minmax service
"""
HOSTED_MINIMAX_ENABLED: bool = Field(
description="",
description="Enable hosted Minmax service",
default=False,
)
class HostedSparkConfig(BaseSettings):
"""
Hosted Spark service config
Configuration for hosted Spark service
"""
HOSTED_SPARK_ENABLED: bool = Field(
description="",
description="Enable hosted Spark service",
default=False,
)
class HostedZhipuAIConfig(BaseSettings):
"""
Hosted Minmax service config
Configuration for hosted ZhipuAI service
"""
HOSTED_ZHIPUAI_ENABLED: bool = Field(
description="",
description="Enable hosted ZhipuAI service",
default=False,
)
class HostedModerationConfig(BaseSettings):
"""
Hosted Moderation service config
Configuration for hosted Moderation service
"""
HOSTED_MODERATION_ENABLED: bool = Field(
description="",
description="Enable hosted Moderation service",
default=False,
)
HOSTED_MODERATION_PROVIDERS: str = Field(
description="",
description="Comma-separated list of moderation providers",
default="",
)
class HostedFetchAppTemplateConfig(BaseSettings):
"""
Hosted Moderation service config
Configuration for fetching app templates
"""
HOSTED_FETCH_APP_TEMPLATES_MODE: str = Field(
description="the mode for fetching app templates,"
" default to remote,"
" available values: remote, db, builtin",
description="Mode for fetching app templates: remote, db, or builtin" " default to remote,",
default="remote",
)
HOSTED_FETCH_APP_TEMPLATES_REMOTE_DOMAIN: str = Field(
description="the domain for fetching remote app templates",
description="Domain for fetching remote app templates",
default="https://tmpl.dify.ai",
)
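A hedged sketch of the three-way dispatch these two fields describe; the function and return values are illustrative only:

def app_template_source(mode: str, remote_domain: str) -> str:
    # 'remote' fetches from the configured domain; 'db' and 'builtin' are local sources.
    if mode == "remote":
        return f"{remote_domain}/apps/templates"
    if mode in ("db", "builtin"):
        return mode
    raise ValueError(f"unknown HOSTED_FETCH_APP_TEMPLATES_MODE: {mode}")

print(app_template_source("remote", "https://tmpl.dify.ai"))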

View File

@@ -31,70 +31,71 @@ from configs.middleware.vdb.weaviate_config import WeaviateConfig
class StorageConfig(BaseSettings):
STORAGE_TYPE: str = Field(
description="storage type,"
" default to `local`,"
" available values are `local`, `s3`, `azure-blob`, `aliyun-oss`, `google-storage`.",
description="Type of storage to use."
" Options: 'local', 's3', 'azure-blob', 'aliyun-oss', 'google-storage'. Default is 'local'.",
default="local",
)
STORAGE_LOCAL_PATH: str = Field(
description="local storage path",
description="Path for local storage when STORAGE_TYPE is set to 'local'.",
default="storage",
)
class VectorStoreConfig(BaseSettings):
VECTOR_STORE: Optional[str] = Field(
description="vector store type",
description="Type of vector store to use for efficient similarity search."
" Set to None if not using a vector store.",
default=None,
)
class KeywordStoreConfig(BaseSettings):
KEYWORD_STORE: str = Field(
description="keyword store type",
description="Method for keyword extraction and storage."
" Default is 'jieba', a Chinese text segmentation library.",
default="jieba",
)
class DatabaseConfig:
DB_HOST: str = Field(
description="db host",
description="Hostname or IP address of the database server.",
default="localhost",
)
DB_PORT: PositiveInt = Field(
description="db port",
description="Port number for database connection.",
default=5432,
)
DB_USERNAME: str = Field(
description="db username",
description="Username for database authentication.",
default="postgres",
)
DB_PASSWORD: str = Field(
description="db password",
description="Password for database authentication.",
default="",
)
DB_DATABASE: str = Field(
description="db database",
description="Name of the database to connect to.",
default="dify",
)
DB_CHARSET: str = Field(
description="db charset",
description="Character set for database connection.",
default="",
)
DB_EXTRAS: str = Field(
description="db extras options. Example: keepalives_idle=60&keepalives=1",
description="Additional database connection parameters. Example: 'keepalives_idle=60&keepalives=1'",
default="",
)
SQLALCHEMY_DATABASE_URI_SCHEME: str = Field(
description="db uri scheme",
description="Database URI scheme for SQLAlchemy connection.",
default="postgresql",
)
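Taken together, the DB_* fields compose a SQLAlchemy connection URI; a minimal sketch of that composition, assuming the fields above (not the project's exact computed property):

def build_db_uri(scheme: str, user: str, password: str, host: str, port: int, database: str, extras: str = "") -> str:
    # e.g. postgresql://postgres:secret@localhost:5432/dify?keepalives_idle=60&keepalives=1
    uri = f"{scheme}://{user}:{password}@{host}:{port}/{database}"
    return f"{uri}?{extras}" if extras else uri

print(build_db_uri("postgresql", "postgres", "secret", "localhost", 5432, "dify"))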
@@ -112,27 +113,27 @@ class DatabaseConfig:
)
SQLALCHEMY_POOL_SIZE: NonNegativeInt = Field(
description="pool size of SqlAlchemy",
description="Maximum number of database connections in the pool.",
default=30,
)
SQLALCHEMY_MAX_OVERFLOW: NonNegativeInt = Field(
description="max overflows for SqlAlchemy",
description="Maximum number of connections that can be created beyond the pool_size.",
default=10,
)
SQLALCHEMY_POOL_RECYCLE: NonNegativeInt = Field(
description="SqlAlchemy pool recycle",
description="Number of seconds after which a connection is automatically recycled.",
default=3600,
)
SQLALCHEMY_POOL_PRE_PING: bool = Field(
description="whether to enable pool pre-ping in SqlAlchemy",
description="If True, enables connection pool pre-ping feature to check connections.",
default=False,
)
SQLALCHEMY_ECHO: bool | str = Field(
description="whether to enable SqlAlchemy echo",
description="If True, SQLAlchemy will log all SQL statements.",
default=False,
)
@@ -150,27 +151,27 @@ class DatabaseConfig:
class CeleryConfig(DatabaseConfig):
CELERY_BACKEND: str = Field(
description="Celery backend, available values are `database`, `redis`",
description="Backend for Celery task results. Options: 'database', 'redis'.",
default="database",
)
CELERY_BROKER_URL: Optional[str] = Field(
description="CELERY_BROKER_URL",
description="URL of the message broker for Celery tasks.",
default=None,
)
CELERY_USE_SENTINEL: Optional[bool] = Field(
description="Whether to use Redis Sentinel mode",
description="Whether to use Redis Sentinel for high availability.",
default=False,
)
CELERY_SENTINEL_MASTER_NAME: Optional[str] = Field(
description="Redis Sentinel master name",
description="Name of the Redis Sentinel master.",
default=None,
)
CELERY_SENTINEL_SOCKET_TIMEOUT: Optional[PositiveFloat] = Field(
description="Redis Sentinel socket timeout",
description="Timeout for Redis Sentinel socket operations in seconds.",
default=0.1,
)

View File

@@ -6,65 +6,65 @@ from pydantic_settings import BaseSettings
class RedisConfig(BaseSettings):
"""
Redis configs
Configuration settings for Redis connection
"""
REDIS_HOST: str = Field(
description="Redis host",
description="Hostname or IP address of the Redis server",
default="localhost",
)
REDIS_PORT: PositiveInt = Field(
description="Redis port",
description="Port number on which the Redis server is listening",
default=6379,
)
REDIS_USERNAME: Optional[str] = Field(
description="Redis username",
description="Username for Redis authentication (if required)",
default=None,
)
REDIS_PASSWORD: Optional[str] = Field(
description="Redis password",
description="Password for Redis authentication (if required)",
default=None,
)
REDIS_DB: NonNegativeInt = Field(
description="Redis database id, default to 0",
description="Redis database number to use (0-15)",
default=0,
)
REDIS_USE_SSL: bool = Field(
description="whether to use SSL for Redis connection",
description="Enable SSL/TLS for the Redis connection",
default=False,
)
REDIS_USE_SENTINEL: Optional[bool] = Field(
description="Whether to use Redis Sentinel mode",
description="Enable Redis Sentinel mode for high availability",
default=False,
)
REDIS_SENTINELS: Optional[str] = Field(
description="Redis Sentinel nodes",
description="Comma-separated list of Redis Sentinel nodes (host:port)",
default=None,
)
REDIS_SENTINEL_SERVICE_NAME: Optional[str] = Field(
description="Redis Sentinel service name",
description="Name of the Redis Sentinel service to monitor",
default=None,
)
REDIS_SENTINEL_USERNAME: Optional[str] = Field(
description="Redis Sentinel username",
description="Username for Redis Sentinel authentication (if required)",
default=None,
)
REDIS_SENTINEL_PASSWORD: Optional[str] = Field(
description="Redis Sentinel password",
description="Password for Redis Sentinel authentication (if required)",
default=None,
)
REDIS_SENTINEL_SOCKET_TIMEOUT: Optional[PositiveFloat] = Field(
description="Redis Sentinel socket timeout",
description="Socket timeout in seconds for Redis Sentinel connections",
default=0.1,
)
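These fields line up with redis-py's client and Sentinel APIs; a minimal sketch, assuming a cfg object carrying the fields above (not the project's factory):

from redis import Redis
from redis.sentinel import Sentinel

def make_redis(cfg) -> Redis:
    if cfg.REDIS_USE_SENTINEL:
        # REDIS_SENTINELS is "host1:26379,host2:26379"; ask the sentinels for the current master.
        nodes = [(host, int(port)) for host, port in (s.split(":") for s in cfg.REDIS_SENTINELS.split(","))]
        sentinel = Sentinel(nodes, socket_timeout=cfg.REDIS_SENTINEL_SOCKET_TIMEOUT)
        return sentinel.master_for(cfg.REDIS_SENTINEL_SERVICE_NAME, db=cfg.REDIS_DB, password=cfg.REDIS_PASSWORD)
    return Redis(host=cfg.REDIS_HOST, port=cfg.REDIS_PORT, db=cfg.REDIS_DB,
                 username=cfg.REDIS_USERNAME, password=cfg.REDIS_PASSWORD, ssl=cfg.REDIS_USE_SSL)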

View File

@@ -6,40 +6,40 @@ from pydantic_settings import BaseSettings
class AliyunOSSStorageConfig(BaseSettings):
"""
Aliyun storage configs
Configuration settings for Aliyun Object Storage Service (OSS)
"""
ALIYUN_OSS_BUCKET_NAME: Optional[str] = Field(
description="Aliyun OSS bucket name",
description="Name of the Aliyun OSS bucket to store and retrieve objects",
default=None,
)
ALIYUN_OSS_ACCESS_KEY: Optional[str] = Field(
description="Aliyun OSS access key",
description="Access key ID for authenticating with Aliyun OSS",
default=None,
)
ALIYUN_OSS_SECRET_KEY: Optional[str] = Field(
description="Aliyun OSS secret key",
description="Secret access key for authenticating with Aliyun OSS",
default=None,
)
ALIYUN_OSS_ENDPOINT: Optional[str] = Field(
description="Aliyun OSS endpoint URL",
description="URL of the Aliyun OSS endpoint for your chosen region",
default=None,
)
ALIYUN_OSS_REGION: Optional[str] = Field(
description="Aliyun OSS region",
description="Aliyun OSS region where your bucket is located (e.g., 'oss-cn-hangzhou')",
default=None,
)
ALIYUN_OSS_AUTH_VERSION: Optional[str] = Field(
description="Aliyun OSS authentication version",
description="Version of the authentication protocol to use with Aliyun OSS (e.g., 'v4')",
default=None,
)
ALIYUN_OSS_PATH: Optional[str] = Field(
description="Aliyun OSS path",
description="Base path within the bucket to store objects (e.g., 'my-app-data/')",
default=None,
)

View File

@@ -6,40 +6,40 @@ from pydantic_settings import BaseSettings
class S3StorageConfig(BaseSettings):
"""
S3 storage configs
Configuration settings for S3-compatible object storage
"""
S3_ENDPOINT: Optional[str] = Field(
description="S3 storage endpoint",
description="URL of the S3-compatible storage endpoint (e.g., 'https://s3.amazonaws.com')",
default=None,
)
S3_REGION: Optional[str] = Field(
description="S3 storage region",
description="Region where the S3 bucket is located (e.g., 'us-east-1')",
default=None,
)
S3_BUCKET_NAME: Optional[str] = Field(
description="S3 storage bucket name",
description="Name of the S3 bucket to store and retrieve objects",
default=None,
)
S3_ACCESS_KEY: Optional[str] = Field(
description="S3 storage access key",
description="Access key ID for authenticating with the S3 service",
default=None,
)
S3_SECRET_KEY: Optional[str] = Field(
description="S3 storage secret key",
description="Secret access key for authenticating with the S3 service",
default=None,
)
S3_ADDRESS_STYLE: str = Field(
description="S3 storage address style",
description="S3 addressing style: 'auto', 'path', or 'virtual'",
default="auto",
)
S3_USE_AWS_MANAGED_IAM: bool = Field(
description="whether to use aws managed IAM for S3",
description="Use AWS managed IAM roles for authentication instead of access/secret keys",
default=False,
)

View File

@@ -6,25 +6,25 @@ from pydantic_settings import BaseSettings
class AzureBlobStorageConfig(BaseSettings):
"""
Azure Blob storage configs
Configuration settings for Azure Blob Storage
"""
AZURE_BLOB_ACCOUNT_NAME: Optional[str] = Field(
description="Azure Blob account name",
description="Name of the Azure Storage account (e.g., 'mystorageaccount')",
default=None,
)
AZURE_BLOB_ACCOUNT_KEY: Optional[str] = Field(
description="Azure Blob account key",
description="Access key for authenticating with the Azure Storage account",
default=None,
)
AZURE_BLOB_CONTAINER_NAME: Optional[str] = Field(
description="Azure Blob container name",
description="Name of the Azure Blob container to store and retrieve objects",
default=None,
)
AZURE_BLOB_ACCOUNT_URL: Optional[str] = Field(
description="Azure Blob account URL",
description="URL of the Azure Blob storage endpoint (e.g., 'https://mystorageaccount.blob.core.windows.net')",
default=None,
)

View File

@@ -6,15 +6,15 @@ from pydantic_settings import BaseSettings
class GoogleCloudStorageConfig(BaseSettings):
"""
Google Cloud storage configs
Configuration settings for Google Cloud Storage
"""
GOOGLE_STORAGE_BUCKET_NAME: Optional[str] = Field(
description="Google Cloud storage bucket name",
description="Name of the Google Cloud Storage bucket to store and retrieve objects (e.g., 'my-gcs-bucket')",
default=None,
)
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: Optional[str] = Field(
description="Google Cloud storage service account json base64",
description="Base64-encoded JSON key file for Google Cloud service account authentication",
default=None,
)

View File

@@ -5,25 +5,25 @@ from pydantic import BaseModel, Field
class HuaweiCloudOBSStorageConfig(BaseModel):
"""
Huawei Cloud OBS storage configs
Configuration settings for Huawei Cloud Object Storage Service (OBS)
"""
HUAWEI_OBS_BUCKET_NAME: Optional[str] = Field(
description="Huawei Cloud OBS bucket name",
description="Name of the Huawei Cloud OBS bucket to store and retrieve objects (e.g., 'my-obs-bucket')",
default=None,
)
HUAWEI_OBS_ACCESS_KEY: Optional[str] = Field(
description="Huawei Cloud OBS Access key",
description="Access Key ID for authenticating with Huawei Cloud OBS",
default=None,
)
HUAWEI_OBS_SECRET_KEY: Optional[str] = Field(
description="Huawei Cloud OBS Secret key",
description="Secret Access Key for authenticating with Huawei Cloud OBS",
default=None,
)
HUAWEI_OBS_SERVER: Optional[str] = Field(
description="Huawei Cloud OBS server URL",
description="Endpoint URL for Huawei Cloud OBS (e.g., 'https://obs.cn-north-4.myhuaweicloud.com')",
default=None,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class OCIStorageConfig(BaseSettings):
"""
OCI storage configs
Configuration settings for Oracle Cloud Infrastructure (OCI) Object Storage
"""
OCI_ENDPOINT: Optional[str] = Field(
description="OCI storage endpoint",
description="URL of the OCI Object Storage endpoint (e.g., 'https://objectstorage.us-phoenix-1.oraclecloud.com')",
default=None,
)
OCI_REGION: Optional[str] = Field(
description="OCI storage region",
description="OCI region where the bucket is located (e.g., 'us-phoenix-1')",
default=None,
)
OCI_BUCKET_NAME: Optional[str] = Field(
description="OCI storage bucket name",
description="Name of the OCI Object Storage bucket to store and retrieve objects (e.g., 'my-oci-bucket')",
default=None,
)
OCI_ACCESS_KEY: Optional[str] = Field(
description="OCI storage access key",
description="Access key (also known as API key) for authenticating with OCI Object Storage",
default=None,
)
OCI_SECRET_KEY: Optional[str] = Field(
description="OCI storage secret key",
description="Secret key associated with the access key for authenticating with OCI Object Storage",
default=None,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class TencentCloudCOSStorageConfig(BaseSettings):
"""
Tencent Cloud COS storage configs
Configuration settings for Tencent Cloud Object Storage (COS)
"""
TENCENT_COS_BUCKET_NAME: Optional[str] = Field(
description="Tencent Cloud COS bucket name",
description="Name of the Tencent Cloud COS bucket to store and retrieve objects",
default=None,
)
TENCENT_COS_REGION: Optional[str] = Field(
description="Tencent Cloud COS region",
description="Tencent Cloud region where the COS bucket is located (e.g., 'ap-guangzhou')",
default=None,
)
TENCENT_COS_SECRET_ID: Optional[str] = Field(
description="Tencent Cloud COS secret id",
description="SecretId for authenticating with Tencent Cloud COS (part of API credentials)",
default=None,
)
TENCENT_COS_SECRET_KEY: Optional[str] = Field(
description="Tencent Cloud COS secret key",
description="SecretKey for authenticating with Tencent Cloud COS (part of API credentials)",
default=None,
)
TENCENT_COS_SCHEME: Optional[str] = Field(
description="Tencent Cloud COS scheme",
description="Protocol scheme for COS requests: 'https' (recommended) or 'http'",
default=None,
)

View File

@@ -5,30 +5,30 @@ from pydantic import BaseModel, Field
class VolcengineTOSStorageConfig(BaseModel):
"""
Volcengine tos storage configs
Configuration settings for Volcengine Tinder Object Storage (TOS)
"""
VOLCENGINE_TOS_BUCKET_NAME: Optional[str] = Field(
description="Volcengine TOS Bucket Name",
description="Name of the Volcengine TOS bucket to store and retrieve objects (e.g., 'my-tos-bucket')",
default=None,
)
VOLCENGINE_TOS_ACCESS_KEY: Optional[str] = Field(
description="Volcengine TOS Access Key",
description="Access Key ID for authenticating with Volcengine TOS",
default=None,
)
VOLCENGINE_TOS_SECRET_KEY: Optional[str] = Field(
description="Volcengine TOS Secret Key",
description="Secret Access Key for authenticating with Volcengine TOS",
default=None,
)
VOLCENGINE_TOS_ENDPOINT: Optional[str] = Field(
description="Volcengine TOS Endpoint URL",
description="URL of the Volcengine TOS endpoint (e.g., 'https://tos-cn-beijing.volces.com')",
default=None,
)
VOLCENGINE_TOS_REGION: Optional[str] = Field(
description="Volcengine TOS Region",
description="Volcengine region where the TOS bucket is located (e.g., 'cn-beijing')",
default=None,
)

View File

@@ -5,33 +5,38 @@ from pydantic import BaseModel, Field
class AnalyticdbConfig(BaseModel):
"""
Configuration for connecting to AnalyticDB.
Configuration for connecting to Alibaba Cloud AnalyticDB for PostgreSQL.
Refer to the following documentation for details on obtaining credentials:
https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/getting-started/create-an-instance-instances-with-vector-engine-optimization-enabled
"""
ANALYTICDB_KEY_ID: Optional[str] = Field(
default=None, description="The Access Key ID provided by Alibaba Cloud for authentication."
default=None, description="The Access Key ID provided by Alibaba Cloud for API authentication."
)
ANALYTICDB_KEY_SECRET: Optional[str] = Field(
default=None, description="The Secret Access Key corresponding to the Access Key ID for secure access."
default=None, description="The Secret Access Key corresponding to the Access Key ID for secure API access."
)
ANALYTICDB_REGION_ID: Optional[str] = Field(
default=None, description="The region where the AnalyticDB instance is deployed (e.g., 'cn-hangzhou')."
default=None,
description="The region where the AnalyticDB instance is deployed (e.g., 'cn-hangzhou', 'ap-southeast-1').",
)
ANALYTICDB_INSTANCE_ID: Optional[str] = Field(
default=None,
description="The unique identifier of the AnalyticDB instance you want to connect to (e.g., 'gp-ab123456')..",
description="The unique identifier of the AnalyticDB instance you want to connect to.",
)
ANALYTICDB_ACCOUNT: Optional[str] = Field(
default=None, description="The account name used to log in to the AnalyticDB instance."
default=None,
description="The account name used to log in to the AnalyticDB instance"
" (usually the initial account created with the instance).",
)
ANALYTICDB_PASSWORD: Optional[str] = Field(
default=None, description="The password associated with the AnalyticDB account for authentication."
default=None, description="The password associated with the AnalyticDB account for database authentication."
)
ANALYTICDB_NAMESPACE: Optional[str] = Field(
default=None, description="The namespace within AnalyticDB for schema isolation."
default=None, description="The namespace within AnalyticDB for schema isolation (if using namespace feature)."
)
ANALYTICDB_NAMESPACE_PASSWORD: Optional[str] = Field(
default=None, description="The password for accessing the specified namespace within the AnalyticDB instance."
default=None,
description="The password for accessing the specified namespace within the AnalyticDB instance"
" (if namespace feature is enabled).",
)

View File

@@ -6,35 +6,35 @@ from pydantic_settings import BaseSettings
class ChromaConfig(BaseSettings):
"""
Chroma configs
Configuration settings for Chroma vector database
"""
CHROMA_HOST: Optional[str] = Field(
description="Chroma host",
description="Hostname or IP address of the Chroma server (e.g., 'localhost' or '192.168.1.100')",
default=None,
)
CHROMA_PORT: PositiveInt = Field(
description="Chroma port",
description="Port number on which the Chroma server is listening (default is 8000)",
default=8000,
)
CHROMA_TENANT: Optional[str] = Field(
description="Chroma database",
description="Tenant identifier for multi-tenancy support in Chroma",
default=None,
)
CHROMA_DATABASE: Optional[str] = Field(
description="Chroma database",
description="Name of the Chroma database to connect to",
default=None,
)
CHROMA_AUTH_PROVIDER: Optional[str] = Field(
description="Chroma authentication provider",
description="Authentication provider for Chroma (e.g., 'basic', 'token', or a custom provider)",
default=None,
)
CHROMA_AUTH_CREDENTIALS: Optional[str] = Field(
description="Chroma authentication credentials",
description="Authentication credentials for Chroma (format depends on the auth provider)",
default=None,
)

View File

@@ -6,25 +6,25 @@ from pydantic_settings import BaseSettings
class ElasticsearchConfig(BaseSettings):
"""
Elasticsearch configs
Configuration settings for Elasticsearch
"""
ELASTICSEARCH_HOST: Optional[str] = Field(
description="Elasticsearch host",
description="Hostname or IP address of the Elasticsearch server (e.g., 'localhost' or '192.168.1.100')",
default="127.0.0.1",
)
ELASTICSEARCH_PORT: PositiveInt = Field(
description="Elasticsearch port",
description="Port number on which the Elasticsearch server is listening (default is 9200)",
default=9200,
)
ELASTICSEARCH_USERNAME: Optional[str] = Field(
description="Elasticsearch username",
description="Username for authenticating with Elasticsearch (default is 'elastic')",
default="elastic",
)
ELASTICSEARCH_PASSWORD: Optional[str] = Field(
description="Elasticsearch password",
description="Password for authenticating with Elasticsearch (default is 'elastic')",
default="elastic",
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class MilvusConfig(BaseSettings):
"""
Milvus configs
Configuration settings for Milvus vector database
"""
MILVUS_URI: Optional[str] = Field(
description="Milvus uri",
description="URI for connecting to the Milvus server (e.g., 'http://localhost:19530' or 'https://milvus-instance.example.com:19530')",
default="http://127.0.0.1:19530",
)
MILVUS_TOKEN: Optional[str] = Field(
description="Milvus token",
description="Authentication token for Milvus, if token-based authentication is enabled",
default=None,
)
MILVUS_USER: Optional[str] = Field(
description="Milvus user",
description="Username for authenticating with Milvus, if username/password authentication is enabled",
default=None,
)
MILVUS_PASSWORD: Optional[str] = Field(
description="Milvus password",
description="Password for authenticating with Milvus, if username/password authentication is enabled",
default=None,
)
MILVUS_DATABASE: str = Field(
description="Milvus database, default to `default`",
description="Name of the Milvus database to connect to (default is 'default')",
default="default",
)

View File

@@ -3,35 +3,35 @@ from pydantic import BaseModel, Field, PositiveInt
class MyScaleConfig(BaseModel):
"""
MyScale configs
Configuration settings for MyScale vector database
"""
MYSCALE_HOST: str = Field(
description="MyScale host",
description="Hostname or IP address of the MyScale server (e.g., 'localhost' or 'myscale.example.com')",
default="localhost",
)
MYSCALE_PORT: PositiveInt = Field(
description="MyScale port",
description="Port number on which the MyScale server is listening (default is 8123)",
default=8123,
)
MYSCALE_USER: str = Field(
description="MyScale user",
description="Username for authenticating with MyScale (default is 'default')",
default="default",
)
MYSCALE_PASSWORD: str = Field(
description="MyScale password",
description="Password for authenticating with MyScale (default is an empty string)",
default="",
)
MYSCALE_DATABASE: str = Field(
description="MyScale database name",
description="Name of the MyScale database to connect to (default is 'default')",
default="default",
)
MYSCALE_FTS_PARAMS: str = Field(
description="MyScale fts index parameters",
description="Additional parameters for MyScale Full Text Search index)",
default="",
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class OpenSearchConfig(BaseSettings):
"""
OpenSearch configs
Configuration settings for OpenSearch
"""
OPENSEARCH_HOST: Optional[str] = Field(
description="OpenSearch host",
description="Hostname or IP address of the OpenSearch server (e.g., 'localhost' or 'opensearch.example.com')",
default=None,
)
OPENSEARCH_PORT: PositiveInt = Field(
description="OpenSearch port",
description="Port number on which the OpenSearch server is listening (default is 9200)",
default=9200,
)
OPENSEARCH_USER: Optional[str] = Field(
description="OpenSearch user",
description="Username for authenticating with OpenSearch",
default=None,
)
OPENSEARCH_PASSWORD: Optional[str] = Field(
description="OpenSearch password",
description="Password for authenticating with OpenSearch",
default=None,
)
OPENSEARCH_SECURE: bool = Field(
description="whether to use SSL connection for OpenSearch",
description="Whether to use SSL/TLS encrypted connection for OpenSearch (True for HTTPS, False for HTTP)",
default=False,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class OracleConfig(BaseSettings):
"""
ORACLE configs
Configuration settings for Oracle database
"""
ORACLE_HOST: Optional[str] = Field(
description="ORACLE host",
description="Hostname or IP address of the Oracle database server (e.g., 'localhost' or 'oracle.example.com')",
default=None,
)
ORACLE_PORT: Optional[PositiveInt] = Field(
description="ORACLE port",
description="Port number on which the Oracle database server is listening (default is 1521)",
default=1521,
)
ORACLE_USER: Optional[str] = Field(
description="ORACLE user",
description="Username for authenticating with the Oracle database",
default=None,
)
ORACLE_PASSWORD: Optional[str] = Field(
description="ORACLE password",
description="Password for authenticating with the Oracle database",
default=None,
)
ORACLE_DATABASE: Optional[str] = Field(
description="ORACLE database",
description="Name of the Oracle database or service to connect to (e.g., 'ORCL' or 'pdborcl')",
default=None,
)

View File

@@ -6,30 +6,40 @@ from pydantic_settings import BaseSettings
class PGVectorConfig(BaseSettings):
"""
PGVector configs
Configuration settings for PGVector (PostgreSQL with vector extension)
"""
PGVECTOR_HOST: Optional[str] = Field(
description="PGVector host",
description="Hostname or IP address of the PostgreSQL server with PGVector extension (e.g., 'localhost')",
default=None,
)
PGVECTOR_PORT: Optional[PositiveInt] = Field(
description="PGVector port",
description="Port number on which the PostgreSQL server is listening (default is 5433)",
default=5433,
)
PGVECTOR_USER: Optional[str] = Field(
description="PGVector user",
description="Username for authenticating with the PostgreSQL database",
default=None,
)
PGVECTOR_PASSWORD: Optional[str] = Field(
description="PGVector password",
description="Password for authenticating with the PostgreSQL database",
default=None,
)
PGVECTOR_DATABASE: Optional[str] = Field(
description="PGVector database",
description="Name of the PostgreSQL database to connect to",
default=None,
)
PGVECTOR_MIN_CONNECTION: PositiveInt = Field(
description="Min connection of the PostgreSQL database",
default=1,
)
PGVECTOR_MAX_CONNECTION: PositiveInt = Field(
description="Max connection of the PostgreSQL database",
default=5,
)
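The min/max connection pair matches psycopg2's SimpleConnectionPool signature; a hedged sketch with illustrative connection values, not the project's client:

from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(
    minconn=1,  # PGVECTOR_MIN_CONNECTION
    maxconn=5,  # PGVECTOR_MAX_CONNECTION
    host="localhost", port=5433, user="postgres", password="secret", dbname="dify",
)
conn = pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
finally:
    pool.putconn(conn)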

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class PGVectoRSConfig(BaseSettings):
"""
PGVectoRS configs
Configuration settings for PGVecto.RS (Rust-based vector extension for PostgreSQL)
"""
PGVECTO_RS_HOST: Optional[str] = Field(
description="PGVectoRS host",
description="Hostname or IP address of the PostgreSQL server with PGVecto.RS extension (e.g., 'localhost')",
default=None,
)
PGVECTO_RS_PORT: Optional[PositiveInt] = Field(
description="PGVectoRS port",
description="Port number on which the PostgreSQL server with PGVecto.RS is listening (default is 5431)",
default=5431,
)
PGVECTO_RS_USER: Optional[str] = Field(
description="PGVectoRS user",
description="Username for authenticating with the PostgreSQL database using PGVecto.RS",
default=None,
)
PGVECTO_RS_PASSWORD: Optional[str] = Field(
description="PGVectoRS password",
description="Password for authenticating with the PostgreSQL database using PGVecto.RS",
default=None,
)
PGVECTO_RS_DATABASE: Optional[str] = Field(
description="PGVectoRS database",
description="Name of the PostgreSQL database with PGVecto.RS extension to connect to",
default=None,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class QdrantConfig(BaseSettings):
"""
Qdrant configs
Configuration settings for Qdrant vector database
"""
QDRANT_URL: Optional[str] = Field(
description="Qdrant url",
description="URL of the Qdrant server (e.g., 'http://localhost:6333' or 'https://qdrant.example.com')",
default=None,
)
QDRANT_API_KEY: Optional[str] = Field(
description="Qdrant api key",
description="API key for authenticating with the Qdrant server",
default=None,
)
QDRANT_CLIENT_TIMEOUT: NonNegativeInt = Field(
description="Qdrant client timeout in seconds",
description="Timeout in seconds for Qdrant client operations (default is 20 seconds)",
default=20,
)
QDRANT_GRPC_ENABLED: bool = Field(
description="whether enable grpc support for Qdrant connection",
description="Whether to enable gRPC support for Qdrant connection (True for gRPC, False for HTTP)",
default=False,
)
QDRANT_GRPC_PORT: PositiveInt = Field(
description="Qdrant grpc port",
description="Port number for gRPC connection to Qdrant server (default is 6334)",
default=6334,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class RelytConfig(BaseSettings):
"""
Relyt configs
Configuration settings for Relyt database
"""
RELYT_HOST: Optional[str] = Field(
description="Relyt host",
description="Hostname or IP address of the Relyt server (e.g., 'localhost' or 'relyt.example.com')",
default=None,
)
RELYT_PORT: PositiveInt = Field(
description="Relyt port",
description="Port number on which the Relyt server is listening (default is 9200)",
default=9200,
)
RELYT_USER: Optional[str] = Field(
description="Relyt user",
description="Username for authenticating with the Relyt database",
default=None,
)
RELYT_PASSWORD: Optional[str] = Field(
description="Relyt password",
description="Password for authenticating with the Relyt database",
default=None,
)
RELYT_DATABASE: Optional[str] = Field(
description="Relyt database",
description="Name of the Relyt database to connect to (default is 'default')",
default="default",
)

View File

@@ -6,45 +6,45 @@ from pydantic_settings import BaseSettings
class TencentVectorDBConfig(BaseSettings):
"""
Tencent Vector configs
Configuration settings for Tencent Vector Database
"""
TENCENT_VECTOR_DB_URL: Optional[str] = Field(
description="Tencent Vector URL",
description="URL of the Tencent Vector Database service (e.g., 'https://vectordb.tencentcloudapi.com')",
default=None,
)
TENCENT_VECTOR_DB_API_KEY: Optional[str] = Field(
description="Tencent Vector API key",
description="API key for authenticating with the Tencent Vector Database service",
default=None,
)
TENCENT_VECTOR_DB_TIMEOUT: PositiveInt = Field(
description="Tencent Vector timeout in seconds",
description="Timeout in seconds for Tencent Vector Database operations (default is 30 seconds)",
default=30,
)
TENCENT_VECTOR_DB_USERNAME: Optional[str] = Field(
description="Tencent Vector username",
description="Username for authenticating with the Tencent Vector Database (if required)",
default=None,
)
TENCENT_VECTOR_DB_PASSWORD: Optional[str] = Field(
description="Tencent Vector password",
description="Password for authenticating with the Tencent Vector Database (if required)",
default=None,
)
TENCENT_VECTOR_DB_SHARD: PositiveInt = Field(
description="Tencent Vector sharding number",
description="Number of shards for the Tencent Vector Database (default is 1)",
default=1,
)
TENCENT_VECTOR_DB_REPLICAS: NonNegativeInt = Field(
description="Tencent Vector replicas",
description="Number of replicas for the Tencent Vector Database (default is 2)",
default=2,
)
TENCENT_VECTOR_DB_DATABASE: Optional[str] = Field(
description="Tencent Vector Database",
description="Name of the specific Tencent Vector Database to connect to",
default=None,
)

View File

@@ -6,30 +6,30 @@ from pydantic_settings import BaseSettings
class TiDBVectorConfig(BaseSettings):
"""
TiDB Vector configs
Configuration settings for TiDB Vector database
"""
TIDB_VECTOR_HOST: Optional[str] = Field(
description="TiDB Vector host",
description="Hostname or IP address of the TiDB Vector server (e.g., 'localhost' or 'tidb.example.com')",
default=None,
)
TIDB_VECTOR_PORT: Optional[PositiveInt] = Field(
description="TiDB Vector port",
description="Port number on which the TiDB Vector server is listening (default is 4000)",
default=4000,
)
TIDB_VECTOR_USER: Optional[str] = Field(
description="TiDB Vector user",
description="Username for authenticating with the TiDB Vector database",
default=None,
)
TIDB_VECTOR_PASSWORD: Optional[str] = Field(
description="TiDB Vector password",
description="Password for authenticating with the TiDB Vector database",
default=None,
)
TIDB_VECTOR_DATABASE: Optional[str] = Field(
description="TiDB Vector database",
description="Name of the TiDB Vector database to connect to",
default=None,
)

View File

@@ -6,25 +6,25 @@ from pydantic_settings import BaseSettings
class WeaviateConfig(BaseSettings):
"""
Weaviate configs
Configuration settings for Weaviate vector database
"""
WEAVIATE_ENDPOINT: Optional[str] = Field(
description="Weaviate endpoint URL",
description="URL of the Weaviate server (e.g., 'http://localhost:8080' or 'https://weaviate.example.com')",
default=None,
)
WEAVIATE_API_KEY: Optional[str] = Field(
description="Weaviate API key",
description="API key for authenticating with the Weaviate server",
default=None,
)
WEAVIATE_GRPC_ENABLED: bool = Field(
description="whether to enable gRPC for Weaviate connection",
description="Whether to enable gRPC for Weaviate connection (True for gRPC, False for HTTP)",
default=True,
)
WEAVIATE_BATCH_SIZE: PositiveInt = Field(
description="Weaviate batch size",
description="Number of objects to be processed in a single batch operation (default is 100)",
default=100,
)

View File

@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="0.8.1",
default="0.9.1",
)
COMMIT_SHA: str = Field(

View File

@@ -1 +1,2 @@
HIDDEN_VALUE = "[__HIDDEN__]"
UUID_NIL = "00000000-0000-0000-0000-000000000000"

View File

@@ -37,7 +37,16 @@ from .auth import activate, data_source_bearer_auth, data_source_oauth, forgot_p
from .billing import billing
# Import datasets controllers
from .datasets import data_source, datasets, datasets_document, datasets_segments, file, hit_testing, website
from .datasets import (
data_source,
datasets,
datasets_document,
datasets_segments,
external,
file,
hit_testing,
website,
)
# Import explore controllers
from .explore import (

View File

@@ -94,7 +94,7 @@ class ChatMessageTextApi(Resource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]
app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):

View File

@@ -109,6 +109,7 @@ class ChatMessageApi(Resource):
parser.add_argument("files", type=list, required=False, location="json")
parser.add_argument("model_config", type=dict, required=True, location="json")
parser.add_argument("conversation_id", type=uuid_value, location="json")
parser.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
parser.add_argument("response_mode", type=str, choices=["blocking", "streaming"], location="json")
parser.add_argument("retriever_from", type=str, required=False, default="dev", location="json")
args = parser.parse_args()

View File

@@ -105,8 +105,6 @@ class ChatMessageListApi(Resource):
if rest_count > 0:
has_more = True
history_messages = list(reversed(history_messages))
return InfiniteScrollPagination(data=history_messages, limit=args["limit"], has_more=has_more)

View File

@@ -166,6 +166,8 @@ class AdvancedChatDraftWorkflowRunApi(Resource):
parser.add_argument("query", type=str, required=True, location="json", default="")
parser.add_argument("files", type=list, location="json")
parser.add_argument("conversation_id", type=uuid_value, location="json")
parser.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
args = parser.parse_args()
try:
@@ -465,6 +467,6 @@ api.add_resource(
api.add_resource(PublishedWorkflowApi, "/apps/<uuid:app_id>/workflows/publish")
api.add_resource(DefaultBlockConfigsApi, "/apps/<uuid:app_id>/workflows/default-workflow-block-configs")
api.add_resource(
DefaultBlockConfigApi, "/apps/<uuid:app_id>/workflows/default-workflow-block-configs" "/<string:block_type>"
DefaultBlockConfigApi, "/apps/<uuid:app_id>/workflows/default-workflow-block-configs/<string:block_type>"
)
api.add_resource(ConvertToWorkflowApi, "/apps/<uuid:app_id>/convert-to-workflow")

View File

@@ -71,7 +71,7 @@ class OAuthCallback(Resource):
account = _generate_account(provider, user_info)
# Check account status
if account.status == AccountStatus.BANNED.value or account.status == AccountStatus.CLOSED.value:
if account.status in {AccountStatus.BANNED.value, AccountStatus.CLOSED.value}:
return {"error": "Account is banned or closed."}, 403
if account.status == AccountStatus.PENDING.value:

View File

@@ -49,7 +49,7 @@ class DatasetListApi(Resource):
page = request.args.get("page", default=1, type=int)
limit = request.args.get("limit", default=20, type=int)
ids = request.args.getlist("ids")
provider = request.args.get("provider", default="vendor")
# provider = request.args.get("provider", default="vendor")
search = request.args.get("keyword", default=None, type=str)
tag_ids = request.args.getlist("tag_ids")
@@ -57,7 +57,7 @@ class DatasetListApi(Resource):
datasets, total = DatasetService.get_datasets_by_ids(ids, current_user.current_tenant_id)
else:
datasets, total = DatasetService.get_datasets(
page, limit, provider, current_user.current_tenant_id, current_user, search, tag_ids
page, limit, current_user.current_tenant_id, current_user, search, tag_ids
)
# check embedding setting
@@ -110,6 +110,26 @@ class DatasetListApi(Resource):
nullable=True,
help="Invalid indexing technique.",
)
parser.add_argument(
"external_knowledge_api_id",
type=str,
nullable=True,
required=False,
)
parser.add_argument(
"provider",
type=str,
nullable=True,
choices=Dataset.PROVIDER_LIST,
required=False,
default="vendor",
)
parser.add_argument(
"external_knowledge_id",
type=str,
nullable=True,
required=False,
)
args = parser.parse_args()
# The role of the current user in the ta table must be admin, owner, or editor, or dataset_operator
@@ -123,6 +143,9 @@ class DatasetListApi(Resource):
indexing_technique=args["indexing_technique"],
account=current_user,
permission=DatasetPermissionEnum.ONLY_ME,
provider=args["provider"],
external_knowledge_api_id=args["external_knowledge_api_id"],
external_knowledge_id=args["external_knowledge_id"],
)
except services.errors.dataset.DatasetNameDuplicateError:
raise DatasetNameDuplicateError()
@@ -211,6 +234,33 @@ class DatasetApi(Resource):
)
parser.add_argument("retrieval_model", type=dict, location="json", help="Invalid retrieval model.")
parser.add_argument("partial_member_list", type=list, location="json", help="Invalid parent user list.")
parser.add_argument(
"external_retrieval_model",
type=dict,
required=False,
nullable=True,
location="json",
help="Invalid external retrieval model.",
)
parser.add_argument(
"external_knowledge_id",
type=str,
required=False,
nullable=True,
location="json",
help="Invalid external knowledge id.",
)
parser.add_argument(
"external_knowledge_api_id",
type=str,
required=False,
nullable=True,
location="json",
help="Invalid external knowledge api id.",
)
args = parser.parse_args()
data = request.get_json()
@@ -399,7 +449,7 @@ class DatasetIndexingEstimateApi(Resource):
)
except LLMBadRequestError:
raise ProviderNotInitializeError(
"No Embedding Model available. Please configure a valid provider " "in the Settings -> Model Provider."
"No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
@@ -563,10 +613,10 @@ class DatasetRetrievalSettingApi(Resource):
case (
VectorType.MILVUS
| VectorType.RELYT
| VectorType.PGVECTOR
| VectorType.TIDB_VECTOR
| VectorType.CHROMA
| VectorType.TENCENT
| VectorType.PGVECTO_RS
):
return {"retrieval_method": [RetrievalMethod.SEMANTIC_SEARCH.value]}
case (
@@ -577,6 +627,7 @@ class DatasetRetrievalSettingApi(Resource):
| VectorType.MYSCALE
| VectorType.ORACLE
| VectorType.ELASTICSEARCH
| VectorType.PGVECTOR
):
return {
"retrieval_method": [

View File

@@ -354,7 +354,7 @@ class DocumentIndexingEstimateApi(DocumentResource):
document_id = str(document_id)
document = self.get_document(dataset_id, document_id)
if document.indexing_status in ["completed", "error"]:
if document.indexing_status in {"completed", "error"}:
raise DocumentAlreadyFinishedError()
data_process_rule = document.dataset_process_rule
@@ -421,7 +421,7 @@ class DocumentBatchIndexingEstimateApi(DocumentResource):
info_list = []
extract_settings = []
for document in documents:
if document.indexing_status in ["completed", "error"]:
if document.indexing_status in {"completed", "error"}:
raise DocumentAlreadyFinishedError()
data_source_info = document.data_source_info_dict
# format document files info
@@ -665,7 +665,7 @@ class DocumentProcessingApi(DocumentResource):
db.session.commit()
elif action == "resume":
if document.indexing_status not in ["paused", "error"]:
if document.indexing_status not in {"paused", "error"}:
raise InvalidActionError("Document not in paused or error state.")
document.paused_by = None

View File

@@ -0,0 +1,239 @@
from flask import request
from flask_login import current_user
from flask_restful import Resource, marshal, reqparse
from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services
from controllers.console import api
from controllers.console.datasets.error import DatasetNameDuplicateError
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from fields.dataset_fields import dataset_detail_fields
from libs.login import login_required
from services.dataset_service import DatasetService
from services.external_knowledge_service import ExternalDatasetService
from services.hit_testing_service import HitTestingService
def _validate_name(name):
if not name or len(name) < 1 or len(name) > 100:
raise ValueError("Name must be between 1 to 100 characters.")
return name
def _validate_description_length(description):
if description and len(description) > 400:
raise ValueError("Description cannot exceed 400 characters.")
return description
class ExternalApiTemplateListApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
page = request.args.get("page", default=1, type=int)
limit = request.args.get("limit", default=20, type=int)
search = request.args.get("keyword", default=None, type=str)
external_knowledge_apis, total = ExternalDatasetService.get_external_knowledge_apis(
page, limit, current_user.current_tenant_id, search
)
response = {
"data": [item.to_dict() for item in external_knowledge_apis],
"has_more": len(external_knowledge_apis) == limit,
"limit": limit,
"total": total,
"page": page,
}
return response, 200
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument(
"name",
nullable=False,
required=True,
help="Name is required. Name must be between 1 to 100 characters.",
type=_validate_name,
)
parser.add_argument(
"settings",
type=dict,
location="json",
nullable=False,
required=True,
)
args = parser.parse_args()
ExternalDatasetService.validate_api_list(args["settings"])
# The role of the current user in the ta table must be admin, owner, or editor, or dataset_operator
if not current_user.is_dataset_editor:
raise Forbidden()
try:
external_knowledge_api = ExternalDatasetService.create_external_knowledge_api(
tenant_id=current_user.current_tenant_id, user_id=current_user.id, args=args
)
except services.errors.dataset.DatasetNameDuplicateError:
raise DatasetNameDuplicateError()
return external_knowledge_api.to_dict(), 201
class ExternalApiTemplateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, external_knowledge_api_id):
external_knowledge_api_id = str(external_knowledge_api_id)
external_knowledge_api = ExternalDatasetService.get_external_knowledge_api(external_knowledge_api_id)
if external_knowledge_api is None:
raise NotFound("API template not found.")
return external_knowledge_api.to_dict(), 200
@setup_required
@login_required
@account_initialization_required
def patch(self, external_knowledge_api_id):
external_knowledge_api_id = str(external_knowledge_api_id)
parser = reqparse.RequestParser()
parser.add_argument(
"name",
nullable=False,
required=True,
help="type is required. Name must be between 1 to 100 characters.",
type=_validate_name,
)
parser.add_argument(
"settings",
type=dict,
location="json",
nullable=False,
required=True,
)
args = parser.parse_args()
ExternalDatasetService.validate_api_list(args["settings"])
external_knowledge_api = ExternalDatasetService.update_external_knowledge_api(
tenant_id=current_user.current_tenant_id,
user_id=current_user.id,
external_knowledge_api_id=external_knowledge_api_id,
args=args,
)
return external_knowledge_api.to_dict(), 200
@setup_required
@login_required
@account_initialization_required
def delete(self, external_knowledge_api_id):
external_knowledge_api_id = str(external_knowledge_api_id)
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor or current_user.is_dataset_operator:
raise Forbidden()
ExternalDatasetService.delete_external_knowledge_api(current_user.current_tenant_id, external_knowledge_api_id)
return {"result": "success"}, 200
class ExternalApiUseCheckApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, external_knowledge_api_id):
external_knowledge_api_id = str(external_knowledge_api_id)
external_knowledge_api_is_using, count = ExternalDatasetService.external_knowledge_api_use_check(
external_knowledge_api_id
)
return {"is_using": external_knowledge_api_is_using, "count": count}, 200
class ExternalDatasetCreateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("external_knowledge_api_id", type=str, required=True, nullable=False, location="json")
parser.add_argument("external_knowledge_id", type=str, required=True, nullable=False, location="json")
parser.add_argument(
"name",
nullable=False,
required=True,
help="name is required. Name must be between 1 to 100 characters.",
type=_validate_name,
)
parser.add_argument("description", type=str, required=False, nullable=True, location="json")
parser.add_argument("external_retrieval_model", type=dict, required=False, location="json")
args = parser.parse_args()
# The role of the current user in the ta table must be admin, owner, or editor, or dataset_operator
if not current_user.is_dataset_editor:
raise Forbidden()
try:
dataset = ExternalDatasetService.create_external_dataset(
tenant_id=current_user.current_tenant_id,
user_id=current_user.id,
args=args,
)
except services.errors.dataset.DatasetNameDuplicateError:
raise DatasetNameDuplicateError()
return marshal(dataset, dataset_detail_fields), 201
class ExternalKnowledgeHitTestingApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self, dataset_id):
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
try:
DatasetService.check_dataset_permission(dataset, current_user)
except services.errors.account.NoPermissionError as e:
raise Forbidden(str(e))
parser = reqparse.RequestParser()
parser.add_argument("query", type=str, location="json")
parser.add_argument("external_retrieval_model", type=dict, required=False, location="json")
args = parser.parse_args()
HitTestingService.hit_testing_args_check(args)
try:
response = HitTestingService.external_retrieve(
dataset=dataset,
query=args["query"],
account=current_user,
external_retrieval_model=args["external_retrieval_model"],
)
return response
except Exception as e:
raise InternalServerError(str(e))
api.add_resource(ExternalKnowledgeHitTestingApi, "/datasets/<uuid:dataset_id>/external-hit-testing")
api.add_resource(ExternalDatasetCreateApi, "/datasets/external")
api.add_resource(ExternalApiTemplateListApi, "/datasets/external-knowledge-api")
api.add_resource(ExternalApiTemplateApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>")
api.add_resource(ExternalApiUseCheckApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>/use-check")
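The new file above registers five console resources for managing external knowledge APIs and datasets. A hypothetical smoke test against a local deployment; the base URL, the auth header, the `settings` payload, and the `id` key in the response are all assumptions, since `ExternalDatasetService.validate_api_list` and `to_dict` are not shown in this diff:

```python
import requests

BASE = "http://localhost:5001/console/api"  # assumed console API prefix
HEADERS = {"Authorization": "Bearer <console-access-token>"}  # placeholder auth

# Create an external knowledge API template (settings shape is assumed).
created = requests.post(
    f"{BASE}/datasets/external-knowledge-api",
    json={
        "name": "my-retrieval-service",
        "settings": {"endpoint": "https://example.com/retrieval", "api_key": "<key>"},
    },
    headers=HEADERS,
).json()

# Check whether any dataset references the template before deleting it.
use_check = requests.get(
    f"{BASE}/datasets/external-knowledge-api/{created['id']}/use-check",
    headers=HEADERS,
).json()
if not use_check["is_using"]:
    requests.delete(f"{BASE}/datasets/external-knowledge-api/{created['id']}", headers=HEADERS)
```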


@@ -47,6 +47,7 @@ class HitTestingApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument("query", type=str, location="json")
parser.add_argument("retrieval_model", type=dict, required=False, location="json")
+ parser.add_argument("external_retrieval_model", type=dict, required=False, location="json")
args = parser.parse_args()
HitTestingService.hit_testing_args_check(args)
@@ -57,6 +58,7 @@ class HitTestingApi(Resource):
query=args["query"],
account=current_user,
retrieval_model=args["retrieval_model"],
+ external_retrieval_model=args["external_retrieval_model"],
limit=10,
)


@@ -14,7 +14,9 @@ class WebsiteCrawlApi(Resource):
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
- parser.add_argument("provider", type=str, choices=["firecrawl"], required=True, nullable=True, location="json")
+ parser.add_argument(
+     "provider", type=str, choices=["firecrawl", "jinareader"], required=True, nullable=True, location="json"
+ )
parser.add_argument("url", type=str, required=True, nullable=True, location="json")
parser.add_argument("options", type=dict, required=True, nullable=True, location="json")
args = parser.parse_args()
@@ -33,7 +35,7 @@ class WebsiteCrawlStatusApi(Resource):
@account_initialization_required
def get(self, job_id: str):
parser = reqparse.RequestParser()
- parser.add_argument("provider", type=str, choices=["firecrawl"], required=True, location="args")
+ parser.add_argument("provider", type=str, choices=["firecrawl", "jinareader"], required=True, location="args")
args = parser.parse_args()
# get crawl status
try:
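The only change here widens the `choices` whitelist so `jinareader` passes validation alongside `firecrawl`. How `choices` behaves can be sketched with a bare parser; any other value makes `parse_args()` abort with a 400 response:

```python
from flask import Flask
from flask_restful import reqparse

app = Flask(__name__)

with app.test_request_context("/?provider=jinareader"):
    parser = reqparse.RequestParser()
    parser.add_argument("provider", type=str, choices=["firecrawl", "jinareader"], required=True, location="args")
    print(parser.parse_args())  # Namespace(provider='jinareader'); anything else -> HTTP 400
```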


@@ -18,9 +18,7 @@ class NotSetupError(BaseHTTPException):
class NotInitValidateError(BaseHTTPException):
error_code = "not_init_validated"
- description = (
-     "Init validation has not been completed yet. " "Please proceed with the init validation process first."
- )
+ description = "Init validation has not been completed yet. Please proceed with the init validation process first."
code = 401


@@ -81,7 +81,7 @@ class ChatTextApi(InstalledAppResource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
- app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]
+ app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):


@@ -92,7 +92,7 @@ class ChatApi(InstalledAppResource):
def post(self, installed_app):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -100,6 +100,7 @@ class ChatApi(InstalledAppResource):
parser.add_argument("query", type=str, required=True, location="json")
parser.add_argument("files", type=list, required=False, location="json")
parser.add_argument("conversation_id", type=uuid_value, location="json")
+ parser.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
parser.add_argument("retriever_from", type=str, required=False, default="explore_app", location="json")
args = parser.parse_args()
@@ -140,7 +141,7 @@ class ChatStopApi(InstalledAppResource):
def post(self, installed_app, task_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
AppQueueManager.set_stop_flag(task_id, InvokeFrom.EXPLORE, current_user.id)


@@ -20,7 +20,7 @@ class ConversationListApi(InstalledAppResource):
def get(self, installed_app):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -50,7 +50,7 @@ class ConversationApi(InstalledAppResource):
def delete(self, installed_app, c_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -68,7 +68,7 @@ class ConversationRenameApi(InstalledAppResource):
def post(self, installed_app, c_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -90,7 +90,7 @@ class ConversationPinApi(InstalledAppResource):
def patch(self, installed_app, c_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -107,7 +107,7 @@ class ConversationUnPinApi(InstalledAppResource):
def patch(self, installed_app, c_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)


@@ -31,7 +31,7 @@ class InstalledAppsListApi(Resource):
"app_owner_tenant_id": installed_app.app_owner_tenant_id,
"is_pinned": installed_app.is_pinned,
"last_used_at": installed_app.last_used_at,
"editable": current_user.role in ["owner", "admin"],
"editable": current_user.role in {"owner", "admin"},
"uninstallable": current_tenant_id == installed_app.app_owner_tenant_id,
}
for installed_app in installed_apps


@@ -40,7 +40,7 @@ class MessageListApi(InstalledAppResource):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -51,7 +51,7 @@ class MessageListApi(InstalledAppResource):
try:
return MessageService.pagination_by_first_id(
- app_model, current_user, args["conversation_id"], args["first_id"], args["limit"]
+ app_model, current_user, args["conversation_id"], args["first_id"], args["limit"], "desc"
)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
@@ -125,7 +125,7 @@ class MessageSuggestedQuestionApi(InstalledAppResource):
def get(self, installed_app, message_id):
app_model = installed_app.app
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
message_id = str(message_id)


@@ -43,7 +43,7 @@ class AppParameterApi(InstalledAppResource):
"""Retrieve app parameters."""
app_model = installed_app.app
- if app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]:
+ if app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}:
workflow = app_model.workflow
if workflow is None:
raise AppUnavailableError()


@@ -38,11 +38,52 @@ class VersionApi(Resource):
return result
content = json.loads(response.content)
- result["version"] = content["version"]
- result["release_date"] = content["releaseDate"]
- result["release_notes"] = content["releaseNotes"]
- result["can_auto_update"] = content["canAutoUpdate"]
+ if _has_new_version(latest_version=content["version"], current_version=f"{args.get('current_version')}"):
+     result["version"] = content["version"]
+     result["release_date"] = content["releaseDate"]
+     result["release_notes"] = content["releaseNotes"]
+     result["can_auto_update"] = content["canAutoUpdate"]
return result
+ def _has_new_version(*, latest_version: str, current_version: str) -> bool:
+     def parse_version(version: str) -> tuple:
+         # Split version into parts and pre-release suffix if any
+         parts = version.split("-")
+         version_parts = parts[0].split(".")
+         pre_release = parts[1] if len(parts) > 1 else None
+         # Validate version format
+         if len(version_parts) != 3:
+             raise ValueError(f"Invalid version format: {version}")
+         try:
+             # Convert version parts to integers
+             major, minor, patch = map(int, version_parts)
+             return (major, minor, patch, pre_release)
+         except ValueError:
+             raise ValueError(f"Invalid version format: {version}")
+     latest = parse_version(latest_version)
+     current = parse_version(current_version)
+     # Compare major, minor, and patch versions
+     for latest_part, current_part in zip(latest[:3], current[:3]):
+         if latest_part > current_part:
+             return True
+         elif latest_part < current_part:
+             return False
+     # If versions are equal, check pre-release suffixes
+     if latest[3] is None and current[3] is not None:
+         return True
+     elif latest[3] is not None and current[3] is None:
+         return False
+     elif latest[3] is not None and current[3] is not None:
+         # Simple string comparison for pre-release versions
+         return latest[3] > current[3]
+     return False
api.add_resource(VersionApi, "/version")
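`_has_new_version` treats a plain release as newer than its own pre-release and compares pre-release tags as plain strings (so `beta10` would sort before `beta2`). Illustrative checks of those rules, assuming the function is importable from the controller module above (the import path is a guess):

```python
from controllers.console.version import _has_new_version  # assumed import path

assert _has_new_version(latest_version="0.9.1", current_version="0.9.0")
assert not _has_new_version(latest_version="0.9.0", current_version="0.9.1")
assert not _has_new_version(latest_version="0.9.1", current_version="0.9.1")
# A release is newer than its own pre-release, never the other way around.
assert _has_new_version(latest_version="0.9.1", current_version="0.9.1-beta1")
assert not _has_new_version(latest_version="0.9.1-beta1", current_version="0.9.1")
# Pre-release suffixes fall back to string comparison.
assert _has_new_version(latest_version="0.9.1-beta2", current_version="0.9.1-beta1")
```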


@@ -218,7 +218,7 @@ api.add_resource(ModelProviderCredentialApi, "/workspaces/current/model-provider
api.add_resource(ModelProviderValidateApi, "/workspaces/current/model-providers/<string:provider>/credentials/validate")
api.add_resource(ModelProviderApi, "/workspaces/current/model-providers/<string:provider>")
api.add_resource(
- ModelProviderIconApi, "/workspaces/current/model-providers/<string:provider>/" "<string:icon_type>/<string:lang>"
+ ModelProviderIconApi, "/workspaces/current/model-providers/<string:provider>/<string:icon_type>/<string:lang>"
)
api.add_resource(


@@ -72,8 +72,9 @@ class DefaultModelApi(Resource):
provider=model_setting["provider"],
model=model_setting["model"],
)
- except Exception:
-     logging.warning(f"{model_setting['model_type']} save error")
+ except Exception as ex:
+     logging.exception(f"{model_setting['model_type']} save error: {ex}")
+     raise ex
return {"result": "success"}
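Two fixes in one hunk: the exception is re-raised instead of swallowed, so the endpoint no longer reports `success` after a failed save, and `logging.exception` replaces `logging.warning`, attaching the active traceback to the log record. A minimal sketch of the logging difference:

```python
import logging

logging.basicConfig(level=logging.INFO)

try:
    raise RuntimeError("credentials rejected")
except Exception as ex:
    logging.warning(f"llm save error: {ex}")    # one line, no stack trace
    logging.exception(f"llm save error: {ex}")  # same message plus the full traceback
```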


@@ -194,7 +194,7 @@ class WebappLogoWorkspaceApi(Resource):
raise TooManyFilesError()
extension = file.filename.split(".")[-1]
- if extension.lower() not in ["svg", "png"]:
+ if extension.lower() not in {"svg", "png"}:
raise UnsupportedFileTypeError()
try:


@@ -42,7 +42,7 @@ class AppParameterApi(Resource):
@marshal_with(parameters_fields)
def get(self, app_model: App):
"""Retrieve app parameters."""
- if app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]:
+ if app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}:
workflow = app_model.workflow
if workflow is None:
raise AppUnavailableError()


@@ -79,7 +79,7 @@ class TextApi(Resource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
- app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]
+ app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):


@@ -96,7 +96,7 @@ class ChatApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON, required=True))
def post(self, app_model: App, end_user: EndUser):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -144,7 +144,7 @@ class ChatStopApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON, required=True))
def post(self, app_model: App, end_user: EndUser, task_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
AppQueueManager.set_stop_flag(task_id, InvokeFrom.SERVICE_API, end_user.id)


@@ -18,7 +18,7 @@ class ConversationApi(Resource):
@marshal_with(conversation_infinite_scroll_pagination_fields)
def get(self, app_model: App, end_user: EndUser):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -52,7 +52,7 @@ class ConversationDetailApi(Resource):
@marshal_with(simple_conversation_fields)
def delete(self, app_model: App, end_user: EndUser, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -69,7 +69,7 @@ class ConversationRenameApi(Resource):
@marshal_with(simple_conversation_fields)
def post(self, app_model: App, end_user: EndUser, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)


@@ -54,6 +54,7 @@ class MessageListApi(Resource):
message_fields = {
"id": fields.String,
"conversation_id": fields.String,
"parent_message_id": fields.String,
"inputs": fields.Raw,
"query": fields.String,
"answer": fields.String(attribute="re_sign_file_url_answer"),
@@ -76,7 +77,7 @@ class MessageListApi(Resource):
@marshal_with(message_infinite_scroll_pagination_fields)
def get(self, app_model: App, end_user: EndUser):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -117,7 +118,7 @@ class MessageSuggestedApi(Resource):
def get(self, app_model: App, end_user: EndUser, message_id):
message_id = str(message_id)
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
try:


@@ -1,6 +1,7 @@
import logging
from flask_restful import Resource, fields, marshal_with, reqparse
+ from flask_restful.inputs import int_range
from werkzeug.exceptions import InternalServerError
from controllers.service_api import api
@@ -22,10 +23,12 @@ from core.errors.error import (
)
from core.model_runtime.errors.invoke import InvokeError
from extensions.ext_database import db
+ from fields.workflow_app_log_fields import workflow_app_log_pagination_fields
from libs import helper
from models.model import App, AppMode, EndUser
from models.workflow import WorkflowRun
from services.app_generate_service import AppGenerateService
+ from services.workflow_app_service import WorkflowAppService
logger = logging.getLogger(__name__)
@@ -113,6 +116,30 @@ class WorkflowTaskStopApi(Resource):
return {"result": "success"}
+ class WorkflowAppLogApi(Resource):
+     @validate_app_token
+     @marshal_with(workflow_app_log_pagination_fields)
+     def get(self, app_model: App):
+         """
+         Get workflow app logs
+         """
+         parser = reqparse.RequestParser()
+         parser.add_argument("keyword", type=str, location="args")
+         parser.add_argument("status", type=str, choices=["succeeded", "failed", "stopped"], location="args")
+         parser.add_argument("page", type=int_range(1, 99999), default=1, location="args")
+         parser.add_argument("limit", type=int_range(1, 100), default=20, location="args")
+         args = parser.parse_args()
+         # get paginate workflow app logs
+         workflow_app_service = WorkflowAppService()
+         workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs(
+             app_model=app_model, args=args
+         )
+         return workflow_app_log_pagination
api.add_resource(WorkflowRunApi, "/workflows/run")
api.add_resource(WorkflowRunDetailApi, "/workflows/run/<string:workflow_id>")
api.add_resource(WorkflowTaskStopApi, "/workflows/tasks/<string:task_id>/stop")
+ api.add_resource(WorkflowAppLogApi, "/workflows/logs")
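The new `WorkflowAppLogApi` exposes paginated workflow logs on the service API, guarded by `validate_app_token`; `page` and `limit` are clamped by `int_range(1, 99999)` and `int_range(1, 100)`. A hypothetical request, with host and app key as placeholders:

```python
import requests

resp = requests.get(
    "http://localhost:5001/v1/workflows/logs",  # assumed service API prefix
    params={"status": "failed", "page": 1, "limit": 20},  # status: succeeded | failed | stopped
    headers={"Authorization": "Bearer app-<api-key>"},
)
print(resp.json())  # paginated workflow app logs
```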


@@ -28,11 +28,11 @@ class DatasetListApi(DatasetApiResource):
page = request.args.get("page", default=1, type=int)
limit = request.args.get("limit", default=20, type=int)
- provider = request.args.get("provider", default="vendor")
+ # provider = request.args.get("provider", default="vendor")
search = request.args.get("keyword", default=None, type=str)
tag_ids = request.args.getlist("tag_ids")
- datasets, total = DatasetService.get_datasets(page, limit, provider, tenant_id, current_user, search, tag_ids)
+ datasets, total = DatasetService.get_datasets(page, limit, tenant_id, current_user, search, tag_ids)
# check embedding setting
provider_manager = ProviderManager()
configurations = provider_manager.get_configurations(tenant_id=current_user.current_tenant_id)
@@ -82,6 +82,26 @@ class DatasetListApi(DatasetApiResource):
required=False,
nullable=False,
)
+ parser.add_argument(
+     "external_knowledge_api_id",
+     type=str,
+     nullable=True,
+     required=False,
+     default="_validate_name",
+ )
+ parser.add_argument(
+     "provider",
+     type=str,
+     nullable=True,
+     required=False,
+     default="vendor",
+ )
+ parser.add_argument(
+     "external_knowledge_id",
+     type=str,
+     nullable=True,
+     required=False,
+ )
args = parser.parse_args()
try:
@@ -91,6 +111,9 @@ class DatasetListApi(DatasetApiResource):
indexing_technique=args["indexing_technique"],
account=current_user,
permission=args["permission"],
+ provider=args["provider"],
+ external_knowledge_api_id=args["external_knowledge_api_id"],
+ external_knowledge_id=args["external_knowledge_id"],
)
except services.errors.dataset.DatasetNameDuplicateError:
raise DatasetNameDuplicateError()
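Dataset creation on the service API now accepts the external knowledge fields and hands them to the dataset service, while `get_datasets` loses its `provider` positional argument. A sketch of the extended create call; the URL prefix, token, provider value, and UUIDs are placeholders:

```python
import requests

payload = {
    "name": "external-kb",
    "provider": "external",  # assumed value; "vendor" remains the parser default
    "external_knowledge_api_id": "<external-knowledge-api-uuid>",
    "external_knowledge_id": "<external-knowledge-uuid>",
}
resp = requests.post(
    "http://localhost:5001/v1/datasets",  # assumed service API prefix
    json=payload,
    headers={"Authorization": "Bearer dataset-<api-key>"},
)
print(resp.status_code, resp.json())
```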


@@ -41,7 +41,7 @@ class AppParameterApi(WebApiResource):
@marshal_with(parameters_fields)
def get(self, app_model: App, end_user):
"""Retrieve app parameters."""
- if app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]:
+ if app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}:
workflow = app_model.workflow
if workflow is None:
raise AppUnavailableError()


@@ -78,7 +78,7 @@ class TextApi(WebApiResource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
- app_model.mode in [AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value]
+ app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):


@@ -87,7 +87,7 @@ class CompletionStopApi(WebApiResource):
class ChatApi(WebApiResource):
def post(self, app_model, end_user):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -96,6 +96,7 @@ class ChatApi(WebApiResource):
parser.add_argument("files", type=list, required=False, location="json")
parser.add_argument("response_mode", type=str, choices=["blocking", "streaming"], location="json")
parser.add_argument("conversation_id", type=uuid_value, location="json")
+ parser.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
parser.add_argument("retriever_from", type=str, required=False, default="web_app", location="json")
args = parser.parse_args()
@@ -136,7 +137,7 @@ class ChatApi(WebApiResource):
class ChatStopApi(WebApiResource):
def post(self, app_model, end_user, task_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
AppQueueManager.set_stop_flag(task_id, InvokeFrom.WEB_APP, end_user.id)


@@ -18,7 +18,7 @@ class ConversationListApi(WebApiResource):
@marshal_with(conversation_infinite_scroll_pagination_fields)
def get(self, app_model, end_user):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -56,7 +56,7 @@ class ConversationListApi(WebApiResource):
class ConversationApi(WebApiResource):
def delete(self, app_model, end_user, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -73,7 +73,7 @@ class ConversationRenameApi(WebApiResource):
@marshal_with(simple_conversation_fields)
def post(self, app_model, end_user, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -92,7 +92,7 @@ class ConversationRenameApi(WebApiResource):
class ConversationPinApi(WebApiResource):
def patch(self, app_model, end_user, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)
@@ -108,7 +108,7 @@ class ConversationPinApi(WebApiResource):
class ConversationUnPinApi(WebApiResource):
def patch(self, app_model, end_user, c_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
conversation_id = str(c_id)


@@ -57,6 +57,7 @@ class MessageListApi(WebApiResource):
message_fields = {
"id": fields.String,
"conversation_id": fields.String,
"parent_message_id": fields.String,
"inputs": fields.Raw,
"query": fields.String,
"answer": fields.String(attribute="re_sign_file_url_answer"),
@@ -78,7 +79,7 @@ class MessageListApi(WebApiResource):
@marshal_with(message_infinite_scroll_pagination_fields)
def get(self, app_model, end_user):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotChatAppError()
parser = reqparse.RequestParser()
@@ -89,7 +90,7 @@ class MessageListApi(WebApiResource):
try:
return MessageService.pagination_by_first_id(
- app_model, end_user, args["conversation_id"], args["first_id"], args["limit"]
+ app_model, end_user, args["conversation_id"], args["first_id"], args["limit"], "desc"
)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
@@ -160,7 +161,7 @@ class MessageMoreLikeThisApi(WebApiResource):
class MessageSuggestedQuestionApi(WebApiResource):
def get(self, app_model, end_user, message_id):
app_mode = AppMode.value_of(app_model.mode)
- if app_mode not in [AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]:
+ if app_mode not in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT}:
raise NotCompletionAppError()
message_id = str(message_id)


@@ -32,6 +32,7 @@ from core.model_runtime.entities.message_entities import (
from core.model_runtime.entities.model_entities import ModelFeature
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.utils.encoders import jsonable_encoder
+ from core.prompt.utils.extract_thread_messages import extract_thread_messages
from core.tools.entities.tool_entities import (
ToolParameter,
ToolRuntimeVariablePool,
@@ -441,10 +442,12 @@ class BaseAgentRunner(AppRunner):
.filter(
Message.conversation_id == self.message.conversation_id,
)
- .order_by(Message.created_at.asc())
+ .order_by(Message.created_at.desc())
.all()
)
+ messages = list(reversed(extract_thread_messages(messages)))
for message in messages:
if message.id == self.message.id:
continue
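Together with the `parent_message_id` fields added elsewhere in this diff, this hunk changes how agent history is rebuilt: messages are fetched newest-first, filtered through `extract_thread_messages`, and reversed back into chronological order. The helper itself is not shown in the diff; a guessed, simplified version of the idea is to follow `parent_message_id` pointers from the newest message so siblings left behind by regeneration are skipped:

```python
from collections import namedtuple

Msg = namedtuple("Msg", "id parent_message_id")

def extract_thread_messages(messages):
    """Messages arrive newest-first; keep only the active branch of the thread."""
    thread, next_id = [], None
    for message in messages:
        if next_id is None or message.id == next_id:
            thread.append(message)
            next_id = message.parent_message_id
            if next_id is None:  # reached the root of the thread
                break
    return thread

# m3 was regenerated from m1, so the abandoned sibling m2 is skipped.
history = [Msg("m3", "m1"), Msg("m2", "m1"), Msg("m1", None)]
assert [m.id for m in reversed(extract_thread_messages(history))] == ["m1", "m3"]
```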


@@ -90,7 +90,7 @@ class CotAgentOutputParser:
if not in_code_block and not in_json:
if delta.lower() == action_str[action_idx] and action_idx == 0:
if last_character not in ["\n", " ", ""]:
if last_character not in {"\n", " ", ""}:
index += steps
yield delta
continue
@@ -117,7 +117,7 @@ class CotAgentOutputParser:
action_idx = 0
if delta.lower() == thought_str[thought_idx] and thought_idx == 0:
if last_character not in ["\n", " ", ""]:
if last_character not in {"\n", " ", ""}:
index += steps
yield delta
continue


@@ -29,7 +29,7 @@ class BaseAppConfigManager:
additional_features.show_retrieve_source = RetrievalResourceConfigManager.convert(config=config_dict)
additional_features.file_upload = FileUploadConfigManager.convert(
- config=config_dict, is_vision=app_mode in [AppMode.CHAT, AppMode.COMPLETION, AppMode.AGENT_CHAT]
+ config=config_dict, is_vision=app_mode in {AppMode.CHAT, AppMode.COMPLETION, AppMode.AGENT_CHAT}
)
additional_features.opening_statement, additional_features.suggested_questions = (


@@ -18,7 +18,7 @@ class AgentConfigManager:
if agent_strategy == "function_call":
strategy = AgentEntity.Strategy.FUNCTION_CALLING
- elif agent_strategy == "cot" or agent_strategy == "react":
+ elif agent_strategy in {"cot", "react"}:
strategy = AgentEntity.Strategy.CHAIN_OF_THOUGHT
else:
# old configs, try to detect default strategy
@@ -43,10 +43,10 @@ class AgentConfigManager:
agent_tools.append(AgentToolEntity(**agent_tool_properties))
if "strategy" in config["agent_mode"] and config["agent_mode"]["strategy"] not in [
if "strategy" in config["agent_mode"] and config["agent_mode"]["strategy"] not in {
"react_router",
"router",
- ]:
+ }:
agent_prompt = agent_dict.get("prompt", None) or {}
# check model mode
model_mode = config.get("model", {}).get("mode", "completion")


@@ -167,7 +167,7 @@ class DatasetConfigManager:
config["agent_mode"]["strategy"] = PlanningStrategy.ROUTER.value
has_datasets = False
if config["agent_mode"]["strategy"] in [PlanningStrategy.ROUTER.value, PlanningStrategy.REACT_ROUTER.value]:
if config["agent_mode"]["strategy"] in {PlanningStrategy.ROUTER.value, PlanningStrategy.REACT_ROUTER.value}:
for tool in config["agent_mode"]["tools"]:
key = list(tool.keys())[0]
if key == "dataset":


@@ -86,7 +86,7 @@ class PromptTemplateConfigManager:
if config["prompt_type"] == PromptTemplateEntity.PromptType.ADVANCED.value:
if not config["chat_prompt_config"] and not config["completion_prompt_config"]:
raise ValueError(
"chat_prompt_config or completion_prompt_config is required " "when prompt_type is advanced"
"chat_prompt_config or completion_prompt_config is required when prompt_type is advanced"
)
model_mode_vals = [mode.value for mode in ModelMode]


@@ -42,12 +42,12 @@ class BasicVariablesConfigManager:
variable=variable["variable"], type=variable["type"], config=variable["config"]
)
)
- elif variable_type in [
+ elif variable_type in {
VariableEntityType.TEXT_INPUT,
VariableEntityType.PARAGRAPH,
VariableEntityType.NUMBER,
VariableEntityType.SELECT,
- ]:
+ }:
variable = variables[variable_type]
variable_entities.append(
VariableEntity(
@@ -97,7 +97,7 @@ class BasicVariablesConfigManager:
variables = []
for item in config["user_input_form"]:
key = list(item.keys())[0]
- if key not in ["text-input", "select", "paragraph", "number", "external_data_tool"]:
+ if key not in {"text-input", "select", "paragraph", "number", "external_data_tool"}:
raise ValueError("Keys in user_input_form list can only be 'text-input', 'paragraph' or 'select'")
form_item = item[key]
@@ -115,7 +115,7 @@ class BasicVariablesConfigManager:
pattern = re.compile(r"^(?!\d)[\u4e00-\u9fa5A-Za-z0-9_\U0001F300-\U0001F64F\U0001F680-\U0001F6FF]{1,100}$")
if pattern.match(form_item["variable"]) is None:
raise ValueError("variable in user_input_form must be a string, " "and cannot start with a number")
raise ValueError("variable in user_input_form must be a string, and cannot start with a number")
variables.append(form_item["variable"])


@@ -54,14 +54,14 @@ class FileUploadConfigManager:
if is_vision:
detail = config["file_upload"]["image"]["detail"]
- if detail not in ["high", "low"]:
+ if detail not in {"high", "low"}:
raise ValueError("detail must be in ['high', 'low']")
transfer_methods = config["file_upload"]["image"]["transfer_methods"]
if not isinstance(transfer_methods, list):
raise ValueError("transfer_methods must be of list type")
for method in transfer_methods:
- if method not in ["remote_url", "local_file"]:
+ if method not in {"remote_url", "local_file"}:
raise ValueError("transfer_methods must be in ['remote_url', 'local_file']")
return config, ["file_upload"]


@@ -121,6 +121,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
inputs=conversation.inputs if conversation else self._get_cleaned_inputs(inputs, app_config),
query=query,
files=file_objs,
+ parent_message_id=args.get("parent_message_id"),
user_id=user.id,
stream=stream,
invoke_from=invoke_from,


@@ -73,7 +73,7 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
raise ValueError("Workflow not initialized")
user_id = None
- if self.application_generate_entity.invoke_from in [InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API]:
+ if self.application_generate_entity.invoke_from in {InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API}:
end_user = db.session.query(EndUser).filter(EndUser.id == self.application_generate_entity.user_id).first()
if end_user:
user_id = end_user.session_id
@@ -175,7 +175,7 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
user_id=self.application_generate_entity.user_id,
user_from=(
UserFrom.ACCOUNT
- if self.application_generate_entity.invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER]
+ if self.application_generate_entity.invoke_from in {InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER}
else UserFrom.END_USER
),
invoke_from=self.application_generate_entity.invoke_from,


@@ -231,7 +231,8 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
except Exception as e:
logger.error(e)
break
- yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
+ if tts_publisher:
+     yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
def _process_stream_response(
self,
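This guard (repeated in the workflow and easy-UI pipelines below) stops the stream from emitting an audio-end event when no TTS publisher was ever configured. A toy model of the fix; `AudioEvent` and `stream_audio` are invented names:

```python
from typing import Iterator, Optional

class AudioEvent:
    def __init__(self, audio: str, end: bool = False) -> None:
        self.audio = audio
        self.end = end

def stream_audio(tts_publisher: Optional[object], chunks: list) -> Iterator[AudioEvent]:
    for chunk in chunks:  # with no publisher, no chunks were produced upstream
        yield AudioEvent(chunk)
    if tts_publisher:  # the fix: end marker only when TTS is actually configured
        yield AudioEvent("", end=True)

assert list(stream_audio(None, [])) == []  # TTS disabled: no end event leaks out
```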


@@ -127,6 +127,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
inputs=conversation.inputs if conversation else self._get_cleaned_inputs(inputs, app_config),
query=query,
files=file_objs,
+ parent_message_id=args.get("parent_message_id"),
user_id=user.id,
stream=stream,
invoke_from=invoke_from,


@@ -16,7 +16,7 @@ class AppGenerateResponseConverter(ABC):
def convert(
cls, response: Union[AppBlockingResponse, Generator[AppStreamResponse, Any, None]], invoke_from: InvokeFrom
) -> dict[str, Any] | Generator[str, Any, None]:
- if invoke_from in [InvokeFrom.DEBUGGER, InvokeFrom.SERVICE_API]:
+ if invoke_from in {InvokeFrom.DEBUGGER, InvokeFrom.SERVICE_API}:
if isinstance(response, AppBlockingResponse):
return cls.convert_blocking_full_response(response)
else:
@@ -75,10 +75,10 @@ class AppGenerateResponseConverter(ABC):
:return:
"""
# show_retrieve_source
+ updated_resources = []
if "retriever_resources" in metadata:
- metadata["retriever_resources"] = []
for resource in metadata["retriever_resources"]:
- metadata["retriever_resources"].append(
+ updated_resources.append(
{
"segment_id": resource["segment_id"],
"position": resource["position"],
@@ -87,6 +87,7 @@ class AppGenerateResponseConverter(ABC):
"content": resource["content"],
}
)
+ metadata["retriever_resources"] = updated_resources
# show annotation reply
if "annotation_reply" in metadata:


@@ -22,11 +22,11 @@ class BaseAppGenerator:
return var.default or ""
if (
var.type
- in (
+ in {
VariableEntityType.TEXT_INPUT,
VariableEntityType.SELECT,
VariableEntityType.PARAGRAPH,
- )
+ }
and user_input_value
and not isinstance(user_input_value, str)
):
@@ -44,7 +44,7 @@ class BaseAppGenerator:
options = var.options or []
if user_input_value not in options:
raise ValueError(f"{var.variable} in input form must be one of the following: {options}")
- elif var.type in (VariableEntityType.TEXT_INPUT, VariableEntityType.PARAGRAPH):
+ elif var.type in {VariableEntityType.TEXT_INPUT, VariableEntityType.PARAGRAPH}:
if var.max_length and user_input_value and len(user_input_value) > var.max_length:
raise ValueError(f"{var.variable} in input form must be less than {var.max_length} characters")


@@ -32,7 +32,7 @@ class AppQueueManager:
self._user_id = user_id
self._invoke_from = invoke_from
user_prefix = "account" if self._invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER] else "end-user"
user_prefix = "account" if self._invoke_from in {InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER} else "end-user"
redis_client.setex(
AppQueueManager._generate_task_belong_cache_key(self._task_id), 1800, f"{user_prefix}-{self._user_id}"
)
@@ -118,7 +118,7 @@ class AppQueueManager:
if result is None:
return
user_prefix = "account" if invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER] else "end-user"
user_prefix = "account" if invoke_from in {InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER} else "end-user"
if result.decode("utf-8") != f"{user_prefix}-{user_id}":
return


@@ -309,7 +309,7 @@ class AppRunner:
if not prompt_messages:
prompt_messages = result.prompt_messages
- if not usage and result.delta.usage:
+ if result.delta.usage:
usage = result.delta.usage
if not usage:
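Previously `usage` froze on the first delta that carried usage data; the new condition keeps overwriting it, so the last reported figure wins (providers typically send cumulative usage in the final chunk), and the `if not usage:` fallback still covers streams that never report usage. A toy reduction of the change:

```python
class Delta:
    def __init__(self, usage=None):
        self.usage = usage

chunks = [Delta(), Delta(usage={"total_tokens": 10}), Delta(usage={"total_tokens": 42})]

usage = None
for result in chunks:
    if result.usage:  # previously `if not usage and result.usage:` kept only the first
        usage = result.usage

assert usage == {"total_tokens": 42}  # the final cumulative figure wins
```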
@@ -379,7 +379,7 @@ class AppRunner:
queue_manager=queue_manager,
app_generate_entity=application_generate_entity,
prompt_messages=prompt_messages,
text="I apologize for any confusion, " "but I'm an AI assistant to be helpful, harmless, and honest.",
text="I apologize for any confusion, but I'm an AI assistant to be helpful, harmless, and honest.",
stream=application_generate_entity.stream,
)


@@ -128,6 +128,7 @@ class ChatAppGenerator(MessageBasedAppGenerator):
inputs=conversation.inputs if conversation else self._get_cleaned_inputs(inputs, app_config),
query=query,
files=file_objs,
+ parent_message_id=args.get("parent_message_id"),
user_id=user.id,
stream=stream,
invoke_from=invoke_from,


@@ -148,7 +148,7 @@ class MessageBasedAppGenerator(BaseAppGenerator):
# get from source
end_user_id = None
account_id = None
- if application_generate_entity.invoke_from in [InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API]:
+ if application_generate_entity.invoke_from in {InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API}:
from_source = "api"
end_user_id = application_generate_entity.user_id
else:
@@ -165,11 +165,11 @@ class MessageBasedAppGenerator(BaseAppGenerator):
model_provider = application_generate_entity.model_conf.provider
model_id = application_generate_entity.model_conf.model
override_model_configs = None
- if app_config.app_model_config_from == EasyUIBasedAppModelConfigFrom.ARGS and app_config.app_mode in [
+ if app_config.app_model_config_from == EasyUIBasedAppModelConfigFrom.ARGS and app_config.app_mode in {
AppMode.AGENT_CHAT,
AppMode.CHAT,
AppMode.COMPLETION,
- ]:
+ }:
override_model_configs = app_config.app_model_config_dict
# get conversation introduction
@@ -218,6 +218,7 @@ class MessageBasedAppGenerator(BaseAppGenerator):
answer_tokens=0,
answer_unit_price=0,
answer_price_unit=0,
+ parent_message_id=getattr(application_generate_entity, "parent_message_id", None),
provider_response_latency=0,
total_price=0,
currency="USD",


@@ -53,7 +53,7 @@ class WorkflowAppRunner(WorkflowBasedAppRunner):
app_config = cast(WorkflowAppConfig, app_config)
user_id = None
- if self.application_generate_entity.invoke_from in [InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API]:
+ if self.application_generate_entity.invoke_from in {InvokeFrom.WEB_APP, InvokeFrom.SERVICE_API}:
end_user = db.session.query(EndUser).filter(EndUser.id == self.application_generate_entity.user_id).first()
if end_user:
user_id = end_user.session_id
@@ -113,7 +113,7 @@ class WorkflowAppRunner(WorkflowBasedAppRunner):
user_id=self.application_generate_entity.user_id,
user_from=(
UserFrom.ACCOUNT
- if self.application_generate_entity.invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER]
+ if self.application_generate_entity.invoke_from in {InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER}
else UserFrom.END_USER
),
invoke_from=self.application_generate_entity.invoke_from,


@@ -212,7 +212,8 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
except Exception as e:
logger.error(e)
break
- yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
+ if tts_publisher:
+     yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
def _process_stream_response(
self,


@@ -84,7 +84,7 @@ class WorkflowLoggingCallback(WorkflowCallback):
if route_node_state.node_run_result:
node_run_result = route_node_state.node_run_result
self.print_text(
f"Inputs: " f"{jsonable_encoder(node_run_result.inputs) if node_run_result.inputs else ''}",
f"Inputs: {jsonable_encoder(node_run_result.inputs) if node_run_result.inputs else ''}",
color="green",
)
self.print_text(
@@ -116,7 +116,7 @@ class WorkflowLoggingCallback(WorkflowCallback):
node_run_result = route_node_state.node_run_result
self.print_text(f"Error: {node_run_result.error}", color="red")
self.print_text(
f"Inputs: " f"" f"{jsonable_encoder(node_run_result.inputs) if node_run_result.inputs else ''}",
f"Inputs: {jsonable_encoder(node_run_result.inputs) if node_run_result.inputs else ''}",
color="red",
)
self.print_text(
@@ -125,7 +125,7 @@ class WorkflowLoggingCallback(WorkflowCallback):
color="red",
)
self.print_text(
f"Outputs: " f"{jsonable_encoder(node_run_result.outputs) if node_run_result.outputs else ''}",
f"Outputs: {jsonable_encoder(node_run_result.outputs) if node_run_result.outputs else ''}",
color="red",
)


@@ -122,6 +122,7 @@ class ChatAppGenerateEntity(EasyUIBasedAppGenerateEntity):
"""
conversation_id: Optional[str] = None
+ parent_message_id: Optional[str] = None
class CompletionAppGenerateEntity(EasyUIBasedAppGenerateEntity):
@@ -138,6 +139,7 @@ class AgentChatAppGenerateEntity(EasyUIBasedAppGenerateEntity):
"""
conversation_id: Optional[str] = None
+ parent_message_id: Optional[str] = None
class AdvancedChatAppGenerateEntity(AppGenerateEntity):
@@ -149,6 +151,7 @@ class AdvancedChatAppGenerateEntity(AppGenerateEntity):
app_config: WorkflowUIBasedAppConfig
conversation_id: Optional[str] = None
+ parent_message_id: Optional[str] = None
query: str
class SingleIterationRunEntity(BaseModel):


@@ -63,7 +63,7 @@ class AnnotationReplyFeature:
score = documents[0].metadata["score"]
annotation = AppAnnotationService.get_annotation_by_id(annotation_id)
if annotation:
- if invoke_from in [InvokeFrom.SERVICE_API, InvokeFrom.WEB_APP]:
+ if invoke_from in {InvokeFrom.SERVICE_API, InvokeFrom.WEB_APP}:
from_source = "api"
else:
from_source = "console"


@@ -1,2 +1,2 @@
- class VariableError(Exception):
+ class VariableError(ValueError):
pass
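A one-line change with real API consequences: `VariableError` now participates in existing `except ValueError` handling, the usual contract for invalid-input errors. A short sketch:

```python
class VariableError(ValueError):
    pass

def build_variable(spec: dict) -> dict:
    if "name" not in spec:
        raise VariableError("missing variable name")
    return spec

try:
    build_variable({})
except ValueError as err:  # a VariableError is caught as a ValueError
    print(f"invalid variable: {err}")
```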


@@ -248,7 +248,8 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline, MessageCycleMan
else:
start_listener_time = time.time()
yield MessageAudioStreamResponse(audio=audio.audio, task_id=task_id)
- yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
+ if publisher:
+     yield MessageAudioEndStreamResponse(audio="", task_id=task_id)
def _process_stream_response(
self, publisher: AppGeneratorTTSPublisher, trace_manager: Optional[TraceQueueManager] = None
@@ -372,7 +373,7 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline, MessageCycleMan
self._message,
application_generate_entity=self._application_generate_entity,
conversation=self._conversation,
- is_first_message=self._application_generate_entity.app_config.app_mode in [AppMode.AGENT_CHAT, AppMode.CHAT]
+ is_first_message=self._application_generate_entity.app_config.app_mode in {AppMode.AGENT_CHAT, AppMode.CHAT}
and self._application_generate_entity.conversation_id is None,
extras=self._application_generate_entity.extras,
)


@@ -383,7 +383,7 @@ class WorkflowCycleManage:
:param workflow_node_execution: workflow node execution
:return:
"""
- if workflow_node_execution.node_type in [NodeType.ITERATION.value, NodeType.LOOP.value]:
+ if workflow_node_execution.node_type in {NodeType.ITERATION.value, NodeType.LOOP.value}:
return None
response = NodeStartStreamResponse(
@@ -430,7 +430,7 @@ class WorkflowCycleManage:
:param workflow_node_execution: workflow node execution
:return:
"""
- if workflow_node_execution.node_type in [NodeType.ITERATION.value, NodeType.LOOP.value]:
+ if workflow_node_execution.node_type in {NodeType.ITERATION.value, NodeType.LOOP.value}:
return None
return NodeFinishStreamResponse(

Some files were not shown because too many files have changed in this diff.