Compare commits

...

199 Commits

Author SHA1 Message Date
Joel
c110888aee feat: agent app support generate prompt (#7007) 2024-08-06 17:43:54 +08:00
yanghx
c53875ce8c fix #6902 .docx handles images within tables and handles cross-column tables (#6951) 2024-08-06 17:14:24 +08:00
crazywoola
7f18c06b0a fix: code-block-missing-checks (#7002) 2024-08-06 16:11:14 +08:00
灰灰
96dcf0fe8a fix: code tool fails when null property exists in object (#6988) 2024-08-06 16:11:00 +08:00
Yi Xiao
0c22e4e3d1 Feat/new confirm (#6984) 2024-08-06 14:31:13 +08:00
Yefori
bd3ed89516 feat: add function calling for deepseek models (#6990) 2024-08-06 13:37:27 +08:00
Vico Chu
1c043b8426 Chores: fix name typo (#6987) 2024-08-06 13:33:21 +08:00
小羽
23ed15d19f feat:nvidia add nemotron4-340b and microsoft/phi-3 (#6973) 2024-08-06 10:16:41 +08:00
非法操作
312d905c9b chore: update duckduckgo tool (#6983) 2024-08-06 10:16:04 +08:00
Dr. Artificial曾小健
cba9319cc7 fix doc (#6974) 2024-08-06 10:10:55 +08:00
takatost
d839f1ada7 version to 0.6.16 (#6972) 2024-08-05 23:33:37 +08:00
takatost
6da14c2d48 security: fix api image security issues (#6971) 2024-08-05 20:21:08 +08:00
Pedro Gomes
a34285196b Revise the wrong pricing of certain LLM models. (#6967) 2024-08-05 18:41:44 +08:00
quicksand
e4587b2151 chore: MAX_TREE_DEPTH spelling mistake (#6965) 2024-08-05 18:41:08 +08:00
takatost
ea30174057 chore: optimize streaming tts of xinference (#6966) 2024-08-05 18:23:23 +08:00
Sangmin Ahn
dd676866aa chore: exclude .txt extenstion in create_by_text API (#6956) 2024-08-05 15:52:07 +08:00
TzuxinChen
f0d10553b4 Fixed a bug where permission was clearly displaye… (#6934) 2024-08-05 13:19:01 +08:00
liuzhenghua
ef616c604a fix: The permissions issue of the editor role accessing some backend … (#6945)
Co-authored-by: liuzhenghua-jk <liuzhenghua-jk@360shuke.com>
2024-08-05 12:55:55 +08:00
KVOJJJin
2288efbf48 Fix: tag & settings modal in dataset card in Firefox (#6953) 2024-08-05 12:51:26 +08:00
Bowen Liang
f656e1bae2 fix: ensure db migration in docker entry script running with upgrade-db command for proper locking (#6946) 2024-08-05 10:55:26 +08:00
alwqx
5a7fc8cd8c chore: fix markdown format and one typo (#6939) 2024-08-05 08:29:59 +08:00
liuzhenghua
141e4e0276 fix: restore xinference secret field (#6941)
Co-authored-by: liuzhenghua-jk <liuzhenghua-jk@360shuke.com>
2024-08-04 22:32:24 +08:00
Pedro Gomes
20d3e1d297 Fix increase_usage of total_price in agent_runner (#6688) 2024-08-04 14:42:22 +08:00
crazywoola
79715345ef fix: import workflow errors (#6937) 2024-08-04 14:34:39 +08:00
chenxu9741
dff3f41ef6 Workflow TTS playback node filtering issue. (#6877) 2024-08-04 14:28:56 +08:00
Weaxs
5e634a59a2 compatible xinference reranker server (#6927) 2024-08-04 13:49:38 +08:00
Joe
26e46d365c fix: workflow trace user_id error (#6932) 2024-08-04 03:28:50 +08:00
takatost
bcd7c8e921 fix: sending app trace data to other app trace provider (#6931) 2024-08-04 00:05:51 +08:00
Bowen Liang
70283f5b9f dep: support for Python 3.12 (#6771) 2024-08-02 21:14:36 +08:00
JuHyung Son
2e941bb91c add new provider Solar (#6884) 2024-08-02 20:48:09 +08:00
zhuhao
541bf1db5a feat: add the tool Serper for Google search. (#6786) (#6790) 2024-08-02 20:37:04 +08:00
Jyong
048bc4c06e fix update dataset failed when embedding model is not exist (#6920) 2024-08-02 20:30:22 +08:00
Kevin9703
4d0a6cc382 fix(nodes/knowledge-retrieval): workflow knowledge retrieval rerank model check (#6918) 2024-08-02 20:30:05 +08:00
zxhlyh
6feea0d75b fix: default rerank model check (#6917) 2024-08-02 18:27:06 +08:00
Joe
f97a51ce24 fix: reranking disable timer error (#6910) 2024-08-02 16:34:50 +08:00
NFish
df530b53e5 fix: system model setting missing space between buttons (#6912) 2024-08-02 16:26:16 +08:00
Bowen Liang
6aa02f8c63 dep: bump pgvecto-rs client from 0.1.x to 0.2.x (#6891) 2024-08-02 15:51:23 +08:00
crazywoola
7ab04e17e7 fix: return code in service api (#6911) 2024-08-02 15:48:58 +08:00
非法操作
bf3f1027c8 fix: code execution node not display clear reasons when sandbox res error (#6830) 2024-08-02 15:36:44 +08:00
KVOJJJin
62cc4077bb Fix: webapp color theme (#6908) 2024-08-02 15:08:14 +08:00
zxhlyh
e683461416 fix: knowledge save button visible (#6905) 2024-08-02 14:20:20 +08:00
zxhlyh
33dab4fe54 fix: multiple retrieval default weighted score (#6897) 2024-08-02 14:05:27 +08:00
sino
8166a8caf5 feat: update llama3.1 parameters for openrouter (#6901) 2024-08-02 13:13:34 +08:00
Jyong
44801df8f8 fix score threshold limit be None (#6900) 2024-08-02 12:10:51 +08:00
灰灰
56af1a0adf pref: change ollama embedded api request (#6876) 2024-08-02 12:04:47 +08:00
dufei
f8617db012 fix tongyi tool calls (#6896) 2024-08-02 10:03:43 +08:00
Jyong
2ab9af3b38 delete weight_type in knowledge retrieval node (#6892) 2024-08-01 21:38:59 +08:00
Hanqing Zhao
24a89f7753 Modify/modify jp doc (#6889) 2024-08-01 20:33:35 +08:00
Weaxs
cc4785f094 fix: xinference reranker return_documents (#6888) 2024-08-01 19:57:53 +08:00
ian
093f902335 fix: Change API key authentication failure response code from 404 to 401 (#6885) 2024-08-01 17:41:35 +08:00
hursit
104c797dd0 feat: Add support for i18n Turkish language (tr-TR) (#6886)
Co-authored-by: hursit <hursit.topal@enuygun.com>
2024-08-01 17:30:35 +08:00
chenxu9741
a9cd6df97e Remove tts (blocking call) (#6869) 2024-08-01 14:50:22 +08:00
呆萌闷油瓶
f31142e758 Azure 4o mini options (#6873) 2024-08-01 14:04:18 +08:00
zxhlyh
9ae88ede12 chore: n to 1 retrieval (#6839) 2024-08-01 13:45:18 +08:00
crazywoola
792f908afb Revert "feat:Azure gpt4o mini" (#6870) 2024-08-01 13:32:03 +08:00
非法操作
29e3c3061c fix: remote image not display in answer node (#6867) 2024-08-01 13:21:49 +08:00
呆萌闷油瓶
14367ddc09 feat:Azure gpt4o mini (#6866) 2024-08-01 13:03:08 +08:00
Jyong
8157fccf6d delete weight_type (#6865) 2024-08-01 13:02:33 +08:00
Charlie.Wei
cbf7f21ade Add azure gpt4omini (#6862)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-08-01 12:57:52 +08:00
NFish
9c4f3be0f3 Fix keyboard shortcut conflict between workflow and browser (#6863) 2024-08-01 12:57:30 +08:00
Weaxs
f6e8e120a1 support xinference tts (#6746) 2024-08-01 11:59:15 +08:00
Joe
08f922d8c9 fix: anthropic max token NoneType error (#6858) 2024-08-01 11:30:00 +08:00
zxhlyh
e9d6a43907 fix: model parameter selector (#6861) 2024-08-01 11:23:53 +08:00
-LAN-
feb4576ee7 chore: update SQLAlchemy configuration with custom naming convention (#6854) 2024-08-01 11:16:49 +08:00
小羽
56b43f62d1 feat: nvidia add llama3.1 model (#6844) 2024-07-31 21:24:02 +08:00
Giga Group
4b410494b3 Add model parameter enable_enhance for hunyuan llm model (#6847)
Co-authored-by: sun <sun@centen.cn>
2024-07-31 20:04:43 +08:00
Jyong
13f5867a16 add unstructured profiles (#6846) 2024-07-31 19:39:38 +08:00
Joe
df9bd36cab fix: claude-3-5-sonnet-20240620 max token error (#6843) 2024-07-31 18:34:44 +08:00
Joel
77c071e26f chore: upgrade slider ui (#6838) 2024-07-31 17:46:43 +08:00
Jyong
af76381b98 fix notion internal setting (#6836) 2024-07-31 17:17:46 +08:00
非法操作
35d0534eb9 chore: fix ssrf doc url (#6828) 2024-07-31 17:14:53 +08:00
William Espegren
4be12b29b9 fix: improved error handling for spider tool (#6835) 2024-07-31 17:11:52 +08:00
NFish
dd64e65ea0 fix: edit segment missing space between buttons (#6826)
Co-authored-by: xc Dou <xcdou@192.168.88.89>
2024-07-31 15:11:07 +08:00
ybalbert001
c23aa50bea Add AWS builtin Tools (#6721)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
Co-authored-by: crazywoola <427733928@qq.com>
2024-07-31 14:41:42 +08:00
Nam Vu
8eb0d0fddd feat: support Celery auto-scale (#6249)
Co-authored-by: takatost <takatost@gmail.com>
2024-07-31 14:34:44 +08:00
kimjion
8904745129 feat: tag filter adapte to scrolling (#6819) 2024-07-31 13:55:32 +08:00
k-brahma
936ac8826d Add docker-compose certbot configurations with backward compatibility (#6702)
Co-authored-by: Your Name <you@example.com>
2024-07-31 13:21:56 +08:00
Ever
545d3c5a93 chore: Add processId field for metrics of threads/db-pool-stat/health (#6797)
Co-authored-by: 老潮 <zhangyongsheng@3vjia.com>
Co-authored-by: takatost <takatost@users.noreply.github.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-07-31 00:21:16 +08:00
crazywoola
3c371a6cb0 fix: workflow api (#6810) 2024-07-30 23:51:48 +08:00
longzhihun
9ce5cea911 feat: bedrock invoke enhancement (#6808) 2024-07-30 21:57:18 +08:00
eric-0x72
98d9837fbc fix wrong charset when decoding Chinese content (#6774)
Co-authored-by: zhangwb <zhangwb@zts.com.cn>
2024-07-30 21:32:45 +08:00
Joel
53a89bbbc7 chore : option card (#6800) 2024-07-30 17:33:08 +08:00
zxhlyh
0a744a73b3 fix: eco knowledge retrieval method (#6798) 2024-07-30 16:59:03 +08:00
非法操作
0675c5f716 chore: add shortcut keys and hints for the shortcuts (#6779) 2024-07-30 16:18:58 +08:00
Yeuoly
72963d1f13 fix: nonetype in webscraper validation (#6788) 2024-07-30 14:45:14 +08:00
Chenhe Gu
028261f760 improve issue templates (#6785) 2024-07-30 14:17:45 +08:00
-LAN-
a98284b1ef refactor(api): Switch to dify_config (#6750)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-30 11:15:26 +08:00
Bowen Liang
daa31b2cb3 chore: remove redundant version pinning for indirect dependencies (#6772) 2024-07-30 08:45:57 +08:00
Bowen Liang
b414ea41d6 dep: initial support for Milvus 2.4.x (#6084) 2024-07-29 19:56:45 +08:00
Joe
f78d0082ae feat: implement function dispatch table for trace processing (#6628) 2024-07-29 18:47:25 +08:00
SiliconFlow, Inc
3e18d32ce5 add deepseek-coder-v2 in siliconflow (#6149) 2024-07-29 18:45:19 +08:00
Charles
94d68b6a08 upgrade deepseek params (#6744) 2024-07-29 18:31:56 +08:00
Giga Group
c9ff0e3961 Add model hunyuan-embedding (#6657)
Co-authored-by: sun <sun@centen.cn>
2024-07-29 18:30:52 +08:00
-LAN-
8dd68e2034 fix(api/core/moderation/output_moderation.py): Fix config call. (#6769) 2024-07-29 18:30:29 +08:00
zxhlyh
2cd662c43b chore: n to 1 retrieval legacy text (#6767) 2024-07-29 18:09:44 +08:00
zxhlyh
4945184f8c fix: n to 1 retrieval legacy text (#6760) 2024-07-29 16:03:47 +08:00
Bowen Liang
cb01bf2986 chore: set logging level to debug when reading YAML files and falling back to default value in case of None (#6758) 2024-07-29 13:40:18 +08:00
Yi Xiao
f43e27814c Fix action button size (#6753) 2024-07-29 13:19:15 +08:00
Bowen Liang
20268708cc chore: improve position map conversion and tolerate empty position yaml file (#6541) 2024-07-29 10:32:11 +08:00
Hanqing Zhao
c8da4a1b7e Add jp translation for new features (#6749) 2024-07-29 08:58:48 +08:00
Vicky Guo
829472a1d7 switch to diffy_config with Pydantic in files, moderation and app (#6747)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2024-07-29 02:57:45 +08:00
Hiroshige Aoki
e23461c837 Fix/6615 40 varchar limit on DatasetCollectionBinding and Embedding model name (#6723) 2024-07-28 09:42:58 +08:00
非法操作
21f6caacd4 feat: enhance the firecrawl tool (#6705) 2024-07-27 15:00:06 +08:00
Pascal M
082c46a903 chore: migrate to poetry in devcontainer commands (#6724) 2024-07-27 14:49:34 +08:00
-LAN-
6a3bef8378 feat(api/core/app/segments): Update segment types and variables (#6734)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-27 14:43:51 +08:00
-LAN-
b6c3010f02 refactor(api/core/workflow/nodes/base_node.py): Update extract_variable_selector_to_variable_mapping method signature. (#6733)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-27 14:43:25 +08:00
crazywoola
90d2c01218 Feat/6725 can not get image url from cogview tool (#6728) 2024-07-27 00:07:31 +08:00
-LAN-
83af50368f fix(api/core/model_runtime/model_providers/azure_openai/llm/llm.py): Try to skip if delta.delta is None. (#6727)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-27 00:05:21 +08:00
Jyong
cf258b7a67 add xlsx support hyperlink extract (#6722) 2024-07-26 19:26:52 +08:00
-LAN-
5d77dc4f58 feat(api/core/app/segments/parser.py): Remove blank segment in convert_template (#6709)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-26 18:19:33 +08:00
Joe
e4542215cc fix: tongyi empty tool_calls is not supported in message (#6719) 2024-07-26 18:10:13 +08:00
Jason
3d3677e912 Feat/model provider novita (#6717)
Co-authored-by: takatost <takatost@gmail.com>
2024-07-26 17:37:21 +08:00
Kevin9703
427f48be6b fix(answer/operation): feedback status in the logs (#6716) 2024-07-26 17:36:36 +08:00
-LAN-
c6996a48a4 refactor(api/core/app/segments): Support more kinds of Segments. (#6706)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-26 15:03:56 +08:00
chenxu9741
6b50bb0fe6 issues #6655 Open ai tts issues (#6696) 2024-07-26 14:55:49 +08:00
Kevin9703
80b3871c55 fix(log/list): Incorrect field 'app_id' causing annotations to fail t… (#6697) 2024-07-26 11:24:32 +08:00
KVOJJJin
4839523e53 Fix appId missing in annotations (#6699) 2024-07-26 11:21:51 +08:00
Sangmin Ahn
ecb9c311b5 chore: make prompt generator max tokens configurable (#6693) 2024-07-26 10:20:23 +08:00
crazywoola
bd97ce9489 fix: doc link in knowledge base (#6691) 2024-07-26 09:14:08 +08:00
Yeuoly
79cb23e8ac security/SSRF vulns (#6682) 2024-07-25 20:50:26 +08:00
longzhihun
c5ac004f15 [seanguo] fix: unsupported filename in windows & add Mistral Large 2 (#6679) 2024-07-25 19:26:46 +08:00
crazywoola
5fbfa0f2c8 Update bug_report.yml (#6678) 2024-07-25 18:59:04 +08:00
RookieAgent
78a339a794 modify llama3-1 yaml filename to support Windows pull operations (#6677) 2024-07-25 18:58:55 +08:00
Hanqing Zhao
f904df4b63 Add french and jp translation for new feature (#6675) 2024-07-25 18:55:16 +08:00
灰灰
5e4ac11df3 fix: code block segmentation problem of markdown document (#6465) 2024-07-25 17:24:37 +08:00
tmuife
16b4f560cd fix bugs(when using Oracle23ai as Vector DB) (#6658) 2024-07-25 17:07:14 +08:00
-LAN-
75e6576c67 refactor(api/core/app/segments): implement to_object in ObjectVariable and ArrayVariable. (#6671)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-25 17:06:38 +08:00
Seayon
0b4c26578e Enhance database URI security and add URL encoding (#6668) 2024-07-25 16:48:00 +08:00
xielong
ebcc07e3e9 feat: support max_retries in jina requests (#6585) 2024-07-25 13:10:39 +08:00
-LAN-
55c2b61921 fix(api/fields/workflow_fields.py): Add check in environment variables (#6621) 2024-07-25 11:30:52 +08:00
Giga Group
ca696fe94c Add support of tool-call for model provider "hunyuan" (#6656)
Co-authored-by: sun <sun@centen.cn>
2024-07-25 11:27:58 +08:00
非法操作
585444c50c chore: fix type annotations (#6600) 2024-07-25 11:21:51 +08:00
longzhihun
9815aab7a3 [seanguo] feat: add llama 3.1 support in bedrock (#6645) 2024-07-25 11:20:37 +08:00
yanghx
349ec0db77 fix tencent_cos_storage image-preview error is not a byte (#6652) 2024-07-25 11:20:20 +08:00
majian
a876baf0a9 Resolve variable type parameter error (#6646) 2024-07-25 11:15:54 +08:00
Jyong
91fd8521c3 fix reranking model field error (#6654) 2024-07-25 10:07:55 +08:00
-LAN-
4ec9a87e46 fix(api/core/workflow/nodes/iteration/iteration_node.py): Extend output in iteration if output is a array. (#6647)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-25 00:32:39 +08:00
Vico Chu
fb5e3662d5 Chores: add missing profile for middleware docker compose cmd and fix ssrf-proxy doc link (#6372)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-07-24 19:36:06 +08:00
-LAN-
31efe10c75 refactor(api/core/workflow/workflow_engine_manager.py): Remove (#6630)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-24 19:35:40 +08:00
-LAN-
72bc9d5f2b feat(api/core/app/segments/variables.py): Support description in Variable. (#6636)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-07-24 19:35:22 +08:00
Jyong
600f13436d remove rerank model must be required when retrieval_model is multiple (#6640) 2024-07-24 19:34:41 +08:00
Joe
b347a2f839 Feat/user session id search (#6638) 2024-07-24 19:34:23 +08:00
crazywoola
47b5bd7243 fix: value is not an array (#6632) 2024-07-24 19:14:04 +08:00
zhangzhiqiangcs
d4c55748f1 doc: fix about model features (#6619) 2024-07-24 19:12:10 +08:00
takatost
0625db0bf5 chore: optimize asynchronous workflow deletion performance of app related data (#6639) 2024-07-24 19:00:37 +08:00
takatost
05141ede16 chore: optimize asynchronous deletion performance of app related data (#6634) 2024-07-24 18:15:03 +08:00
Yi Xiao
c112188207 feat: added ActionButton component (#6631) 2024-07-24 18:09:44 +08:00
dufei
5af2df0cd5 fix: qwen fc error (#6620)
Co-authored-by: dufei <du_fei@venusgroup.com.cn>
2024-07-24 16:56:06 +08:00
crazywoola
f324374b95 Fix/6615 40 varchar limit on model name (#6623) 2024-07-24 16:23:16 +08:00
KVOJJJin
2aad128883 Fix: DSL backup (#6616) 2024-07-24 15:02:30 +08:00
KVOJJJin
3c78fdec1c Fix: reset button in embedded chatbot (#6611) 2024-07-24 14:46:52 +08:00
zxhlyh
6fe9aa69cc feat: n to 1 retrieval legacy (#6554) 2024-07-24 12:50:48 +08:00
Jyong
e4bb943fe5 Feat/delete single dataset retrival (#6570) 2024-07-24 12:50:11 +08:00
takatost
0fb741f269 fix: downgraded sentry-sdk to 1.44.1 due to claude LLM token returning 0 (#6597) 2024-07-24 04:49:03 +08:00
takatost
4c85393a1d feat: add GroqCloud llama3.1 series models support (#6596) 2024-07-24 00:41:58 +08:00
sino
d5c2680fde feat: support llama3.1 series models for openrouter provider (#6595) 2024-07-24 00:37:48 +08:00
takatost
49729647ea bump to 0.6.15 (#6592) 2024-07-23 22:46:42 +08:00
-LAN-
85a883e281 fix(variables): NoneVariable should inherit from NoneSegment. (#6584) 2024-07-23 21:46:08 +08:00
Joe
8123a00e97 feat: update prompt generate (#6516) 2024-07-23 19:52:14 +08:00
Joel
0f6a064c08 chore: enchance auto generate prompt (#6564) 2024-07-23 19:51:38 +08:00
-LAN-
2bc0632d0d fix(segments): Support NoneType. (#6581) 2024-07-23 17:59:32 +08:00
Lance Mao
75445a0c66 fix audio not working during development due to react's useEffect wil be triggered twice (#6126) 2024-07-23 17:24:29 +08:00
Joel
6a9d202414 chore: layout UI upgrade (#6577) 2024-07-23 17:11:02 +08:00
-LAN-
ad7552ea8d fix(api/core/workflow/nodes/llm/llm_node.py): Fix LLM Node error. (#6576) 2024-07-23 17:09:16 +08:00
非法操作
c0ada940bd fix: tool params not work as expected when develop a tool (#6550) 2024-07-23 17:00:39 +08:00
takatost
1690788827 fix: name 'current_app' is not defined in recommended_app_service (#6574) 2024-07-23 16:48:21 +08:00
Lance Mao
7c55c39085 feat: add tencent asr (#6091) 2024-07-23 16:38:39 +08:00
非法操作
f17d4fe412 fix: extract only like feedback to caculate User Satisfaction (#6553) 2024-07-23 16:32:36 +08:00
-LAN-
f019bc4bd7 feat(variables): Support to_object. (#6572) 2024-07-23 16:22:06 +08:00
-LAN-
cfc408095c fix(api/nodes): Fallback to get_any in some nodes that use object or array. (#6566) 2024-07-23 15:51:07 +08:00
takatost
6b5fac3004 fix: fetch context error in llm node (#6562) 2024-07-23 15:04:51 +08:00
崔亮
0569c547ee fix the issue of MILVUS_DATABASE has no effect. (#6424) 2024-07-23 15:03:55 +08:00
tmuife
06fc1bce9e Add search by full text when using Oracle23ai as vector DB (#6559) 2024-07-23 15:03:21 +08:00
Sangmin Ahn
093b8ca475 fix: escape double quotation marks in the vector DB search query (#6506) 2024-07-23 15:02:25 +08:00
Ryan Tian
5fcc2caeed feat: add Mingdao HAP tool, implemented read and maintain HAP application worksheet data. (#6257)
Co-authored-by: takatost <takatost@gmail.com>
2024-07-23 14:34:19 +08:00
guogeer
f30a51e673 fix: chat flow chat with annotation or moderation but answer empty (#6202)
Co-authored-by: jinqi.guo <jinqi.guo@ubtrobot.com>
2024-07-23 14:13:58 +08:00
dependabot[bot]
642723d09e chore(deps): bump sentry-sdk from 1.39.2 to 2.8.0 in /api (#6517)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-23 13:48:23 +08:00
Joel
155e708540 Revert "chore: improve prompt auto generator" (#6556) 2024-07-23 13:35:35 +08:00
Joel
d726473c6d Revert "chore: use node specify llm to auto generate prompt" (#6555) 2024-07-23 13:31:32 +08:00
crazywoola
e80412df23 feat: renanme template (#6547) 2024-07-23 10:07:54 +08:00
crazywoola
66765acf00 Update help.yml (#6546) 2024-07-23 10:01:19 +08:00
crazywoola
7208ea1da9 fix: template (#6545) 2024-07-23 09:58:47 +08:00
crazywoola
5e2f3ec6f0 update discussion template (#6544) 2024-07-23 09:44:27 +08:00
-LAN-
cd7fa8027a fix(api/core/model_manager.py): Avoid mutation during iteration. (#6536) 2024-07-22 22:58:22 +08:00
-LAN-
617847e3c0 fix(api/services/app_generate_service.py): Remove wrong type hints. (#6535) 2024-07-22 22:58:07 +08:00
crazywoola
71a7211411 Feat/add email support for pro and team (#6533) 2024-07-22 19:56:46 +08:00
Joel
dc7335cdf8 chore: use node specify llm to auto generate prompt (#6525) 2024-07-22 18:16:33 +08:00
crazywoola
a7c1e4c7ae chore: remove support email from readme (#6530) 2024-07-22 17:23:19 +08:00
zxhlyh
87594008f8 fix: iteration node bg color (#6523) 2024-07-22 15:43:24 +08:00
-LAN-
5e6fc58db3 Feat/environment variables in workflow (#6515)
Co-authored-by: JzoNg <jzongcode@gmail.com>
2024-07-22 15:29:39 +08:00
Jason Tan
87d583f454 fix: privilege for editor role (#6521) 2024-07-22 15:01:25 +08:00
Benjamin
a67831773f refactor: handle missing position file gracefully (#6513) 2024-07-22 13:24:32 +08:00
Jian Yu
5b89b6fe2d allow custom base_url of dify api server (#6510) 2024-07-22 13:24:24 +08:00
Joel
a6350daa02 chore: improve prompt auto generator (#6514) 2024-07-22 11:44:12 +08:00
sino
dfb6f4fec6 fix: extract tool calls correctly while arguments is empty (#6503) 2024-07-22 07:43:18 +08:00
Jyong
f38034e455 clean vector collection redis cache (#6494) 2024-07-21 15:09:09 +08:00
Shoya SHIRAKI
c57b3931d5 refactor(api): switch to dify_config in controllers/console (#6485) 2024-07-21 01:11:40 +08:00
Jyong
f73a3a58ae update delete embeddings by id (#6489) 2024-07-20 09:04:21 +08:00
Jyong
1e0e573165 update clean embedding cache query logic (#6483) 2024-07-20 01:29:25 +08:00
670 changed files with 19655 additions and 4251 deletions


@@ -3,8 +3,8 @@
cd web && npm install
pipx install poetry
echo 'alias start-api="cd /workspaces/dify/api && flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion"' >> ~/.bashrc
echo 'alias start-api="cd /workspaces/dify/api && poetry run python -m flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion"' >> ~/.bashrc
echo 'alias start-web="cd /workspaces/dify/web && npm run dev"' >> ~/.bashrc
echo 'alias start-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify up -d"' >> ~/.bashrc
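For reference, a minimal usage sketch of the updated aliases (assuming the devcontainer has already appended them to `~/.bashrc`), with each long-running service started in its own terminal:

```bash
# Reload the shell configuration so the new aliases are available
source ~/.bashrc

# Bring up the middleware containers from docker-compose.middleware.yaml
start-containers

# In separate terminals: the API server (Flask via Poetry on port 5001, debug mode),
# the Celery worker, and the web frontend
start-api
start-worker
start-web
```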

.github/DISCUSSION_TEMPLATE/general.yml (new file)

@@ -0,0 +1,24 @@
title: "General Discussion"
body:
- type: checkboxes
attributes:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:
label: Content
placeholder: Please describe the content you would like to discuss.
validations:
required: true
- type: markdown
attributes:
value: Please limit one request per issue.

.github/DISCUSSION_TEMPLATE/help.yml (new file)

@@ -0,0 +1,30 @@
title: "Help"
body:
- type: checkboxes
attributes:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:
label: 1. Is this request related to a challenge you're experiencing? Tell me about your story.
placeholder: Please describe the specific scenario or problem you're facing as clearly as possible. For instance "I was trying to use [feature] for [specific task], and [what happened]... It was frustrating because...."
validations:
required: true
- type: textarea
attributes:
label: 2. Additional context or comments
placeholder: (Any other information, comments, documentations, links, or screenshots that would provide more clarity. This is the place to add anything else not covered above.)
validations:
required: false
- type: markdown
attributes:
value: Please limit one request per issue.


@@ -0,0 +1,37 @@
title: Suggestions for New Features
body:
- type: checkboxes
attributes:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:
label: 1. Is this request related to a challenge you're experiencing? Tell me about your story.
placeholder: Please describe the specific scenario or problem you're facing as clearly as possible. For instance "I was trying to use [feature] for [specific task], and [what happened]... It was frustrating because...."
validations:
required: true
- type: textarea
attributes:
label: 2. Additional context or comments
placeholder: (Any other information, comments, documentations, links, or screenshots that would provide more clarity. This is the place to add anything else not covered above.)
validations:
required: false
- type: checkboxes
attributes:
label: 3. Can you help us with this feature?
description: Let us know! This is not a commitment, but a starting point for collaboration.
options:
- label: I am interested in contributing to this feature.
required: false
- type: markdown
attributes:
value: Please limit one request per issue.


@@ -14,7 +14,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
@@ -22,7 +22,6 @@ body:
- type: input
attributes:
label: Dify version
placeholder: 0.6.11
description: See about section in Dify console
validations:
required: true


@@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true


@@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true


@@ -12,14 +12,13 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: input
attributes:
label: Dify version
placeholder: 0.3.21
description: Hover over system tray icon or look at Settings
validations:
required: true


@@ -21,6 +21,7 @@ jobs:
python-version:
- "3.10"
- "3.11"
- "3.12"
steps:
- name: Checkout code
@@ -89,6 +90,5 @@ jobs:
pgvecto-rs
pgvector
chroma
myscale
- name: Test Vector Stores
run: poetry run -C api bash dev/pytest/pytest_vdb.sh

.gitignore

@@ -155,6 +155,7 @@ docker-legacy/volumes/milvus/*
docker-legacy/volumes/chroma/*
docker/volumes/app/storage/*
docker/volumes/certbot/*
docker/volumes/db/data/*
docker/volumes/redis/data/*
docker/volumes/weaviate/*
@@ -174,5 +175,6 @@ sdks/python-client/dify_client.egg-info
.vscode/*
!.vscode/launch.json
pyrightconfig.json
api/.vscode
.idea/


@@ -1,7 +1,7 @@
Dify にコントリビュートしたいとお考えなのですね。それは素晴らしいことです。
私たちは、LLM アプリケーションの構築と管理のための最も直感的なワークフローを設計するという壮大な野望を持っています。人数も資金も限られている新興企業として、コミュニティからの支援は本当に重要です。
私たちは現状を鑑み、機敏かつ迅速に開発をする必要がありますが、同時にあなたのようなコントリビューターの方々に、可能な限りスムーズな貢献体験をしていただきたいと思っています。そのためにこのコントリビュートガイドを作成しました。
私たちは現状を鑑み、機敏かつ迅速に開発をする必要がありますが、同時にあなたのようなコントリビューターの方々に、可能な限りスムーズな貢献体験をしていただきたいと思っています。そのためにこのコントリビュートガイドを作成しました。
コードベースやコントリビュータの方々と私たちがどのように仕事をしているのかに慣れていただき、楽しいパートにすぐに飛び込めるようにすることが目的です。
このガイドは Dify そのものと同様に、継続的に改善されています。実際のプロジェクトに遅れをとることがあるかもしれませんが、ご理解のほどよろしくお願いいたします。
@@ -14,13 +14,13 @@ Dify にコントリビュートしたいとお考えなのですね。それは
### 機能リクエスト
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くのコンテキストを含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くのコンテキストを含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 既存の課題から 1 つ選びたい場合は、その下にコメントを書いてください。
関連する方向で作業しているチームメンバーが参加します。すべてが良好であれば、コーディングを開始する許可が与えられます。私たちが変更を提案した場合にあなたの作業が無駄になることがないよう、それまでこの機能の作業を控えていただくようお願いいたします。
関連する方向で作業しているチームメンバーが参加します。すべてが良好であれば、コーディングを開始する許可が与えられます。私たちが変更を提案した場合にあなたの作業が無駄になることがないよう、それまでこの機能の作業を控えていただくようお願いいたします。
提案された機能がどの分野に属するかによって、あなたは異なるチーム・メンバーと話をするかもしれません。以下は、各チームメンバーが現在取り組んでいる分野の概要です。
提案された機能がどの分野に属するかによって、あなたは異なるチーム・メンバーと話をするかもしれません。以下は、各チームメンバーが現在取り組んでいる分野の概要です。
| Member | Scope |
| --------------------------------------------------------------------------------------- | ------------------------------------ |
@@ -153,7 +153,7 @@ Dify のバックエンドは[Flask](https://flask.palletsprojects.com/en/3.0.x/
いよいよ、私たちのリポジトリにプルリクエスト (PR) を提出する時が来ました。主要な機能については、まず `deploy/dev` ブランチにマージしてテストしてから `main` ブランチにマージします。
マージ競合などの問題が発生した場合、またはプル リクエストを開く方法がわからない場合は、[GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) をチェックしてみてください。
これで完了です!あなたの PR がマージされると、[README](https://github.com/langgenius/dify/blob/main/README.md) にコントリビューターとして紹介されます。
これで完了です!あなたの PR がマージされると、[README](https://github.com/langgenius/dify/blob/main/README.md) にコントリビューターとして紹介されます。
## ヘルプを得る


@@ -37,6 +37,7 @@
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
@@ -64,7 +65,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@@ -216,7 +217,6 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.


@@ -37,6 +37,7 @@
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
<div style="text-align: right;">
@@ -56,7 +57,7 @@
**4. خط أنابيب RAG**: قدرات RAG الواسعة التي تغطي كل شيء من استيعاب الوثائق إلى الاسترجاع، مع الدعم الفوري لاستخراج النص من ملفات PDF و PPT وتنسيقات الوثائق الشائعة الأخرى.
**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DELL·E وStable Diffusion و WolframAlpha.
**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DALL·E وStable Diffusion و WolframAlpha.
**6. الـ LLMOps**: راقب وتحلل سجلات التطبيق والأداء على مر الزمن. يمكنك تحسين الأوامر والبيانات والنماذج باستمرار استنادًا إلى البيانات الإنتاجية والتعليقات.
@@ -199,7 +200,6 @@ docker compose up -d
## المجتمع والاتصال
* [مناقشة Github](https://github.com/langgenius/dify/discussions). الأفضل لـ: مشاركة التعليقات وطرح الأسئلة.
* [المشكلات على GitHub](https://github.com/langgenius/dify/issues). الأفضل لـ: الأخطاء التي تواجهها في استخدام Dify.AI، واقتراحات الميزات. انظر [دليل المساهمة](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [البريد الإلكتروني](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). الأفضل لـ: الأسئلة التي تتعلق باستخدام Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
* [تويتر](https://twitter.com/dify_ai). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.


@@ -36,6 +36,7 @@
<a href="./README_KL.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/法语-d9d9d9"></a>
<a href="./README_FR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/克林贡语-d9d9d9"></a>
<a href="./README_KR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/韓國語-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</div>
@@ -69,7 +70,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
广泛的 RAG 功能,涵盖从文档摄入到检索的所有内容,支持从 PDF、PPT 和其他常见文档格式中提取文本的开箱即用的支持。
**5. Agent 智能体**:
您可以基于 LLM 函数调用或 ReAct 定义 Agent,并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具,如谷歌搜索、DELL·E、Stable Diffusion 和 WolframAlpha 等。
您可以基于 LLM 函数调用或 ReAct 定义 Agent,并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具,如谷歌搜索、DALL·E、Stable Diffusion 和 WolframAlpha 等。
**6. LLMOps**:
随时间监视和分析应用程序日志和性能。您可以根据生产数据和标注持续改进提示、数据集和模型。


@@ -36,6 +36,7 @@
<a href="./README_KL.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@@ -69,7 +70,7 @@ Dify es una plataforma de desarrollo de aplicaciones de LLM de código abierto.
**5. Capacidades de agente**:
Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DELL·E, Difusión Estable y WolframAlpha.
Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DALL·E, Difusión Estable y WolframAlpha.
**6. LLMOps**:
Supervisa y analiza registros de aplicaciones y rendimiento a lo largo del tiempo. Podrías mejorar continuamente prompts, conjuntos de datos y modelos basados en datos de producción y anotaciones.
@@ -224,7 +225,6 @@ Al mismo tiempo, considera apoyar a Dify compartiéndolo en redes sociales y en
* [Discusión en GitHub](https://github.com/langgenius/dify/discussions). Lo mejor para: compartir comentarios y hacer preguntas.
* [Reporte de problemas en GitHub](https://github.com/langgenius/dify/issues). Lo mejor para: errores que encuentres usando Dify.AI y propuestas de características. Consulta nuestra [Guía de contribución](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Correo electrónico](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Lo mejor para: preguntas que tengas sobre el uso de Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [Twitter](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
@@ -256,4 +256,4 @@ Para proteger tu privacidad, evita publicar problemas de seguridad en GitHub. En
## Licencia
Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.
Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.


@@ -36,6 +36,7 @@
<a href="./README_KL.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@@ -69,7 +70,7 @@ Dify est une plateforme de développement d'applications LLM open source. Son in
**5. Capacités d'agent**:
Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DELL·E, Stable Diffusion et WolframAlpha.
Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DALL·E, Stable Diffusion et WolframAlpha.
**6. LLMOps**:
Surveillez et analysez les journaux d'application et les performances au fil du temps. Vous pouvez continuellement améliorer les prompts, les ensembles de données et les modèles en fonction des données de production et des annotations.
@@ -222,7 +223,6 @@ Dans le même temps, veuillez envisager de soutenir Dify en le partageant sur le
* [Discussion GitHub](https://github.com/langgenius/dify/discussions). Meilleur pour: partager des commentaires et poser des questions.
* [Problèmes GitHub](https://github.com/langgenius/dify/issues). Meilleur pour: les bogues que vous rencontrez en utilisant Dify.AI et les propositions de fonctionnalités. Consultez notre [Guide de contribution](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [E-mail](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Meilleur pour: les questions que vous avez sur l'utilisation de Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [Twitter](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.


@@ -36,6 +36,7 @@
<a href="./README_KL.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@@ -68,7 +69,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
**5. エージェント機能**:
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DELL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DALL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
**6. LLMOps**:
アプリケーションのログやパフォーマンスを監視と分析し、生産のデータと注釈に基づいて、プロンプト、データセット、モデルを継続的に改善できます。
@@ -221,7 +222,6 @@ docker compose up -d
* [Github Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AIを使用する際に発生するエラーや問題については、[貢献ガイド](CONTRIBUTING_JA.md)を参照してください
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). 主に: Dify.AIの使用に関する質問。
* [Discord](https://discord.gg/FngNHpbcY7). 主に: アプリケーションの共有やコミュニティとの交流。
* [Twitter](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。
@@ -239,7 +239,7 @@ docker compose up -d
<td>無料の30分間のミーティングをスケジュール</td>
</tr>
<tr>
<td><a href='mailto:support@dify.ai?subject=[GitHub]Technical%20Support'>技術サポート</a></td>
<td><a href='https://github.com/langgenius/dify/issues'>技術サポート</a></td>
<td>技術的な問題やサポートに関する質問</td>
</tr>
<tr>


@@ -36,6 +36,7 @@
<a href="./README_KL.md"><img alt="Commits last month" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits last month" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits last month" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@@ -67,7 +68,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@@ -224,7 +225,6 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
@@ -256,4 +256,4 @@ To protect your privacy, please avoid posting security issues on GitHub. Instead
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.


@@ -36,6 +36,7 @@
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="한국어 README" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
@@ -63,7 +64,7 @@
문서 수집부터 검색까지 모든 것을 다루며, PDF, PPT 및 기타 일반적인 문서 형식에서 텍스트 추출을 위한 기본 지원이 포함되어 있는 광범위한 RAG 기능을 제공합니다.
**5. 에이전트 기능**:
LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DELL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DALL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
**6. LLMOps**:
시간 경과에 따른 애플리케이션 로그와 성능을 모니터링하고 분석합니다. 생산 데이터와 주석을 기반으로 프롬프트, 데이터세트, 모델을 지속적으로 개선할 수 있습니다.
@@ -214,7 +215,6 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
* [Github 토론](https://github.com/langgenius/dify/discussions). 피드백 공유 및 질문하기에 적합합니다.
* [GitHub 이슈](https://github.com/langgenius/dify/issues). Dify.AI 사용 중 발견한 버그와 기능 제안에 적합합니다. [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.
* [이메일](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Dify.AI 사용에 대한 질문하기에 적합합니다.
* [디스코드](https://discord.gg/FngNHpbcY7). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.
* [트위터](https://twitter.com/dify_ai). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.

README_TR.md (new file)

@@ -0,0 +1,253 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
<a href="https://cloud.dify.ai">Dify Bulut</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">Kendi Sunucunuzda Barındırma</a> ·
<a href="https://docs.dify.ai">Dokümantasyon</a> ·
<a href="https://cal.com/guchenhe/60-min-meeting">Kurumsal Sorgu</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Statik Rozet" src="https://img.shields.io/badge/Ürün-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Statik Rozet" src="https://img.shields.io/badge/ücretsiz-fiyatlandırma?logo=free&color=%20%23155EEF&label=fiyatlandirma&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="Discord'da sohbet et"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="Twitter'da takip et"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Çekmeleri" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Geçen ay yapılan commitler" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Kapatılan sorunlar" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=kapatilan%20sorunlar&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Tartışma gönderileri" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
Dify, açık kaynaklı bir LLM uygulama geliştirme platformudur. Sezgisel arayüzü, AI iş akışı, RAG pipeline'ı, ajan yetenekleri, model yönetimi, gözlemlenebilirlik özellikleri ve daha fazlasını birleştirerek, prototipten üretime hızlıca geçmenizi sağlar. İşte temel özelliklerin bir listesi:
</br> </br>
**1. Workflow**:
Görsel bir arayüz üzerinde güçlü AI iş akışları oluşturun ve test edin, aşağıdaki tüm özellikleri ve daha fazlasını kullanarak.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Kapsamlı model desteği**:
Çok sayıda çıkarım sağlayıcısı ve kendi kendine barındırılan çözümlerden yüzlerce özel / açık kaynaklı LLM ile sorunsuz entegrasyon sağlar. GPT, Mistral, Llama3 ve OpenAI API uyumlu tüm modelleri kapsar. Desteklenen model sağlayıcılarının tam listesine [buradan](https://docs.dify.ai/getting-started/readme/model-providers) ulaşabilirsiniz.
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
Özür dilerim, haklısınız. Daha anlamlı ve akıcı bir çeviri yapmaya çalışayım. İşte güncellenmiş çeviri:
**3. Prompt IDE**:
Komut istemlerini oluşturmak, model performansını karşılaştırmak ve sohbet tabanlı uygulamalara metin-konuşma gibi ek özellikler eklemek için kullanıcı dostu bir arayüz.
**4. RAG Pipeline**:
Belge alımından bilgi çekmeye kadar geniş kapsamlı RAG yetenekleri. PDF'ler, PPT'ler ve diğer yaygın belge formatlarından metin çıkarma için hazır destek sunar.
**5. Ajan yetenekleri**:
LLM Fonksiyon Çağırma veya ReAct'a dayalı ajanlar tanımlayabilir ve bu ajanlara önceden hazırlanmış veya özel araçlar ekleyebilirsiniz. Dify, AI ajanları için Google Arama, DALL·E, Stable Diffusion ve WolframAlpha gibi 50'den fazla yerleşik araç sağlar.
**6. LLMOps**:
Uygulama loglarını ve performans metriklerini zaman içinde izleme ve analiz etme imkanı. Üretim ortamından elde edilen verilere ve kullanıcı geri bildirimlerine dayanarak, prompt'ları, veri setlerini ve modelleri sürekli olarak optimize edebilirsiniz. Bu sayede, AI uygulamanızın performansını ve doğruluğunu sürekli olarak artırabilirsiniz.
**7. Hizmet Olarak Backend**:
Dify'ın tüm özellikleri ilgili API'lerle birlikte gelir, böylece Dify'ı kendi iş mantığınıza kolayca entegre edebilirsiniz.
## Özellik karşılaştırması
<table style="width: 100%;">
<tr>
<th align="center">Özellik</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programlama Yaklaşımı</td>
<td align="center">API + Uygulama odaklı</td>
<td align="center">Python Kodu</td>
<td align="center">Uygulama odaklı</td>
<td align="center">API odaklı</td>
</tr>
<tr>
<td align="center">Desteklenen LLM'ler</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Yalnızca OpenAI</td>
</tr>
<tr>
<td align="center">RAG Motoru</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">Ajan</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">İş Akışı</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Gözlemlenebilirlik</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Kurumsal Özellikler (SSO/Erişim kontrolü)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Yerel Dağıtım</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
</table>
## Dify'ı Kullanma
- **Cloud </br>**
İşte verdiğiniz metnin Türkçe çevirisi, kod bloğu içinde:
-
Herkesin sıfır kurulumla denemesi için bir [Dify Cloud](https://dify.ai) hizmeti sunuyoruz. Bu hizmet, kendi kendine dağıtılan versiyonun tüm yeteneklerini sağlar ve sandbox planında 200 ücretsiz GPT-4 çağrısı içerir.
- **Dify Topluluk Sürümünü Kendi Sunucunuzda Barındırma</br>**
Bu [başlangıç kılavuzu](#quick-start) ile Dify'ı kendi ortamınızda hızlıca çalıştırın.
Daha fazla referans ve detaylı talimatlar için [dokümantasyonumuzu](https://docs.dify.ai) kullanın.
- **Kurumlar / organizasyonlar için Dify</br>**
Ek kurumsal odaklı özellikler sunuyoruz. Kurumsal ihtiyaçları görüşmek için [bizimle bir toplantı planlayın](https://cal.com/guchenhe/30min) veya [bize bir e-posta gönderin](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry). </br>
> AWS kullanan startuplar ve küçük işletmeler için, [AWS Marketplace'deki Dify Premium'a](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) göz atın ve tek tıklamayla kendi AWS VPC'nize dağıtın. Bu, özel logo ve marka ile uygulamalar oluşturma seçeneğine sahip uygun fiyatlı bir AMI teklifdir.
## Güncel Kalma
GitHub'da Dify'a yıldız verin ve yeni sürümlerden anında haberdar olun.
![bizi-yıldızlayın](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Hızlı başlangıç
> Dify'ı kurmadan önce, makinenizin aşağıdaki minimum sistem gereksinimlerini karşıladığından emin olun:
>
>- CPU >= 2 Çekirdek
>- RAM >= 4GB
</br>
İşte verdiğiniz metnin Türkçe çevirisi, kod bloğu içinde:
Dify sunucusunu başlatmanın en kolay yolu, [docker-compose.yml](docker/docker-compose.yaml) dosyamızı çalıştırmaktır. Kurulum komutunu çalıştırmadan önce, makinenizde [Docker](https://docs.docker.com/get-docker/) ve [Docker Compose](https://docs.docker.com/compose/install/)'un kurulu olduğundan emin olun:
```bash
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initial setup process.
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code).
## Next steps
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Depending on your specific deployment environment and requirements, you may also need to adjust the `docker-compose.yaml` file itself, for example to change image versions, port mappings, or volume mounts. After making any changes, re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
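As a rough sketch of that workflow (the variable names below should be verified against `.env.example`; the values chosen here are only illustrative assumptions):
```bash
cd docker
cp -n .env.example .env   # create .env from the template if it does not exist yet
# tweak a couple of illustrative settings; check the exact variable names in .env.example
sed -i 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT=8080/' .env
sed -i 's/^UPLOAD_FILE_SIZE_LIMIT=.*/UPLOAD_FILE_SIZE_LIMIT=50/' .env
# recreate the containers so the new values take effect
docker compose up -d
```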
If you want to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes (a minimal install sketch follows the list below).
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
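Installing one of these charts is assumed to follow the usual Helm flow; the repository URL, chart name, and release name below are illustrative assumptions, so check the chart's own README for the authoritative values:
```bash
# add the community chart repository (URL and chart name are assumptions; see the chart's README)
helm repo add dify https://borispolonsky.github.io/dify-helm
helm repo update
# install into a dedicated namespace with default values
helm install dify dify/dify --namespace dify --create-namespace
# apply configuration changes later by upgrading the same release
helm upgrade dify dify/dify --namespace dify -f values.yaml
```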
#### Using Terraform for Deployment
##### Azure Global
Deploy Dify to Azure with one click using [Terraform](https://www.terraform.io/); a minimal command sketch follows the list below.
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
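The module is assumed to follow the standard Terraform init/plan/apply cycle sketched here; its required input variables (subscription, region, and so on) are not shown, so consult the module's README before applying:
```bash
git clone https://github.com/nikawang/dify-azure-terraform.git
cd dify-azure-terraform
terraform init
# review the Azure resources Terraform intends to create before applying
terraform plan -out=dify.tfplan
terraform apply dify.tfplan
```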
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Community & contact
* [GitHub Discussions](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
Or schedule a meeting directly with a team member:
<table>
<tr>
<th>Point of Contact</th>
<th>Purpose</th>
</tr>
<tr>
<td><a href='https://cal.com/guchenhe/15min' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/9ebcd111-1205-4d71-83d5-948d70b809f5' alt='Git-Hub-README-Button-3x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>Business inquiries & product feedback</td>
</tr>
<tr>
<td><a href='https://cal.com/pinkbanana' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/d1edd00a-d7e4-4513-be6c-e57038e143fd' alt='Git-Hub-README-Button-2x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>Contributions, issues & feature requests</td>
</tr>
</table>
## Star history
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

View File

@@ -183,6 +183,7 @@ UPLOAD_IMAGE_FILE_SIZE_LIMIT=10
# Model Configuration
MULTIMODAL_SEND_IMAGE_FORMAT=base64
PROMPT_GENERATION_MAX_TOKENS=512
# Mail configuration, support: resend, smtp
MAIL_TYPE=
@@ -216,6 +217,7 @@ UNSTRUCTURED_API_KEY=
SSRF_PROXY_HTTP_URL=
SSRF_PROXY_HTTPS_URL=
SSRF_DEFAULT_MAX_RETRIES=3
BATCH_UPLOAD_LIMIT=10
KEYWORD_DATA_SOURCE_TYPE=database

View File

@@ -41,8 +41,12 @@ ENV TZ=UTC
WORKDIR /app/api
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl wget vim nodejs ffmpeg libgmp-dev libmpfr-dev libmpc-dev \
&& apt-get autoremove \
&& apt-get install -y --no-install-recommends curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
&& echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
&& apt-get update \
# For Security
&& apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.2-1 libldap-2.5-0=2.5.18+dfsg-2 perl=5.38.2-5 libsqlite3-0=3.46.0-1 \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
# Copy Python environment and packages

View File

@@ -12,7 +12,8 @@
```bash
cd ../docker
cp middleware.env.example middleware.env
docker compose -f docker-compose.middleware.yaml -p dify up -d
# change the profile to other vector database if you are not using weaviate
docker compose -f docker-compose.middleware.yaml --profile weaviate -p dify up -d
cd ../api
```

View File

@@ -1,7 +1,5 @@
import os
from configs import dify_config
if os.environ.get("DEBUG", "false").lower() != 'true':
from gevent import monkey
@@ -23,7 +21,9 @@ from flask import Flask, Response, request
from flask_cors import CORS
from werkzeug.exceptions import Unauthorized
import contexts
from commands import register_commands
from configs import dify_config
# DO NOT REMOVE BELOW
from events import event_handlers
@@ -181,7 +181,10 @@ def load_user_from_request(request_from_flask_login):
decoded = PassportService().verify(auth_token)
user_id = decoded.get('user_id')
return AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if account:
contexts.tenant_id.set(account.current_tenant_id)
return account
@login_manager.unauthorized_handler
@@ -258,6 +261,7 @@ def after_request(response):
@app.route('/health')
def health():
return Response(json.dumps({
'pid': os.getpid(),
'status': 'ok',
'version': app.config['CURRENT_VERSION']
}), status=200, content_type="application/json")
@@ -281,6 +285,7 @@ def threads():
})
return {
'pid': os.getpid(),
'thread_num': num_threads,
'threads': thread_list
}
@@ -290,6 +295,7 @@ def threads():
def pool_stat():
engine = db.engine
return {
'pid': os.getpid(),
'pool_size': engine.pool.size(),
'checked_in_connections': engine.pool.checkedin(),
'checked_out_connections': engine.pool.checkedout(),

View File

@@ -249,8 +249,7 @@ def migrate_knowledge_vector_database():
create_count = 0
skipped_count = 0
total_count = 0
config = current_app.config
vector_type = config.get('VECTOR_STORE')
vector_type = dify_config.VECTOR_STORE
page = 1
while True:
try:
@@ -484,8 +483,7 @@ def convert_to_agent_apps():
@click.option('--field', default='metadata.doc_id', prompt=False, help='index field , default is metadata.doc_id.')
def add_qdrant_doc_id_index(field: str):
click.echo(click.style('Start add qdrant doc_id index.', fg='green'))
config = current_app.config
vector_type = config.get('VECTOR_STORE')
vector_type = dify_config.VECTOR_STORE
if vector_type != "qdrant":
click.echo(click.style('Sorry, only support qdrant vector store.', fg='red'))
return
@@ -502,13 +500,15 @@ def add_qdrant_doc_id_index(field: str):
from core.rag.datasource.vdb.qdrant.qdrant_vector import QdrantConfig
for binding in bindings:
if dify_config.QDRANT_URL is None:
raise ValueError('Qdrant url is required.')
qdrant_config = QdrantConfig(
endpoint=config.get('QDRANT_URL'),
api_key=config.get('QDRANT_API_KEY'),
endpoint=dify_config.QDRANT_URL,
api_key=dify_config.QDRANT_API_KEY,
root_path=current_app.root_path,
timeout=config.get('QDRANT_CLIENT_TIMEOUT'),
grpc_port=config.get('QDRANT_GRPC_PORT'),
prefer_grpc=config.get('QDRANT_GRPC_ENABLED')
timeout=dify_config.QDRANT_CLIENT_TIMEOUT,
grpc_port=dify_config.QDRANT_GRPC_PORT,
prefer_grpc=dify_config.QDRANT_GRPC_ENABLED
)
try:
client = qdrant_client.QdrantClient(**qdrant_config.to_qdrant_params())

View File

@@ -64,4 +64,6 @@ class DifyConfig(
return f'{self.HTTP_REQUEST_NODE_MAX_TEXT_SIZE / 1024 / 1024:.2f}MB'
SSRF_PROXY_HTTP_URL: str | None = None
SSRF_PROXY_HTTPS_URL: str | None = None
SSRF_PROXY_HTTPS_URL: str | None = None
MODERATION_BUFFER_SIZE: int = Field(default=300, description='The buffer size for moderation.')

View File

@@ -406,7 +406,6 @@ class DataSetConfig(BaseSettings):
default=False,
)
class WorkspaceConfig(BaseSettings):
"""
Workspace configs

View File

@@ -1,4 +1,5 @@
from typing import Any, Optional
from urllib.parse import quote_plus
from pydantic import Field, NonNegativeInt, PositiveInt, computed_field
from pydantic_settings import BaseSettings
@@ -104,7 +105,7 @@ class DatabaseConfig:
).strip("&")
db_extras = f"?{db_extras}" if db_extras else ""
return (f"{self.SQLALCHEMY_DATABASE_URI_SCHEME}://"
f"{self.DB_USERNAME}:{self.DB_PASSWORD}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_DATABASE}"
f"{quote_plus(self.DB_USERNAME)}:{quote_plus(self.DB_PASSWORD)}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_DATABASE}"
f"{db_extras}")
SQLALCHEMY_POOL_SIZE: NonNegativeInt = Field(

View File

@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description='Dify version',
default='0.6.14',
default='0.6.16',
)
COMMIT_SHA: str = Field(

View File

@@ -0,0 +1,2 @@
# TODO: Update all string in code to use this constant
HIDDEN_VALUE = '[__HIDDEN__]'

View File

@@ -15,6 +15,7 @@ language_timezone_mapping = {
'ro-RO': 'Europe/Bucharest',
'pl-PL': 'Europe/Warsaw',
'hi-IN': 'Asia/Kolkata',
'tr-TR': 'Europe/Istanbul',
}
languages = list(language_timezone_mapping.keys())

3
api/contexts/__init__.py Normal file
View File

@@ -0,0 +1,3 @@
from contextvars import ContextVar
tenant_id: ContextVar[str] = ContextVar('tenant_id')

View File

@@ -23,8 +23,7 @@ class AnnotationReplyActionApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id, action):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -47,8 +46,7 @@ class AppAnnotationSettingDetailApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -61,8 +59,7 @@ class AppAnnotationSettingUpdateApi(Resource):
@login_required
@account_initialization_required
def post(self, app_id, annotation_setting_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -82,8 +79,7 @@ class AnnotationReplyActionStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id, action):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
job_id = str(job_id)
@@ -110,8 +106,7 @@ class AnnotationListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
page = request.args.get('page', default=1, type=int)
@@ -135,8 +130,7 @@ class AnnotationExportApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -154,8 +148,7 @@ class AnnotationCreateApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -174,8 +167,7 @@ class AnnotationUpdateDeleteApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id, annotation_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -191,8 +183,7 @@ class AnnotationUpdateDeleteApi(Resource):
@login_required
@account_initialization_required
def delete(self, app_id, annotation_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -207,8 +198,7 @@ class AnnotationBatchImportApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -232,8 +222,7 @@ class AnnotationBatchImportStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
job_id = str(job_id)
@@ -259,8 +248,7 @@ class AnnotationHitHistoryListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id, annotation_id):
# The role of the current user in the table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
page = request.args.get('page', default=1, type=int)

View File

@@ -212,7 +212,7 @@ class AppCopyApi(Resource):
parser.add_argument('icon_background', type=str, location='json')
args = parser.parse_args()
data = AppDslService.export_dsl(app_model=app_model)
data = AppDslService.export_dsl(app_model=app_model, include_secret=True)
app = AppDslService.import_and_create_new_app(
tenant_id=current_user.current_tenant_id,
data=data,
@@ -234,8 +234,13 @@ class AppExportApi(Resource):
if not current_user.is_editor:
raise Forbidden()
# Add include_secret params
parser = reqparse.RequestParser()
parser.add_argument('include_secret', type=inputs.boolean, default=False, location='args')
args = parser.parse_args()
return {
"data": AppDslService.export_dsl(app_model=app_model)
"data": AppDslService.export_dsl(app_model=app_model, include_secret=args['include_secret'])
}

View File

@@ -22,7 +22,7 @@ from fields.conversation_fields import (
)
from libs.helper import datetime_string
from libs.login import login_required
from models.model import AppMode, Conversation, Message, MessageAnnotation
from models.model import AppMode, Conversation, EndUser, Message, MessageAnnotation
class CompletionConversationApi(Resource):
@@ -143,7 +143,7 @@ class ChatConversationApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_with_summary_pagination_fields)
def get(self, app_model):
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('keyword', type=str, location='args')
@@ -156,19 +156,31 @@ class ChatConversationApi(Resource):
parser.add_argument('limit', type=int_range(1, 100), required=False, default=20, location='args')
args = parser.parse_args()
subquery = (
db.session.query(
Conversation.id.label('conversation_id'),
EndUser.session_id.label('from_end_user_session_id')
)
.outerjoin(EndUser, Conversation.from_end_user_id == EndUser.id)
.subquery()
)
query = db.select(Conversation).where(Conversation.app_id == app_model.id)
if args['keyword']:
keyword_filter = '%{}%'.format(args['keyword'])
query = query.join(
Message, Message.conversation_id == Conversation.id
Message, Message.conversation_id == Conversation.id,
).join(
subquery, subquery.c.conversation_id == Conversation.id
).filter(
or_(
Message.query.ilike('%{}%'.format(args['keyword'])),
Message.answer.ilike('%{}%'.format(args['keyword'])),
Conversation.name.ilike('%{}%'.format(args['keyword'])),
Conversation.introduction.ilike('%{}%'.format(args['keyword'])),
Message.query.ilike(keyword_filter),
Message.answer.ilike(keyword_filter),
Conversation.name.ilike(keyword_filter),
Conversation.introduction.ilike(keyword_filter),
subquery.c.from_end_user_session_id.ilike(keyword_filter)
),
)
account = current_user
@@ -233,7 +245,7 @@ class ChatConversationDetailApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_detail_fields)
def get(self, app_model, conversation_id):
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
conversation_id = str(conversation_id)

View File

@@ -1,3 +1,5 @@
import os
from flask_login import current_user
from flask_restful import Resource, reqparse
@@ -22,17 +24,21 @@ class RuleGenerateApi(Resource):
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument('audiences', type=str, required=True, nullable=False, location='json')
parser.add_argument('hoping_to_solve', type=str, required=True, nullable=False, location='json')
parser.add_argument('instruction', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_config', type=dict, required=True, nullable=False, location='json')
parser.add_argument('no_variable', type=bool, required=True, default=False, location='json')
args = parser.parse_args()
account = current_user
PROMPT_GENERATION_MAX_TOKENS = int(os.getenv('PROMPT_GENERATION_MAX_TOKENS', '512'))
try:
rules = LLMGenerator.generate_rule_config(
account.current_tenant_id,
args['audiences'],
args['hoping_to_solve']
tenant_id=account.current_tenant_id,
instruction=args['instruction'],
model_config=args['model_config'],
no_variable=args['no_variable'],
rule_config_max_tokens=PROMPT_GENERATION_MAX_TOKENS
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)

View File

@@ -149,8 +149,7 @@ class MessageAnnotationApi(Resource):
@get_app_model
@marshal_with(annotation_fields)
def post(self, app_model):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()

View File

@@ -281,7 +281,7 @@ class UserSatisfactionRateStatistic(Resource):
SELECT date(DATE_TRUNC('day', m.created_at AT TIME ZONE 'UTC' AT TIME ZONE :tz )) AS date,
COUNT(m.id) as message_count, COUNT(mf.id) as feedback_count
FROM messages m
LEFT JOIN message_feedbacks mf on mf.message_id=m.id
LEFT JOIN message_feedbacks mf on mf.message_id=m.id and mf.rating='like'
WHERE m.app_id = :app_id
'''
arg_dict = {'tz': account.timezone, 'app_id': app_model.id}

View File

@@ -13,6 +13,7 @@ from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from core.app.apps.base_app_queue_manager import AppQueueManager
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.segments import factory
from core.errors.error import AppInvokeQuotaExceededError
from fields.workflow_fields import workflow_fields
from fields.workflow_run_fields import workflow_run_node_execution_fields
@@ -41,7 +42,7 @@ class DraftWorkflowApi(Resource):
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
# fetch draft workflow by app_model
workflow_service = WorkflowService()
workflow = workflow_service.get_draft_workflow(app_model=app_model)
@@ -64,13 +65,15 @@ class DraftWorkflowApi(Resource):
if not current_user.is_editor:
raise Forbidden()
content_type = request.headers.get('Content-Type')
content_type = request.headers.get('Content-Type', '')
if 'application/json' in content_type:
parser = reqparse.RequestParser()
parser.add_argument('graph', type=dict, required=True, nullable=False, location='json')
parser.add_argument('features', type=dict, required=True, nullable=False, location='json')
parser.add_argument('hash', type=str, required=False, location='json')
# TODO: set this to required=True after frontend is updated
parser.add_argument('environment_variables', type=list, required=False, location='json')
args = parser.parse_args()
elif 'text/plain' in content_type:
try:
@@ -84,7 +87,8 @@ class DraftWorkflowApi(Resource):
args = {
'graph': data.get('graph'),
'features': data.get('features'),
'hash': data.get('hash')
'hash': data.get('hash'),
'environment_variables': data.get('environment_variables')
}
except json.JSONDecodeError:
return {'message': 'Invalid JSON data'}, 400
@@ -94,12 +98,15 @@ class DraftWorkflowApi(Resource):
workflow_service = WorkflowService()
try:
environment_variables_list = args.get('environment_variables') or []
environment_variables = [factory.build_variable_from_mapping(obj) for obj in environment_variables_list]
workflow = workflow_service.sync_draft_workflow(
app_model=app_model,
graph=args.get('graph'),
features=args.get('features'),
graph=args['graph'],
features=args['features'],
unique_hash=args.get('hash'),
account=current_user
account=current_user,
environment_variables=environment_variables,
)
except WorkflowHashNotEqualError:
raise DraftWorkflowNotSync()

View File

@@ -17,8 +17,6 @@ from ..wraps import account_initialization_required
def get_oauth_providers():
with current_app.app_context():
if not dify_config.NOTION_CLIENT_ID or not dify_config.NOTION_CLIENT_SECRET:
return {}
notion_oauth = NotionOAuth(client_id=dify_config.NOTION_CLIENT_ID,
client_secret=dify_config.NOTION_CLIENT_SECRET,
redirect_uri=dify_config.CONSOLE_API_URL + '/console/api/oauth/data-source/callback/notion')

View File

@@ -71,7 +71,7 @@ class ResetPasswordApi(Resource):
# AccountService.update_password(account, new_password)
# todo: Send email
# MAILCHIMP_API_KEY = current_app.config['MAILCHIMP_TRANSACTIONAL_API_KEY']
# MAILCHIMP_API_KEY = dify_config.MAILCHIMP_TRANSACTIONAL_API_KEY
# mailchimp = MailchimpTransactional(MAILCHIMP_API_KEY)
# message = {
@@ -92,7 +92,7 @@ class ResetPasswordApi(Resource):
# 'message': message,
# # required for transactional email
# ' settings': {
# 'sandbox_mode': current_app.config['MAILCHIMP_SANDBOX_MODE'],
# 'sandbox_mode': dify_config.MAILCHIMP_SANDBOX_MODE,
# },
# })

View File

@@ -1,10 +1,11 @@
import flask_restful
from flask import current_app, request
from flask import request
from flask_login import current_user
from flask_restful import Resource, marshal, marshal_with, reqparse
from werkzeug.exceptions import Forbidden, NotFound
import services
from configs import dify_config
from controllers.console import api
from controllers.console.apikey import api_key_fields, api_key_list
from controllers.console.app.error import ProviderNotInitializeError
@@ -188,8 +189,6 @@ class DatasetApi(Resource):
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
# check user's model setting
DatasetService.check_dataset_model_setting(dataset)
parser = reqparse.RequestParser()
parser.add_argument('name', nullable=False,
@@ -214,6 +213,13 @@ class DatasetApi(Resource):
args = parser.parse_args()
data = request.get_json()
# check embedding model setting
if data.get('indexing_technique') == 'high_quality':
DatasetService.check_embedding_model_setting(dataset.tenant_id,
data.get('embedding_model_provider'),
data.get('embedding_model')
)
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
DatasetPermissionService.check_permission(
current_user, dataset, data.get('permission'), data.get('partial_member_list')
@@ -232,7 +238,8 @@ class DatasetApi(Resource):
DatasetPermissionService.update_partial_member_list(
tenant_id, dataset_id_str, data.get('partial_member_list')
)
else:
# clear partial member list when permission is only_me or all_team_members
elif data.get('permission') == 'only_me' or data.get('permission') == 'all_team_members':
DatasetPermissionService.clear_partial_member_list(dataset_id_str)
partial_member_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)
@@ -530,7 +537,7 @@ class DatasetApiBaseUrlApi(Resource):
@account_initialization_required
def get(self):
return {
'api_base_url': (current_app.config['SERVICE_API_URL'] if current_app.config['SERVICE_API_URL']
'api_base_url': (dify_config.SERVICE_API_URL if dify_config.SERVICE_API_URL
else request.host_url.rstrip('/')) + '/v1'
}
@@ -540,15 +547,15 @@ class DatasetRetrievalSettingApi(Resource):
@login_required
@account_initialization_required
def get(self):
vector_type = current_app.config['VECTOR_STORE']
vector_type = dify_config.VECTOR_STORE
match vector_type:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT | VectorType.ORACLE:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT:
return {
'retrieval_method': [
RetrievalMethod.SEMANTIC_SEARCH.value
]
}
case VectorType.QDRANT | VectorType.WEAVIATE | VectorType.OPENSEARCH | VectorType.ANALYTICDB | VectorType.MYSCALE:
case VectorType.QDRANT | VectorType.WEAVIATE | VectorType.OPENSEARCH | VectorType.ANALYTICDB | VectorType.MYSCALE | VectorType.ORACLE:
return {
'retrieval_method': [
RetrievalMethod.SEMANTIC_SEARCH.value,
@@ -566,13 +573,13 @@ class DatasetRetrievalSettingMockApi(Resource):
@account_initialization_required
def get(self, vector_type):
match vector_type:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT | VectorType.ORACLE:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT:
return {
'retrieval_method': [
RetrievalMethod.SEMANTIC_SEARCH.value
]
}
case VectorType.QDRANT | VectorType.WEAVIATE | VectorType.OPENSEARCH| VectorType.ANALYTICDB | VectorType.MYSCALE:
case VectorType.QDRANT | VectorType.WEAVIATE | VectorType.OPENSEARCH| VectorType.ANALYTICDB | VectorType.MYSCALE | VectorType.ORACLE:
return {
'retrieval_method': [
RetrievalMethod.SEMANTIC_SEARCH.value,

View File

@@ -223,8 +223,7 @@ class DatasetDocumentSegmentAddApi(Resource):
document = DocumentService.get_document(dataset_id, document_id)
if not document:
raise NotFound('Document not found.')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
# check embedding model setting
if dataset.indexing_technique == 'high_quality':
@@ -347,7 +346,7 @@ class DatasetDocumentSegmentUpdateApi(Resource):
if not segment:
raise NotFound('Segment not found.')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
try:
DatasetService.check_dataset_permission(dataset, current_user)

View File

@@ -1,8 +1,9 @@
from flask import current_app, request
from flask import request
from flask_login import current_user
from flask_restful import Resource, marshal_with
import services
from configs import dify_config
from controllers.console import api
from controllers.console.datasets.error import (
FileTooLargeError,
@@ -26,9 +27,9 @@ class FileApi(Resource):
@account_initialization_required
@marshal_with(upload_config_fields)
def get(self):
file_size_limit = current_app.config.get("UPLOAD_FILE_SIZE_LIMIT")
batch_count_limit = current_app.config.get("UPLOAD_FILE_BATCH_LIMIT")
image_file_size_limit = current_app.config.get("UPLOAD_IMAGE_FILE_SIZE_LIMIT")
file_size_limit = dify_config.UPLOAD_FILE_SIZE_LIMIT
batch_count_limit = dify_config.UPLOAD_FILE_BATCH_LIMIT
image_file_size_limit = dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT
return {
'file_size_limit': file_size_limit,
'batch_count_limit': batch_count_limit,
@@ -76,7 +77,7 @@ class FileSupportTypeApi(Resource):
@login_required
@account_initialization_required
def get(self):
etl_type = current_app.config['ETL_TYPE']
etl_type = dify_config.ETL_TYPE
allowed_extensions = UNSTRUCTURED_ALLOWED_EXTENSIONS if etl_type == 'Unstructured' else ALLOWED_EXTENSIONS
return {'allowed_extensions': allowed_extensions}

View File

@@ -1,7 +1,7 @@
from flask import current_app
from flask_restful import fields, marshal_with
from configs import dify_config
from controllers.console import api
from controllers.console.app.error import AppUnavailableError
from controllers.console.explore.wraps import InstalledAppResource
@@ -78,7 +78,7 @@ class AppParameterApi(InstalledAppResource):
"transfer_methods": ["remote_url", "local_file"]
}}),
'system_parameters': {
'image_file_size_limit': current_app.config.get('UPLOAD_IMAGE_FILE_SIZE_LIMIT')
'image_file_size_limit': dify_config.UPLOAD_IMAGE_FILE_SIZE_LIMIT
}
}

View File

@@ -1,8 +1,9 @@
import os
from flask import current_app, session
from flask import session
from flask_restful import Resource, reqparse
from configs import dify_config
from libs.helper import str_len
from models.model import DifySetup
from services.account_service import TenantService
@@ -40,7 +41,7 @@ class InitValidateAPI(Resource):
return {'result': 'success'}, 201
def get_init_validate_status():
if current_app.config['EDITION'] == 'SELF_HOSTED':
if dify_config.EDITION == 'SELF_HOSTED':
if os.environ.get('INIT_PASSWORD'):
return session.get('is_init_validated') or DifySetup.query.first()

View File

@@ -1,8 +1,9 @@
from functools import wraps
from flask import current_app, request
from flask import request
from flask_restful import Resource, reqparse
from configs import dify_config
from libs.helper import email, get_remote_ip, str_len
from libs.password import valid_password
from models.model import DifySetup
@@ -17,7 +18,7 @@ from .wraps import only_edition_self_hosted
class SetupApi(Resource):
def get(self):
if current_app.config['EDITION'] == 'SELF_HOSTED':
if dify_config.EDITION == 'SELF_HOSTED':
setup_status = get_setup_status()
if setup_status:
return {
@@ -77,7 +78,7 @@ def setup_required(view):
def get_setup_status():
if current_app.config['EDITION'] == 'SELF_HOSTED':
if dify_config.EDITION == 'SELF_HOSTED':
return DifySetup.query.first()
else:
return True

View File

@@ -3,9 +3,10 @@ import json
import logging
import requests
from flask import current_app
from flask_restful import Resource, reqparse
from configs import dify_config
from . import api
@@ -15,16 +16,16 @@ class VersionApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument('current_version', type=str, required=True, location='args')
args = parser.parse_args()
check_update_url = current_app.config['CHECK_UPDATE_URL']
check_update_url = dify_config.CHECK_UPDATE_URL
result = {
'version': current_app.config['CURRENT_VERSION'],
'version': dify_config.CURRENT_VERSION,
'release_date': '',
'release_notes': '',
'can_auto_update': False,
'features': {
'can_replace_logo': current_app.config['CAN_REPLACE_LOGO'],
'model_load_balancing_enabled': current_app.config['MODEL_LB_ENABLED']
'can_replace_logo': dify_config.CAN_REPLACE_LOGO,
'model_load_balancing_enabled': dify_config.MODEL_LB_ENABLED
}
}

View File

@@ -1,10 +1,11 @@
import datetime
import pytz
from flask import current_app, request
from flask import request
from flask_login import current_user
from flask_restful import Resource, fields, marshal_with, reqparse
from configs import dify_config
from constants.languages import supported_language
from controllers.console import api
from controllers.console.setup import setup_required
@@ -36,7 +37,7 @@ class AccountInitApi(Resource):
parser = reqparse.RequestParser()
if current_app.config['EDITION'] == 'CLOUD':
if dify_config.EDITION == 'CLOUD':
parser.add_argument('invitation_code', type=str, location='json')
parser.add_argument(
@@ -45,7 +46,7 @@ class AccountInitApi(Resource):
required=True, location='json')
args = parser.parse_args()
if current_app.config['EDITION'] == 'CLOUD':
if dify_config.EDITION == 'CLOUD':
if not args['invitation_code']:
raise ValueError('invitation_code is required')

View File

@@ -1,8 +1,8 @@
from flask import current_app
from flask_login import current_user
from flask_restful import Resource, abort, marshal_with, reqparse
import services
from configs import dify_config
from controllers.console import api
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required, cloud_edition_billing_resource_check
@@ -48,7 +48,7 @@ class MemberInviteEmailApi(Resource):
inviter = current_user
invitation_results = []
console_web_url = current_app.config.get("CONSOLE_WEB_URL")
console_web_url = dify_config.CONSOLE_WEB_URL
for invitee_email in invitee_emails:
try:
token = RegisterService.invite_new_member(inviter.current_tenant, invitee_email, interface_language, role=invitee_role, inviter=inviter)

View File

@@ -1,10 +1,11 @@
import io
from flask import current_app, send_file
from flask import send_file
from flask_login import current_user
from flask_restful import Resource, reqparse
from werkzeug.exceptions import Forbidden
from configs import dify_config
from controllers.console import api
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
@@ -104,7 +105,7 @@ class ToolBuiltinProviderIconApi(Resource):
@setup_required
def get(self, provider):
icon_bytes, mimetype = BuiltinToolManageService.get_builtin_tool_provider_icon(provider)
icon_cache_max_age = current_app.config.get('TOOL_ICON_CACHE_MAX_AGE')
icon_cache_max_age = dify_config.TOOL_ICON_CACHE_MAX_AGE
return send_file(io.BytesIO(icon_bytes), mimetype=mimetype, max_age=icon_cache_max_age)
class ToolApiProviderAddApi(Resource):

View File

@@ -1,9 +1,10 @@
import json
from functools import wraps
from flask import abort, current_app, request
from flask import abort, request
from flask_login import current_user
from configs import dify_config
from controllers.console.workspace.error import AccountNotInitializedError
from services.feature_service import FeatureService
from services.operation_service import OperationService
@@ -26,7 +27,7 @@ def account_initialization_required(view):
def only_edition_cloud(view):
@wraps(view)
def decorated(*args, **kwargs):
if current_app.config['EDITION'] != 'CLOUD':
if dify_config.EDITION != 'CLOUD':
abort(404)
return view(*args, **kwargs)
@@ -37,7 +38,7 @@ def only_edition_cloud(view):
def only_edition_self_hosted(view):
@wraps(view)
def decorated(*args, **kwargs):
if current_app.config['EDITION'] != 'SELF_HOSTED':
if dify_config.EDITION != 'SELF_HOSTED':
abort(404)
return view(*args, **kwargs)

View File

@@ -19,7 +19,7 @@ def inner_api_only(view):
# get header 'X-Inner-Api-Key'
inner_api_key = request.headers.get('X-Inner-Api-Key')
if not inner_api_key or inner_api_key != dify_config.INNER_API_KEY:
abort(404)
abort(401)
return view(*args, **kwargs)

View File

@@ -53,7 +53,7 @@ class ConversationDetailApi(Resource):
ConversationService.delete(app_model, conversation_id, end_user)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
return {"result": "success"}, 204
return {'result': 'success'}, 200
class ConversationRenameApi(Resource):

View File

@@ -29,22 +29,21 @@ from services.app_generate_service import AppGenerateService
logger = logging.getLogger(__name__)
workflow_run_fields = {
'id': fields.String,
'workflow_id': fields.String,
'status': fields.String,
'inputs': fields.Raw,
'outputs': fields.Raw,
'error': fields.String,
'total_steps': fields.Integer,
'total_tokens': fields.Integer,
'created_at': fields.DateTime,
'finished_at': fields.DateTime,
'elapsed_time': fields.Float,
}
class WorkflowRunApi(Resource):
workflow_run_fields = {
'id': fields.String,
'workflow_id': fields.String,
'status': fields.String,
'inputs': fields.Raw,
'outputs': fields.Raw,
'error': fields.String,
'total_steps': fields.Integer,
'total_tokens': fields.Integer,
'created_at': fields.DateTime,
'finished_at': fields.DateTime,
'elapsed_time': fields.Float,
}
class WorkflowRunDetailApi(Resource):
@validate_app_token
@marshal_with(workflow_run_fields)
def get(self, app_model: App, workflow_id: str):
@@ -57,7 +56,7 @@ class WorkflowRunApi(Resource):
workflow_run = db.session.query(WorkflowRun).filter(WorkflowRun.id == workflow_id).first()
return workflow_run
class WorkflowRunApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON, required=True))
def post(self, app_model: App, end_user: EndUser):
"""
@@ -117,5 +116,6 @@ class WorkflowTaskStopApi(Resource):
}
api.add_resource(WorkflowRunApi, '/workflows/run/<string:workflow_id>', '/workflows/run')
api.add_resource(WorkflowRunApi, '/workflows/run')
api.add_resource(WorkflowRunDetailApi, '/workflows/run/<string:workflow_id>')
api.add_resource(WorkflowTaskStopApi, '/workflows/tasks/<string:task_id>/stop')

View File

@@ -79,6 +79,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance

View File

@@ -62,6 +62,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance
@@ -342,10 +343,14 @@ class FunctionCallAgentRunner(BaseAgentRunner):
"""
tool_calls = []
for prompt_message in llm_result_chunk.delta.message.tool_calls:
args = {}
if prompt_message.function.arguments != '':
args = json.loads(prompt_message.function.arguments)
tool_calls.append((
prompt_message.id,
prompt_message.function.name,
json.loads(prompt_message.function.arguments),
args,
))
return tool_calls
@@ -359,10 +364,14 @@ class FunctionCallAgentRunner(BaseAgentRunner):
"""
tool_calls = []
for prompt_message in llm_result.message.tool_calls:
args = {}
if prompt_message.function.arguments != '':
args = json.loads(prompt_message.function.arguments)
tool_calls.append((
prompt_message.id,
prompt_message.function.name,
json.loads(prompt_message.function.arguments),
args,
))
return tool_calls

View File

@@ -1,6 +1,7 @@
from typing import Optional, Union
from collections.abc import Mapping
from typing import Any
from core.app.app_config.entities import AppAdditionalFeatures, EasyUIBasedAppModelConfigFrom
from core.app.app_config.entities import AppAdditionalFeatures
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.app_config.features.more_like_this.manager import MoreLikeThisConfigManager
from core.app.app_config.features.opening_statement.manager import OpeningStatementConfigManager
@@ -10,37 +11,19 @@ from core.app.app_config.features.suggested_questions_after_answer.manager impor
SuggestedQuestionsAfterAnswerConfigManager,
)
from core.app.app_config.features.text_to_speech.manager import TextToSpeechConfigManager
from models.model import AppMode, AppModelConfig
from models.model import AppMode
class BaseAppConfigManager:
@classmethod
def convert_to_config_dict(cls, config_from: EasyUIBasedAppModelConfigFrom,
app_model_config: Union[AppModelConfig, dict],
config_dict: Optional[dict] = None) -> dict:
"""
Convert app model config to config dict
:param config_from: app model config from
:param app_model_config: app model config
:param config_dict: app model config dict
:return:
"""
if config_from != EasyUIBasedAppModelConfigFrom.ARGS:
app_model_config_dict = app_model_config.to_dict()
config_dict = app_model_config_dict.copy()
return config_dict
@classmethod
def convert_features(cls, config_dict: dict, app_mode: AppMode) -> AppAdditionalFeatures:
def convert_features(cls, config_dict: Mapping[str, Any], app_mode: AppMode) -> AppAdditionalFeatures:
"""
Convert app config to app model config
:param config_dict: app config
:param app_mode: app mode
"""
config_dict = config_dict.copy()
config_dict = dict(config_dict.items())
additional_features = AppAdditionalFeatures()
additional_features.show_retrieve_source = RetrievalResourceConfigManager.convert(

View File

@@ -62,7 +62,12 @@ class DatasetConfigManager:
return None
# dataset configs
dataset_configs = config.get('dataset_configs', {'retrieval_model': 'single'})
if 'dataset_configs' in config and config.get('dataset_configs'):
dataset_configs = config.get('dataset_configs')
else:
dataset_configs = {
'retrieval_model': 'multiple'
}
query_variable = config.get('dataset_query_variable')
if dataset_configs['retrieval_model'] == 'single':
@@ -83,9 +88,11 @@ class DatasetConfigManager:
retrieve_strategy=DatasetRetrieveConfigEntity.RetrieveStrategy.value_of(
dataset_configs['retrieval_model']
),
top_k=dataset_configs.get('top_k'),
top_k=dataset_configs.get('top_k', 4),
score_threshold=dataset_configs.get('score_threshold'),
reranking_model=dataset_configs.get('reranking_model')
reranking_model=dataset_configs.get('reranking_model'),
weights=dataset_configs.get('weights'),
reranking_enabled=dataset_configs.get('reranking_enabled', True),
)
)
@@ -114,12 +121,6 @@ class DatasetConfigManager:
if not isinstance(config["dataset_configs"], dict):
raise ValueError("dataset_configs must be of object type")
if config["dataset_configs"]['retrieval_model'] == 'multiple':
if not config["dataset_configs"]['reranking_model']:
raise ValueError("reranking_model has not been set")
if not isinstance(config["dataset_configs"]['reranking_model'], dict):
raise ValueError("reranking_model must be of object type")
if not isinstance(config["dataset_configs"], dict):
raise ValueError("dataset_configs must be of object type")

View File

@@ -158,8 +158,13 @@ class DatasetRetrieveConfigEntity(BaseModel):
retrieve_strategy: RetrieveStrategy
top_k: Optional[int] = None
score_threshold: Optional[float] = None
score_threshold: Optional[float] = .0
rerank_mode: Optional[str] = 'reranking_model'
reranking_model: Optional[dict] = None
weights: Optional[dict] = None
reranking_enabled: Optional[bool] = True
class DatasetEntity(BaseModel):

View File

@@ -1,11 +1,12 @@
from typing import Optional
from collections.abc import Mapping
from typing import Any, Optional
from core.app.app_config.entities import FileExtraConfig
class FileUploadConfigManager:
@classmethod
def convert(cls, config: dict, is_vision: bool = True) -> Optional[FileExtraConfig]:
def convert(cls, config: Mapping[str, Any], is_vision: bool = True) -> Optional[FileExtraConfig]:
"""
Convert model config to model config

View File

@@ -3,13 +3,13 @@ from core.app.app_config.entities import TextToSpeechEntity
class TextToSpeechConfigManager:
@classmethod
def convert(cls, config: dict) -> bool:
def convert(cls, config: dict):
"""
Convert model config to model config
:param config: model config args
"""
text_to_speech = False
text_to_speech = None
text_to_speech_dict = config.get('text_to_speech')
if text_to_speech_dict:
if text_to_speech_dict.get('enabled'):

View File

@@ -1,3 +1,4 @@
import contextvars
import logging
import os
import threading
@@ -8,6 +9,7 @@ from typing import Union
from flask import Flask, current_app
from pydantic import ValidationError
import contexts
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager
from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner
@@ -107,6 +109,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
extras=extras,
trace_manager=trace_manager
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
return self._generate(
app_model=app_model,
@@ -173,6 +176,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
inputs=args['inputs']
)
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
return self._generate(
app_model=app_model,
@@ -225,6 +229,8 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
'queue_manager': queue_manager,
'conversation_id': conversation.id,
'message_id': message.id,
'user': user,
'context': contextvars.copy_context()
})
worker_thread.start()
@@ -249,7 +255,9 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
application_generate_entity: AdvancedChatAppGenerateEntity,
queue_manager: AppQueueManager,
conversation_id: str,
message_id: str) -> None:
message_id: str,
user: Account,
context: contextvars.Context) -> None:
"""
Generate worker in a new thread.
:param flask_app: Flask app
@@ -259,6 +267,8 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
:param message_id: message ID
:return:
"""
for var, val in context.items():
var.set(val)
with flask_app.app_context():
try:
runner = AdvancedChatAppRunner()

View File

@@ -5,7 +5,12 @@ import queue
import re
import threading
from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueTextChunkEvent
from core.app.entities.queue_entities import (
QueueAgentMessageEvent,
QueueLLMChunkEvent,
QueueNodeSucceededEvent,
QueueTextChunkEvent,
)
from core.model_manager import ModelManager
from core.model_runtime.entities.model_entities import ModelType
@@ -88,6 +93,8 @@ class AppGeneratorTTSPublisher:
self.msg_text += message.event.chunk.delta.message.content
elif isinstance(message.event, QueueTextChunkEvent):
self.msg_text += message.event.text
elif isinstance(message.event, QueueNodeSucceededEvent):
self.msg_text += message.event.outputs.get('output', '')
self.last_message = message
sentence_arr, text_tmp = self._extract_sentence(self.msg_text)
if len(sentence_arr) >= min(self.MAX_SENTENCE, 7):

View File

@@ -1,7 +1,8 @@
import logging
import os
import time
from typing import Optional, cast
from collections.abc import Mapping
from typing import Any, Optional, cast
from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfig
from core.app.apps.advanced_chat.workflow_event_trigger_callback import WorkflowEventTriggerCallback
@@ -14,6 +15,7 @@ from core.app.entities.app_invoke_entities import (
)
from core.app.entities.queue_entities import QueueAnnotationReplyEvent, QueueStopEvent, QueueTextChunkEvent
from core.moderation.base import ModerationException
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.node_entities import SystemVariable
from core.workflow.nodes.base_node import UserFrom
from core.workflow.workflow_engine_manager import WorkflowEngineManager
@@ -87,7 +89,7 @@ class AdvancedChatAppRunner(AppRunner):
db.session.close()
workflow_callbacks = [WorkflowEventTriggerCallback(
workflow_callbacks: list[WorkflowCallback] = [WorkflowEventTriggerCallback(
queue_manager=queue_manager,
workflow=workflow
)]
@@ -161,7 +163,7 @@ class AdvancedChatAppRunner(AppRunner):
self, queue_manager: AppQueueManager,
app_record: App,
app_generate_entity: AdvancedChatAppGenerateEntity,
inputs: dict,
inputs: Mapping[str, Any],
query: str,
message_id: str
) -> bool:

View File

@@ -1,9 +1,11 @@
import json
from collections.abc import Generator
from typing import cast
from typing import Any, cast
from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter
from core.app.entities.task_entities import (
AppBlockingResponse,
AppStreamResponse,
ChatbotAppBlockingResponse,
ChatbotAppStreamResponse,
ErrorStreamResponse,
@@ -18,12 +20,13 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
_blocking_response_type = ChatbotAppBlockingResponse
@classmethod
def convert_blocking_full_response(cls, blocking_response: ChatbotAppBlockingResponse) -> dict:
def convert_blocking_full_response(cls, blocking_response: AppBlockingResponse) -> dict[str, Any]:
"""
Convert blocking full response.
:param blocking_response: blocking response
:return:
"""
blocking_response = cast(ChatbotAppBlockingResponse, blocking_response)
response = {
'event': 'message',
'task_id': blocking_response.task_id,
@@ -39,7 +42,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
return response
@classmethod
def convert_blocking_simple_response(cls, blocking_response: ChatbotAppBlockingResponse) -> dict:
def convert_blocking_simple_response(cls, blocking_response: AppBlockingResponse) -> dict[str, Any]:
"""
Convert blocking simple response.
:param blocking_response: blocking response
@@ -53,8 +56,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
return response
@classmethod
def convert_stream_full_response(cls, stream_response: Generator[ChatbotAppStreamResponse, None, None]) \
-> Generator[str, None, None]:
def convert_stream_full_response(cls, stream_response: Generator[AppStreamResponse, None, None]) -> Generator[str, Any, None]:
"""
Convert stream full response.
:param stream_response: stream response
@@ -83,8 +85,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter):
yield json.dumps(response_chunk)
@classmethod
def convert_stream_simple_response(cls, stream_response: Generator[ChatbotAppStreamResponse, None, None]) \
-> Generator[str, None, None]:
def convert_stream_simple_response(cls, stream_response: Generator[AppStreamResponse, None, None]) -> Generator[str, Any, None]:
"""
Convert stream simple response.
:param stream_response: stream response

View File

@@ -118,7 +118,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._stream_generate_routes = self._get_stream_generate_routes()
self._conversation_name_generate_thread = None
def process(self) -> Union[ChatbotAppBlockingResponse, Generator[ChatbotAppStreamResponse, None, None]]:
def process(self):
"""
Process generate task pipeline.
:return:
@@ -141,8 +141,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
else:
return self._to_blocking_response(generator)
def _to_blocking_response(self, generator: Generator[StreamResponse, None, None]) \
-> ChatbotAppBlockingResponse:
def _to_blocking_response(self, generator: Generator[StreamResponse, None, None]) -> ChatbotAppBlockingResponse:
"""
Process blocking response.
:return:
@@ -172,8 +171,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
raise Exception('Queue listening stopped unexpectedly.')
def _to_stream_response(self, generator: Generator[StreamResponse, None, None]) \
-> Generator[ChatbotAppStreamResponse, None, None]:
def _to_stream_response(self, generator: Generator[StreamResponse, None, None]) -> Generator[ChatbotAppStreamResponse, Any, None]:
"""
To stream response.
:return:
@@ -246,7 +244,12 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
:return:
"""
for message in self._queue_manager.listen():
if publisher:
if hasattr(message.event, 'metadata') and message.event.metadata.get('is_answer_previous_node', False) and publisher:
publisher.publish(message=message)
elif (hasattr(message.event, 'execution_metadata')
and message.event.execution_metadata
and message.event.execution_metadata.get('is_answer_previous_node', False)
and publisher):
publisher.publish(message=message)
event = message.event

View File

@@ -14,13 +14,13 @@ from core.app.entities.queue_entities import (
QueueWorkflowStartedEvent,
QueueWorkflowSucceededEvent,
)
from core.workflow.callbacks.base_workflow_callback import BaseWorkflowCallback
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeType
from models.workflow import Workflow
class WorkflowEventTriggerCallback(BaseWorkflowCallback):
class WorkflowEventTriggerCallback(WorkflowCallback):
def __init__(self, queue_manager: AppQueueManager, workflow: Workflow):
self._queue_manager = queue_manager

View File

@@ -110,7 +110,8 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
)
# get tracing instance
trace_manager = TraceQueueManager(app_model.id)
user_id = user.id if isinstance(user, Account) else user.session_id
trace_manager = TraceQueueManager(app_model.id, user_id)
# init application generate entity
application_generate_entity = AgentChatAppGenerateEntity(

View File

@@ -1,7 +1,7 @@
import logging
from abc import ABC, abstractmethod
from collections.abc import Generator
from typing import Union
from typing import Any, Union
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.task_entities import AppBlockingResponse, AppStreamResponse
@@ -15,44 +15,41 @@ class AppGenerateResponseConverter(ABC):
@classmethod
def convert(cls, response: Union[
AppBlockingResponse,
Generator[AppStreamResponse, None, None]
], invoke_from: InvokeFrom) -> Union[
dict,
Generator[str, None, None]
]:
Generator[AppStreamResponse, Any, None]
], invoke_from: InvokeFrom):
if invoke_from in [InvokeFrom.DEBUGGER, InvokeFrom.SERVICE_API]:
if isinstance(response, cls._blocking_response_type):
if isinstance(response, AppBlockingResponse):
return cls.convert_blocking_full_response(response)
else:
def _generate():
def _generate_full_response() -> Generator[str, Any, None]:
for chunk in cls.convert_stream_full_response(response):
if chunk == 'ping':
yield f'event: {chunk}\n\n'
else:
yield f'data: {chunk}\n\n'
return _generate()
return _generate_full_response()
else:
if isinstance(response, cls._blocking_response_type):
if isinstance(response, AppBlockingResponse):
return cls.convert_blocking_simple_response(response)
else:
def _generate():
def _generate_simple_response() -> Generator[str, Any, None]:
for chunk in cls.convert_stream_simple_response(response):
if chunk == 'ping':
yield f'event: {chunk}\n\n'
else:
yield f'data: {chunk}\n\n'
return _generate()
return _generate_simple_response()
@classmethod
@abstractmethod
def convert_blocking_full_response(cls, blocking_response: AppBlockingResponse) -> dict:
def convert_blocking_full_response(cls, blocking_response: AppBlockingResponse) -> dict[str, Any]:
raise NotImplementedError
@classmethod
@abstractmethod
def convert_blocking_simple_response(cls, blocking_response: AppBlockingResponse) -> dict:
def convert_blocking_simple_response(cls, blocking_response: AppBlockingResponse) -> dict[str, Any]:
raise NotImplementedError
@classmethod
@@ -68,7 +65,7 @@ class AppGenerateResponseConverter(ABC):
raise NotImplementedError
@classmethod
def _get_simple_metadata(cls, metadata: dict) -> dict:
def _get_simple_metadata(cls, metadata: dict[str, Any]):
"""
Get simple metadata.
:param metadata: metadata

View File

@@ -5,9 +5,9 @@ from collections.abc import Generator
from enum import Enum
from typing import Any
from flask import current_app
from sqlalchemy.orm import DeclarativeMeta
from configs import dify_config
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.queue_entities import (
AppQueueEvent,
@@ -48,7 +48,7 @@ class AppQueueManager:
:return:
"""
# wait for APP_MAX_EXECUTION_TIME seconds to stop listen
listen_timeout = current_app.config.get("APP_MAX_EXECUTION_TIME")
listen_timeout = dify_config.APP_MAX_EXECUTION_TIME
start_time = time.time()
last_ping_time = 0
while True:

View File

@@ -1,3 +1,4 @@
import contextvars
import logging
import os
import threading
@@ -8,6 +9,7 @@ from typing import Union
from flask import Flask, current_app
from pydantic import ValidationError
import contexts
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.base_app_generator import BaseAppGenerator
from core.app.apps.base_app_queue_manager import AppQueueManager, GenerateTaskStoppedException, PublishFrom
@@ -38,7 +40,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
invoke_from: InvokeFrom,
stream: bool = True,
call_depth: int = 0,
) -> Union[dict, Generator[dict, None, None]]:
):
"""
Generate App response.
@@ -72,7 +74,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
)
# get tracing instance
trace_manager = TraceQueueManager(app_model.id)
user_id = user.id if isinstance(user, Account) else user.session_id
trace_manager = TraceQueueManager(app_model.id, user_id)
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
@@ -86,6 +89,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
call_depth=call_depth,
trace_manager=trace_manager
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
return self._generate(
app_model=app_model,
@@ -126,7 +130,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
worker_thread = threading.Thread(target=self._generate_worker, kwargs={
'flask_app': current_app._get_current_object(),
'application_generate_entity': application_generate_entity,
'queue_manager': queue_manager
'queue_manager': queue_manager,
'context': contextvars.copy_context()
})
worker_thread.start()
@@ -150,8 +155,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
node_id: str,
user: Account,
args: dict,
stream: bool = True) \
-> Union[dict, Generator[dict, None, None]]:
stream: bool = True):
"""
Generate App response.
@@ -193,6 +197,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
inputs=args['inputs']
)
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
return self._generate(
app_model=app_model,
@@ -205,7 +210,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
def _generate_worker(self, flask_app: Flask,
application_generate_entity: WorkflowAppGenerateEntity,
queue_manager: AppQueueManager) -> None:
queue_manager: AppQueueManager,
context: contextvars.Context) -> None:
"""
Generate worker in a new thread.
:param flask_app: Flask app
@@ -213,6 +219,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
:param queue_manager: queue manager
:return:
"""
for var, val in context.items():
var.set(val)
with flask_app.app_context():
try:
# workflow app

View File
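The `contextvars.copy_context()` hand-off above is what makes `contexts.tenant_id` visible inside the worker thread, since a new thread starts with an empty context. A self-contained sketch of the same pattern (the variable name here is illustrative):

```python
import contextvars
import threading

tenant_id: contextvars.ContextVar[str] = contextvars.ContextVar('tenant_id')


def worker(context: contextvars.Context) -> None:
    # Replay the caller's context variables inside the new thread.
    for var, val in context.items():
        var.set(val)
    print(tenant_id.get())  # -> 'tenant-123'


tenant_id.set('tenant-123')
thread = threading.Thread(target=worker, kwargs={'context': contextvars.copy_context()})
thread.start()
thread.join()
```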

@@ -10,6 +10,7 @@ from core.app.entities.app_invoke_entities import (
InvokeFrom,
WorkflowAppGenerateEntity,
)
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.node_entities import SystemVariable
from core.workflow.nodes.base_node import UserFrom
from core.workflow.workflow_engine_manager import WorkflowEngineManager
@@ -57,7 +58,7 @@ class WorkflowAppRunner:
db.session.close()
workflow_callbacks = [WorkflowEventTriggerCallback(
workflow_callbacks: list[WorkflowCallback] = [WorkflowEventTriggerCallback(
queue_manager=queue_manager,
workflow=workflow
)]

View File

@@ -14,13 +14,13 @@ from core.app.entities.queue_entities import (
QueueWorkflowStartedEvent,
QueueWorkflowSucceededEvent,
)
from core.workflow.callbacks.base_workflow_callback import BaseWorkflowCallback
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeType
from models.workflow import Workflow
class WorkflowEventTriggerCallback(BaseWorkflowCallback):
class WorkflowEventTriggerCallback(WorkflowCallback):
def __init__(self, queue_manager: AppQueueManager, workflow: Workflow):
self._queue_manager = queue_manager

View File

@@ -2,7 +2,7 @@ from typing import Optional
from core.app.entities.queue_entities import AppQueueEvent
from core.model_runtime.utils.encoders import jsonable_encoder
from core.workflow.callbacks.base_workflow_callback import BaseWorkflowCallback
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeType
@@ -15,7 +15,7 @@ _TEXT_COLOR_MAPPING = {
}
class WorkflowLoggingCallback(BaseWorkflowCallback):
class WorkflowLoggingCallback(WorkflowCallback):
def __init__(self) -> None:
self.current_node_id = None

View File

@@ -1,3 +1,4 @@
from collections.abc import Mapping
from enum import Enum
from typing import Any, Optional
@@ -76,7 +77,7 @@ class AppGenerateEntity(BaseModel):
# app config
app_config: AppConfig
inputs: dict[str, Any]
inputs: Mapping[str, Any]
files: list[FileVar] = []
user_id: str
@@ -140,7 +141,7 @@ class AdvancedChatAppGenerateEntity(AppGenerateEntity):
app_config: WorkflowUIBasedAppConfig
conversation_id: Optional[str] = None
query: Optional[str] = None
query: str
class SingleIterationRunEntity(BaseModel):
"""

View File

@@ -0,0 +1,53 @@
from .segment_group import SegmentGroup
from .segments import (
ArrayAnySegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .types import SegmentType
from .variables import (
ArrayAnyVariable,
ArrayFileVariable,
ArrayNumberVariable,
ArrayObjectVariable,
ArrayStringVariable,
FileVariable,
FloatVariable,
IntegerVariable,
NoneVariable,
ObjectVariable,
SecretVariable,
StringVariable,
Variable,
)
__all__ = [
'IntegerVariable',
'FloatVariable',
'ObjectVariable',
'SecretVariable',
'FileVariable',
'StringVariable',
'ArrayAnyVariable',
'Variable',
'SegmentType',
'SegmentGroup',
'Segment',
'NoneSegment',
'NoneVariable',
'IntegerSegment',
'FloatSegment',
'ObjectSegment',
'ArrayAnySegment',
'FileSegment',
'StringSegment',
'ArrayStringVariable',
'ArrayNumberVariable',
'ArrayObjectVariable',
'ArrayFileVariable',
]

View File

@@ -0,0 +1,86 @@
from collections.abc import Mapping
from typing import Any
from core.file.file_obj import FileVar
from .segments import (
ArrayAnySegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .types import SegmentType
from .variables import (
ArrayFileVariable,
ArrayNumberVariable,
ArrayObjectVariable,
ArrayStringVariable,
FileVariable,
FloatVariable,
IntegerVariable,
ObjectVariable,
SecretVariable,
StringVariable,
Variable,
)
def build_variable_from_mapping(m: Mapping[str, Any], /) -> Variable:
if (value_type := m.get('value_type')) is None:
raise ValueError('missing value type')
if not m.get('name'):
raise ValueError('missing name')
if (value := m.get('value')) is None:
raise ValueError('missing value')
match value_type:
case SegmentType.STRING:
return StringVariable.model_validate(m)
case SegmentType.SECRET:
return SecretVariable.model_validate(m)
case SegmentType.NUMBER if isinstance(value, int):
return IntegerVariable.model_validate(m)
case SegmentType.NUMBER if isinstance(value, float):
return FloatVariable.model_validate(m)
case SegmentType.NUMBER if not isinstance(value, float | int):
raise ValueError(f'invalid number value {value}')
case SegmentType.FILE:
return FileVariable.model_validate(m)
case SegmentType.OBJECT if isinstance(value, dict):
return ObjectVariable.model_validate(
{**m, 'value': {k: build_variable_from_mapping(v) for k, v in value.items()}}
)
case SegmentType.ARRAY_STRING if isinstance(value, list):
return ArrayStringVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_NUMBER if isinstance(value, list):
return ArrayNumberVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_OBJECT if isinstance(value, list):
return ArrayObjectVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_FILE if isinstance(value, list):
return ArrayFileVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
raise ValueError(f'not supported value type {value_type}')
def build_segment(value: Any, /) -> Segment:
if value is None:
return NoneSegment()
if isinstance(value, str):
return StringSegment(value=value)
if isinstance(value, int):
return IntegerSegment(value=value)
if isinstance(value, float):
return FloatSegment(value=value)
if isinstance(value, dict):
# TODO: Limit the depth of the object
obj = {k: build_segment(v) for k, v in value.items()}
return ObjectSegment(value=obj)
if isinstance(value, list):
# TODO: Limit the depth of the array
elements = [build_segment(v) for v in value]
return ArrayAnySegment(value=elements)
if isinstance(value, FileVar):
return FileSegment(value=value)
raise ValueError(f'not supported value {value}')

View File
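A usage sketch for the factory above; the package path `core.app.segments` is an assumption and the sample values are illustrative:

```python
from core.app.segments import factory

segment = factory.build_segment({'name': 'dify', 'tags': ['rag', 'workflow'], 'score': 0.9})
print(type(segment).__name__)  # ObjectSegment
print(segment.to_object())     # {'name': 'dify', 'tags': ['rag', 'workflow'], 'score': 0.9}

variable = factory.build_variable_from_mapping({
    'name': 'api_key',
    'value_type': 'secret',
    'value': 'sk-0123456789abcdef',
})
print(type(variable).__name__)  # SecretVariable
print(variable.log)             # obfuscated, e.g. 'sk-012************ef'
```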

@@ -0,0 +1,18 @@
import re
from core.workflow.entities.variable_pool import VariablePool
from . import SegmentGroup, factory
VARIABLE_PATTERN = re.compile(r'\{\{#([a-zA-Z0-9_]{1,50}(?:\.[a-zA-Z_][a-zA-Z0-9_]{0,29}){1,10})#\}\}')
def convert_template(*, template: str, variable_pool: VariablePool):
parts = re.split(VARIABLE_PATTERN, template)
segments = []
for part in filter(lambda x: x, parts):
if '.' in part and (value := variable_pool.get(part.split('.'))):
segments.append(value)
else:
segments.append(factory.build_segment(part))
return SegmentGroup(value=segments)

View File
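A quick check of the variable pattern above: because the pattern contains a single capture group, `re.split` alternates plain text with the captured selectors, which `convert_template` then resolves against the variable pool.

```python
import re

VARIABLE_PATTERN = re.compile(r'\{\{#([a-zA-Z0-9_]{1,50}(?:\.[a-zA-Z_][a-zA-Z0-9_]{0,29}){1,10})#\}\}')

parts = re.split(VARIABLE_PATTERN, 'Hello {{#start.name#}}, today is {{#sys.date#}}.')
print(parts)  # ['Hello ', 'start.name', ', today is ', 'sys.date', '.']
```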

@@ -0,0 +1,22 @@
from .segments import Segment
from .types import SegmentType
class SegmentGroup(Segment):
value_type: SegmentType = SegmentType.GROUP
value: list[Segment]
@property
def text(self):
return ''.join([segment.text for segment in self.value])
@property
def log(self):
return ''.join([segment.log for segment in self.value])
@property
def markdown(self):
return ''.join([segment.markdown for segment in self.value])
def to_object(self):
return [segment.to_object() for segment in self.value]

View File

@@ -0,0 +1,140 @@
import json
from collections.abc import Mapping, Sequence
from typing import Any
from pydantic import BaseModel, ConfigDict, field_validator
from core.file.file_obj import FileVar
from .types import SegmentType
class Segment(BaseModel):
model_config = ConfigDict(frozen=True)
value_type: SegmentType
value: Any
@field_validator('value_type')
def validate_value_type(cls, value):
"""
This validator checks if the provided value is equal to the default value of the 'value_type' field.
If the value is different, a ValueError is raised.
"""
if value != cls.model_fields['value_type'].default:
raise ValueError("Cannot modify 'value_type'")
return value
@property
def text(self) -> str:
return str(self.value)
@property
def log(self) -> str:
return str(self.value)
@property
def markdown(self) -> str:
return str(self.value)
def to_object(self) -> Any:
return self.value
class NoneSegment(Segment):
value_type: SegmentType = SegmentType.NONE
value: None = None
@property
def text(self) -> str:
return 'null'
@property
def log(self) -> str:
return 'null'
@property
def markdown(self) -> str:
return 'null'
class StringSegment(Segment):
value_type: SegmentType = SegmentType.STRING
value: str
class FloatSegment(Segment):
value_type: SegmentType = SegmentType.NUMBER
value: float
class IntegerSegment(Segment):
value_type: SegmentType = SegmentType.NUMBER
value: int
class FileSegment(Segment):
value_type: SegmentType = SegmentType.FILE
# TODO: embed FileVar in this model.
value: FileVar
@property
def markdown(self) -> str:
return self.value.to_markdown()
class ObjectSegment(Segment):
value_type: SegmentType = SegmentType.OBJECT
value: Mapping[str, Segment]
@property
def text(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False)
@property
def log(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
@property
def markdown(self) -> str:
# TODO: Use markdown code block
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
def to_object(self):
return {k: v.to_object() for k, v in self.value.items()}
class ArraySegment(Segment):
@property
def markdown(self) -> str:
return '\n'.join(['- ' + item.markdown for item in self.value])
def to_object(self):
return [v.to_object() for v in self.value]
class ArrayAnySegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_ANY
value: Sequence[Segment]
class ArrayStringSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_STRING
value: Sequence[StringSegment]
class ArrayNumberSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_NUMBER
value: Sequence[FloatSegment | IntegerSegment]
class ArrayObjectSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_OBJECT
value: Sequence[ObjectSegment]
class ArrayFileSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_FILE
value: Sequence[FileSegment]

View File
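A behavior sketch for the segment models above (module paths assumed): instances are frozen, `value_type` is pinned by the validator, and nested segments round-trip through `to_object`.

```python
from core.app.segments.segments import ObjectSegment, StringSegment
from core.app.segments.types import SegmentType

name = StringSegment(value='dify')
obj = ObjectSegment(value={'name': name})
print(obj.to_object())  # {'name': 'dify'}

try:
    StringSegment(value='dify', value_type=SegmentType.NUMBER)
except ValueError as e:
    print(e)  # validation error containing "Cannot modify 'value_type'"
```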

@@ -0,0 +1,17 @@
from enum import Enum
class SegmentType(str, Enum):
NONE = 'none'
NUMBER = 'number'
STRING = 'string'
SECRET = 'secret'
ARRAY_ANY = 'array[any]'
ARRAY_STRING = 'array[string]'
ARRAY_NUMBER = 'array[number]'
ARRAY_OBJECT = 'array[object]'
ARRAY_FILE = 'array[file]'
OBJECT = 'object'
FILE = 'file'
GROUP = 'group'

View File

@@ -0,0 +1,85 @@
from pydantic import Field
from core.helper import encrypter
from .segments import (
ArrayAnySegment,
ArrayFileSegment,
ArrayNumberSegment,
ArrayObjectSegment,
ArrayStringSegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .types import SegmentType
class Variable(Segment):
"""
A variable is a segment that has a name.
"""
id: str = Field(
default='',
description="Unique identity for variable. It's only used by environment variables now.",
)
name: str
description: str = Field(default='', description='Description of the variable.')
class StringVariable(StringSegment, Variable):
pass
class FloatVariable(FloatSegment, Variable):
pass
class IntegerVariable(IntegerSegment, Variable):
pass
class FileVariable(FileSegment, Variable):
pass
class ObjectVariable(ObjectSegment, Variable):
pass
class ArrayAnyVariable(ArrayAnySegment, Variable):
pass
class ArrayStringVariable(ArrayStringSegment, Variable):
pass
class ArrayNumberVariable(ArrayNumberSegment, Variable):
pass
class ArrayObjectVariable(ArrayObjectSegment, Variable):
pass
class ArrayFileVariable(ArrayFileSegment, Variable):
pass
class SecretVariable(StringVariable):
value_type: SegmentType = SegmentType.SECRET
@property
def log(self) -> str:
return encrypter.obfuscated_token(self.value)
class NoneVariable(NoneSegment, Variable):
value_type: SegmentType = SegmentType.NONE
value: None = None

View File

@@ -131,6 +131,7 @@ class WorkflowCycleManage(WorkflowIterationCycleManage):
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
user_id=trace_manager.user_id,
)
)
@@ -173,6 +174,7 @@ class WorkflowCycleManage(WorkflowIterationCycleManage):
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
user_id=trace_manager.user_id,
)
)

View File

@@ -1,9 +1,11 @@
import os
from collections.abc import Mapping, Sequence
from typing import Any, Optional, TextIO, Union
from pydantic import BaseModel
from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
from core.tools.entities.tool_entities import ToolInvokeMessage
_TEXT_COLOR_MAPPING = {
"blue": "36;1",
@@ -43,7 +45,7 @@ class DifyAgentCallbackHandler(BaseModel):
def on_tool_start(
self,
tool_name: str,
tool_inputs: dict[str, Any],
tool_inputs: Mapping[str, Any],
) -> None:
"""Do nothing."""
print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
@@ -51,8 +53,8 @@ class DifyAgentCallbackHandler(BaseModel):
def on_tool_end(
self,
tool_name: str,
tool_inputs: dict[str, Any],
tool_outputs: str,
tool_inputs: Mapping[str, Any],
tool_outputs: Sequence[ToolInvokeMessage],
message_id: Optional[str] = None,
timer: Optional[Any] = None,
trace_manager: Optional[TraceQueueManager] = None

View File

@@ -1,4 +1,5 @@
from typing import Union
from collections.abc import Mapping, Sequence
from typing import Any, Union
import requests
@@ -16,7 +17,7 @@ class MessageFileParser:
self.tenant_id = tenant_id
self.app_id = app_id
def validate_and_transform_files_arg(self, files: list[dict], file_extra_config: FileExtraConfig,
def validate_and_transform_files_arg(self, files: Sequence[Mapping[str, Any]], file_extra_config: FileExtraConfig,
user: Union[Account, EndUser]) -> list[FileVar]:
"""
validate and transform files arg

View File

@@ -6,8 +6,7 @@ import os
import time
from typing import Optional
from flask import current_app
from configs import dify_config
from extensions.ext_storage import storage
IMAGE_EXTENSIONS = ['jpg', 'jpeg', 'png', 'webp', 'gif', 'svg']
@@ -23,7 +22,7 @@ class UploadFileParser:
if upload_file.extension not in IMAGE_EXTENSIONS:
return None
if current_app.config['MULTIMODAL_SEND_IMAGE_FORMAT'] == 'url' or force_url:
if dify_config.MULTIMODAL_SEND_IMAGE_FORMAT == 'url' or force_url:
return cls.get_signed_temp_image_url(upload_file.id)
else:
# get image file base64
@@ -44,13 +43,13 @@ class UploadFileParser:
:param upload_file: UploadFile object
:return:
"""
base_url = current_app.config.get('FILES_URL')
base_url = dify_config.FILES_URL
image_preview_url = f'{base_url}/files/{upload_file_id}/image-preview'
timestamp = str(int(time.time()))
nonce = os.urandom(16).hex()
data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
secret_key = current_app.config['SECRET_KEY'].encode()
secret_key = dify_config.SECRET_KEY.encode()
sign = hmac.new(secret_key, data_to_sign.encode(), hashlib.sha256).digest()
encoded_sign = base64.urlsafe_b64encode(sign).decode()
@@ -68,7 +67,7 @@ class UploadFileParser:
:return:
"""
data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
secret_key = current_app.config['SECRET_KEY'].encode()
secret_key = dify_config.SECRET_KEY.encode()
recalculated_sign = hmac.new(secret_key, data_to_sign.encode(), hashlib.sha256).digest()
recalculated_encoded_sign = base64.urlsafe_b64encode(recalculated_sign).decode()
@@ -77,4 +76,4 @@ class UploadFileParser:
return False
current_time = int(time.time())
return current_time - int(timestamp) <= current_app.config.get('FILES_ACCESS_TIMEOUT')
return current_time - int(timestamp) <= dify_config.FILES_ACCESS_TIMEOUT

View File
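A self-contained sketch of the signed image-preview URL scheme above; the secret, file id, and base URL are placeholders.

```python
import base64
import hashlib
import hmac
import os
import time

secret_key = b'example-secret-key'      # placeholder for dify_config.SECRET_KEY
upload_file_id = 'file-123'             # placeholder file id
base_url = 'https://files.example.com'  # placeholder for dify_config.FILES_URL

timestamp = str(int(time.time()))
nonce = os.urandom(16).hex()
data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
sign = hmac.new(secret_key, data_to_sign.encode(), hashlib.sha256).digest()
encoded_sign = base64.urlsafe_b64encode(sign).decode()

url = f"{base_url}/files/{upload_file_id}/image-preview?timestamp={timestamp}&nonce={nonce}&sign={encoded_sign}"
# Verification recomputes the HMAC from the same fields, compares signatures,
# and rejects requests older than FILES_ACCESS_TIMEOUT seconds.
```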

@@ -21,7 +21,7 @@ logger = logging.getLogger(__name__)
CODE_EXECUTION_ENDPOINT = dify_config.CODE_EXECUTION_ENDPOINT
CODE_EXECUTION_API_KEY = dify_config.CODE_EXECUTION_API_KEY
CODE_EXECUTION_TIMEOUT= (10, 60)
CODE_EXECUTION_TIMEOUT = (10, 60)
class CodeExecutionException(Exception):
pass
@@ -64,7 +64,7 @@ class CodeExecutor:
@classmethod
def execute_code(cls,
language: Literal['python3', 'javascript', 'jinja2'],
language: CodeLanguage,
preload: str,
code: str,
dependencies: Optional[list[CodeDependency]] = None) -> str:
@@ -107,11 +107,11 @@ class CodeExecutor:
response = response.json()
except:
raise CodeExecutionException('Failed to parse response')
if (code := response.get('code')) != 0:
raise CodeExecutionException(f"Got error code: {code}. Got error msg: {response.get('message')}")
response = CodeExecutionResponse(**response)
if response.code != 0:
raise CodeExecutionException(response.message)
if response.data.error:
raise CodeExecutionException(response.data.error)
@@ -119,7 +119,7 @@ class CodeExecutor:
return response.data.stdout
@classmethod
def execute_workflow_code_template(cls, language: Literal['python3', 'javascript', 'jinja2'], code: str, inputs: dict, dependencies: Optional[list[CodeDependency]] = None) -> dict:
def execute_workflow_code_template(cls, language: CodeLanguage, code: str, inputs: dict, dependencies: Optional[list[CodeDependency]] = None) -> dict:
"""
Execute code
:param language: code language

View File
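A minimal sketch of the response-validation pattern the refactor above moves to: parse the sandbox reply into a typed model and check `code` and `data.error` instead of poking at raw dict keys. The field names mirror those referenced above; the model definitions here are illustrative.

```python
from typing import Optional

from pydantic import BaseModel


class _ResponseData(BaseModel):
    stdout: Optional[str] = None
    error: Optional[str] = None


class _Response(BaseModel):
    code: int
    message: str = ''
    data: _ResponseData


payload = {'code': 0, 'message': '', 'data': {'stdout': 'hello\n', 'error': None}}
response = _Response(**payload)
if response.code != 0:
    raise RuntimeError(response.message)
if response.data.error:
    raise RuntimeError(response.data.error)
print(response.data.stdout)  # hello
```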

@@ -6,11 +6,16 @@ from models.account import Tenant
def obfuscated_token(token: str):
return token[:6] + '*' * (len(token) - 8) + token[-2:]
if not token:
return token
if len(token) <= 8:
return '*' * 20
return token[:6] + '*' * 12 + token[-2:]
def encrypt_token(tenant_id: str, token: str):
tenant = db.session.query(Tenant).filter(Tenant.id == tenant_id).first()
if not (tenant := db.session.query(Tenant).filter(Tenant.id == tenant_id).first()):
raise ValueError(f'Tenant with id {tenant_id} not found')
encrypted_token = rsa.encrypt(token, tenant.encrypt_public_key)
return base64.b64encode(encrypted_token).decode()

View File
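Expected behavior of the hardened `obfuscated_token` above (import path taken from the diff's other usages):

```python
from core.helper import encrypter

print(encrypter.obfuscated_token(''))                     # ''  (falsy tokens pass through)
print(encrypter.obfuscated_token('short'))                # '********************'  (<= 8 chars: fixed-width mask)
print(encrypter.obfuscated_token('sk-0123456789abcdef'))  # 'sk-012************ef'
```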

@@ -13,15 +13,10 @@ def get_position_map(folder_path: str, *, file_name: str = "_position.yaml") ->
:param file_name: the YAML file name, default to '_position.yaml'
:return: a dict with name as key and index as value
"""
position_file_name = os.path.join(folder_path, file_name)
positions = load_yaml_file(position_file_name, ignore_error=True)
position_map = {}
index = 0
for _, name in enumerate(positions):
if name and isinstance(name, str):
position_map[name.strip()] = index
index += 1
return position_map
position_file_path = os.path.join(folder_path, file_name)
yaml_content = load_yaml_file(file_path=position_file_path, default_value=[])
positions = [item.strip() for item in yaml_content if item and isinstance(item, str) and item.strip()]
return {name: index for index, name in enumerate(positions)}
def sort_by_position_map(

View File
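The rewritten `get_position_map` above reduces to a filter plus an enumerate; a stand-alone check with illustrative YAML content:

```python
yaml_content = ['openai', '', '  anthropic ', None, 42, 'cohere']

positions = [item.strip() for item in yaml_content if item and isinstance(item, str) and item.strip()]
position_map = {name: index for index, name in enumerate(positions)}
print(position_map)  # {'openai': 0, 'anthropic': 1, 'cohere': 2}
```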

@@ -1,48 +1,75 @@
"""
Proxy requests to avoid SSRF
"""
import logging
import os
import time
import httpx
SSRF_PROXY_ALL_URL = os.getenv('SSRF_PROXY_ALL_URL', '')
SSRF_PROXY_HTTP_URL = os.getenv('SSRF_PROXY_HTTP_URL', '')
SSRF_PROXY_HTTPS_URL = os.getenv('SSRF_PROXY_HTTPS_URL', '')
SSRF_DEFAULT_MAX_RETRIES = int(os.getenv('SSRF_DEFAULT_MAX_RETRIES', '3'))
proxies = {
'http://': SSRF_PROXY_HTTP_URL,
'https://': SSRF_PROXY_HTTPS_URL
} if SSRF_PROXY_HTTP_URL and SSRF_PROXY_HTTPS_URL else None
BACKOFF_FACTOR = 0.5
STATUS_FORCELIST = [429, 500, 502, 503, 504]
def make_request(method, url, **kwargs):
if SSRF_PROXY_ALL_URL:
return httpx.request(method=method, url=url, proxy=SSRF_PROXY_ALL_URL, **kwargs)
elif proxies:
return httpx.request(method=method, url=url, proxies=proxies, **kwargs)
else:
return httpx.request(method=method, url=url, **kwargs)
def make_request(method, url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
if "allow_redirects" in kwargs:
allow_redirects = kwargs.pop("allow_redirects")
if "follow_redirects" not in kwargs:
kwargs["follow_redirects"] = allow_redirects
retries = 0
while retries <= max_retries:
try:
if SSRF_PROXY_ALL_URL:
response = httpx.request(method=method, url=url, proxy=SSRF_PROXY_ALL_URL, **kwargs)
elif proxies:
response = httpx.request(method=method, url=url, proxies=proxies, **kwargs)
else:
response = httpx.request(method=method, url=url, **kwargs)
if response.status_code not in STATUS_FORCELIST:
return response
else:
logging.warning(f"Received status code {response.status_code} for URL {url} which is in the force list")
except httpx.RequestError as e:
logging.warning(f"Request to URL {url} failed on attempt {retries + 1}: {e}")
retries += 1
if retries <= max_retries:
time.sleep(BACKOFF_FACTOR * (2 ** (retries - 1)))
raise Exception(f"Reached maximum retries ({max_retries}) for URL {url}")
def get(url, **kwargs):
return make_request('GET', url, **kwargs)
def get(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('GET', url, max_retries=max_retries, **kwargs)
def post(url, **kwargs):
return make_request('POST', url, **kwargs)
def post(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('POST', url, max_retries=max_retries, **kwargs)
def put(url, **kwargs):
return make_request('PUT', url, **kwargs)
def put(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('PUT', url, max_retries=max_retries, **kwargs)
def patch(url, **kwargs):
return make_request('PATCH', url, **kwargs)
def patch(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('PATCH', url, max_retries=max_retries, **kwargs)
def delete(url, **kwargs):
return make_request('DELETE', url, **kwargs)
def delete(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('DELETE', url, max_retries=max_retries, **kwargs)
def head(url, **kwargs):
return make_request('HEAD', url, **kwargs)
def head(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('HEAD', url, max_retries=max_retries, **kwargs)

View File
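A usage sketch of the retrying helpers above; the import path is assumed, and extra keyword arguments (such as `timeout`) are passed straight through to `httpx.request`.

```python
from core.helper import ssrf_proxy

response = ssrf_proxy.get('https://example.com/health', max_retries=5, timeout=10)
response.raise_for_status()
print(response.status_code)
```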

@@ -73,6 +73,8 @@ class HostingConfiguration:
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-4", base_model_name="gpt-4", model_type=ModelType.LLM),
RestrictModel(model="gpt-4o", base_model_name="gpt-4o", model_type=ModelType.LLM),
RestrictModel(model="gpt-4o-mini", base_model_name="gpt-4o-mini", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-32k", base_model_name="gpt-4-32k", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-1106-preview", base_model_name="gpt-4-1106-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-vision-preview", base_model_name="gpt-4-vision-preview", model_type=ModelType.LLM),

View File

@@ -12,6 +12,7 @@ from flask import Flask, current_app
from flask_login import current_user
from sqlalchemy.orm.exc import ObjectDeletedError
from configs import dify_config
from core.errors.error import ProviderTokenNotInitError
from core.llm_generator.llm_generator import LLMGenerator
from core.model_manager import ModelInstance, ModelManager
@@ -224,7 +225,7 @@ class IndexingRunner:
features = FeatureService.get_features(tenant_id)
if features.billing.enabled:
count = len(extract_settings)
batch_upload_limit = int(current_app.config['BATCH_UPLOAD_LIMIT'])
batch_upload_limit = dify_config.BATCH_UPLOAD_LIMIT
if count > batch_upload_limit:
raise ValueError(f"You have reached the batch upload limit of {batch_upload_limit}.")
@@ -427,7 +428,7 @@ class IndexingRunner:
# The user-defined segmentation rule
rules = json.loads(processing_rule.rules)
segmentation = rules["segmentation"]
max_segmentation_tokens_length = int(current_app.config['INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH'])
max_segmentation_tokens_length = dify_config.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH
if segmentation["max_tokens"] < 50 or segmentation["max_tokens"] > max_segmentation_tokens_length:
raise ValueError(f"Custom segment length should be between 50 and {max_segmentation_tokens_length}.")

View File

@@ -3,10 +3,13 @@ import logging
import re
from typing import Optional
from core.llm_generator.output_parser.errors import OutputParserException
from core.llm_generator.output_parser.rule_config_generator import RuleConfigGeneratorOutputParser
from core.llm_generator.output_parser.suggested_questions_after_answer import SuggestedQuestionsAfterAnswerOutputParser
from core.llm_generator.prompts import CONVERSATION_TITLE_PROMPT, GENERATOR_QA_PROMPT
from core.llm_generator.prompts import (
CONVERSATION_TITLE_PROMPT,
GENERATOR_QA_PROMPT,
WORKFLOW_RULE_CONFIG_PROMPT_GENERATE_TEMPLATE,
)
from core.model_manager import ModelManager
from core.model_runtime.entities.message_entities import SystemPromptMessage, UserPromptMessage
from core.model_runtime.entities.model_entities import ModelType
@@ -115,55 +118,158 @@ class LLMGenerator:
return questions
@classmethod
def generate_rule_config(cls, tenant_id: str, audiences: str, hoping_to_solve: str) -> dict:
def generate_rule_config(cls, tenant_id: str, instruction: str, model_config: dict, no_variable: bool, rule_config_max_tokens: int = 512) -> dict:
output_parser = RuleConfigGeneratorOutputParser()
error = ""
error_step = ""
rule_config = {
"prompt": "",
"variables": [],
"opening_statement": "",
"error": ""
}
model_parameters = {
"max_tokens": rule_config_max_tokens,
"temperature": 0.01
}
if no_variable:
prompt_template = PromptTemplateParser(
WORKFLOW_RULE_CONFIG_PROMPT_GENERATE_TEMPLATE
)
prompt_generate = prompt_template.format(
inputs={
"TASK_DESCRIPTION": instruction,
},
remove_template_variables=False
)
prompt_messages = [UserPromptMessage(content=prompt_generate)]
model_manager = ModelManager()
model_instance = model_manager.get_default_model_instance(
tenant_id=tenant_id,
model_type=ModelType.LLM,
)
try:
response = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=model_parameters,
stream=False
)
rule_config["prompt"] = response.message.content
except InvokeError as e:
error = str(e)
error_step = "generate rule config"
except Exception as e:
logging.exception(e)
rule_config["error"] = str(e)
rule_config["error"] = f"Failed to {error_step}. Error: {error}" if error else ""
return rule_config
# get rule config prompt, parameter and statement
prompt_generate, parameter_generate, statement_generate = output_parser.get_format_instructions()
prompt_template = PromptTemplateParser(
template=output_parser.get_format_instructions()
prompt_generate
)
prompt = prompt_template.format(
parameter_template = PromptTemplateParser(
parameter_generate
)
statement_template = PromptTemplateParser(
statement_generate
)
# format the prompt_generate_prompt
prompt_generate_prompt = prompt_template.format(
inputs={
"audiences": audiences,
"hoping_to_solve": hoping_to_solve,
"variable": "{{variable}}",
"lanA": "{{lanA}}",
"lanB": "{{lanB}}",
"topic": "{{topic}}"
"TASK_DESCRIPTION": instruction,
},
remove_template_variables=False
)
prompt_messages = [UserPromptMessage(content=prompt_generate_prompt)]
# get model instance
model_manager = ModelManager()
model_instance = model_manager.get_default_model_instance(
model_instance = model_manager.get_model_instance(
tenant_id=tenant_id,
model_type=ModelType.LLM,
provider=model_config.get("provider") if model_config else None,
model=model_config.get("name") if model_config else None,
)
prompt_messages = [UserPromptMessage(content=prompt)]
try:
response = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters={
"max_tokens": 512,
"temperature": 0
},
stream=False
)
try:
# the first step to generate the task prompt
prompt_content = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=model_parameters,
stream=False
)
except InvokeError as e:
error = str(e)
error_step = "generate prefix prompt"
rule_config["error"] = f"Failed to {error_step}. Error: {error}" if error else ""
return rule_config
rule_config["prompt"] = prompt_content.message.content
parameter_generate_prompt = parameter_template.format(
inputs={
"INPUT_TEXT": prompt_content.message.content,
},
remove_template_variables=False
)
parameter_messages = [UserPromptMessage(content=parameter_generate_prompt)]
# the second step to generate the task_parameter and task_statement
statement_generate_prompt = statement_template.format(
inputs={
"TASK_DESCRIPTION": instruction,
"INPUT_TEXT": prompt_content.message.content,
},
remove_template_variables=False
)
statement_messages = [UserPromptMessage(content=statement_generate_prompt)]
try:
parameter_content = model_instance.invoke_llm(
prompt_messages=parameter_messages,
model_parameters=model_parameters,
stream=False
)
rule_config["variables"] = re.findall(r'"\s*([^"]+)\s*"', parameter_content.message.content)
except InvokeError as e:
error = str(e)
error_step = "generate variables"
try:
statement_content = model_instance.invoke_llm(
prompt_messages=statement_messages,
model_parameters=model_parameters,
stream=False
)
rule_config["opening_statement"] = statement_content.message.content
except InvokeError as e:
error = str(e)
error_step = "generate conversation opener"
rule_config = output_parser.parse(response.message.content)
except InvokeError as e:
raise e
except OutputParserException:
raise ValueError('Please give a valid input for intended audience or hoping to solve problems.')
except Exception as e:
logging.exception(e)
rule_config = {
"prompt": "",
"variables": [],
"opening_statement": ""
}
rule_config["error"] = str(e)
rule_config["error"] = f"Failed to {error_step}. Error: {error}" if error else ""
return rule_config

View File
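A quick check of the variable-extraction step above: the regex pulls quoted names out of the model's list-style output, so a malformed reply simply yields an empty list.

```python
import re

parameter_content = '["Input_language", "Target_language"]'
print(re.findall(r'"\s*([^"]+)\s*"', parameter_content))    # ['Input_language', 'Target_language']
print(re.findall(r'"\s*([^"]+)\s*"', 'no variables here'))  # []
```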

@@ -1,14 +1,18 @@
from typing import Any
from core.llm_generator.output_parser.errors import OutputParserException
from core.llm_generator.prompts import RULE_CONFIG_GENERATE_TEMPLATE
from core.llm_generator.prompts import (
RULE_CONFIG_PARAMETER_GENERATE_TEMPLATE,
RULE_CONFIG_PROMPT_GENERATE_TEMPLATE,
RULE_CONFIG_STATEMENT_GENERATE_TEMPLATE,
)
from libs.json_in_md_parser import parse_and_check_json_markdown
class RuleConfigGeneratorOutputParser:
def get_format_instructions(self) -> str:
return RULE_CONFIG_GENERATE_TEMPLATE
def get_format_instructions(self) -> tuple[str, str, str]:
return RULE_CONFIG_PROMPT_GENERATE_TEMPLATE, RULE_CONFIG_PARAMETER_GENERATE_TEMPLATE, RULE_CONFIG_STATEMENT_GENERATE_TEMPLATE
def parse(self, text: str) -> Any:
try:

View File

@@ -81,65 +81,73 @@ GENERATOR_QA_PROMPT = (
'<QA Pairs>'
)
RULE_CONFIG_GENERATE_TEMPLATE = """Given MY INTENDED AUDIENCES and HOPING TO SOLVE using a language model, please select \
the model prompt that best suits the input.
You will be provided with the prompt, variables, and an opening statement.
Only the content enclosed in double curly braces, such as {{variable}}, in the prompt can be considered as a variable; \
otherwise, it cannot exist as a variable in the variables.
If you believe revising the original input will result in a better response from the language model, you may \
suggest revisions.
WORKFLOW_RULE_CONFIG_PROMPT_GENERATE_TEMPLATE = """
Here is a task description for which I would like you to create a high-quality prompt template for:
<task_description>
{{TASK_DESCRIPTION}}
</task_description>
Based on the task description, please create a well-structured prompt template that another AI could use to consistently complete the task. The prompt template should include:
- Do not include <input> or <output> sections or variables in the prompt; assume the user will add them at their own will.
- Clear instructions for the AI that will be using this prompt, demarcated with <instructions> tags. The instructions should provide step-by-step directions on how to complete the task using the input variables. Also specify in the instructions that the output should not contain any XML tags.
- Relevant examples if needed to clarify the task further, demarcated with <example> tags. Do not include variables in the prompt. Give three pairs of input and output examples.
- Include other relevant sections demarcated with appropriate XML tags like <examples>, <instructions>.
- Use the same language as task description.
- Output in ``` xml ``` and start with <instruction>
Please generate the full prompt template with at least 300 words and output only the prompt template.
"""
<<PRINCIPLES OF GOOD PROMPT>>
Integrate the intended audience in the prompt e.g. the audience is an expert in the field.
Break down complex tasks into a sequence of simpler prompts in an interactive conversation.
Implement example-driven prompting (Use few-shot prompting).
When formatting your prompt start with Instruction followed by either Example if relevant. \
Subsequently present your content. Use one or more line breaks to separate instructions examples questions context and input data.
Incorporate the following phrases: “Your task is” and “You MUST”.
Incorporate the following phrases: “You will be penalized”.
Use leading words like writing “think step by step”.
Add to your prompt the following phrase “Ensure that your answer is unbiased and does not rely on stereotypes”.
Assign a role to the large language models.
Use Delimiters.
To write an essay /text /paragraph /article or any type of text that should be detailed: “Write a detailed [essay/text/paragraph] for me on [topic] in detail by adding all the information necessary”.
Clearly state the requirements that the model must follow in order to produce content in the form of the keywords regulations hint or instructions
RULE_CONFIG_PROMPT_GENERATE_TEMPLATE = """
Here is a task description for which I would like you to create a high-quality prompt template for:
<task_description>
{{TASK_DESCRIPTION}}
</task_description>
Based on the task description, please create a well-structured prompt template that another AI could use to consistently complete the task. The prompt template should include:
- Descriptive variable names surrounded by {{ }} (two curly brackets) to indicate where the actual values will be substituted in. Choose variable names that clearly indicate the type of value expected. Variable names must be composed of numbers, English letters, and underscores, and nothing else.
- Clear instructions for the AI that will be using this prompt, demarcated with <instructions> tags. The instructions should provide step-by-step directions on how to complete the task using the input variables. Also specify in the instructions that the output should not contain any XML tags.
- Relevant examples if needed to clarify the task further, demarcated with <example> tags. Do not use curly brackets anywhere other than in the <instruction> section.
- Any other relevant sections demarcated with appropriate XML tags like <input>, <output>, etc.
- Use the same language as task description.
- Output in ``` xml ``` and start with <instruction>
Please generate the full prompt template and output only the prompt template.
"""
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like, \
no any other string out of markdown code snippet:
```json
{{{{
"prompt": string \\ generated prompt
"variables": list of string \\ variables
"opening_statement": string \\ an opening statement to guide users on how to ask questions with generated prompt \
and fill in variables, with a welcome sentence, and keep TLDR.
}}}}
```
RULE_CONFIG_PARAMETER_GENERATE_TEMPLATE = """
I need to extract the following information from the input text. The <information to be extracted> tag specifies the 'type', 'description' and 'required' of the information to be extracted.
<information to be extracted>
Variable names bounded by two double curly brackets. Variable names must be composed of numbers, English letters, and underscores, and nothing else.
</information to be extracted>
<< EXAMPLES >>
[EXAMPLE A]
```json
{
"prompt": "I need your help to translate the following {{Input_language}}paper paragraph into {{Target_language}}, in a style similar to a popular science magazine in {{Target_language}}. #### Rules Ensure accurate conveyance of the original text's facts and context during translation. Maintain the original paragraph format and retain technical terms and company abbreviations ",
"variables": ["Input_language", "Target_language"],
"opening_statement": " Hi. I am your translation assistant. I can help you with any translation and ensure accurate conveyance of information. "
}
```
Step 1: Carefully read the input and understand the structure of the expected output.
Step 2: Extract relevant parameters from the provided text based on the name and description of object.
Step 3: Structure the extracted parameters to JSON object as specified in <structure>.
Step 4: Ensure that the list of variable_names is properly formatted and valid. The output should not contain any XML tags. Output an empty list if there is no valid variable name in input text.
[EXAMPLE B]
```json
{
"prompt": "Your task is to review the provided meeting notes and create a concise summary that captures the essential information, focusing on key takeaways and action items assigned to specific individuals or departments during the meeting. Use clear and professional language, and organize the summary in a logical manner using appropriate formatting such as headings, subheadings, and bullet points. Ensure that the summary is easy to understand and provides a comprehensive but succinct overview of the meeting's content, with a particular focus on clearly indicating who is responsible for each action item.",
"variables": ["meeting_notes"],
"opening_statement": "Hi! I'm your meeting notes summarizer AI. I can help you with any meeting notes and ensure accurate conveyance of information."
}
```
### Structure
Here is the structure of the expected output; I should always follow this output structure.
["variable_name_1", "variable_name_2"]
<< MY INTENDED AUDIENCES >>
{{audiences}}
### Input Text
Inside <text></text> XML tags, there is a text that I should extract parameters and convert to a JSON object.
<text>
{{INPUT_TEXT}}
</text>
<< HOPING TO SOLVE >>
{{hoping_to_solve}}
### Answer
I should always output a valid list. Output nothing other than the list of variable names. Output an empty list if there is no variable name in the input text.
"""
<< OUTPUT >>
"""
RULE_CONFIG_STATEMENT_GENERATE_TEMPLATE = """
<instruction>
Step 1: Identify the purpose of the chatbot from the variable {{TASK_DESCRIPTION}} and infer chatbot's tone (e.g., friendly, professional, etc.) to add personality traits.
Step 2: Create a coherent and engaging opening statement.
Step 3: Ensure the output is welcoming and clearly explains what the chatbot is designed to do. Do not include any XML tags in the output.
Please use the same language as the user's input. If the user writes in Chinese, generate the opening statement in Chinese; if the user writes in English, generate it in English.
Example Input:
Provide customer support for an e-commerce website
Example Output:
Welcome! I'm here to assist you with any questions or issues you might have with your shopping experience. Whether you're looking for product information, need help with your order, or have any other inquiries, feel free to ask. I'm friendly, helpful, and ready to support you in any way I can.
<Task>
Here is the task description: {{INPUT_TEXT}}
You just need to generate the output
"""

View File

@@ -410,7 +410,7 @@ class LBModelManager:
self._model = model
self._load_balancing_configs = load_balancing_configs
for load_balancing_config in self._load_balancing_configs:
for load_balancing_config in self._load_balancing_configs[:]: # Iterate over a shallow copy of the list
if load_balancing_config.name == "__inherit__":
if not managed_credentials:
# remove __inherit__ if managed credentials is not provided

View File

@@ -86,6 +86,9 @@
- `agent-thought` Agent reasoning; generally models over 70B have chain-of-thought capability.
- `vision` Vision, i.e., image understanding.
- `tool-call`
- `multi-tool-call`
- `stream-tool-call`
### FetchFrom

View File

@@ -87,6 +87,9 @@
- `agent-thought` Agent reasoning; generally models over 70B have chain-of-thought capability.
- `vision` Vision, i.e., image understanding.
- `tool-call` Tool calling
- `multi-tool-call` Multiple tool calling
- `stream-tool-call` Streaming tool calling
### FetchFrom

View File

@@ -162,7 +162,7 @@ class AIModel(ABC):
# traverse all model_schema_yaml_paths
for model_schema_yaml_path in model_schema_yaml_paths:
# read yaml data from yaml file
yaml_data = load_yaml_file(model_schema_yaml_path, ignore_error=True)
yaml_data = load_yaml_file(model_schema_yaml_path)
new_parameter_rules = []
for parameter_rule in yaml_data.get('parameter_rules', []):

Some files were not shown because too many files have changed in this diff.