Compare commits


201 Commits

Author SHA1 Message Date
takatost
12c815c597 fix: ExtractSetting optional value missing None as default val (#5238) 2024-06-15 02:58:47 +08:00
takatost
d098bdc59b version to 0.6.11 (#5224) 2024-06-15 02:46:24 +08:00
Jyong
ba5f8afaa8 Feat/firecrawl data source (#5232)
Co-authored-by: Nicolas <nicolascamara29@gmail.com>
Co-authored-by: chenhe <guchenhe@gmail.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-06-15 02:46:02 +08:00
Chenhe Gu
918ebe1620 update tooltip (#5235) 2024-06-15 02:21:46 +08:00
zxhlyh
6be0027853 fix: note editor italic (#5230) 2024-06-14 22:31:39 +08:00
crazywoola
bc757f1ddc fix: z-index (#5229) 2024-06-14 22:31:19 +08:00
takatost
8da035aac6 Update README.md (#5228) 2024-06-14 22:31:01 +08:00
kurokobo
ef6034abfd fix: allow the name and icon of the web app to be set independently of that of the bot itself (#5225) 2024-06-14 22:16:11 +08:00
kurokobo
0391282b5e fix: initialize site with customized icon and icon_background (#5227) 2024-06-14 22:15:50 +08:00
Joel
28554350de feat: support firecrawl frontend code (#5226) 2024-06-14 22:02:41 +08:00
走在修行的大街上
8d1386df0f feat(Tools): Add Feishu multi-dimensional table operation function (#5213)
Co-authored-by: 黎斌 <libin.23@bytedance.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-06-14 21:19:20 +08:00
Bowen Liang
e7752e8135 chore: development script for syncing Poetry lockfile (#5170) 2024-06-14 20:54:07 +08:00
DomKing
43c19007e0 fix: workspace member's last_active should be last_active_time, but not last_login_time (#4906) 2024-06-14 20:49:19 +08:00
rerorero
c6b791d070 fix: number variable cause type error in openai moderation (#5222) 2024-06-14 20:43:03 +08:00
Charles Zhou
8bcc5a36bb feat: new editor user permission profile (#4435)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
2024-06-14 20:34:25 +08:00
th3n00b13
cdb6c801c1 Fix: http_request delete method not working (#4975) 2024-06-14 20:07:22 +08:00
Winson Li
511ead4b8d Update README, deploy dify with YAML file on Kubernetes (#5131) 2024-06-14 19:53:40 +08:00
quicksand
4080f7b8ad feat: support tencent vector db (#3568) 2024-06-14 19:25:17 +08:00
gongzhongqiang
9ed21737d5 fix: add repo check for build-push.yml (#5141) 2024-06-14 19:15:27 +08:00
Jaxon Ley
337bad8525 feat: Add Optional API Key, Proxy Server, and Bypass Cache Parameters to Jina Tools (#5197) 2024-06-14 19:09:25 +08:00
Bin
0f35d07052 support ERNIE-4.0-8K-Latest (#5216) 2024-06-14 18:45:24 +08:00
-LAN-
7f44e88eda fix(model_providers/ollama): Fix OllamaLargeLanguageModel to correctly set the stop option (#5217) 2024-06-14 18:26:14 +08:00
Jason
b7ff765d8d Add novita.ai as model provider (#4961) 2024-06-14 18:23:06 +08:00
zxhlyh
c28d709d7f feat: workflow add note node (#5164) 2024-06-14 17:08:11 +08:00
Jyong
d7fbae286a add aws s3 iam check (#5174) 2024-06-14 15:19:59 +08:00
Masashi Tomooka
0633aae7dc feat: allow to use IAM Role for Bedrock (#5188) 2024-06-14 15:18:42 +08:00
doufa
f87f11e92c chore: make the Celery command more noticeable (#5203) 2024-06-14 15:06:07 +08:00
Bowen Liang
2b04388361 chore: remove bump-pydantic dependency (#5177) 2024-06-14 15:05:17 +08:00
takatost
3c0f21d174 fix: workflow as tool create error by type misuse (#5205) 2024-06-14 15:01:09 +08:00
Hanqing Zhao
8e2f8ffb9e Modify docs in JP (#5185) 2024-06-14 14:06:23 +08:00
KVOJJJin
e68d1b88de Fix: conversation id display & support copy (#5195) 2024-06-14 13:58:51 +08:00
-LAN-
ed53ef29f4 fix(core/tools): Fix the issue with iterating over None in _transform_tool_parameters_type. (#5190) 2024-06-14 11:25:48 +08:00
KVOJJJin
4289f17be2 Chore: refactor embedded chatbot (#5125) 2024-06-14 08:42:41 +08:00
dependabot[bot]
54e02b8147 chore(deps): bump authlib from 1.2.0 to 1.3.1 in /api (#5115)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-06-14 03:55:40 +08:00
Summer-Gu
7f98c2ea3f refactor: Delete the dataset to verify whether it is in use (#5112) 2024-06-14 03:25:38 +08:00
dependabot[bot]
7189a4c379 chore(deps): bump azure-identity from 1.15.0 to 1.16.1 in /api (#5116)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: takatost <takatost@gmail.com>
2024-06-14 03:24:32 +08:00
takatost
415022aa14 fix: pydantic2 error (#5172) 2024-06-14 03:05:04 +08:00
saga.rey
edf2047f04 fix: milvus_vector default dataset index_struct type from weaviate to milvus (#5098) 2024-06-14 02:36:01 +08:00
rerorero
b85ae146a7 fix: JSON mode with an image doesn't work for Gemini (#5169) 2024-06-14 02:32:09 +08:00
takatost
5ec7d85629 fix: issues by pydantic2 upgrade (#5171) 2024-06-14 02:28:28 +08:00
Pan, Wen-Ming
f13af5a811 fix(model_providers/vertex_ai): Vertex AI Anthropic models authentication failed (#4971) 2024-06-14 01:34:31 +08:00
Bowen Liang
f976740b57 improve: mordernizing validation by migrating pydantic from 1.x to 2.x (#4592) 2024-06-14 01:05:37 +08:00
Yeuoly
e8afc416dd improve: CI experience (#5168) 2024-06-13 23:16:28 +08:00
Yeuoly
0cccf9c67d feat: introduce APP_MAX_EXECUTION_TIME (#5167) 2024-06-13 23:08:05 +08:00
Bowen Liang
cdc08a434f feat: support Chroma vector store (#5015) 2024-06-13 18:02:18 +08:00
Kazuki Takamatsu
3f18369ad2 Fix: google storage init with sa and download (#5054) 2024-06-13 17:36:34 +08:00
Kazuki Hasegawa
db976a1f74 Upgrade boto3 library to support EKS Pod Identity. (#5064) 2024-06-13 17:36:14 +08:00
kurokobo
e61f5d029a chore(docs): fix minor small typos (#5124) 2024-06-13 17:36:01 +08:00
Charles Zhou
eaca892c4e fix: front end error when same tool is called twice at once (#5068) 2024-06-13 17:16:59 +08:00
非法操作
015c26d303 fix: style misalignment and inconsistency (#5149) 2024-06-13 16:32:42 +08:00
sino
8210637bc5 feat: support jina-clip-v1 embedding model (#5146) 2024-06-13 16:31:18 +08:00
呆萌闷油瓶
790543131a chore:add some new api version for azure openai (#5142) 2024-06-13 16:30:47 +08:00
Ikko Eltociear Ashimine
a40f68cf94 chore: update qdrant_vector.py (#5128) 2024-06-13 15:35:14 +08:00
yanghx
adc948e87c fix(api/core/model_runtime/model_providers/baichuan,localai): Parse ToolPromptMessage. #4943 (#5138)
Co-authored-by: -LAN- <laipz8200@outlook.com>
2024-06-13 13:08:30 +08:00
smoky
742b08e1d5 chore: update question classifier prompt (#5137)
Signed-off-by: 0xff-dev <stevenshuang521@gmail.com>
2024-06-13 13:04:51 +08:00
orangeclk
79e8489942 feat: support siliconflow (#5129) 2024-06-13 12:59:41 +08:00
Charlie.Wei
d6fa130cb5 remove dalle3 seed (#5136)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-06-13 08:05:55 +08:00
takatost
0c92f81efc chore: sync pyproject.toml from requirements.txt (#5130) 2024-06-13 00:06:05 +08:00
fishisnow
11fd4a5dcc Fix: fix load_yaml logging, Avoid setting the log level to warning (#5019)
Co-authored-by: huangyusong <huangyusong@yingzi.com>
2024-06-12 19:27:01 +08:00
Harry Wang
b399e8a359 fixed a typo and grammar error in sampled app (#5061) 2024-06-12 18:02:22 +08:00
非法操作
e04fc9b304 fix: select field not work when it is not required (#5101) 2024-06-12 17:46:53 +08:00
xielong
ea69dc2a7e feat: support hunyuan llm models (#5013)
Co-authored-by: takatost <takatost@users.noreply.github.com>
Co-authored-by: Bowen Liang <bowenliang@apache.org>
2024-06-12 17:24:23 +08:00
Pika
ecc7f130b4 fix(typo): misspelling (#5094) 2024-06-12 17:01:21 +08:00
zxhlyh
95443bd551 chore: workflow syncing modal (#5108) 2024-06-12 16:35:19 +08:00
sino
0ce97e6315 feat: support doubao llm function calling (#5100) 2024-06-12 15:43:50 +08:00
Bowen Liang
25b0a97851 build: use Poetry as default build system for dependency installation in CI jobs (#5088) 2024-06-12 14:43:03 +08:00
rerorero
28997772a5 fix: remote_url doesn't work for gemini (#5090) 2024-06-12 13:14:53 +08:00
Charlie.Wei
b7c72f7a97 dalle3 add style consistency parameter (#5067)
Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2024-06-12 12:59:03 +08:00
Joel
9f7b38c068 fix: #4970 (#5093) 2024-06-12 11:29:38 +08:00
非法操作
3b36ba797f feat: add duckduckgo img search, translate, ai chat (#5074) 2024-06-12 10:04:10 +08:00
ugyuji
4d2e6c3391 fix: Google HL Parameter for SearchApi (#5071) 2024-06-12 08:30:01 +08:00
Nam Vu
3520d35f38 fix: autoHeightTextarea dimensions in Firefox (#4891) 2024-06-12 08:24:58 +08:00
KVOJJJin
5f104bab57 Fix: infinite loading not work when message is too short (#5075) 2024-06-12 08:23:39 +08:00
orangeclk
2050a8b8f0 feat: add glm4 new models and zhipu embedding-2 (#5089) 2024-06-12 08:22:17 +08:00
takatost
e3544c6ef7 fix: dependency package versions are not synchronized to requirements.txt (#5084) 2024-06-11 22:21:18 +08:00
ra2230
472b976946 Arabic README.md (#5078) 2024-06-11 18:43:54 +08:00
Matri
f62f71a81a build: initial support for poetry build tool (#4513)
Co-authored-by: Bowen Liang <bowenliang@apache.org>
2024-06-11 13:11:28 +08:00
Takuya Ono
f426e1b3bd 🔧 Fix(docker/volumes/ssrf_proxy/squid.conf): The squid process on ssrf_proxy docker service crashes at startup (#5050) 2024-06-11 12:32:05 +08:00
sino
5f870ac950 chore: update maas model provider description (#5056) 2024-06-11 11:22:22 +08:00
Pascal M
415816cf35 feat: add dataset delete endpoint (#5048) 2024-06-11 11:21:38 +08:00
Pascal M
9103112555 fix: wrong link to web app repo in chatflow mode (#5062) 2024-06-11 11:20:52 +08:00
takatost
5986841e27 fix: issue where an error occurs when invoking TTS without selecting a voice (#5046) 2024-06-09 20:28:24 +08:00
Jaxon Ley
2573b138bf fix: update presence_penalty configuration for wenxin AI ernie-4.0-8k and ernie-3.5-8k models (#5039) 2024-06-09 14:44:11 +08:00
Takuya Ono
308ce66af5 🔧 fix docker-compose ssrf_proxy service WARNING: You should probably remove '::/0' from the ACL named 'all' (#5005) 2024-06-09 14:39:52 +08:00
Bowen Liang
bdad993901 improve: generalize vector factory classes and vector type (#5033) 2024-06-08 22:29:24 +08:00
zxhlyh
3b62ab564a feat: feature modal style (#5032) 2024-06-08 07:32:34 +08:00
Pika
d319d9fc5e fix(style): some style issues (#5029) 2024-06-07 20:59:39 +08:00
doufa
ea5c8a72e2 Fix language setting not success (#5023) 2024-06-07 20:02:08 +08:00
Jyong
3b60c28b3a deal the external image when extract docx image (#5024) 2024-06-07 20:00:39 +08:00
KVOJJJin
ea0219a5d5 Fix: z-index in header (#5017) 2024-06-07 16:01:33 +08:00
Jyong
481e7bc6b9 Fix/azure blob new version (#5004) 2024-06-06 23:36:13 +08:00
Charles Zhou
1ccba85c91 fix: modal z-index and cleanup (#4978) 2024-06-06 22:28:13 +08:00
doufa
2539e56514 fix: some base models cannot be selected in Azure OpenAI Service setting page (#4985) 2024-06-06 22:27:57 +08:00
takatost
3929d289e0 feat: set default memory messages limit to infinite (#5002) 2024-06-06 17:39:44 +08:00
Yeuoly
52585aea74 fix: typo in sd3 (#5000) 2024-06-06 17:08:49 +08:00
Yeuoly
73dee84cab fix: add handling for non-string type in variable template parser (#4996) 2024-06-06 16:38:13 +08:00
Joel
efecdccf35 feat: support login by given mail (#4991) 2024-06-06 15:01:58 +08:00
Joel
da5f2e168a fix: llm selector position is incorrect in not workflow app (#4982) 2024-06-06 10:47:36 +08:00
Joe
5cdb95be1f fix: gemini timeout error (#4955) 2024-06-06 10:19:03 +08:00
Bowen Liang
7fa735a43b chore: rename vdb tests for PGVector and PGvectoRS (#4973) 2024-06-06 07:22:49 +08:00
takatost
3579fd1b09 feat: add create tenant command (#4974) 2024-06-06 00:42:00 +08:00
Jyong
237b8fe3d9 add meta.doc_id index for tidb (#4963) 2024-06-05 20:45:43 +08:00
Jyong
02e4de5166 fix some tidb bugs (#4960) 2024-06-05 19:14:18 +08:00
Mark Sun
64c8093c1e Typo in Knowledge settings (#4958) 2024-06-05 18:31:24 +08:00
Weaxs
0797f9bc05 feat: support tidb vector (#4588) 2024-06-05 18:19:53 +08:00
非法操作
602c4e51ec fix: duckduckgo search does not work (#4949)
Co-authored-by: Jyong <76649700+johnjyong@users.noreply.github.com>
2024-06-05 17:33:58 +08:00
YC
9f8ca75a81 fixing a bug of handling header row when parsing xls file, and tune xls/xlsx parsing result to be more structured (#3600) 2024-06-05 15:28:43 +08:00
Yeuoly
80a87f36ea fix: missing iterator in task pipeline (#4948) 2024-06-05 15:10:20 +08:00
Charles Zhou
63addc9258 fix: missing dataset patch parameters in settings modal (#4901) 2024-06-05 14:21:59 +08:00
Bowen Liang
f32b440c4a chore: fix indention violations by applying E111 to E117 ruff rules (#4925) 2024-06-05 14:05:15 +08:00
crazywoola
6b6afb7708 fix: import error in web/app/components/header/account-setting/model-provider-page/declarations.ts (#4944) 2024-06-05 14:01:12 +08:00
zxhlyh
a4041cb40b fix: end node limit in next step (#4945) 2024-06-05 14:00:47 +08:00
JasonVV
7749b71fff Optimize knowledge retrieval performance by batching dataset quries. (#4917) 2024-06-05 13:30:32 +08:00
Joel
3006124e6d feat: pricing page add llm load balancing info (#4942) 2024-06-05 11:31:44 +08:00
Harry Wang
3d276f4a7f change "Import from text file" to "Import from file" (#4935) 2024-06-05 09:29:29 +08:00
takatost
b20d173324 pref: optimize feature model_load_balancing_enabled value fetch speed… (#4933) 2024-06-05 02:06:19 +08:00
takatost
f44d1e62d2 fix: bedrock get_num_tokens prompt_messages parameter name err (#4932) 2024-06-05 01:53:05 +08:00
takatost
21ac2afb3a fix: question classifier instruction npe (#4931) 2024-06-05 01:27:58 +08:00
takatost
f7dd327bc2 version to 0.6.10 (#4929) 2024-06-05 01:12:20 +08:00
takatost
09298a32e7 fix: vanna CVE-2024-5565 by disable visualize of ask func (#4930) 2024-06-05 00:46:22 +08:00
Nite Knite
37f292ea91 feat: model load balancing (#4926) 2024-06-05 00:13:29 +08:00
takatost
d1dbbc1e33 feat: backend model load balancing support (#4927) 2024-06-05 00:13:04 +08:00
Yeuoly
52ec152dd3 fix: incorrect parameters transforming while validating (#4928) 2024-06-05 00:01:30 +08:00
Jyong
c7bddb637b support instruction in classifier node (#4913) 2024-06-04 20:07:54 +08:00
Jyong
4e3b0c5aea support rename document (#4915) 2024-06-04 20:07:40 +08:00
Jyong
b6631cd878 modify rerank and splitter code directory (#4924) 2024-06-04 20:07:25 +08:00
Nam Vu
c212700341 fix: router replace in Explore page (#4918) 2024-06-04 19:41:54 +08:00
非法操作
e121788ff5 chore: make the error msg more clear when validate app token (#4919)
Co-authored-by: Jyong <76649700+johnjyong@users.noreply.github.com>
2024-06-04 18:04:10 +08:00
Joel
96460d5ea3 feat: document support rename in in dataset (#4732) 2024-06-04 15:10:34 +08:00
Jyong
9cf9720efa Fix/azure blob token expire (#4914) 2024-06-04 14:30:23 +08:00
Henry Lu
2d9f55b632 feat: Add Vanna.AI as a builtin tool (#4878)
Co-authored-by: Yeuoly <admin@srmxy.cn>
2024-06-04 14:05:29 +08:00
非法操作
7133a16511 chore: refactor the serpapi's google search tool (#4834) 2024-06-04 14:05:05 +08:00
Joel
a38dfc006e feat: question classify node support use var in instruction (#4710) 2024-06-04 14:01:40 +08:00
Moonlit
86e7c7321f Fixed a bug where any content in the 'fetch' was converted to True (#4400) 2024-06-04 13:27:23 +08:00
Bowen Liang
58db719a2c dep: bump pandas from 1.x to 2.x (#4820) 2024-06-04 13:24:28 +08:00
doufa
9abeb99b32 chore: modify tools/JinaReader label to Jina (#4908) 2024-06-04 13:21:05 +08:00
Jyong
d828a7fc35 fix azure blob token expire (#4911) 2024-06-04 13:04:56 +08:00
Ikko Eltociear Ashimine
c6f9ea4434 chore: update page.tsx (#4897) 2024-06-04 10:19:49 +08:00
Bowen Liang
fb6843815c chore: separate style checks into multiple jobs triggering on file changes (#4876) 2024-06-04 03:03:18 +08:00
dependabot[bot]
b97181a793 chore(deps): bump azure-storage-blob from 12.9.0 to 12.13.0 in /api (#4695)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-04 02:57:33 +08:00
Bowen Liang
5d15aca85f chore: remove unused code and class in text splitter (#4864) 2024-06-04 02:54:09 +08:00
Pan, Wen-Ming
b98a1a3303 feat: added Anthropic Claude3 models to Google Cloud Vertex AI (#4870)
Co-authored-by: pwm <pwm@google.com>
2024-06-04 02:52:46 +08:00
takatost
696c5308a9 chore: optimize nvidia nim credential schema and info (#4898) 2024-06-04 02:26:26 +08:00
Bowen Liang
3542d55e67 improve: generalize tool parameter converter (#4786) 2024-06-03 21:26:58 +08:00
Joshua
3c8a120e51 add-nvidia-mim (#4882) 2024-06-03 21:10:18 +08:00
Bowen Liang
cd24308f20 chore: add issue link tempate for IDEA (#4866) 2024-06-03 13:39:54 +08:00
Joel
69190e088e fix: update npm version to fix Incorrect argument types in createChatMessage (#4865) 2024-06-03 08:22:27 +08:00
Charlie.Wei
d058a234ba Fixed workflow tts feature audition (#4867) 2024-06-03 00:22:14 +08:00
Kishida Takashi
41e536109b fix: Incorrect argument types in createChatMessage (#4861) 2024-06-02 21:03:42 +08:00
Yeuoly
f916aa0f92 chore: upgrade sandbox (#4839) 2024-06-02 11:30:14 +08:00
Pan, Wen-Ming
cdbc260571 Bugfix: Vertex AI vision model not support image (#4853) 2024-06-02 11:11:09 +08:00
Bowen Liang
b234710af9 chore: fix invalid escape sequences by applying W605 rule (#4851) 2024-06-02 10:02:37 +08:00
Bowen Liang
23498883d4 chore: skip explicit installing jinja2 as testing dependency (#4845) 2024-06-02 09:49:20 +08:00
Bowen Liang
a47e8d0da2 test: CI test for db migration scripts on changes (#4739) 2024-05-31 16:45:34 +08:00
Bowen Liang
6dd0e07af8 test: triggering tests on changes and allow cancelling in-progress CI test jobs (#4743) 2024-05-31 16:42:14 +08:00
doufa
b1c9671a60 fix: status query not stop when leaving document embedding detail page (#4754) 2024-05-31 16:07:48 +08:00
Yeuoly
7aaa1ff270 chore: increase workflow max steps to 500 (#4835) 2024-05-31 15:16:35 +08:00
Yeuoly
85698ca4f7 chore: cleanup tools, remove useless code (#4833) 2024-05-31 14:19:59 +08:00
Oliver Lee
176d91937d fix 'NoneType' and new ContentType supported. (#4818) 2024-05-31 14:19:33 +08:00
Yash Parmar
e0da0744b5 add: ollama keep alive parameter added. issue #4024 (#4655) 2024-05-31 12:22:02 +08:00
zxhlyh
0b4902bdc2 fix: workflow app run (#4831) 2024-05-31 12:15:25 +08:00
xielong
e9904e66e6 chore: Enable case-insensitive search for large models (#4817) 2024-05-31 08:55:37 +08:00
crazywoola
3de8e8fd6a Feat/i18n workflow (#4819) 2024-05-30 21:03:32 +08:00
DomKing
38a470a873 fix: app_count of dataset is error when apps was deleted (#4810) 2024-05-30 19:23:46 +08:00
Whitewater
4308a79e89 fix: revision styles for workflow (#4087) 2024-05-30 19:10:14 +08:00
Krasus.Chen
93d3350c8c update sd-webui api parameters to v1.9.3 (#4798)
Co-authored-by: Your Name <chen@krasus.red>
2024-05-30 19:04:47 +08:00
Lesenelir
615c009c42 fix: remove redundant props (#4787) 2024-05-30 18:58:08 +08:00
Charles Zhou
a325a294bd feat: opportunistic tls flag for smtp (#4794) 2024-05-30 18:56:46 +08:00
zxhlyh
4b91383efc feat: workflow variable aggregator support group (#4811)
Co-authored-by: Yeuoly <admin@srmxy.cn>
2024-05-30 18:54:58 +08:00
paragonnov
18ab63bd37 add: i18n: update korean (#4813) 2024-05-30 17:40:35 +08:00
Joel
a7fb1ffcd8 feat: show more usage info in billing page (#4808) 2024-05-30 16:15:38 +08:00
Joel
11f173693b fix: some filed in model param selector has no left spacing (#4803) 2024-05-30 14:49:41 +08:00
zxhlyh
5b2cd8d03a chore: node help link (#4795) 2024-05-30 14:24:53 +08:00
SebastjanPrachovskij
b10e67be3b Add SearchApi tools (#4648) 2024-05-30 11:11:17 +08:00
Joel
d41c077fac chore: improve node user experience (#4792) 2024-05-30 10:53:02 +08:00
Joel
3175a2c76a fix: in tool and http node of iteration can not show item var correctly (#4791) 2024-05-30 10:40:27 +08:00
Kota-Yamaguchi
3b60b712ec feat: Add logging warning when MAIL_TYPE is not set (#4771) 2024-05-29 18:06:16 +08:00
zeroameli
afed3610fc fix organize agent's history messages without recalculating tokens (#4324)
Co-authored-by: chenyongzhao <chenyz@mama.cn>
2024-05-29 15:25:20 +08:00
Yeuoly
74f38eacda feat: support define tags in tool yaml (#4763) 2024-05-29 15:19:14 +08:00
Weaxs
b189faca52 feat: update ernie model (#4756) 2024-05-29 14:57:23 +08:00
Yeuoly
d4cd6149ac fix: incorrect workflow max call depth (#4759) 2024-05-29 14:52:28 +08:00
xielong
e1cd9aef8f feat: support baichuan3 turbo, baichuan3 turbo 128k, and baichuan4 (#4762) 2024-05-29 14:46:04 +08:00
Yeuoly
ba37275503 fix: confusing chart description (#4760) 2024-05-29 14:36:33 +08:00
非法操作
e01b44af61 style: fix annotation panel display misalignment (#4750) 2024-05-29 14:23:44 +08:00
majian
72a90074bc Add WORKFLOW_CALL_MAX_DEPTH env var. (#4713) 2024-05-29 13:39:11 +08:00
crazywoola
705a6e3a8e Fix/4742 ollama num gpu option not consistent with allowed values (#4751) 2024-05-29 13:33:35 +08:00
非法操作
f4a240d225 style: the 'all' of add tool panel should contain workflow tools (#4755) 2024-05-29 13:04:23 +08:00
xielong
793f0c1dd6 fix: Corrected schema link in model_runtime's README.md (#4757) 2024-05-29 13:03:21 +08:00
Charles Zhou
008edd0eeb fix: optimize sticky header styles z-index in tools - ProviderList component (#4746) 2024-05-29 08:36:11 +08:00
takatost
9e6b6e7b82 fix: workflow run sequence number slow sql (#4737) 2024-05-28 20:41:52 +08:00
xxhong
164d6e47b9 Show tool i18n name on chat pannel (#4724) 2024-05-28 18:58:02 +08:00
xielong
88b4d69278 fix: Correct context size for banchuan2-53b and banchuan2-turbo (#4721) 2024-05-28 16:37:44 +08:00
Joel
5bcbcd3c57 fix: retrieval value greater more than 1 caused ui problem (#4718) 2024-05-28 16:01:19 +08:00
Jyong
1b2d862973 add error msg for hit test (#4704) 2024-05-28 14:54:53 +08:00
Hash Brown
e6f6a59f3b style: update VarPanel to use whitespace-pre-wrap for value display (#4684) 2024-05-28 14:54:29 +08:00
Yeuoly
e198bc9b9a fix: workflow as tool garbled (#4707) 2024-05-28 14:51:42 +08:00
非法操作
b7f81f0999 fix: the new node name is generated based on the original node when duplicating (#4675) 2024-05-28 13:50:43 +08:00
doufa
eb8dc15ad6 fix: Input fields in the model provider's settings modal do not switch sequence via keyboard navigation (Tab key) (#4662) 2024-05-28 11:34:44 +08:00
Pika
2ee3a1b6f3 fix: key-value-table styles (#4678) 2024-05-28 10:57:40 +08:00
岩本宙士
0960b17fbc Add workflow translations for ja-JP (#4698)
Co-authored-by: crazywoola <427733928@qq.com>
2024-05-28 10:27:35 +08:00
crazywoola
6534566b7e feat: add América/São Paulo tz (#4701) 2024-05-28 10:12:18 +08:00
854 changed files with 33774 additions and 5196 deletions


@@ -4,6 +4,13 @@ on:
pull_request:
branches:
- main
paths:
- api/**
- docker/**
concurrency:
group: api-tests-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
@@ -51,7 +58,7 @@ jobs:
- name: Run Workflow
run: dev/pytest/pytest_workflow.sh
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS)
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma)
uses: hoverkraft-tech/compose-action@v2.0.0
with:
compose-file: |
@@ -60,6 +67,7 @@ jobs:
docker/docker-compose.milvus.yaml
docker/docker-compose.pgvecto-rs.yaml
docker/docker-compose.pgvector.yaml
docker/docker-compose.chroma.yaml
services: |
weaviate
qdrant
@@ -68,6 +76,84 @@ jobs:
milvus-standalone
pgvecto-rs
pgvector
chroma
- name: Test Vector Stores
run: dev/pytest/pytest_vdb.sh
test-in-poetry:
name: API Tests
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.10"
- "3.11"
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
cache-dependency-path: |
api/pyproject.toml
api/poetry.lock
- name: Poetry check
run: |
poetry check -C api
poetry show -C api
- name: Install dependencies
run: poetry install -C api --with dev
- name: Run Unit tests
run: poetry run -C api bash dev/pytest/pytest_unit_tests.sh
- name: Run ModelRuntime
run: poetry run -C api bash dev/pytest/pytest_model_runtime.sh
- name: Run Tool
run: poetry run -C api bash dev/pytest/pytest_tools.sh
- name: Set up Sandbox
uses: hoverkraft-tech/compose-action@v2.0.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
sandbox
ssrf_proxy
- name: Run Workflow
run: poetry run -C api bash dev/pytest/pytest_workflow.sh
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma)
uses: hoverkraft-tech/compose-action@v2.0.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
docker/docker-compose.qdrant.yaml
docker/docker-compose.milvus.yaml
docker/docker-compose.pgvecto-rs.yaml
docker/docker-compose.pgvector.yaml
docker/docker-compose.chroma.yaml
services: |
weaviate
qdrant
etcd
minio
milvus-standalone
pgvecto-rs
pgvector
chroma
- name: Test Vector Stores
run: poetry run -C api bash dev/pytest/pytest_vdb.sh
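
The new `test-in-poetry` job can also be reproduced outside CI. A minimal local sketch, assuming Docker and Poetry are installed and the repository root is the working directory (compose files, service names, and scripts are taken from the workflow above):

```bash
# Install the API dependencies, including the dev group, via Poetry.
poetry install -C api --with dev

# Bring up the same vector stores the workflow composes (Chroma is the new addition).
docker compose -f docker/docker-compose.middleware.yaml \
               -f docker/docker-compose.qdrant.yaml \
               -f docker/docker-compose.milvus.yaml \
               -f docker/docker-compose.pgvecto-rs.yaml \
               -f docker/docker-compose.pgvector.yaml \
               -f docker/docker-compose.chroma.yaml \
               up -d weaviate qdrant etcd minio milvus-standalone pgvecto-rs pgvector chroma

# Run the same script the "Test Vector Stores" step runs.
poetry run -C api bash dev/pytest/pytest_vdb.sh
```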


@@ -17,7 +17,7 @@ env:
jobs:
build-and-push:
runs-on: ubuntu-latest
if: github.event.pull_request.draft == false
if: github.repository == 'langgenius/dify'
strategy:
matrix:
include:

.github/workflows/db-migration-test.yml (new file, 57 lines)

@@ -0,0 +1,57 @@
name: DB Migration Test
on:
pull_request:
branches:
- main
paths:
- api/migrations/**
concurrency:
group: db-migration-test-${{ github.ref }}
cancel-in-progress: true
jobs:
db-migration-test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.10"
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
cache-dependency-path: |
api/pyproject.toml
api/poetry.lock
- name: Install dependencies
run: poetry install -C api
- name: Set up Middleware
uses: hoverkraft-tech/compose-action@v2.0.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
db
- name: Prepare configs
run: |
cd api
cp .env.example .env
- name: Run DB Migration
run: |
cd api
poetry run python -m flask db upgrade
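
Outside CI, the job reduces to a handful of commands. A rough local equivalent, assuming Poetry and Docker are installed and you start from the repository root:

```bash
# Install the API dependencies with Poetry.
poetry install -C api

# Start only the database from the middleware compose file, as the workflow does.
docker compose -f docker/docker-compose.middleware.yaml up -d db

# Prepare configs and run the migration, mirroring the last two steps above.
cd api
cp .env.example .env
poetry run python -m flask db upgrade
```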


@@ -6,7 +6,7 @@ on:
- main
concurrency:
group: dep-${{ github.head_ref || github.run_id }}
group: style-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
@@ -18,54 +18,93 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v44
with:
files: api/**
- name: Install Poetry
uses: abatilo/actions-poetry@v3
- name: Set up Python
uses: actions/setup-python@v5
if: steps.changed-files.outputs.any_changed == 'true'
with:
python-version: '3.10'
- name: Python dependencies
run: pip install ruff dotenv-linter
if: steps.changed-files.outputs.any_changed == 'true'
run: poetry install -C api --only lint
- name: Ruff check
run: ruff check ./api
if: steps.changed-files.outputs.any_changed == 'true'
run: poetry run -C api ruff check --preview ./api
- name: Dotenv check
run: dotenv-linter ./api/.env.example ./web/.env.example
if: steps.changed-files.outputs.any_changed == 'true'
run: poetry run -C api dotenv-linter ./api/.env.example ./web/.env.example
- name: Lint hints
if: failure()
run: echo "Please run 'dev/reformat' to fix the fixable linting errors."
test:
name: ESLint and SuperLinter
web-style:
name: Web Style
runs-on: ubuntu-latest
needs: python-style
defaults:
run:
working-directory: ./web
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v44
with:
fetch-depth: 0
files: web/**
- name: Setup NodeJS
uses: actions/setup-node@v4
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 20
cache: yarn
cache-dependency-path: ./web/package.json
- name: Web dependencies
run: |
cd ./web
yarn install --frozen-lockfile
if: steps.changed-files.outputs.any_changed == 'true'
run: yarn install --frozen-lockfile
- name: Web style check
run: |
cd ./web
yarn run lint
if: steps.changed-files.outputs.any_changed == 'true'
run: yarn run lint
superlinter:
name: SuperLinter
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v44
with:
files: |
**.sh
**.yaml
**.yml
Dockerfile
dev/**
- name: Super-linter
uses: super-linter/super-linter/slim@v6
if: steps.changed-files.outputs.any_changed == 'true'
env:
BASH_SEVERITY: warning
DEFAULT_BRANCH: main
@@ -76,4 +115,5 @@ jobs:
VALIDATE_BASH_EXEC: true
VALIDATE_GITHUB_ACTIONS: true
VALIDATE_DOCKERFILE_HADOLINT: true
VALIDATE_XML: true
VALIDATE_YAML: true
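
Contributors can run the same checks locally before pushing. A minimal sketch built from the commands above, plus the `dev/reformat` hint the job prints on failure (assumes Poetry is installed and the repository root is the working directory):

```bash
# Install only the lint dependency group, as the CI job now does.
poetry install -C api --only lint

# Ruff and dotenv-linter checks, identical to the workflow steps.
poetry run -C api ruff check --preview ./api
poetry run -C api dotenv-linter ./api/.env.example ./web/.env.example

# Auto-fix the fixable linting errors, as the failure hint suggests.
dev/reformat
```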


@@ -4,6 +4,13 @@ on:
pull_request:
branches:
- main
paths:
- sdks/**
concurrency:
group: sdk-tests-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
name: unit test for Node.js SDK

.gitignore (4 lines changed)

@@ -134,7 +134,8 @@ dmypy.json
web/.vscode/settings.json
# Intellij IDEA Files
.idea/
.idea/*
!.idea/vcs.xml
.ideaDataSources/
api/.env
@@ -148,6 +149,7 @@ docker/volumes/qdrant/*
docker/volumes/etcd/*
docker/volumes/minio/*
docker/volumes/milvus/*
docker/volumes/chroma/*
sdks/python-client/build
sdks/python-client/dist

.idea/vcs.xml (new file, 16 lines)

@@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="IssueNavigationConfiguration">
<option name="links">
<list>
<IssueNavigationLink>
<option name="issueRegexp" value="#(\d+)" />
<option name="linkRegexp" value="https://github.com/langgenius/dify/issues/$1" />
</IssueNavigationLink>
</list>
</option>
</component>
<component name="VcsDirectoryMappings">
<mapping directory="" vcs="Git" />
</component>
</project>
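
With this mapping in place, IntelliJ-based IDEs render an issue reference such as `#4926` in a commit message as a link to `https://github.com/langgenius/dify/issues/4926`, per the two regular expressions above.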


@@ -4,7 +4,7 @@ Dify にコントリビュートしたいとお考えなのですね。それは
私たちは現状を鑑み、機敏かつ迅速に開発をする必要がありますが、同時にあなたのようなコントリビューターの方々に、可能な限りスムーズな貢献体験をしていただきたいと思っています。そのためにこのコントリビュートガイドを作成しました。
コードベースやコントリビュータの方々と私たちがどのように仕事をしているのかに慣れていただき、楽しいパートにすぐに飛び込めるようにすることが目的です。
このガイドは Dify そのものと同様に、継続的に改善されています。実際のプロジェクトに遅れをとることがあるかもしれませんが、ご理解お願いします。
このガイドは Dify そのものと同様に、継続的に改善されています。実際のプロジェクトに遅れをとることがあるかもしれませんが、ご理解のほどよろしくお願いいたします。
ライセンスに関しては、私たちの短い[ライセンスおよびコントリビューター規約](./LICENSE)をお読みください。また、コミュニティは[行動規範](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md)を遵守しています。
@@ -14,7 +14,7 @@ Dify にコントリビュートしたいとお考えなのですね。それは
### 機能リクエスト
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くの文脈を含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くのコンテキストを含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 既存の課題から 1 つ選びたい場合は、その下にコメントを書いてください。
@@ -54,7 +54,7 @@ Dify にコントリビュートしたいとお考えなのですね。それは
## インストール
Dify を開発用にセットアップする手順は以下の通りです
以下の手順で 、Difyのセットアップをしてください
### 1. このリポジトリをフォークする
@@ -120,7 +120,7 @@ Dify のバックエンドは[Flask](https://flask.palletsprojects.com/en/3.0.x/
### フロントエンド
このウェブサイトは、Typescript の[Next.js](https://nextjs.org/)ボイラープレートでブートストラップされており、スタイリングには[Tailwind CSS](https://tailwindcss.com/)を使用しています。国際化には[React-i18next](https://react.i18next.com/)を使用しています。
このウェブサイトは、Typescriptベースの[Next.js](https://nextjs.org/)テンプレートを使ってブートストラップされ、[Tailwind CSS](https://tailwindcss.com/)を使ってスタイリングされています。国際化には[React-i18next](https://react.i18next.com/)を使用しています。
```
[web/]


@@ -36,6 +36,7 @@
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
</p>
@@ -184,10 +185,11 @@ After running, you can access the Dify dashboard in your browser at [http://loca
If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) which allow Dify to be deployed on Kubernetes.
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
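
For the YAML route, deployment comes down to a `kubectl apply` against the manifest in that repository. A sketch with a hypothetical manifest name; check the `dify-kubernetes` repository for the actual file path:

```bash
# Hypothetical manifest name; see https://github.com/Winson-030/dify-kubernetes for the real path.
kubectl apply -f https://raw.githubusercontent.com/Winson-030/dify-kubernetes/main/dify.yaml
```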
## Contributing

README_AR.md (new file, 226 lines)

@@ -0,0 +1,226 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">الاستضافة الذاتية</a> ·
<a href="https://docs.dify.ai">التوثيق</a> ·
<a href="https://cal.com/guchenhe/60-min-meeting">استفسارات الشركات</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
</p>
<div style="text-align: right;">
مشروع Dify هو منصة تطوير تطبيقات الذكاء الصناعي مفتوحة المصدر. تجمع واجهته البديهية بين سير العمل الذكي بالذكاء الاصطناعي وخط أنابيب RAG وقدرات الوكيل وإدارة النماذج وميزات الملاحظة وأكثر من ذلك، مما يتيح لك الانتقال بسرعة من المرحلة التجريبية إلى الإنتاج. إليك قائمة بالميزات الأساسية:
</br> </br>
**1. سير العمل**: قم ببناء واختبار سير عمل الذكاء الاصطناعي القوي على قماش بصري، مستفيدًا من جميع الميزات التالية وأكثر.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. الدعم الشامل للنماذج**: تكامل سلس مع مئات من LLMs الخاصة / مفتوحة المصدر من عشرات من موفري التحليل والحلول المستضافة ذاتيًا، مما يغطي GPT و Mistral و Llama3 وأي نماذج متوافقة مع واجهة OpenAI API. يمكن العثور على قائمة كاملة بمزودي النموذج المدعومين [هنا](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. بيئة التطوير للأوامر**: واجهة بيئة التطوير المبتكرة لصياغة الأمر ومقارنة أداء النموذج، وإضافة ميزات إضافية مثل تحويل النص إلى كلام إلى تطبيق قائم على الدردشة.
**4. خط أنابيب RAG**: قدرات RAG الواسعة التي تغطي كل شيء من استيعاب الوثائق إلى الاسترجاع، مع الدعم الفوري لاستخراج النص من ملفات PDF و PPT وتنسيقات الوثائق الشائعة الأخرى.
**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DELL·E وStable Diffusion و WolframAlpha.
**6. الـ LLMOps**: راقب وتحلل سجلات التطبيق والأداء على مر الزمن. يمكنك تحسين الأوامر والبيانات والنماذج باستمرار استنادًا إلى البيانات الإنتاجية والتعليقات.
**7.الواجهة الخلفية (Backend) كخدمة**: تأتي جميع عروض Dify مع APIs مطابقة، حتى يمكنك دمج Dify بسهولة في منطق أعمالك الخاص.
## مقارنة الميزات
<table style="width: 100%;">
<tr>
<th align="center">الميزة</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">نهج البرمجة</td>
<td align="center">موجّه لـ تطبيق + واجهة برمجة تطبيق (API)</td>
<td align="center">برمجة Python</td>
<td align="center">موجه لتطبيق</td>
<td align="center">واجهة برمجة تطبيق (API)</td>
</tr>
<tr>
<td align="center">LLMs المدعومة</td>
<td align="center">تنوع غني</td>
<td align="center">تنوع غني</td>
<td align="center">تنوع غني</td>
<td align="center">فقط OpenAI</td>
</tr>
<tr>
<td align="center">محرك RAG</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">الوكيل</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">سير العمل</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">الملاحظة</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">ميزات الشركات (SSO / مراقبة الوصول)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">نشر محلي</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
</table>
## استخدام Dify
- **سحابة </br>**
نحن نستضيف [خدمة Dify Cloud](https://dify.ai) لأي شخص لتجربتها بدون أي إعدادات. توفر كل قدرات النسخة التي تمت استضافتها ذاتيًا، وتتضمن 200 أمر GPT-4 مجانًا في خطة الصندوق الرملي.
- **استضافة ذاتية لنسخة المجتمع Dify</br>**
ابدأ سريعًا في تشغيل Dify في بيئتك باستخدام [دليل البدء السريع](#البدء السريع).
استخدم [توثيقنا](https://docs.dify.ai) للمزيد من المراجع والتعليمات الأعمق.
- **مشروع Dify للشركات / المؤسسات</br>**
نحن نوفر ميزات إضافية مركزة على الشركات. [جدول اجتماع معنا](https://cal.com/guchenhe/30min) أو [أرسل لنا بريدًا إلكترونيًا](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) لمناقشة احتياجات الشركات. </br>
> بالنسبة للشركات الناشئة والشركات الصغيرة التي تستخدم خدمات AWS، تحقق من [Dify Premium على AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) ونشرها في شبكتك الخاصة على AWS VPC بنقرة واحدة. إنها عرض AMI بأسعار معقولة مع خيار إنشاء تطبيقات بشعار وعلامة تجارية مخصصة.
## البقاء قدمًا
قم بإضافة نجمة إلى Dify على GitHub وتلق تنبيهًا فوريًا بالإصدارات الجديدة.
![نجمنا](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## البداية السريعة
> قبل تثبيت Dify، تأكد من أن جهازك يلبي الحد الأدنى من متطلبات النظام التالية:
>
>- معالج >= 2 نواة
>- ذاكرة وصول عشوائي (RAM) >= 4 جيجابايت
</br>
أسهل طريقة لبدء تشغيل خادم Dify هي تشغيل ملف [docker-compose.yml](docker/docker-compose.yaml) الخاص بنا. قبل تشغيل أمر التثبيت، تأكد من تثبيت [Docker](https://docs.docker.com/get-docker/) و [Docker Compose](https://docs.docker.com/compose/install/) على جهازك:
```bash
cd docker
docker compose up -d
```
بعد التشغيل، يمكنك الوصول إلى لوحة تحكم Dify في متصفحك على [http://localhost/install](http://localhost/install) وبدء عملية التهيئة.
> إذا كنت ترغب في المساهمة في Dify أو القيام بتطوير إضافي، فانظر إلى [دليلنا للنشر من الشفرة (code) المصدرية](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)
## الخطوات التالية
إذا كنت بحاجة إلى تخصيص التكوين، يرجى الرجوع إلى التعليقات في ملف [docker-compose.yml](docker/docker-compose.yaml) لدينا وتعيين التكوينات البيئية يدويًا. بعد إجراء التغييرات، يرجى تشغيل `docker-compose up -d` مرة أخرى. يمكنك رؤية قائمة كاملة بالمتغيرات البيئية [هنا](https://docs.dify.ai/getting-started/install-self-hosted/environments).
يوجد مجتمع خاص بـ [Helm Charts](https://helm.sh/) وملفات YAML التي تسمح بتنفيذ Dify على Kubernetes للنظام من الإيجابيات العلوية.
- [رسم بياني Helm من قبل @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [رسم بياني Helm من قبل @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [ملف YAML من قبل @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## المساهمة
لأولئك الذين يرغبون في المساهمة، انظر إلى [دليل المساهمة](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) لدينا.
في الوقت نفسه، يرجى النظر في دعم Dify عن طريق مشاركته على وسائل التواصل الاجتماعي وفي الفعاليات والمؤتمرات.
> نحن نبحث عن مساهمين لمساعدة في ترجمة Dify إلى لغات أخرى غير اللغة الصينية المندرين أو الإنجليزية. إذا كنت مهتمًا بالمساعدة، يرجى الاطلاع على [README للترجمة](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) لمزيد من المعلومات، واترك لنا تعليقًا في قناة `global-users` على [خادم المجتمع على Discord](https://discord.gg/8Tpq4AcN9c).
**المساهمون**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## المجتمع والاتصال
* [مناقشة Github](https://github.com/langgenius/dify/discussions). الأفضل لـ: مشاركة التعليقات وطرح الأسئلة.
* [المشكلات على GitHub](https://github.com/langgenius/dify/issues). الأفضل لـ: الأخطاء التي تواجهها في استخدام Dify.AI، واقتراحات الميزات. انظر [دليل المساهمة](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [البريد الإلكتروني](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). الأفضل لـ: الأسئلة التي تتعلق باستخدام Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
* [تويتر](https://twitter.com/dify_ai). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
أو، قم بجدولة اجتماع مباشرة مع أحد أعضاء الفريق:
<table>
<tr>
<th>نقطة الاتصال</th>
<th>الغرض</th>
</tr>
<tr>
<td><a href='https://cal.com/guchenhe/15min' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/9ebcd111-1205-4d71-83d5-948d70b809f5' alt='Git-Hub-README-Button-3x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>استفسارات الأعمال واقتراحات حول المنتج</td>
</tr>
<tr>
<td><a href='https://cal.com/pinkbanana' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/d1edd00a-d7e4-4513-be6c-e57038e143fd' alt='Git-Hub-README-Button-2x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>المساهمات والمشكلات وطلبات الميزات</td>
</tr>
</table>
## تاريخ النجمة
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## الكشف عن الأمان
لحماية خصوصيتك، يرجى تجنب نشر مشكلات الأمان على GitHub. بدلاً من ذلك، أرسل أسئلتك إلى security@dify.ai وسنقدم لك إجابة أكثر تفصيلاً.
## الرخصة
هذا المستودع متاح تحت [رخصة البرنامج الحر Dify](LICENSE)، والتي تعتبر بشكل أساسي Apache 2.0 مع بعض القيود الإضافية.


@@ -186,10 +186,11 @@ docker compose up -d
#### 使用 Helm Chart 部署
使用 [Helm Chart](https://helm.sh/) 版本,可以在 Kubernetes 上部署 Dify。
使用 [Helm Chart](https://helm.sh/) 版本或者 YAML 文件,可以在 Kubernetes 上部署 Dify。
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML 文件 by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
### 配置


@@ -192,10 +192,11 @@ Si necesitas personalizar la configuración, consulta los comentarios en nuestro
. Después de realizar los cambios, ejecuta `docker-compose up -d` nuevamente. Puedes ver la lista completa de variables de entorno [aquí](https://docs.dify.ai/getting-started/install-self-hosted/environments).
Si deseas configurar una instalación altamente disponible, hay [Gráficos Helm](https://helm.sh/) contribuidos por la comunidad que permiten implementar Dify en Kubernetes.
Si desea configurar una configuración de alta disponibilidad, la comunidad proporciona [Gráficos Helm](https://helm.sh/) y archivos YAML, a través de los cuales puede desplegar Dify en Kubernetes.
- [Gráfico Helm por @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Gráfico Helm por @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [Ficheros YAML por @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## Contribuir


@@ -192,10 +192,11 @@ Si vous devez personnaliser la configuration, veuillez
vous référer aux commentaires dans notre fichier [docker-compose.yml](docker/docker-compose.yaml) et définir manuellement la configuration de l'environnement. Après avoir apporté les modifications, veuillez exécuter à nouveau `docker-compose up -d`. Vous pouvez voir la liste complète des variables d'environnement [ici](https://docs.dify.ai/getting-started/install-self-hosted/environments).
Si vous souhaitez configurer une installation hautement disponible, il existe des [Helm Charts](https://helm.sh/) contribués par la communauté qui permettent de déployer Dify sur Kubernetes.
Si vous souhaitez configurer une configuration haute disponibilité, la communauté fournit des [Helm Charts](https://helm.sh/) et des fichiers YAML, à travers lesquels vous pouvez déployer Dify sur Kubernetes.
- [Helm Chart par @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart par @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [Fichier YAML par @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## Contribuer


@@ -2,9 +2,9 @@
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">セルフホス</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">セルフホスティング</a> ·
<a href="https://docs.dify.ai">ドキュメント</a> ·
<a href="https://cal.com/guchenhe/dify-demo">デモのスケジュール</a>
<a href="https://cal.com/guchenhe/dify-demo">デモの予約</a>
</p>
<p align="center">
@@ -44,37 +44,37 @@
<a href="https://trendshift.io/repositories/2152" target="_blank"><img src="https://trendshift.io/api/badge/repositories/2152" alt="langgenius%2Fdify | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>
DifyはオープンソースのLLMアプリケーション開発プラットフォームです。直感的なインターフェスには、AIワークフロー、RAGパイプライン、エージェント機能、モデル管理、観測機能などが組み合わさっており、プロトタイプから本番までの移行を迅速に行うことができます。以下は、主要機能のリストです:
DifyはオープンソースのLLMアプリケーション開発プラットフォームです。直感的なインターフェスには、AIワークフロー、RAGパイプライン、エージェント機能、モデル管理、観測機能などが組み合わさっており、プロトタイプから生産まで迅速に進めることができます。以下の機能が含まれます:
</br> </br>
**1. ワークフロー**:
ビジュアルキャンバス上で強力なAIワークフローを構築しテストし、以下の機能を活用してプロトタイプを超えることができます。
強力なAIワークフローをビジュアルキャンバス上で構築しテストできます。すべての機能、および以下の機能を使用できます。
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. 包括的なモデルサポート**:
数百のプロプライエタリ/オープンソースのLLMと、数十の推論プロバイダーおよびセルフホスティングソリューションとのシームレスな統合を提供します。GPT、Mistral、Llama3、およびOpenAI API互換のモデルをカバーします。サポートされているモデルプロバイダーの完全なリストは[こちら](https://docs.dify.ai/getting-started/readme/model-providers)をご覧ください。
**2. 総合的なモデルサポート**:
数百のプロプライエタリ/オープンソースのLLMと、数十の推論プロバイダーおよびセルフホスティングソリューションとのシームレスな統合を提供します。GPT、Mistral、Llama3、OpenAI API互換性のあるすべてのモデルを統合されています。サポートされているモデルプロバイダーの完全なリストは[こちら](https://docs.dify.ai/getting-started/readme/model-providers)をご覧ください。
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. プロンプトIDE**:
チャットベースのアプリにテキスト読み上げなどの追加機能を追加するプロンプト作成、モデルパフォーマンス比較する直感的なインターフェース
プロンプト作成、モデルパフォーマンス比較が行え、チャットベースのアプリに音声合成などの機能も追加できます
**4. RAGパイプライン**:
文書の取り込みから取得までをカバーする幅広いRAG機能で、PDF、PPTなどの一般的なドキュメント形式からのテキスト抽出に対するアウトオブボックスのサポートを提供します。
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
**5. エージェント機能**:
LLM関数呼び出しまたはReActに基づいてエージェント定義し、エージェント向けの事前構築済みまたはカスタムツールを追加できます。Difyには、Google検索、DELL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが用意されています。
LLM Function CallingやReActに基づエージェント定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DELL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
**6. LLMOps**:
アプリケーションログパフォーマンスを時間の経過とともにモニタリングおよび分析します。本番データと注釈に基づいて、プロンプト、データセット、およびモデルを継続的に改善できます。
アプリケーションログパフォーマンスを監視と分析し、生産のデータと注釈に基づいて、プロンプト、データセット、モデルを継続的に改善できます。
**7. Backend-as-a-Service**:
Difyのすべての提供には、それに対応するAPIが付属しており、独自のビジネスロジックにDifyをシームレスに統合できます。
すべての機能はAPIを提供されており、Difyを自分のビジネスロジックに簡単に統合できます。
## 機能比較
@@ -95,9 +95,9 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
</tr>
<tr>
<td align="center">サポートされているLLM</td>
<td align="center">バリエーション豊富</td>
<td align="center">バリエーション豊富</td>
<td align="center">バリエーション豊富</td>
<td align="center">バラエティ豊か</td>
<td align="center">バラエティ豊か</td>
<td align="center">バラエティ豊か</td>
<td align="center">OpenAIのみ</td>
</tr>
<tr>
@@ -147,15 +147,15 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
## Difyの使用方法
- **クラウド </br>**
[こちら](https://dify.ai)のDify Cloudサービスを利用して、セットアップ不要で試すことができます。サンドボックスプランには、200回の無料のGPT-4呼び出しが含まれています。
[こちら](https://dify.ai)のDify Cloudサービスを利用して、セットアップ不要で試すことができます。サンドボックスプランには、200回のGPT-4呼び出しが無料で含まれています。
- **Dify Community Editionのセルフホスティング</br>**
この[スターターガイド](#quick-start)を使用して、ローカル環境でDifyを簡単に実行できます。
さらなる参考資料や詳細な手順については、[ドキュメント](https://docs.dify.ai)をご覧ください。
この[スターガイド](#quick-start)を使用して、ローカル環境でDifyを簡単に実行できます。
詳しくは[ドキュメント](https://docs.dify.ai)をご覧ください。
- **エンタープライズ/組織向けのDify</br>**
追加のエンタープライズ向け機能を提供しています。[こちらからミーティングを予約](https://cal.com/guchenhe/30min)したり、[メールを送信](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)してエンタープライズのニーズについて相談してください。 </br>
> AWSを使用しているスタートアップや中小企業の場合は、[AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6)のDify Premiumをチェックして、ワンクリックで自のAWS VPCにデプロイできます。カスタムロゴブランディングでアプリを作成するオプションを備えた手頃な価格のAMIオファリングです。
- **企業/組織向けのDify</br>**
企業中心の機能を提供しています。[こちらからミーティングを予約](https://cal.com/guchenhe/30min)したり、[メールを送信](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)して企業のニーズについて相談してください。 </br>
> AWSを使用しているスタートアップ企業や中小企業の場合は、[AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6)のDify Premiumをチェックして、ワンクリックで自のAWS VPCにデプロイできます。さらに、手頃な価格のAMIオファリングどして、ロゴブランディングをカスタマイズしてアプリケーションを作成するオプションがあります。
## 最新の情報を入手
@@ -189,10 +189,11 @@ docker compose up -d
環境設定をカスタマイズする場合は、[docker-compose.yml](docker/docker-compose.yaml)ファイル内のコメントを参照して、環境設定を手動で設定してください。変更を加えた後は、再び `docker-compose up -d` を実行してください。環境変数の完全なリストは[こちら](https://docs.dify.ai/getting-started/install-self-hosted/environments)をご覧ください。
高可用性のセットアップを構成する場合、コミュニティによって提供されている[Helm Charts](https://helm.sh/)があり、これによりKubernetes上にDifyを展開できます。
高可用性設定を設定する必要がある場合、コミュニティ[Helm Charts](https://helm.sh/)とYAMLファイルにより、DifyをKubernetesにデプロイすることができます。
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## 貢献
@@ -212,7 +213,7 @@ docker compose up -d
## コミュニティ & お問い合わせ
* [Github Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AI使用中に遭遇したバグや機能提案。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AI使用する際に発生するエラーや問題については、[貢献ガイド](CONTRIBUTING_JA.md)を参照してください
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). 主に: Dify.AIの使用に関する質問。
* [Discord](https://discord.gg/FngNHpbcY7). 主に: アプリケーションの共有やコミュニティとの交流。
* [Twitter](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。


@@ -190,11 +190,11 @@ After running, you can access the Dify dashboard in your browser at [http://loca
If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) which allow Dify to be deployed on Kubernetes.
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## Contributing


@@ -184,11 +184,11 @@ docker compose up -d
구성 커스터마이징이 필요한 경우, [docker-compose.yml](docker/docker-compose.yaml) 파일의 코멘트를 참조하여 환경 구성을 수동으로 설정하십시오. 변경 후 `docker-compose up -d` 를 다시 실행하십시오. 환경 변수의 전체 목록은 [여기](https://docs.dify.ai/getting-started/install-self-hosted/environments)에서 확인할 수 있습니다.
고가용성 설정을 구성하려면 Dify를 Kubernetes에 배포할 수 있는 커뮤니티 제공 [Helm Charts](https://helm.sh/)가 있습니다.
Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했다는 커뮤니티 제공하는 [Helm Charts](https://helm.sh/)와 YAML 파일이 존재합니다.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
## 기여


@@ -42,6 +42,7 @@ DB_DATABASE=dify
# storage type: local, s3, azure-blob
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage
S3_USE_AWS_MANAGED_IAM=false
S3_ENDPOINT=https://your-bucket-name.storage.s3.clooudflare.com
S3_BUCKET_NAME=your-bucket-name
S3_ACCESS_KEY=your-access-key
@@ -98,6 +99,15 @@ RELYT_USER=postgres
RELYT_PASSWORD=postgres
RELYT_DATABASE=postgres
# Tencent configuration
TENCENT_VECTOR_DB_URL=http://127.0.0.1
TENCENT_VECTOR_DB_API_KEY=dify
TENCENT_VECTOR_DB_TIMEOUT=30
TENCENT_VECTOR_DB_USERNAME=dify
TENCENT_VECTOR_DB_DATABASE=dify
TENCENT_VECTOR_DB_SHARD=1
TENCENT_VECTOR_DB_REPLICAS=2
# PGVECTO_RS configuration
PGVECTO_RS_HOST=localhost
PGVECTO_RS_PORT=5431
@@ -112,6 +122,21 @@ PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=postgres
PGVECTOR_DATABASE=postgres
# Tidb Vector configuration
TIDB_VECTOR_HOST=xxx.eu-central-1.xxx.aws.tidbcloud.com
TIDB_VECTOR_PORT=4000
TIDB_VECTOR_USER=xxx.root
TIDB_VECTOR_PASSWORD=xxxxxx
TIDB_VECTOR_DATABASE=dify
# Chroma configuration
CHROMA_HOST=127.0.0.1
CHROMA_PORT=8000
CHROMA_TENANT=default_tenant
CHROMA_DATABASE=default_database
CHROMA_AUTH_PROVIDER=chromadb.auth.token_authn.TokenAuthenticationServerProvider
CHROMA_AUTH_CREDENTIALS=difyai123456
# Upload configuration
UPLOAD_FILE_SIZE_LIMIT=15
UPLOAD_FILE_BATCH_LIMIT=5
@@ -127,10 +152,11 @@ RESEND_API_KEY=
RESEND_API_URL=https://api.resend.com
# smtp configuration
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_PORT=465
SMTP_USERNAME=123
SMTP_PASSWORD=abc
SMTP_USE_TLS=false
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=false
# Sentry configuration
SENTRY_DSN=
@@ -182,3 +208,12 @@ LOG_FILE=
# Indexing configuration
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=1000
# Workflow runtime configuration
WORKFLOW_MAX_EXECUTION_STEPS=500
WORKFLOW_MAX_EXECUTION_TIME=1200
WORKFLOW_CALL_MAX_DEPTH=5
# App configuration
APP_MAX_EXECUTION_TIME=1200
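
The SMTP defaults above move from port 587 with `SMTP_USE_TLS=false` to port 465 with `SMTP_USE_TLS=true`, alongside the new `SMTP_OPPORTUNISTIC_TLS` flag. Assuming the flags map to implicit TLS versus STARTTLS, as the names and ports suggest, the difference is easy to observe with `openssl` (a sketch using the example host from the config):

```bash
# Implicit TLS (port 465): the connection is encrypted from the first byte.
openssl s_client -connect smtp.gmail.com:465 -quiet

# Opportunistic TLS (port 587): plaintext greeting, then an upgrade via STARTTLS.
openssl s_client -connect smtp.gmail.com:587 -starttls smtp -quiet
```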


@@ -17,7 +17,8 @@
"FLASK_DEBUG": "1",
"GEVENT_SUPPORT": "True"
},
"console": "integratedTerminal"
"console": "integratedTerminal",
"python": "${command:python.interpreterPath}"
},
{
"name": "Python: Flask",
@@ -36,7 +37,8 @@
"--debug"
],
"jinja": true,
"justMyCode": true
"justMyCode": true,
"python": "${command:python.interpreterPath}"
}
]
}

View File

@@ -17,16 +17,105 @@
```bash
sed -i "/^SECRET_KEY=/c\SECRET_KEY=$(openssl rand -base64 42)" .env
```
4. If you use Anaconda, create a new environment and activate it
4. Create environment.
The Dify API service uses [Poetry](https://python-poetry.org/docs/) to manage dependencies. You can run `poetry shell` to activate the environment.
> Instructions for using pip can be found [below](#usage-with-pip).
5. Install dependencies
```bash
poetry install
```
If a contributor has forgotten to update `pyproject.toml` with newly added dependencies, you can run the following shell commands instead.
```bash
poetry shell # activate current environment
poetry add $(cat requirements.txt) # install dependencies of production and update pyproject.toml
poetry add $(cat requirements-dev.txt) --group dev # install dependencies of development and update pyproject.toml
```
6. Run migrate
Before the first launch, migrate the database to the latest version.
```bash
poetry run python -m flask db upgrade
```
7. Start backend
```bash
poetry run python -m flask run --host 0.0.0.0 --port=5001 --debug
```
8. Start the Dify [web](../web) service.
9. Set up your application by visiting `http://localhost:3000`...
10. If you need to debug local async processing, please start the worker service.
```bash
poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail
```
The started Celery app handles async tasks, e.g. dataset importing and document indexing.
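As a rough illustration of what such a task looks like, here is a hedged Celery sketch; the task name and body are invented for illustration, while Dify's real tasks live under `api/tasks/`:
```python
from celery import shared_task

@shared_task
def document_indexing_task(dataset_id: str, document_id: str) -> None:
    # Invented example body: load the document, split it into segments,
    # embed and index the vectors.
    print(f"indexing document {document_id} of dataset {dataset_id}")

# Producers enqueue onto one of the queues the worker listens on
# (-Q dataset,generation,mail) without blocking the web request:
# document_indexing_task.apply_async(args=["<dataset-id>", "<doc-id>"], queue="dataset")
```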
## Testing
1. Install dependencies for both the backend and the test environment
```bash
poetry install --with dev
```
2. Run the tests locally with the mocked system environment variables defined in the `tool.pytest_env` section of `pyproject.toml`
```bash
cd ../
poetry run -C api bash dev/pytest/pytest_all_tests.sh
```
## Usage with pip
> [!NOTE]
> In the next version, we will deprecate pip as the primary package management tool for the Dify API service; for now, Poetry and pip coexist.
1. Start the docker-compose stack
The backend requires some middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
```bash
cd ../docker
docker-compose -f docker-compose.middleware.yaml -p dify up -d
cd ../api
```
2. Copy `.env.example` to `.env`
3. Generate a `SECRET_KEY` in the `.env` file.
```bash
sed -i "/^SECRET_KEY=/c\SECRET_KEY=$(openssl rand -base64 42)" .env
```
4. Create environment.
If you use Anaconda, create a new environment and activate it
```bash
conda create --name dify python=3.10
conda activate dify
```
5. Install dependencies
6. Install dependencies
```bash
pip install -r requirements.txt
```
6. Run migrate
7. Run migrate
Before the first launch, migrate the database to the latest version.
@@ -34,27 +123,16 @@
flask db upgrade
```
⚠️ If you encounter problems with jieba, for example
```
> flask db upgrade
Error: While importing 'app', an ImportError was raised:
```
Please run the following command instead.
```
pip install -r requirements.txt --upgrade --force-reinstall
```
7. Start backend:
8. Start backend:
```bash
flask run --host 0.0.0.0 --port=5001 --debug
```
8. Set up your application by visiting http://localhost:5001/console/api/setup or other APIs...
9. If you need to debug local async processing, please start the worker service by running
`celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail`.
The started celery app handles the async tasks, e.g. dataset importing and documents indexing.
9. Set up your application by visiting http://localhost:5001/console/api/setup or other APIs...
10. If you need to debug local async processing, please start the worker service.
```bash
celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail
```
The started Celery app handles async tasks, e.g. dataset importing and document indexing.
## Testing
@@ -68,3 +146,4 @@ The started celery app handles the async tasks, e.g. dataset importing and docum
```bash
dev/pytest/pytest_all_tests.sh
```

View File

@@ -1,12 +1,15 @@
import base64
import json
import secrets
from typing import Optional
import click
from flask import current_app
from werkzeug.exceptions import NotFound
from constants.languages import languages
from core.rag.datasource.vdb.vector_factory import Vector
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.models.document import Document
from extensions.ext_database import db
from libs.helper import email as email_validate
@@ -17,6 +20,7 @@ from models.dataset import Dataset, DatasetCollectionBinding, DocumentSegment
from models.dataset import Document as DatasetDocument
from models.model import Account, App, AppAnnotationSetting, AppMode, Conversation, MessageAnnotation
from models.provider import Provider, ProviderModel
from services.account_service import RegisterService, TenantService
@click.command('reset-password', help='Reset the account password.')
@@ -57,7 +61,7 @@ def reset_password(email, new_password, password_confirm):
account.password = base64_password_hashed
account.password_salt = base64_salt
db.session.commit()
click.echo(click.style('Congratulations!, password has been reset.', fg='green'))
click.echo(click.style('Congratulations! Password has been reset.', fg='green'))
@click.command('reset-email', help='Reset the account email.')
@@ -263,15 +267,15 @@ def migrate_knowledge_vector_database():
skipped_count = skipped_count + 1
continue
collection_name = ''
if vector_type == "weaviate":
if vector_type == VectorType.WEAVIATE:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": 'weaviate',
"type": VectorType.WEAVIATE,
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == "qdrant":
elif vector_type == VectorType.QDRANT:
if dataset.collection_binding_id:
dataset_collection_binding = db.session.query(DatasetCollectionBinding). \
filter(DatasetCollectionBinding.id == dataset.collection_binding_id). \
@@ -284,20 +288,20 @@ def migrate_knowledge_vector_database():
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": 'qdrant',
"type": VectorType.QDRANT,
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == "milvus":
elif vector_type == VectorType.MILVUS:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": 'milvus',
"type": VectorType.MILVUS,
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == "relyt":
elif vector_type == VectorType.RELYT:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
@@ -305,16 +309,24 @@ def migrate_knowledge_vector_database():
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == "pgvector":
elif vector_type == VectorType.TENCENT:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": 'pgvector',
"type": VectorType.TENCENT,
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
elif vector_type == VectorType.PGVECTOR:
dataset_id = dataset.id
collection_name = Dataset.gen_collection_name_by_id(dataset_id)
index_struct_dict = {
"type": VectorType.PGVECTOR,
"vector_store": {"class_prefix": collection_name}
}
dataset.index_struct = json.dumps(index_struct_dict)
else:
raise ValueError(f"Vector store {config.get('VECTOR_STORE')} is not supported.")
raise ValueError(f"Vector store {vector_type} is not supported.")
vector = Vector(dataset)
click.echo(f"Start to migrate dataset {dataset.id}.")
@@ -501,6 +513,46 @@ def add_qdrant_doc_id_index(field: str):
fg='green'))
@click.command('create-tenant', help='Create account and tenant.')
@click.option('--email', prompt=True, help='The email address of the tenant account.')
@click.option('--language', prompt=True, help='Account language, default: en-US.')
def create_tenant(email: str, language: Optional[str] = None):
"""
Create tenant account
"""
if not email:
click.echo(click.style('Sorry, email is required.', fg='red'))
return
# Create account
email = email.strip()
if '@' not in email:
click.echo(click.style('Sorry, invalid email address.', fg='red'))
return
account_name = email.split('@')[0]
if language not in languages:
language = 'en-US'
# generate random password
new_password = secrets.token_urlsafe(16)
# register account
account = RegisterService.register(
email=email,
name=account_name,
password=new_password,
language=language
)
TenantService.create_owner_tenant_if_not_exist(account)
click.echo(click.style('Congratulations! Account and tenant created.\n'
'Account: {}\nPassword: {}'.format(email, new_password), fg='green'))
def register_commands(app):
app.cli.add_command(reset_password)
app.cli.add_command(reset_email)
@@ -508,4 +560,5 @@ def register_commands(app):
app.cli.add_command(vdb_migrate)
app.cli.add_command(convert_to_agent_apps)
app.cli.add_command(add_qdrant_doc_id_index)
app.cli.add_command(create_tenant)
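From a shell, the new command would be invoked as `flask create-tenant --email owner@example.com --language en-US`. As a sketch, it can also be exercised programmatically with Flask's built-in CLI test runner, assuming the Flask `app` object is importable (as in Dify's `api/app.py`); the email below is a placeholder:
```python
from app import app  # assumes Dify's api/app.py exposes the Flask app

runner = app.test_cli_runner()
result = runner.invoke(args=["create-tenant", "--email", "owner@example.com",
                             "--language", "en-US"])
print(result.output)  # echoes the account email and the generated random password
```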

View File

@@ -24,6 +24,7 @@ DEFAULTS = {
'APP_WEB_URL': 'https://udify.app',
'FILES_URL': '',
'FILES_ACCESS_TIMEOUT': 300,
'S3_USE_AWS_MANAGED_IAM': 'False',
'S3_ADDRESS_STYLE': 'auto',
'STORAGE_TYPE': 'local',
'STORAGE_LOCAL_PATH': 'storage',
@@ -70,6 +71,7 @@ DEFAULTS = {
'INVITE_EXPIRY_HOURS': 72,
'BILLING_ENABLED': 'False',
'CAN_REPLACE_LOGO': 'False',
'MODEL_LB_ENABLED': 'False',
'ETL_TYPE': 'dify',
'KEYWORD_STORE': 'jieba',
'BATCH_UPLOAD_LIMIT': 20,
@@ -81,8 +83,10 @@ DEFAULTS = {
'INNER_API': 'False',
'ENTERPRISE_ENABLED': 'False',
'INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH': 1000,
'WORKFLOW_MAX_EXECUTION_STEPS': 50,
'WORKFLOW_MAX_EXECUTION_TIME': 600,
'WORKFLOW_MAX_EXECUTION_STEPS': 500,
'WORKFLOW_MAX_EXECUTION_TIME': 1200,
'WORKFLOW_CALL_MAX_DEPTH': 5,
'APP_MAX_EXECUTION_TIME': 1200,
}
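Note that the defaults above are stored as strings; helpers such as `get_bool_env` presumably coerce them. A sketch of the likely shape of these helpers, offered as an assumption rather than Dify's exact code:
```python
import os

DEFAULTS = {"MODEL_LB_ENABLED": "False", "S3_USE_AWS_MANAGED_IAM": "False"}

def get_env(key: str) -> str:
    return os.environ.get(key, DEFAULTS.get(key, ""))

def get_bool_env(key: str) -> bool:
    return get_env(key).lower() == "true"

assert get_bool_env("MODEL_LB_ENABLED") is False  # string default coerced to bool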
@@ -113,7 +117,7 @@ class Config:
# ------------------------
# General Configurations.
# ------------------------
self.CURRENT_VERSION = "0.6.9"
self.CURRENT_VERSION = "0.6.11"
self.COMMIT_SHA = get_env('COMMIT_SHA')
self.EDITION = get_env('EDITION')
self.DEPLOY_ENV = get_env('DEPLOY_ENV')
@@ -122,6 +126,7 @@ class Config:
self.LOG_FILE = get_env('LOG_FILE')
self.LOG_FORMAT = get_env('LOG_FORMAT')
self.LOG_DATEFORMAT = get_env('LOG_DATEFORMAT')
self.API_COMPRESSION_ENABLED = get_bool_env('API_COMPRESSION_ENABLED')
# The backend URL prefix of the console API.
# used to concatenate the login authorization callback or notion integration callback.
@@ -209,27 +214,42 @@ class Config:
if self.CELERY_BACKEND == 'database' else self.CELERY_BROKER_URL
self.BROKER_USE_SSL = self.CELERY_BROKER_URL.startswith('rediss://')
# ------------------------
# Code Execution Sandbox Configurations.
# ------------------------
self.CODE_EXECUTION_ENDPOINT = get_env('CODE_EXECUTION_ENDPOINT')
self.CODE_EXECUTION_API_KEY = get_env('CODE_EXECUTION_API_KEY')
# ------------------------
# File Storage Configurations.
# ------------------------
self.STORAGE_TYPE = get_env('STORAGE_TYPE')
self.STORAGE_LOCAL_PATH = get_env('STORAGE_LOCAL_PATH')
# S3 Storage settings
self.S3_USE_AWS_MANAGED_IAM = get_bool_env('S3_USE_AWS_MANAGED_IAM')
self.S3_ENDPOINT = get_env('S3_ENDPOINT')
self.S3_BUCKET_NAME = get_env('S3_BUCKET_NAME')
self.S3_ACCESS_KEY = get_env('S3_ACCESS_KEY')
self.S3_SECRET_KEY = get_env('S3_SECRET_KEY')
self.S3_REGION = get_env('S3_REGION')
self.S3_ADDRESS_STYLE = get_env('S3_ADDRESS_STYLE')
# Azure Blob Storage settings
self.AZURE_BLOB_ACCOUNT_NAME = get_env('AZURE_BLOB_ACCOUNT_NAME')
self.AZURE_BLOB_ACCOUNT_KEY = get_env('AZURE_BLOB_ACCOUNT_KEY')
self.AZURE_BLOB_CONTAINER_NAME = get_env('AZURE_BLOB_CONTAINER_NAME')
self.AZURE_BLOB_ACCOUNT_URL = get_env('AZURE_BLOB_ACCOUNT_URL')
# Aliyun Storage settings
self.ALIYUN_OSS_BUCKET_NAME = get_env('ALIYUN_OSS_BUCKET_NAME')
self.ALIYUN_OSS_ACCESS_KEY = get_env('ALIYUN_OSS_ACCESS_KEY')
self.ALIYUN_OSS_SECRET_KEY = get_env('ALIYUN_OSS_SECRET_KEY')
self.ALIYUN_OSS_ENDPOINT = get_env('ALIYUN_OSS_ENDPOINT')
self.ALIYUN_OSS_REGION = get_env('ALIYUN_OSS_REGION')
self.ALIYUN_OSS_AUTH_VERSION = get_env('ALIYUN_OSS_AUTH_VERSION')
# Google Cloud Storage settings
self.GOOGLE_STORAGE_BUCKET_NAME = get_env('GOOGLE_STORAGE_BUCKET_NAME')
self.GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 = get_env('GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64')
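With `S3_USE_AWS_MANAGED_IAM` enabled, the client should obtain credentials from the attached IAM role rather than static keys. A hedged boto3 sketch of that branching; Dify's actual storage extension may differ in detail:
```python
import boto3

def make_s3_client(cfg):
    """Hedged sketch; not Dify's real storage extension."""
    if cfg.S3_USE_AWS_MANAGED_IAM:
        # No static keys: boto3 resolves credentials from the role attached
        # to the EC2 instance / ECS task / Lambda function.
        return boto3.client("s3", region_name=cfg.S3_REGION)
    return boto3.client(
        "s3",
        endpoint_url=cfg.S3_ENDPOINT,
        aws_access_key_id=cfg.S3_ACCESS_KEY,
        aws_secret_access_key=cfg.S3_SECRET_KEY,
        region_name=cfg.S3_REGION,
    )
```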
@@ -239,6 +259,7 @@ class Config:
# ------------------------
self.VECTOR_STORE = get_env('VECTOR_STORE')
self.KEYWORD_STORE = get_env('KEYWORD_STORE')
# qdrant settings
self.QDRANT_URL = get_env('QDRANT_URL')
self.QDRANT_API_KEY = get_env('QDRANT_API_KEY')
@@ -267,6 +288,16 @@ class Config:
self.RELYT_PASSWORD = get_env('RELYT_PASSWORD')
self.RELYT_DATABASE = get_env('RELYT_DATABASE')
# tencent settings
self.TENCENT_VECTOR_DB_URL = get_env('TENCENT_VECTOR_DB_URL')
self.TENCENT_VECTOR_DB_API_KEY = get_env('TENCENT_VECTOR_DB_API_KEY')
self.TENCENT_VECTOR_DB_TIMEOUT = get_env('TENCENT_VECTOR_DB_TIMEOUT')
self.TENCENT_VECTOR_DB_USERNAME = get_env('TENCENT_VECTOR_DB_USERNAME')
self.TENCENT_VECTOR_DB_DATABASE = get_env('TENCENT_VECTOR_DB_DATABASE')
self.TENCENT_VECTOR_DB_SHARD = get_env('TENCENT_VECTOR_DB_SHARD')
self.TENCENT_VECTOR_DB_REPLICAS = get_env('TENCENT_VECTOR_DB_REPLICAS')
# pgvecto rs settings
self.PGVECTO_RS_HOST = get_env('PGVECTO_RS_HOST')
self.PGVECTO_RS_PORT = get_env('PGVECTO_RS_PORT')
@@ -281,6 +312,21 @@ class Config:
self.PGVECTOR_PASSWORD = get_env('PGVECTOR_PASSWORD')
self.PGVECTOR_DATABASE = get_env('PGVECTOR_DATABASE')
# tidb-vector settings
self.TIDB_VECTOR_HOST = get_env('TIDB_VECTOR_HOST')
self.TIDB_VECTOR_PORT = get_env('TIDB_VECTOR_PORT')
self.TIDB_VECTOR_USER = get_env('TIDB_VECTOR_USER')
self.TIDB_VECTOR_PASSWORD = get_env('TIDB_VECTOR_PASSWORD')
self.TIDB_VECTOR_DATABASE = get_env('TIDB_VECTOR_DATABASE')
# chroma settings
self.CHROMA_HOST = get_env('CHROMA_HOST')
self.CHROMA_PORT = get_env('CHROMA_PORT')
self.CHROMA_TENANT = get_env('CHROMA_TENANT')
self.CHROMA_DATABASE = get_env('CHROMA_DATABASE')
self.CHROMA_AUTH_PROVIDER = get_env('CHROMA_AUTH_PROVIDER')
self.CHROMA_AUTH_CREDENTIALS = get_env('CHROMA_AUTH_CREDENTIALS')
# ------------------------
# Mail Configurations.
# ------------------------
@@ -294,6 +340,7 @@ class Config:
self.SMTP_USERNAME = get_env('SMTP_USERNAME')
self.SMTP_PASSWORD = get_env('SMTP_PASSWORD')
self.SMTP_USE_TLS = get_bool_env('SMTP_USE_TLS')
self.SMTP_OPPORTUNISTIC_TLS = get_bool_env('SMTP_OPPORTUNISTIC_TLS')
# ------------------------
# Workspace Configurations.
@@ -321,9 +368,24 @@ class Config:
self.UPLOAD_FILE_SIZE_LIMIT = int(get_env('UPLOAD_FILE_SIZE_LIMIT'))
self.UPLOAD_FILE_BATCH_LIMIT = int(get_env('UPLOAD_FILE_BATCH_LIMIT'))
self.UPLOAD_IMAGE_FILE_SIZE_LIMIT = int(get_env('UPLOAD_IMAGE_FILE_SIZE_LIMIT'))
self.BATCH_UPLOAD_LIMIT = get_env('BATCH_UPLOAD_LIMIT')
# RAG ETL Configurations.
self.ETL_TYPE = get_env('ETL_TYPE')
self.UNSTRUCTURED_API_URL = get_env('UNSTRUCTURED_API_URL')
self.UNSTRUCTURED_API_KEY = get_env('UNSTRUCTURED_API_KEY')
self.KEYWORD_DATA_SOURCE_TYPE = get_env('KEYWORD_DATA_SOURCE_TYPE')
# Indexing Configurations.
self.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH = get_env('INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH')
# Tool Configurations.
self.TOOL_ICON_CACHE_MAX_AGE = get_env('TOOL_ICON_CACHE_MAX_AGE')
self.WORKFLOW_MAX_EXECUTION_STEPS = int(get_env('WORKFLOW_MAX_EXECUTION_STEPS'))
self.WORKFLOW_MAX_EXECUTION_TIME = int(get_env('WORKFLOW_MAX_EXECUTION_TIME'))
self.WORKFLOW_CALL_MAX_DEPTH = int(get_env('WORKFLOW_CALL_MAX_DEPTH'))
self.APP_MAX_EXECUTION_TIME = int(get_env('APP_MAX_EXECUTION_TIME'))
# Moderation in app Configurations.
self.OUTPUT_MODERATION_BUFFER_SIZE = int(get_env('OUTPUT_MODERATION_BUFFER_SIZE'))
@@ -375,24 +437,15 @@ class Config:
self.HOSTED_FETCH_APP_TEMPLATES_MODE = get_env('HOSTED_FETCH_APP_TEMPLATES_MODE')
self.HOSTED_FETCH_APP_TEMPLATES_REMOTE_DOMAIN = get_env('HOSTED_FETCH_APP_TEMPLATES_REMOTE_DOMAIN')
self.ETL_TYPE = get_env('ETL_TYPE')
self.UNSTRUCTURED_API_URL = get_env('UNSTRUCTURED_API_URL')
self.UNSTRUCTURED_API_KEY = get_env('UNSTRUCTURED_API_KEY')
# Model Load Balancing Configurations.
self.MODEL_LB_ENABLED = get_bool_env('MODEL_LB_ENABLED')
# Platform Billing Configurations.
self.BILLING_ENABLED = get_bool_env('BILLING_ENABLED')
self.CAN_REPLACE_LOGO = get_bool_env('CAN_REPLACE_LOGO')
self.BATCH_UPLOAD_LIMIT = get_env('BATCH_UPLOAD_LIMIT')
self.CODE_EXECUTION_ENDPOINT = get_env('CODE_EXECUTION_ENDPOINT')
self.CODE_EXECUTION_API_KEY = get_env('CODE_EXECUTION_API_KEY')
self.API_COMPRESSION_ENABLED = get_bool_env('API_COMPRESSION_ENABLED')
self.TOOL_ICON_CACHE_MAX_AGE = get_env('TOOL_ICON_CACHE_MAX_AGE')
self.KEYWORD_DATA_SOURCE_TYPE = get_env('KEYWORD_DATA_SOURCE_TYPE')
# ------------------------
# Enterprise feature Configurations.
# **Before using, please contact business@dify.ai by email to inquire about licensing matters.**
# ------------------------
self.ENTERPRISE_ENABLED = get_bool_env('ENTERPRISE_ENABLED')
# ------------------------
# Indexing Configurations.
# ------------------------
self.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH = get_env('INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH')
self.CAN_REPLACE_LOGO = get_bool_env('CAN_REPLACE_LOGO')

File diff suppressed because one or more lines are too long

View File

@@ -29,13 +29,13 @@ from .app import (
)
# Import auth controllers
from .auth import activate, data_source_oauth, login, oauth
from .auth import activate, data_source_bearer_auth, data_source_oauth, login, oauth
# Import billing controllers
from .billing import billing
# Import datasets controllers
from .datasets import data_source, datasets, datasets_document, datasets_segments, file, hit_testing
from .datasets import data_source, datasets, datasets_document, datasets_segments, file, hit_testing, website
# Import explore controllers
from .explore import (
@@ -54,4 +54,4 @@ from .explore import (
from .tag import tags
# Import workspace controllers
from .workspace import account, members, model_providers, models, tool_providers, workspace
from .workspace import account, load_balancing_config, members, model_providers, models, tool_providers, workspace

View File

@@ -68,8 +68,8 @@ class AppListApi(Resource):
parser.add_argument('icon_background', type=str, location='json')
args = parser.parse_args()
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if 'mode' not in args or args['mode'] is None:
@@ -89,8 +89,8 @@ class AppImportApi(Resource):
@cloud_edition_billing_resource_check('apps')
def post(self):
"""Import app"""
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -147,7 +147,7 @@ class AppApi(Resource):
@get_app_model
def delete(self, app_model):
"""Delete app"""
if not current_user.is_admin_or_owner:
if not current_user.is_editor:
raise Forbidden()
app_service = AppService()
@@ -164,8 +164,8 @@ class AppCopyApi(Resource):
@marshal_with(app_detail_fields_with_site)
def post(self, app_model):
"""Copy app"""
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -238,6 +238,9 @@ class AppSiteStatus(Resource):
@get_app_model
@marshal_with(app_detail_fields)
def post(self, app_model):
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('enable_site', type=bool, required=True, location='json')
args = parser.parse_args()
@@ -255,6 +258,9 @@ class AppApiStatus(Resource):
@get_app_model
@marshal_with(app_detail_fields)
def post(self, app_model):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('enable_api', type=bool, required=True, location='json')
args = parser.parse_args()

View File

@@ -85,7 +85,7 @@ class ChatMessageTextApi(Resource):
response = AudioService.transcript_tts(
app_model=app_model,
text=request.form['text'],
voice=request.form['voice'] if request.form.get('voice') else app_model.app_model_config.text_to_speech_dict.get('voice'),
voice=request.form['voice'],
streaming=False
)

View File

@@ -6,7 +6,7 @@ from flask_restful import Resource, marshal_with, reqparse
from flask_restful.inputs import int_range
from sqlalchemy import func, or_
from sqlalchemy.orm import joinedload
from werkzeug.exceptions import NotFound
from werkzeug.exceptions import Forbidden, NotFound
from controllers.console import api
from controllers.console.app.wraps import get_app_model
@@ -33,6 +33,8 @@ class CompletionConversationApi(Resource):
@get_app_model(mode=AppMode.COMPLETION)
@marshal_with(conversation_pagination_fields)
def get(self, app_model):
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('keyword', type=str, location='args')
parser.add_argument('start', type=datetime_string('%Y-%m-%d %H:%M'), location='args')
@@ -106,6 +108,8 @@ class CompletionConversationDetailApi(Resource):
@get_app_model(mode=AppMode.COMPLETION)
@marshal_with(conversation_message_detail_fields)
def get(self, app_model, conversation_id):
if not current_user.is_admin_or_owner:
raise Forbidden()
conversation_id = str(conversation_id)
return _get_conversation(app_model, conversation_id)
@@ -115,6 +119,8 @@ class CompletionConversationDetailApi(Resource):
@account_initialization_required
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
def delete(self, app_model, conversation_id):
if not current_user.is_admin_or_owner:
raise Forbidden()
conversation_id = str(conversation_id)
conversation = db.session.query(Conversation) \
@@ -137,6 +143,8 @@ class ChatConversationApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_with_summary_pagination_fields)
def get(self, app_model):
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('keyword', type=str, location='args')
parser.add_argument('start', type=datetime_string('%Y-%m-%d %H:%M'), location='args')
@@ -225,6 +233,8 @@ class ChatConversationDetailApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_detail_fields)
def get(self, app_model, conversation_id):
if not current_user.is_admin_or_owner:
raise Forbidden()
conversation_id = str(conversation_id)
return _get_conversation(app_model, conversation_id)
@@ -234,6 +244,8 @@ class ChatConversationDetailApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@account_initialization_required
def delete(self, app_model, conversation_id):
if not current_user.is_admin_or_owner:
raise Forbidden()
conversation_id = str(conversation_id)
conversation = db.session.query(Conversation) \

View File

@@ -40,8 +40,8 @@ class AppSite(Resource):
def post(self, app_model):
args = parse_app_site_args()
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be editor, admin, or owner
if not current_user.is_editor:
raise Forbidden()
site = db.session.query(Site). \
@@ -65,13 +65,6 @@ class AppSite(Resource):
if value is not None:
setattr(site, attr_name, value)
if attr_name == 'title':
app_model.name = value
elif attr_name == 'icon':
app_model.icon = value
elif attr_name == 'icon_background':
app_model.icon_background = value
db.session.commit()
return site

View File

@@ -0,0 +1,67 @@
from flask_login import current_user
from flask_restful import Resource, reqparse
from werkzeug.exceptions import Forbidden
from controllers.console import api
from controllers.console.auth.error import ApiKeyAuthFailedError
from libs.login import login_required
from services.auth.api_key_auth_service import ApiKeyAuthService
from ..setup import setup_required
from ..wraps import account_initialization_required
class ApiKeyAuthDataSource(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
# The role of the current user in the table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
data_source_api_key_bindings = ApiKeyAuthService.get_provider_auth_list(current_user.current_tenant_id)
if data_source_api_key_bindings:
return {
'settings': [data_source_api_key_binding.to_dict() for data_source_api_key_binding in
data_source_api_key_bindings]}
return {'settings': []}
class ApiKeyAuthDataSourceBinding(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
# The role of the current user in the table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('category', type=str, required=True, nullable=False, location='json')
parser.add_argument('provider', type=str, required=True, nullable=False, location='json')
parser.add_argument('credentials', type=dict, required=True, nullable=False, location='json')
args = parser.parse_args()
ApiKeyAuthService.validate_api_key_auth_args(args)
try:
ApiKeyAuthService.create_provider_auth(current_user.current_tenant_id, args)
except Exception as e:
raise ApiKeyAuthFailedError(str(e))
return {'result': 'success'}, 200
class ApiKeyAuthDataSourceBindingDelete(Resource):
@setup_required
@login_required
@account_initialization_required
def delete(self, binding_id):
# The role of the current user in the table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
ApiKeyAuthService.delete_provider_auth(current_user.current_tenant_id, binding_id)
return {'result': 'success'}, 200
api.add_resource(ApiKeyAuthDataSource, '/api-key-auth/data-source')
api.add_resource(ApiKeyAuthDataSourceBinding, '/api-key-auth/data-source/binding')
api.add_resource(ApiKeyAuthDataSourceBindingDelete, '/api-key-auth/data-source/<uuid:binding_id>')
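A sketch of calling the new binding endpoint with `requests`; the console API prefix, session token, and firecrawl credential shape are assumptions, while the field names come from the parser above:
```python
import requests

BASE = "http://localhost:5001/console/api"             # assumed console prefix
HEADERS = {"Authorization": "Bearer <session-token>"}  # placeholder auth

# Field names come from the reqparse arguments above; the firecrawl
# credential shape is an assumption.
resp = requests.post(
    f"{BASE}/api-key-auth/data-source/binding",
    headers=HEADERS,
    json={
        "category": "website",
        "provider": "firecrawl",
        "credentials": {"auth_type": "bearer", "config": {"api_key": "fc-..."}},
    },
)
print(resp.status_code, resp.json())  # expect 200 and {'result': 'success'}
```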

View File

@@ -0,0 +1,7 @@
from libs.exception import BaseHTTPException
class ApiKeyAuthFailedError(BaseHTTPException):
error_code = 'auth_failed'
description = "{message}"
code = 500

View File

@@ -16,7 +16,7 @@ from extensions.ext_database import db
from fields.data_source_fields import integrate_list_fields, integrate_notion_info_list_fields
from libs.login import login_required
from models.dataset import Document
from models.source import DataSourceBinding
from models.source import DataSourceOauthBinding
from services.dataset_service import DatasetService, DocumentService
from tasks.document_indexing_sync_task import document_indexing_sync_task
@@ -29,9 +29,9 @@ class DataSourceApi(Resource):
@marshal_with(integrate_list_fields)
def get(self):
# get workspace data source integrates
data_source_integrates = db.session.query(DataSourceBinding).filter(
DataSourceBinding.tenant_id == current_user.current_tenant_id,
DataSourceBinding.disabled == False
data_source_integrates = db.session.query(DataSourceOauthBinding).filter(
DataSourceOauthBinding.tenant_id == current_user.current_tenant_id,
DataSourceOauthBinding.disabled == False
).all()
base_url = request.url_root.rstrip('/')
@@ -71,7 +71,7 @@ class DataSourceApi(Resource):
def patch(self, binding_id, action):
binding_id = str(binding_id)
action = str(action)
data_source_binding = DataSourceBinding.query.filter_by(
data_source_binding = DataSourceOauthBinding.query.filter_by(
id=binding_id
).first()
if data_source_binding is None:
@@ -124,7 +124,7 @@ class DataSourceNotionListApi(Resource):
data_source_info = json.loads(document.data_source_info)
exist_page_ids.append(data_source_info['notion_page_id'])
# get all authorized pages
data_source_bindings = DataSourceBinding.query.filter_by(
data_source_bindings = DataSourceOauthBinding.query.filter_by(
tenant_id=current_user.current_tenant_id,
provider='notion',
disabled=False
@@ -163,12 +163,12 @@ class DataSourceNotionApi(Resource):
def get(self, workspace_id, page_id, page_type):
workspace_id = str(workspace_id)
page_id = str(page_id)
data_source_binding = DataSourceBinding.query.filter(
data_source_binding = DataSourceOauthBinding.query.filter(
db.and_(
DataSourceBinding.tenant_id == current_user.current_tenant_id,
DataSourceBinding.provider == 'notion',
DataSourceBinding.disabled == False,
DataSourceBinding.source_info['workspace_id'] == f'"{workspace_id}"'
DataSourceOauthBinding.tenant_id == current_user.current_tenant_id,
DataSourceOauthBinding.provider == 'notion',
DataSourceOauthBinding.disabled == False,
DataSourceOauthBinding.source_info['workspace_id'] == f'"{workspace_id}"'
)
).first()
if not data_source_binding:

View File

@@ -8,13 +8,14 @@ import services
from controllers.console import api
from controllers.console.apikey import api_key_fields, api_key_list
from controllers.console.app.error import ProviderNotInitializeError
from controllers.console.datasets.error import DatasetNameDuplicateError
from controllers.console.datasets.error import DatasetInUseError, DatasetNameDuplicateError
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError
from core.indexing_runner import IndexingRunner
from core.model_runtime.entities.model_entities import ModelType
from core.provider_manager import ProviderManager
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.extractor.entity.extract_setting import ExtractSetting
from extensions.ext_database import db
from fields.app_fields import related_app_list
@@ -106,8 +107,8 @@ class DatasetListApi(Resource):
help='Invalid indexing technique.')
args = parser.parse_args()
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
try:
@@ -194,8 +195,8 @@ class DatasetApi(Resource):
parser.add_argument('retrieval_model', type=dict, location='json', help='Invalid retrieval model.')
args = parser.parse_args()
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
dataset = DatasetService.update_dataset(
@@ -212,14 +213,17 @@ class DatasetApi(Resource):
def delete(self, dataset_id):
dataset_id_str = str(dataset_id)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if DatasetService.delete_dataset(dataset_id_str, current_user):
return {'result': 'success'}, 204
else:
raise NotFound("Dataset not found.")
try:
if DatasetService.delete_dataset(dataset_id_str, current_user):
return {'result': 'success'}, 204
else:
raise NotFound("Dataset not found.")
except services.errors.dataset.DatasetInUseError:
raise DatasetInUseError()
class DatasetQueryApi(Resource):
@@ -311,6 +315,22 @@ class DatasetIndexingEstimateApi(Resource):
document_model=args['doc_form']
)
extract_settings.append(extract_setting)
elif args['info_list']['data_source_type'] == 'website_crawl':
website_info_list = args['info_list']['website_info_list']
for url in website_info_list['urls']:
extract_setting = ExtractSetting(
datasource_type="website_crawl",
website_info={
"provider": website_info_list['provider'],
"job_id": website_info_list['job_id'],
"url": url,
"tenant_id": current_user.current_tenant_id,
"mode": 'crawl',
"only_main_content": website_info_list['only_main_content']
},
document_model=args['doc_form']
)
extract_settings.append(extract_setting)
else:
raise ValueError('Data source type not supported')
indexing_runner = IndexingRunner()
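For reference, a sketch of the `info_list` payload this new branch expects; the keys mirror the code above and the values are purely illustrative:
```python
# Keys mirror the website_crawl branch above; concrete values are illustrative only.
info_list = {
    "data_source_type": "website_crawl",
    "website_info_list": {
        "provider": "firecrawl",
        "job_id": "<crawl-job-id>",
        "urls": ["https://example.com/docs", "https://example.com/blog"],
        "only_main_content": True,
    },
}

# One ExtractSetting is built per crawled URL:
for url in info_list["website_info_list"]["urls"]:
    print("would build an ExtractSetting for", url)
```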
@@ -476,20 +496,21 @@ class DatasetRetrievalSettingApi(Resource):
@account_initialization_required
def get(self):
vector_type = current_app.config['VECTOR_STORE']
if vector_type in {"milvus", "relyt", "pgvector", "pgvecto_rs"}:
return {
'retrieval_method': [
'semantic_search'
]
}
elif vector_type in {"qdrant", "weaviate"}:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
]
}
else:
raise ValueError("Unsupported vector db type.")
match vector_type:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT:
return {
'retrieval_method': [
'semantic_search'
]
}
case VectorType.QDRANT | VectorType.WEAVIATE:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
]
}
case _:
raise ValueError(f"Unsupported vector db type {vector_type}.")
class DatasetRetrievalSettingMockApi(Resource):
@@ -497,20 +518,23 @@ class DatasetRetrievalSettingMockApi(Resource):
@login_required
@account_initialization_required
def get(self, vector_type):
if vector_type in {'milvus', 'relyt', 'pgvector'}:
return {
'retrieval_method': [
'semantic_search'
]
}
elif vector_type in {'qdrant', 'weaviate'}:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
]
}
else:
raise ValueError("Unsupported vector db type.")
match vector_type:
case VectorType.MILVUS | VectorType.RELYT | VectorType.PGVECTOR | VectorType.TIDB_VECTOR | VectorType.CHROMA | VectorType.TENCENT:
return {
'retrieval_method': [
'semantic_search'
]
}
case VectorType.QDRANT | VectorType.WEAVIATE:
return {
'retrieval_method': [
'semantic_search', 'full_text_search', 'hybrid_search'
]
}
case _:
raise ValueError(f"Unsupported vector db type {vector_type}.")
class DatasetErrorDocs(Resource):
@setup_required

View File

@@ -1,10 +1,12 @@
import logging
from argparse import ArgumentTypeError
from datetime import datetime, timezone
from flask import request
from flask_login import current_user
from flask_restful import Resource, fields, marshal, marshal_with, reqparse
from sqlalchemy import asc, desc
from transformers.hf_argparser import string_to_bool
from werkzeug.exceptions import Forbidden, NotFound
import services
@@ -141,7 +143,11 @@ class DatasetDocumentListApi(Resource):
limit = request.args.get('limit', default=20, type=int)
search = request.args.get('keyword', default=None, type=str)
sort = request.args.get('sort', default='-created_at', type=str)
fetch = request.args.get('fetch', default=False, type=bool)
# "yes", "true", "t", "y", "1" convert to True, while others convert to False.
try:
fetch = string_to_bool(request.args.get('fetch', default='false'))
except (ArgumentTypeError, ValueError, Exception) as e:
fetch = False
dataset = DatasetService.get_dataset(dataset_id)
if not dataset:
raise NotFound('Dataset not found.')
@@ -220,8 +226,8 @@ class DatasetDocumentListApi(Resource):
if not dataset:
raise NotFound('Dataset not found.')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
try:
@@ -272,8 +278,8 @@ class DatasetInitApi(Resource):
@marshal_with(dataset_and_document_fields)
@cloud_edition_billing_resource_check('vector_space')
def post(self):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -459,6 +465,20 @@ class DocumentBatchIndexingEstimateApi(DocumentResource):
document_model=document.doc_form
)
extract_settings.append(extract_setting)
elif document.data_source_type == 'website_crawl':
extract_setting = ExtractSetting(
datasource_type="website_crawl",
website_info={
"provider": data_source_info['provider'],
"job_id": data_source_info['job_id'],
"url": data_source_info['url'],
"tenant_id": current_user.current_tenant_id,
"mode": data_source_info['mode'],
"only_main_content": data_source_info['only_main_content']
},
document_model=document.doc_form
)
extract_settings.append(extract_setting)
else:
raise ValueError('Data source type not supported')
@@ -626,8 +646,8 @@ class DocumentProcessingApi(DocumentResource):
document_id = str(document_id)
document = self.get_document(dataset_id, document_id)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if action == "pause":
@@ -690,8 +710,8 @@ class DocumentMetadataApi(DocumentResource):
doc_type = req_data.get('doc_type')
doc_metadata = req_data.get('doc_metadata')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if doc_type is None or doc_metadata is None:
@@ -737,8 +757,8 @@ class DocumentStatusApi(DocumentResource):
document = self.get_document(dataset_id, document_id)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
indexing_cache_key = 'document_{}_indexing'.format(document.id)
@@ -924,6 +944,55 @@ class DocumentRetryApi(DocumentResource):
return {'result': 'success'}, 204
class DocumentRenameApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@marshal_with(document_fields)
def post(self, dataset_id, document_id):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('name', type=str, required=True, nullable=False, location='json')
args = parser.parse_args()
try:
document = DocumentService.rename_document(dataset_id, document_id, args['name'])
except services.errors.document.DocumentIndexingError:
raise DocumentIndexingError('Cannot delete document during indexing.')
return document
class WebsiteDocumentSyncApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
def get(self, dataset_id, document_id):
"""sync website document."""
dataset_id = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id)
if not dataset:
raise NotFound('Dataset not found.')
document_id = str(document_id)
document = DocumentService.get_document(dataset.id, document_id)
if not document:
raise NotFound('Document not found.')
if document.tenant_id != current_user.current_tenant_id:
raise Forbidden('No permission.')
if document.data_source_type != 'website_crawl':
raise ValueError('Document is not a website document.')
# 403 if document is archived
if DocumentService.check_archived(document):
raise ArchivedDocumentImmutableError()
# sync document
DocumentService.sync_website_document(dataset_id, document)
return {'result': 'success'}, 200
api.add_resource(GetProcessRuleApi, '/datasets/process-rule')
api.add_resource(DatasetDocumentListApi,
'/datasets/<uuid:dataset_id>/documents')
@@ -950,3 +1019,7 @@ api.add_resource(DocumentStatusApi,
api.add_resource(DocumentPauseApi, '/datasets/<uuid:dataset_id>/documents/<uuid:document_id>/processing/pause')
api.add_resource(DocumentRecoverApi, '/datasets/<uuid:dataset_id>/documents/<uuid:document_id>/processing/resume')
api.add_resource(DocumentRetryApi, '/datasets/<uuid:dataset_id>/retry')
api.add_resource(DocumentRenameApi,
'/datasets/<uuid:dataset_id>/documents/<uuid:document_id>/rename')
api.add_resource(WebsiteDocumentSyncApi, '/datasets/<uuid:dataset_id>/documents/<uuid:document_id>/website-sync')

View File

@@ -126,8 +126,8 @@ class DatasetDocumentSegmentApi(Resource):
raise NotFound('Dataset not found.')
# check user's model setting
DatasetService.check_dataset_model_setting(dataset)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
try:
@@ -302,8 +302,8 @@ class DatasetDocumentSegmentUpdateApi(Resource):
).first()
if not segment:
raise NotFound('Segment not found.')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
try:
DatasetService.check_dataset_permission(dataset, current_user)

View File

@@ -71,3 +71,15 @@ class InvalidMetadataError(BaseHTTPException):
error_code = 'invalid_metadata'
description = "The metadata content is incorrect. Please check and verify."
code = 400
class WebsiteCrawlError(BaseHTTPException):
error_code = 'crawl_failed'
description = "{message}"
code = 500
class DatasetInUseError(BaseHTTPException):
error_code = 'dataset_in_use'
description = "The dataset is being used by some apps. Please remove the dataset from the apps before deleting it."
code = 409

View File

@@ -0,0 +1,49 @@
from flask_restful import Resource, reqparse
from controllers.console import api
from controllers.console.datasets.error import WebsiteCrawlError
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from libs.login import login_required
from services.website_service import WebsiteService
class WebsiteCrawlApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument('provider', type=str, choices=['firecrawl'],
required=True, nullable=True, location='json')
parser.add_argument('url', type=str, required=True, nullable=True, location='json')
parser.add_argument('options', type=dict, required=True, nullable=True, location='json')
args = parser.parse_args()
WebsiteService.document_create_args_validate(args)
# crawl url
try:
result = WebsiteService.crawl_url(args)
except Exception as e:
raise WebsiteCrawlError(str(e))
return result, 200
class WebsiteCrawlStatusApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, job_id: str):
parser = reqparse.RequestParser()
parser.add_argument('provider', type=str, choices=['firecrawl'], required=True, location='args')
args = parser.parse_args()
# get crawl status
try:
result = WebsiteService.get_crawl_status(job_id, args['provider'])
except Exception as e:
raise WebsiteCrawlError(str(e))
return result, 200
api.add_resource(WebsiteCrawlApi, '/website/crawl')
api.add_resource(WebsiteCrawlStatusApi, '/website/crawl/status/<string:job_id>')
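A sketch of driving the two new endpoints with `requests`; the base URL, auth header, options dict, and the `job_id` field in the response are assumptions:
```python
import requests

BASE = "http://localhost:5001/console/api"             # assumed console prefix
HEADERS = {"Authorization": "Bearer <session-token>"}  # placeholder auth

# Start a crawl; field names match the parser above, options are illustrative.
job = requests.post(f"{BASE}/website/crawl", headers=HEADERS, json={
    "provider": "firecrawl",
    "url": "https://example.com",
    "options": {"limit": 10, "only_main_content": True},
}).json()

# Poll the job (the 'job_id' key in the response is an assumption).
status = requests.get(
    f"{BASE}/website/crawl/status/{job['job_id']}",
    headers=HEADERS,
    params={"provider": "firecrawl"},
).json()
print(status)
```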

View File

@@ -1,22 +1,27 @@
from flask_login import current_user
from flask_restful import Resource
from libs.login import login_required
from services.feature_service import FeatureService
from . import api
from .wraps import cloud_utm_record
from .setup import setup_required
from .wraps import account_initialization_required, cloud_utm_record
class FeatureApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_utm_record
def get(self):
return FeatureService.get_features(current_user.current_tenant_id).dict()
return FeatureService.get_features(current_user.current_tenant_id).model_dump()
class SystemFeatureApi(Resource):
def get(self):
return FeatureService.get_system_features().dict()
return FeatureService.get_system_features().model_dump()
api.add_resource(FeatureApi, '/features')
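The `.dict()` to `.model_dump()` switch tracks pydantic v2, where `.dict()` is deprecated in favor of `.model_dump()`. A tiny self-contained example of the renamed method (field names invented for illustration):
```python
from pydantic import BaseModel

class Features(BaseModel):  # invented fields, not Dify's FeatureService model
    billing_enabled: bool = False
    can_replace_logo: bool = False

f = Features()
# pydantic v1: f.dict()  ->  pydantic v2: f.model_dump()
print(f.model_dump())  # {'billing_enabled': False, 'can_replace_logo': False}
```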

View File

@@ -35,8 +35,8 @@ class TagListApi(Resource):
@login_required
@account_initialization_required
def post(self):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -67,8 +67,8 @@ class TagUpdateDeleteApi(Resource):
@account_initialization_required
def patch(self, tag_id):
tag_id = str(tag_id)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -94,8 +94,8 @@ class TagUpdateDeleteApi(Resource):
@account_initialization_required
def delete(self, tag_id):
tag_id = str(tag_id)
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
TagService.delete_tag(tag_id)
@@ -109,8 +109,8 @@ class TagBindingCreateApi(Resource):
@login_required
@account_initialization_required
def post(self):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
@@ -134,8 +134,8 @@ class TagBindingDeleteApi(Resource):
@login_required
@account_initialization_required
def post(self):
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()

View File

@@ -17,13 +17,19 @@ class VersionApi(Resource):
args = parser.parse_args()
check_update_url = current_app.config['CHECK_UPDATE_URL']
if not check_update_url:
return {
'version': '0.0.0',
'release_date': '',
'release_notes': '',
'can_auto_update': False
result = {
'version': current_app.config['CURRENT_VERSION'],
'release_date': '',
'release_notes': '',
'can_auto_update': False,
'features': {
'can_replace_logo': current_app.config['CAN_REPLACE_LOGO'],
'model_load_balancing_enabled': current_app.config['MODEL_LB_ENABLED']
}
}
if not check_update_url:
return result
try:
response = requests.get(check_update_url, {
@@ -31,20 +37,15 @@ class VersionApi(Resource):
})
except Exception as error:
logging.warning("Check update version error: {}.".format(str(error)))
return {
'version': args.get('current_version'),
'release_date': '',
'release_notes': '',
'can_auto_update': False
}
result['version'] = args.get('current_version')
return result
content = json.loads(response.content)
return {
'version': content['version'],
'release_date': content['releaseDate'],
'release_notes': content['releaseNotes'],
'can_auto_update': content['canAutoUpdate']
}
result['version'] = content['version']
result['release_date'] = content['releaseDate']
result['release_notes'] = content['releaseNotes']
result['can_auto_update'] = content['canAutoUpdate']
return result
api.add_resource(VersionApi, '/version')

View File

@@ -0,0 +1,106 @@
from flask_restful import Resource, reqparse
from werkzeug.exceptions import Forbidden
from controllers.console import api
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from libs.login import current_user, login_required
from models.account import TenantAccountRole
from services.model_load_balancing_service import ModelLoadBalancingService
class LoadBalancingCredentialsValidateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self, provider: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
raise Forbidden()
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('model', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_type', type=str, required=True, nullable=False,
choices=[mt.value for mt in ModelType], location='json')
parser.add_argument('credentials', type=dict, required=True, nullable=False, location='json')
args = parser.parse_args()
# validate model load balancing credentials
model_load_balancing_service = ModelLoadBalancingService()
result = True
error = None
try:
model_load_balancing_service.validate_load_balancing_credentials(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type'],
credentials=args['credentials']
)
except CredentialsValidateFailedError as ex:
result = False
error = str(ex)
response = {'result': 'success' if result else 'error'}
if not result:
response['error'] = error
return response
class LoadBalancingConfigCredentialsValidateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self, provider: str, config_id: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
raise Forbidden()
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('model', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_type', type=str, required=True, nullable=False,
choices=[mt.value for mt in ModelType], location='json')
parser.add_argument('credentials', type=dict, required=True, nullable=False, location='json')
args = parser.parse_args()
# validate model load balancing config credentials
model_load_balancing_service = ModelLoadBalancingService()
result = True
error = None
try:
model_load_balancing_service.validate_load_balancing_credentials(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type'],
credentials=args['credentials'],
config_id=config_id,
)
except CredentialsValidateFailedError as ex:
result = False
error = str(ex)
response = {'result': 'success' if result else 'error'}
if not result:
response['error'] = error
return response
# Load Balancing Config
api.add_resource(LoadBalancingCredentialsValidateApi,
'/workspaces/current/model-providers/<string:provider>/models/load-balancing-configs/credentials-validate')
api.add_resource(LoadBalancingConfigCredentialsValidateApi,
'/workspaces/current/model-providers/<string:provider>/models/load-balancing-configs/<string:config_id>/credentials-validate')

View File

@@ -43,7 +43,7 @@ class MemberInviteEmailApi(Resource):
invitee_emails = args['emails']
invitee_role = args['role']
interface_language = args['language']
if invitee_role not in [TenantAccountRole.ADMIN, TenantAccountRole.NORMAL]:
if not TenantAccountRole.is_non_owner_role(invitee_role):
return {'code': 'invalid-role', 'message': 'Invalid role'}, 400
inviter = current_user
@@ -114,7 +114,7 @@ class MemberUpdateRoleApi(Resource):
args = parser.parse_args()
new_role = args['role']
if new_role not in ['admin', 'normal', 'owner']:
if not TenantAccountRole.is_valid_role(new_role):
return {'code': 'invalid-role', 'message': 'Invalid role'}, 400
member = Account.query.get(str(member_id))

View File

@@ -11,7 +11,7 @@ from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.login import login_required
from models.account import TenantAccountRole
from services.model_load_balancing_service import ModelLoadBalancingService
from services.model_provider_service import ModelProviderService
@@ -42,6 +42,9 @@ class DefaultModelApi(Resource):
@login_required
@account_initialization_required
def post(self):
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('model_settings', type=list, required=True, nullable=False, location='json')
args = parser.parse_args()
@@ -95,7 +98,7 @@ class ModelProviderModelApi(Resource):
@login_required
@account_initialization_required
def post(self, provider: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
if not current_user.is_admin_or_owner:
raise Forbidden()
tenant_id = current_user.current_tenant_id
@@ -104,21 +107,56 @@ class ModelProviderModelApi(Resource):
parser.add_argument('model', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_type', type=str, required=True, nullable=False,
choices=[mt.value for mt in ModelType], location='json')
parser.add_argument('credentials', type=dict, required=True, nullable=False, location='json')
parser.add_argument('credentials', type=dict, required=False, nullable=True, location='json')
parser.add_argument('load_balancing', type=dict, required=False, nullable=True, location='json')
parser.add_argument('config_from', type=str, required=False, nullable=True, location='json')
args = parser.parse_args()
model_provider_service = ModelProviderService()
model_load_balancing_service = ModelLoadBalancingService()
try:
model_provider_service.save_model_credentials(
if ('load_balancing' in args and args['load_balancing'] and
'enabled' in args['load_balancing'] and args['load_balancing']['enabled']):
if 'configs' not in args['load_balancing']:
raise ValueError('invalid load balancing configs')
# save load balancing configs
model_load_balancing_service.update_load_balancing_configs(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type'],
credentials=args['credentials']
configs=args['load_balancing']['configs']
)
except CredentialsValidateFailedError as ex:
raise ValueError(str(ex))
# enable load balancing
model_load_balancing_service.enable_model_load_balancing(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type']
)
else:
# disable load balancing
model_load_balancing_service.disable_model_load_balancing(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type']
)
if args.get('config_from', '') != 'predefined-model':
model_provider_service = ModelProviderService()
try:
model_provider_service.save_model_credentials(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type'],
credentials=args['credentials']
)
except CredentialsValidateFailedError as ex:
raise ValueError(str(ex))
return {'result': 'success'}, 200
@@ -126,7 +164,7 @@ class ModelProviderModelApi(Resource):
@login_required
@account_initialization_required
def delete(self, provider: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
if not current_user.is_admin_or_owner:
raise Forbidden()
tenant_id = current_user.current_tenant_id
@@ -170,11 +208,73 @@ class ModelProviderModelCredentialApi(Resource):
model=args['model']
)
model_load_balancing_service = ModelLoadBalancingService()
is_load_balancing_enabled, load_balancing_configs = model_load_balancing_service.get_load_balancing_configs(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type']
)
return {
"credentials": credentials
"credentials": credentials,
"load_balancing": {
"enabled": is_load_balancing_enabled,
"configs": load_balancing_configs
}
}
class ModelProviderModelEnableApi(Resource):
@setup_required
@login_required
@account_initialization_required
def patch(self, provider: str):
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('model', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_type', type=str, required=True, nullable=False,
choices=[mt.value for mt in ModelType], location='json')
args = parser.parse_args()
model_provider_service = ModelProviderService()
model_provider_service.enable_model(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type']
)
return {'result': 'success'}
class ModelProviderModelDisableApi(Resource):
@setup_required
@login_required
@account_initialization_required
def patch(self, provider: str):
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument('model', type=str, required=True, nullable=False, location='json')
parser.add_argument('model_type', type=str, required=True, nullable=False,
choices=[mt.value for mt in ModelType], location='json')
args = parser.parse_args()
model_provider_service = ModelProviderService()
model_provider_service.disable_model(
tenant_id=tenant_id,
provider=provider,
model=args['model'],
model_type=args['model_type']
)
return {'result': 'success'}
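The two new PATCH resources toggle a single model on or off for the whole workspace, backed by the ProviderModelSetting rows shown later in this diff. A hedged usage sketch, again assuming the `/console/api` prefix:

    import requests

    base = "http://localhost:5001/console/api/workspaces/current/model-providers/openai/models"
    body = {"model": "gpt-4", "model_type": "llm"}
    auth = {"Authorization": "Bearer <console-token>"}          # placeholder
    requests.patch(f"{base}/disable", json=body, headers=auth)  # hide the model workspace-wide
    requests.patch(f"{base}/enable", json=body, headers=auth)   # bring it back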
class ModelProviderModelValidateApi(Resource):
@setup_required
@@ -259,6 +359,10 @@ class ModelProviderAvailableModelApi(Resource):
api.add_resource(ModelProviderModelApi, '/workspaces/current/model-providers/<string:provider>/models')
api.add_resource(ModelProviderModelEnableApi, '/workspaces/current/model-providers/<string:provider>/models/enable',
endpoint='model-provider-model-enable')
api.add_resource(ModelProviderModelDisableApi, '/workspaces/current/model-providers/<string:provider>/models/disable',
endpoint='model-provider-model-disable')
api.add_resource(ModelProviderModelCredentialApi,
'/workspaces/current/model-providers/<string:provider>/models/credentials')
api.add_resource(ModelProviderModelValidateApi,

View File

@@ -1,9 +1,10 @@
from flask import request
from flask_restful import marshal, reqparse
from werkzeug.exceptions import NotFound
import services.dataset_service
from controllers.service_api import api
from controllers.service_api.dataset.error import DatasetNameDuplicateError
from controllers.service_api.dataset.error import DatasetInUseError, DatasetNameDuplicateError
from controllers.service_api.wraps import DatasetApiResource
from core.model_runtime.entities.model_entities import ModelType
from core.provider_manager import ProviderManager
@@ -19,10 +20,12 @@ def _validate_name(name):
return name
class DatasetApi(DatasetApiResource):
"""Resource for get datasets."""
class DatasetListApi(DatasetApiResource):
"""Resource for datasets."""
def get(self, tenant_id):
"""Resource for getting datasets."""
page = request.args.get('page', default=1, type=int)
limit = request.args.get('limit', default=20, type=int)
provider = request.args.get('provider', default="vendor")
@@ -65,9 +68,9 @@ class DatasetApi(DatasetApiResource):
}
return response, 200
"""Resource for datasets."""
def post(self, tenant_id):
"""Resource for creating datasets."""
parser = reqparse.RequestParser()
parser.add_argument('name', nullable=False, required=True,
help='type is required. Name must be between 1 to 40 characters.',
@@ -89,6 +92,34 @@ class DatasetApi(DatasetApiResource):
return marshal(dataset, dataset_detail_fields), 200
class DatasetApi(DatasetApiResource):
"""Resource for dataset."""
api.add_resource(DatasetApi, '/datasets')
def delete(self, _, dataset_id):
"""
Deletes a dataset given its ID.
Args:
dataset_id (UUID): The ID of the dataset to be deleted.
Returns:
dict: A dictionary with a key 'result' and a value 'success'
if the dataset was successfully deleted. Omitted in HTTP response.
int: HTTP status code 204 indicating that the operation was successful.
Raises:
NotFound: If the dataset with the given ID does not exist.
"""
dataset_id_str = str(dataset_id)
try:
if DatasetService.delete_dataset(dataset_id_str, current_user):
return {'result': 'success'}, 204
else:
raise NotFound("Dataset not found.")
except services.errors.dataset.DatasetInUseError:
raise DatasetInUseError()
api.add_resource(DatasetListApi, '/datasets')
api.add_resource(DatasetApi, '/datasets/<uuid:dataset_id>')
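DELETE on a single dataset now returns 204 on success and the new 409 dataset_in_use error when apps still reference it. A hedged client sketch against the service API (base URL and error-body shape assumed):

    import requests

    resp = requests.delete(
        "https://api.dify.ai/v1/datasets/<dataset-id>",           # placeholder id
        headers={"Authorization": "Bearer <dataset-api-key>"},    # placeholder key
    )
    if resp.status_code == 204:
        print("deleted")          # 204 carries no body
    elif resp.status_code == 409:
        print(resp.json())        # e.g. code 'dataset_in_use' per the error class below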

View File

@@ -71,3 +71,9 @@ class InvalidMetadataError(BaseHTTPException):
error_code = 'invalid_metadata'
description = "The metadata content is incorrect. Please check and verify."
code = 400
class DatasetInUseError(BaseHTTPException):
error_code = 'dataset_in_use'
description = "The dataset is being used by some apps. Please remove the dataset from the apps before deleting it."
code = 409

View File

@@ -8,7 +8,7 @@ from flask import current_app, request
from flask_login import user_logged_in
from flask_restful import Resource
from pydantic import BaseModel
from werkzeug.exceptions import Forbidden, NotFound, Unauthorized
from werkzeug.exceptions import Forbidden, Unauthorized
from extensions.ext_database import db
from libs.login import _get_user
@@ -39,17 +39,17 @@ def validate_app_token(view: Optional[Callable] = None, *, fetch_user_arg: Optio
app_model = db.session.query(App).filter(App.id == api_token.app_id).first()
if not app_model:
raise NotFound()
raise Forbidden("The app no longer exists.")
if app_model.status != 'normal':
raise NotFound()
raise Forbidden("The app's status is abnormal.")
if not app_model.enable_api:
raise NotFound()
raise Forbidden("The app's API service has been disabled.")
tenant = db.session.query(Tenant).filter(Tenant.id == app_model.tenant_id).first()
if tenant.status == TenantStatus.ARCHIVE:
raise NotFound()
raise Forbidden("The workspace's status is archived.")
kwargs['app_model'] = app_model
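Returning a bare 404 for a deleted, disabled, or archived app hid the reason from API callers; the new 403s each carry an explanatory message. A hedged sketch of client-side handling (the error-body field name is an assumption):

    import requests

    resp = requests.get(
        "https://api.dify.ai/v1/parameters",                      # any app-token endpoint
        headers={"Authorization": "Bearer <app-api-key>"},        # placeholder
    )
    if resp.status_code == 403:
        # e.g. "The app's API service has been disabled."
        raise RuntimeError(resp.json().get("message", "forbidden"))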

View File

@@ -74,7 +74,7 @@ class TextApi(WebApiResource):
app_model=app_model,
text=request.form['text'],
end_user=end_user.external_user_id,
voice=request.form['voice'] if request.form.get('voice') else app_model.app_model_config.text_to_speech_dict.get('voice'),
voice=request.form['voice'] if request.form.get('voice') else None,
streaming=False
)

View File

@@ -6,7 +6,7 @@ from services.feature_service import FeatureService
class SystemFeatureApi(Resource):
def get(self):
return FeatureService.get_system_features().dict()
return FeatureService.get_system_features().model_dump()
api.add_resource(SystemFeatureApi, '/system-features')
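This one-line change is part of the Pydantic v2 migration that recurs throughout this diff: `.dict()` is deprecated in v2 in favour of `.model_dump()`. A minimal, self-contained sketch (the model is a hypothetical stand-in):

    from pydantic import BaseModel

    class SystemFeatures(BaseModel):
        sso_enforced: bool = False      # hypothetical field

    features = SystemFeatures()
    features.model_dump()               # v2 API -> {'sso_enforced': False}
    # features.dict() still works in v2 but emits a deprecation warning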

View File

@@ -39,6 +39,7 @@ from core.tools.entities.tool_entities import (
from core.tools.tool.dataset_retriever_tool import DatasetRetrieverTool
from core.tools.tool.tool import Tool
from core.tools.tool_manager import ToolManager
from core.tools.utils.tool_parameter_converter import ToolParameterConverter
from extensions.ext_database import db
from models.model import Conversation, Message, MessageAgentThought
from models.tools import ToolConversationVariables
@@ -128,6 +129,8 @@ class BaseAgentRunner(AppRunner):
self.files = application_generate_entity.files
else:
self.files = []
self.query = None
self._current_thoughts: list[PromptMessage] = []
def _repack_app_generate_entity(self, app_generate_entity: AgentChatAppGenerateEntity) \
-> AgentChatAppGenerateEntity:
@@ -184,21 +187,11 @@ class BaseAgentRunner(AppRunner):
if parameter.form != ToolParameter.ToolParameterForm.LLM:
continue
parameter_type = 'string'
parameter_type = ToolParameterConverter.get_parameter_type(parameter.type)
enum = []
if parameter.type == ToolParameter.ToolParameterType.STRING:
parameter_type = 'string'
elif parameter.type == ToolParameter.ToolParameterType.BOOLEAN:
parameter_type = 'boolean'
elif parameter.type == ToolParameter.ToolParameterType.NUMBER:
parameter_type = 'number'
elif parameter.type == ToolParameter.ToolParameterType.SELECT:
for option in parameter.options:
enum.append(option.value)
parameter_type = 'string'
else:
raise ValueError(f"parameter type {parameter.type} is not supported")
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
message_tool.parameters['properties'][parameter.name] = {
"type": parameter_type,
"description": parameter.llm_description or '',
@@ -279,20 +272,10 @@ class BaseAgentRunner(AppRunner):
if parameter.form != ToolParameter.ToolParameterForm.LLM:
continue
parameter_type = 'string'
parameter_type = ToolParameterConverter.get_parameter_type(parameter.type)
enum = []
if parameter.type == ToolParameter.ToolParameterType.STRING:
parameter_type = 'string'
elif parameter.type == ToolParameter.ToolParameterType.BOOLEAN:
parameter_type = 'boolean'
elif parameter.type == ToolParameter.ToolParameterType.NUMBER:
parameter_type = 'number'
elif parameter.type == ToolParameter.ToolParameterType.SELECT:
for option in parameter.options:
enum.append(option.value)
parameter_type = 'string'
else:
raise ValueError(f"parameter type {parameter.type} is not supported")
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
prompt_tool.parameters['properties'][parameter.name] = {
"type": parameter_type,
@@ -464,7 +447,7 @@ class BaseAgentRunner(AppRunner):
for message in messages:
if message.id == self.message.id:
continue
result.append(self.organize_agent_user_prompt(message))
agent_thoughts: list[MessageAgentThought] = message.agent_thoughts
if agent_thoughts:
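Both if/elif ladders removed above collapse into ToolParameterConverter.get_parameter_type; only SELECT still needs local handling to collect its enum values. The converter's body is not part of this diff, so the following is only a sketch of the mapping it replaces:

    # Hedged sketch -- not the actual ToolParameterConverter implementation.
    from core.tools.entities.tool_entities import ToolParameter

    _TYPE_MAP = {
        ToolParameter.ToolParameterType.STRING: 'string',
        ToolParameter.ToolParameterType.BOOLEAN: 'boolean',
        ToolParameter.ToolParameterType.NUMBER: 'number',
        ToolParameter.ToolParameterType.SELECT: 'string',   # options become an enum list
    }

    def get_parameter_type(parameter_type) -> str:
        try:
            return _TYPE_MAP[parameter_type]
        except KeyError:
            raise ValueError(f"parameter type {parameter_type} is not supported")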

View File

@@ -15,6 +15,7 @@ from core.model_runtime.entities.message_entities import (
ToolPromptMessage,
UserPromptMessage,
)
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool.tool import Tool
from core.tools.tool_engine import ToolEngine
@@ -42,9 +43,9 @@ class CotAgentRunner(BaseAgentRunner, ABC):
self._init_react_state(query)
# check model mode
if 'Observation' not in app_generate_entity.model_config.stop:
if app_generate_entity.model_config.provider not in self._ignore_observation_providers:
app_generate_entity.model_config.stop.append('Observation')
if 'Observation' not in app_generate_entity.model_conf.stop:
if app_generate_entity.model_conf.provider not in self._ignore_observation_providers:
app_generate_entity.model_conf.stop.append('Observation')
app_config = self.app_config
@@ -108,9 +109,9 @@ class CotAgentRunner(BaseAgentRunner, ABC):
# invoke model
chunks: Generator[LLMResultChunk, None, None] = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_config.parameters,
model_parameters=app_generate_entity.model_conf.parameters,
tools=[],
stop=app_generate_entity.model_config.stop,
stop=app_generate_entity.model_conf.stop,
stream=True,
user=self.user_id,
callbacks=[],
@@ -140,8 +141,8 @@ class CotAgentRunner(BaseAgentRunner, ABC):
if isinstance(chunk, AgentScratchpadUnit.Action):
action = chunk
# detect action
scratchpad.agent_response += json.dumps(chunk.dict())
scratchpad.action_str = json.dumps(chunk.dict())
scratchpad.agent_response += json.dumps(chunk.model_dump())
scratchpad.action_str = json.dumps(chunk.model_dump())
scratchpad.action = action
else:
scratchpad.agent_response += chunk
@@ -373,7 +374,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
return message
def _organize_historic_prompt_messages(self) -> list[PromptMessage]:
def _organize_historic_prompt_messages(self, current_session_messages: list[PromptMessage] = None) -> list[PromptMessage]:
"""
organize historic prompt messages
"""
@@ -381,6 +382,13 @@ class CotAgentRunner(BaseAgentRunner, ABC):
scratchpad: list[AgentScratchpadUnit] = []
current_scratchpad: AgentScratchpadUnit = None
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=current_session_messages or [],
history_messages=self.history_prompt_messages,
memory=self.memory
).get_prompt()
for message in self.history_prompt_messages:
if isinstance(message, AssistantPromptMessage):
current_scratchpad = AgentScratchpadUnit(

View File

@@ -32,9 +32,6 @@ class CotChatAgentRunner(CotAgentRunner):
# organize system prompt
system_message = self._organize_system_prompt()
# organize historic prompt messages
historic_messages = self._historic_prompt_messages
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
if not agent_scratchpad:
@@ -57,6 +54,13 @@ class CotChatAgentRunner(CotAgentRunner):
query_messages = UserPromptMessage(content=self._query)
if assistant_messages:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages([
system_message,
query_messages,
*assistant_messages,
UserPromptMessage(content='continue')
])
messages = [
system_message,
*historic_messages,
@@ -65,6 +69,8 @@ class CotChatAgentRunner(CotAgentRunner):
UserPromptMessage(content='continue')
]
else:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages([system_message, query_messages])
messages = [system_message, *historic_messages, query_messages]
# join all messages

View File

@@ -19,11 +19,11 @@ class CotCompletionAgentRunner(CotAgentRunner):
return system_prompt
def _organize_historic_prompt(self) -> str:
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] = None) -> str:
"""
Organize historic prompt
"""
historic_prompt_messages = self._historic_prompt_messages
historic_prompt_messages = self._organize_historic_prompt_messages(current_session_messages)
historic_prompt = ""
for message in historic_prompt_messages:

View File

@@ -17,6 +17,7 @@ from core.model_runtime.entities.message_entities import (
ToolPromptMessage,
UserPromptMessage,
)
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from models.model import Message
@@ -24,21 +25,18 @@ from models.model import Message
logger = logging.getLogger(__name__)
class FunctionCallAgentRunner(BaseAgentRunner):
def run(self,
message: Message, query: str, **kwargs: Any
) -> Generator[LLMResultChunk, None, None]:
"""
Run FunctionCall agent application
"""
self.query = query
app_generate_entity = self.application_generate_entity
app_config = self.app_config
prompt_template = app_config.prompt_template.simple_prompt_template or ''
prompt_messages = self.history_prompt_messages
prompt_messages = self._init_system_message(prompt_template, prompt_messages)
prompt_messages = self._organize_user_query(query, prompt_messages)
# convert tools into ModelRuntime Tool format
tool_instances, prompt_messages_tools = self._init_prompt_tools()
@@ -81,13 +79,14 @@ class FunctionCallAgentRunner(BaseAgentRunner):
)
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_config.parameters,
model_parameters=app_generate_entity.model_conf.parameters,
tools=prompt_messages_tools,
stop=app_generate_entity.model_config.stop,
stop=app_generate_entity.model_conf.stop,
stream=self.stream_tool_call,
user=self.user_id,
callbacks=[],
@@ -203,7 +202,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
else:
assistant_message.content = response
prompt_messages.append(assistant_message)
self._current_thoughts.append(assistant_message)
# save thought
self.save_agent_thought(
@@ -265,12 +264,14 @@ class FunctionCallAgentRunner(BaseAgentRunner):
}
tool_responses.append(tool_response)
prompt_messages = self._organize_assistant_message(
tool_call_id=tool_call_id,
tool_call_name=tool_call_name,
tool_response=tool_response['tool_response'],
prompt_messages=prompt_messages,
)
if tool_response['tool_response'] is not None:
self._current_thoughts.append(
ToolPromptMessage(
content=tool_response['tool_response'],
tool_call_id=tool_call_id,
name=tool_call_name,
)
)
if len(tool_responses) > 0:
# save agent thought
@@ -300,8 +301,6 @@ class FunctionCallAgentRunner(BaseAgentRunner):
iteration_step += 1
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
self.update_db_variables(self.variables_pool, self.db_variables_pool)
# publish end event
self.queue_manager.publish(QueueMessageEndEvent(llm_result=LLMResult(
@@ -393,24 +392,6 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return prompt_messages
def _organize_assistant_message(self, tool_call_id: str = None, tool_call_name: str = None, tool_response: str = None,
prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
"""
Organize assistant message
"""
prompt_messages = deepcopy(prompt_messages)
if tool_response is not None:
prompt_messages.append(
ToolPromptMessage(
content=tool_response,
tool_call_id=tool_call_id,
name=tool_call_name,
)
)
return prompt_messages
def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
For now, GPT supports both function calling (fc) and vision in the first iteration.
@@ -428,4 +409,26 @@ class FunctionCallAgentRunner(BaseAgentRunner):
for content in prompt_message.content
])
return prompt_messages
return prompt_messages
def _organize_prompt_messages(self):
prompt_template = self.app_config.prompt_template.simple_prompt_template or ''
self.history_prompt_messages = self._init_system_message(prompt_template, self.history_prompt_messages)
query_prompt_messages = self._organize_user_query(self.query, [])
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=[*query_prompt_messages, *self._current_thoughts],
history_messages=self.history_prompt_messages,
memory=self.memory
).get_prompt()
prompt_messages = [
*self.history_prompt_messages,
*query_prompt_messages,
*self._current_thoughts
]
if len(self._current_thoughts) != 0:
# clear messages after the first iteration
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
return prompt_messages
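Rather than appending to one ever-growing prompt_messages list, the runner now keeps the current turn's assistant and tool messages in self._current_thoughts and rebuilds the prompt on every iteration. AgentHistoryPromptTransform (new in this change set; its internals are not shown here) trims the stored history so that history, query, and current thoughts fit the model's context window. The accumulation pattern, simplified with illustrative contents:

    from core.model_runtime.entities.message_entities import AssistantPromptMessage, ToolPromptMessage

    # Hedged sketch of the pattern above; message contents are illustrative.
    current_thoughts = [AssistantPromptMessage(content='calling weather tool...')]
    current_thoughts.append(
        ToolPromptMessage(content='{"temp": 21}', tool_call_id='call-1', name='weather')
    )
    # each iteration the prompt is rebuilt as:
    # [*trimmed_history, *query_messages, *current_thoughts]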

View File

@@ -107,8 +107,8 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
usage=LLMUsage.empty_usage()
)
self._stream_generate_routes = self._get_stream_generate_routes()
self._iteration_nested_relations = self._get_iteration_nested_relations(self._workflow.graph_dict)
self._stream_generate_routes = self._get_stream_generate_routes()
self._conversation_name_generate_thread = None
def process(self) -> Union[ChatbotAppBlockingResponse, Generator[ChatbotAppStreamResponse, None, None]]:
@@ -410,6 +410,18 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
ingoing_edges.append(edge)
if not ingoing_edges:
# check if it's the first node in the iteration
target_node = next((node for node in nodes if node.get('id') == target_node_id), None)
if not target_node:
return []
node_iteration_id = target_node.get('data', {}).get('iteration_id')
# get iteration start node id
for node in nodes:
if node.get('id') == node_iteration_id:
if node.get('data', {}).get('start_node_id') == target_node_id:
return [target_node_id]
return []
start_node_ids = []
@@ -514,6 +526,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._task_state.answer += route_chunk.text
yield self._message_to_stream_response(route_chunk.text, self._message.id)
else:
value = None
route_chunk = cast(VarGenerateRouteChunk, route_chunk)
value_selector = route_chunk.value_selector
if not value_selector:
@@ -525,6 +538,20 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
if route_chunk_node_id == 'sys':
# system variable
value = self._workflow_system_variables.get(SystemVariable.value_of(value_selector[1]))
elif route_chunk_node_id in self._iteration_nested_relations:
# it's an iteration variable
if not self._iteration_state or route_chunk_node_id not in self._iteration_state.current_iterations:
continue
iteration_state = self._iteration_state.current_iterations[route_chunk_node_id]
iterator = iteration_state.inputs
if not iterator:
continue
iterator_selector = iterator.get('iterator_selector', [])
if value_selector[1] == 'index':
value = iteration_state.current_index
elif value_selector[1] == 'item':
value = iterator_selector[iteration_state.current_index] if iteration_state.current_index < len(
iterator_selector) else None
else:
# check chunk node id is before current node id or equal to current node id
if route_chunk_node_id not in self._task_state.ran_node_execution_infos:
@@ -554,7 +581,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
else:
value = value.get(key)
if value:
if value is not None:
text = ''
if isinstance(value, str | int | float):
text = str(value)
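The guard change from `if value:` to `if value is not None:` matters because legitimate node outputs such as 0, 0.0, False, or '' are falsy and were silently dropped from the stream. In isolation:

    # Hedged illustration of the bug this change fixes.
    for value in (0, False, '', None):
        if value:                # old check: skips all four values
            pass
        if value is not None:    # new check: streams 0, False and '' -- only None is skipped
            text = str(value)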

View File

@@ -107,7 +107,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
application_generate_entity = AgentChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
model_config=ModelConfigConverter.convert(app_config),
model_conf=ModelConfigConverter.convert(app_config),
conversation_id=conversation.id if conversation else None,
inputs=conversation.inputs if conversation else self._get_cleaned_inputs(inputs, app_config),
query=query,

View File

@@ -58,7 +58,7 @@ class AgentChatAppRunner(AppRunner):
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -69,8 +69,8 @@ class AgentChatAppRunner(AppRunner):
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_config.provider_model_bundle,
model=application_generate_entity.model_config.model
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,
model=application_generate_entity.model_conf.model
)
memory = TokenBufferMemory(
@@ -83,7 +83,7 @@ class AgentChatAppRunner(AppRunner):
# memory(optional)
prompt_messages, _ = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -152,7 +152,7 @@ class AgentChatAppRunner(AppRunner):
# memory(optional), external data, dataset context(optional)
prompt_messages, _ = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -182,12 +182,12 @@ class AgentChatAppRunner(AppRunner):
# init model instance
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_config.provider_model_bundle,
model=application_generate_entity.model_config.model
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,
model=application_generate_entity.model_conf.model
)
prompt_message, _ = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -225,7 +225,7 @@ class AgentChatAppRunner(AppRunner):
application_generate_entity=application_generate_entity,
conversation=conversation,
app_config=app_config,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
config=agent_entity,
queue_manager=queue_manager,
message=message,

View File

@@ -5,6 +5,7 @@ from collections.abc import Generator
from enum import Enum
from typing import Any
from flask import current_app
from sqlalchemy.orm import DeclarativeMeta
from core.app.entities.app_invoke_entities import InvokeFrom
@@ -46,8 +47,8 @@ class AppQueueManager:
Listen to queue
:return:
"""
# wait for 10 minutes to stop listen
listen_timeout = 600
# stop listening after APP_MAX_EXECUTION_TIME seconds
listen_timeout = current_app.config.get("APP_MAX_EXECUTION_TIME")
start_time = time.time()
last_ping_time = 0
@@ -99,7 +100,7 @@ class AppQueueManager:
:param pub_from:
:return:
"""
self._check_for_sqlalchemy_models(event.dict())
self._check_for_sqlalchemy_models(event.model_dump())
self._publish(event, pub_from)
@abstractmethod
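The hard-coded 10-minute listen window now follows the APP_MAX_EXECUTION_TIME setting, so queue listening and app execution share one configurable budget. Note that `config.get(...)` returns None when the key is unset, so the default presumably ships with the app config; a defensive variant (an assumption, not the repo's code):

    # Hedged sketch: fall back to the old 600s when the key is missing.
    listen_timeout = current_app.config.get("APP_MAX_EXECUTION_TIME") or 600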

View File

@@ -1,6 +1,6 @@
import time
from collections.abc import Generator
from typing import Optional, Union, cast
from typing import Optional, Union
from core.app.app_config.entities import ExternalDataVariableEntity, PromptTemplateEntity
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
@@ -16,11 +16,11 @@ from core.app.features.hosting_moderation.hosting_moderation import HostingModer
from core.external_data_tool.external_data_fetch import ExternalDataFetch
from core.file.file_obj import FileVar
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage
from core.model_runtime.entities.model_entities import ModelPropertyKey
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.moderation.input_moderation import InputModeration
from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig
@@ -45,8 +45,11 @@ class AppRunner:
:param query: query
:return:
"""
model_type_instance = model_config.provider_model_bundle.model_type_instance
model_type_instance = cast(LargeLanguageModel, model_type_instance)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle,
model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
@@ -73,9 +76,7 @@ class AppRunner:
query=query
)
prompt_tokens = model_type_instance.get_num_tokens(
model_config.model,
model_config.credentials,
prompt_tokens = model_instance.get_llm_num_tokens(
prompt_messages
)
@@ -89,8 +90,10 @@ class AppRunner:
def recalc_llm_max_tokens(self, model_config: ModelConfigWithCredentialsEntity,
prompt_messages: list[PromptMessage]):
# recalculate max_tokens if prompt_tokens + max_tokens exceeds the model token limit
model_type_instance = model_config.provider_model_bundle.model_type_instance
model_type_instance = cast(LargeLanguageModel, model_type_instance)
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle,
model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
@@ -107,9 +110,7 @@ class AppRunner:
if max_tokens is None:
max_tokens = 0
prompt_tokens = model_type_instance.get_num_tokens(
model_config.model,
model_config.credentials,
prompt_tokens = model_instance.get_llm_num_tokens(
prompt_messages
)
@@ -217,7 +218,7 @@ class AppRunner:
index = 0
for token in text:
chunk = LLMResultChunk(
model=app_generate_entity.model_config.model,
model=app_generate_entity.model_conf.model,
prompt_messages=prompt_messages,
delta=LLMResultChunkDelta(
index=index,
@@ -236,7 +237,7 @@ class AppRunner:
queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=app_generate_entity.model_config.model,
model=app_generate_entity.model_conf.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=text),
usage=usage if usage else LLMUsage.empty_usage()
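Token counting moves off the low-level `model_type_instance.get_num_tokens(model, credentials, ...)` call, which forced every call site to carry the model name and raw credentials, onto `ModelInstance.get_llm_num_tokens(prompt_messages)`, which already holds both and can route through the load-balanced credentials introduced elsewhere in this diff. The recurring call-site pattern, consolidated:

    # Sketch of the change applied at each call site in this diff.
    model_instance = ModelInstance(
        provider_model_bundle=model_config.provider_model_bundle,
        model=model_config.model,
    )
    prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)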

View File

@@ -104,7 +104,7 @@ class ChatAppGenerator(MessageBasedAppGenerator):
application_generate_entity = ChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
model_config=ModelConfigConverter.convert(app_config),
model_conf=ModelConfigConverter.convert(app_config),
conversation_id=conversation.id if conversation else None,
inputs=conversation.inputs if conversation else self._get_cleaned_inputs(inputs, app_config),
query=query,

View File

@@ -54,7 +54,7 @@ class ChatAppRunner(AppRunner):
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -65,8 +65,8 @@ class ChatAppRunner(AppRunner):
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_config.provider_model_bundle,
model=application_generate_entity.model_config.model
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,
model=application_generate_entity.model_conf.model
)
memory = TokenBufferMemory(
@@ -79,7 +79,7 @@ class ChatAppRunner(AppRunner):
# memory(optional)
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -159,7 +159,7 @@ class ChatAppRunner(AppRunner):
app_id=app_record.id,
user_id=application_generate_entity.user_id,
tenant_id=app_record.tenant_id,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
config=app_config.dataset,
query=query,
invoke_from=application_generate_entity.invoke_from,
@@ -173,7 +173,7 @@ class ChatAppRunner(AppRunner):
# memory(optional), external data, dataset context(optional)
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -194,21 +194,21 @@ class ChatAppRunner(AppRunner):
# Re-calculate max_tokens if prompt_tokens + max_tokens exceeds the model token limit
self.recalc_llm_max_tokens(
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_messages=prompt_messages
)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_config.provider_model_bundle,
model=application_generate_entity.model_config.model
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,
model=application_generate_entity.model_conf.model
)
db.session.close()
invoke_result = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=application_generate_entity.model_config.parameters,
model_parameters=application_generate_entity.model_conf.parameters,
stop=stop,
stream=application_generate_entity.stream,
user=application_generate_entity.user_id,

View File

@@ -98,7 +98,7 @@ class CompletionAppGenerator(MessageBasedAppGenerator):
application_generate_entity = CompletionAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
model_config=ModelConfigConverter.convert(app_config),
model_conf=ModelConfigConverter.convert(app_config),
inputs=self._get_cleaned_inputs(inputs, app_config),
query=query,
files=file_objs,
@@ -257,7 +257,7 @@ class CompletionAppGenerator(MessageBasedAppGenerator):
application_generate_entity = CompletionAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
model_config=ModelConfigConverter.convert(app_config),
model_conf=ModelConfigConverter.convert(app_config),
inputs=message.inputs,
query=message.query,
files=file_objs,

View File

@@ -50,7 +50,7 @@ class CompletionAppRunner(AppRunner):
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -61,7 +61,7 @@ class CompletionAppRunner(AppRunner):
# Include: prompt template, inputs, query(optional), files(optional)
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -119,7 +119,7 @@ class CompletionAppRunner(AppRunner):
app_id=app_record.id,
user_id=application_generate_entity.user_id,
tenant_id=app_record.tenant_id,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
config=dataset_config,
query=query,
invoke_from=application_generate_entity.invoke_from,
@@ -132,7 +132,7 @@ class CompletionAppRunner(AppRunner):
# memory(optional), external data, dataset context(optional)
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
@@ -152,21 +152,21 @@ class CompletionAppRunner(AppRunner):
# Re-calculate max_tokens if prompt_tokens + max_tokens exceeds the model token limit
self.recalc_llm_max_tokens(
model_config=application_generate_entity.model_config,
model_config=application_generate_entity.model_conf,
prompt_messages=prompt_messages
)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_config.provider_model_bundle,
model=application_generate_entity.model_config.model
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,
model=application_generate_entity.model_conf.model
)
db.session.close()
invoke_result = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=application_generate_entity.model_config.parameters,
model_parameters=application_generate_entity.model_conf.parameters,
stop=stop,
stream=application_generate_entity.stream,
user=application_generate_entity.user_id,

View File

@@ -158,8 +158,8 @@ class MessageBasedAppGenerator(BaseAppGenerator):
model_id = None
else:
app_model_config_id = app_config.app_model_config_id
model_provider = application_generate_entity.model_config.provider
model_id = application_generate_entity.model_config.model
model_provider = application_generate_entity.model_conf.provider
model_id = application_generate_entity.model_conf.model
override_model_configs = None
if app_config.app_model_config_from == EasyUIBasedAppModelConfigFrom.ARGS \
and app_config.app_mode in [AppMode.AGENT_CHAT, AppMode.CHAT, AppMode.COMPLETION]:

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.app.app_config.entities import AppConfig, EasyUIBasedAppConfig, WorkflowUIBasedAppConfig
from core.entities.provider_configuration import ProviderModelBundle
@@ -62,6 +62,9 @@ class ModelConfigWithCredentialsEntity(BaseModel):
parameters: dict[str, Any] = {}
stop: list[str] = []
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
class AppGenerateEntity(BaseModel):
"""
@@ -93,10 +96,13 @@ class EasyUIBasedAppGenerateEntity(AppGenerateEntity):
"""
# app config
app_config: EasyUIBasedAppConfig
model_config: ModelConfigWithCredentialsEntity
model_conf: ModelConfigWithCredentialsEntity
query: Optional[str] = None
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
class ChatAppGenerateEntity(EasyUIBasedAppGenerateEntity):
"""

View File

@@ -1,14 +1,14 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel, validator
from pydantic import BaseModel, field_validator
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeType
class QueueEvent(Enum):
class QueueEvent(str, Enum):
"""
QueueEvent enum
"""
@@ -47,14 +47,14 @@ class QueueLLMChunkEvent(AppQueueEvent):
"""
QueueLLMChunkEvent entity
"""
event = QueueEvent.LLM_CHUNK
event: QueueEvent = QueueEvent.LLM_CHUNK
chunk: LLMResultChunk
class QueueIterationStartEvent(AppQueueEvent):
"""
QueueIterationStartEvent entity
"""
event = QueueEvent.ITERATION_START
event: QueueEvent = QueueEvent.ITERATION_START
node_id: str
node_type: NodeType
node_data: BaseNodeData
@@ -68,16 +68,17 @@ class QueueIterationNextEvent(AppQueueEvent):
"""
QueueIterationNextEvent entity
"""
event = QueueEvent.ITERATION_NEXT
event: QueueEvent = QueueEvent.ITERATION_NEXT
index: int
node_id: str
node_type: NodeType
node_run_index: int
output: Optional[Any] # output for the current iteration
output: Optional[Any] = None # output for the current iteration
@validator('output', pre=True, always=True)
@classmethod
@field_validator('output', mode='before')
def set_output(cls, v):
"""
Set output
@@ -92,7 +93,7 @@ class QueueIterationCompletedEvent(AppQueueEvent):
"""
QueueIterationCompletedEvent entity
"""
event = QueueEvent.ITERATION_COMPLETED
event: QueueEvent = QueueEvent.ITERATION_COMPLETED
node_id: str
node_type: NodeType
@@ -104,7 +105,7 @@ class QueueTextChunkEvent(AppQueueEvent):
"""
QueueTextChunkEvent entity
"""
event = QueueEvent.TEXT_CHUNK
event: QueueEvent = QueueEvent.TEXT_CHUNK
text: str
metadata: Optional[dict] = None
@@ -113,7 +114,7 @@ class QueueAgentMessageEvent(AppQueueEvent):
"""
QueueAgentMessageEvent entity
"""
event = QueueEvent.AGENT_MESSAGE
event: QueueEvent = QueueEvent.AGENT_MESSAGE
chunk: LLMResultChunk
@@ -121,7 +122,7 @@ class QueueMessageReplaceEvent(AppQueueEvent):
"""
QueueMessageReplaceEvent entity
"""
event = QueueEvent.MESSAGE_REPLACE
event: QueueEvent = QueueEvent.MESSAGE_REPLACE
text: str
@@ -129,7 +130,7 @@ class QueueRetrieverResourcesEvent(AppQueueEvent):
"""
QueueRetrieverResourcesEvent entity
"""
event = QueueEvent.RETRIEVER_RESOURCES
event: QueueEvent = QueueEvent.RETRIEVER_RESOURCES
retriever_resources: list[dict]
@@ -137,7 +138,7 @@ class QueueAnnotationReplyEvent(AppQueueEvent):
"""
QueueAnnotationReplyEvent entity
"""
event = QueueEvent.ANNOTATION_REPLY
event: QueueEvent = QueueEvent.ANNOTATION_REPLY
message_annotation_id: str
@@ -145,7 +146,7 @@ class QueueMessageEndEvent(AppQueueEvent):
"""
QueueMessageEndEvent entity
"""
event = QueueEvent.MESSAGE_END
event: QueueEvent = QueueEvent.MESSAGE_END
llm_result: Optional[LLMResult] = None
@@ -153,28 +154,28 @@ class QueueAdvancedChatMessageEndEvent(AppQueueEvent):
"""
QueueAdvancedChatMessageEndEvent entity
"""
event = QueueEvent.ADVANCED_CHAT_MESSAGE_END
event: QueueEvent = QueueEvent.ADVANCED_CHAT_MESSAGE_END
class QueueWorkflowStartedEvent(AppQueueEvent):
"""
QueueWorkflowStartedEvent entity
"""
event = QueueEvent.WORKFLOW_STARTED
event: QueueEvent = QueueEvent.WORKFLOW_STARTED
class QueueWorkflowSucceededEvent(AppQueueEvent):
"""
QueueWorkflowSucceededEvent entity
"""
event = QueueEvent.WORKFLOW_SUCCEEDED
event: QueueEvent = QueueEvent.WORKFLOW_SUCCEEDED
class QueueWorkflowFailedEvent(AppQueueEvent):
"""
QueueWorkflowFailedEvent entity
"""
event = QueueEvent.WORKFLOW_FAILED
event: QueueEvent = QueueEvent.WORKFLOW_FAILED
error: str
@@ -182,7 +183,7 @@ class QueueNodeStartedEvent(AppQueueEvent):
"""
QueueNodeStartedEvent entity
"""
event = QueueEvent.NODE_STARTED
event: QueueEvent = QueueEvent.NODE_STARTED
node_id: str
node_type: NodeType
@@ -195,7 +196,7 @@ class QueueNodeSucceededEvent(AppQueueEvent):
"""
QueueNodeSucceededEvent entity
"""
event = QueueEvent.NODE_SUCCEEDED
event: QueueEvent = QueueEvent.NODE_SUCCEEDED
node_id: str
node_type: NodeType
@@ -213,7 +214,7 @@ class QueueNodeFailedEvent(AppQueueEvent):
"""
QueueNodeFailedEvent entity
"""
event = QueueEvent.NODE_FAILED
event: QueueEvent = QueueEvent.NODE_FAILED
node_id: str
node_type: NodeType
@@ -230,7 +231,7 @@ class QueueAgentThoughtEvent(AppQueueEvent):
"""
QueueAgentThoughtEvent entity
"""
event = QueueEvent.AGENT_THOUGHT
event: QueueEvent = QueueEvent.AGENT_THOUGHT
agent_thought_id: str
@@ -238,7 +239,7 @@ class QueueMessageFileEvent(AppQueueEvent):
"""
QueueMessageFileEvent entity
"""
event = QueueEvent.MESSAGE_FILE
event: QueueEvent = QueueEvent.MESSAGE_FILE
message_file_id: str
@@ -246,15 +247,15 @@ class QueueErrorEvent(AppQueueEvent):
"""
QueueErrorEvent entity
"""
event = QueueEvent.ERROR
error: Any
event: QueueEvent = QueueEvent.ERROR
error: Any = None
class QueuePingEvent(AppQueueEvent):
"""
QueuePingEvent entity
"""
event = QueueEvent.PING
event: QueueEvent = QueueEvent.PING
class QueueStopEvent(AppQueueEvent):
@@ -270,7 +271,7 @@ class QueueStopEvent(AppQueueEvent):
OUTPUT_MODERATION = "output-moderation"
INPUT_MODERATION = "input-moderation"
event = QueueEvent.STOP
event: QueueEvent = QueueEvent.STOP
stopped_by: StopBy
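Two more Pydantic v2 rules drive the edits above: a class attribute without a type annotation is no longer accepted as a field (v2 raises a "non-annotated attribute" error), hence every `event = ...` becomes `event: QueueEvent = ...`; and the v1 `@validator(pre=True, always=True)` becomes `@field_validator(..., mode='before')`. Making QueueEvent inherit from both str and Enum keeps its members JSON-serializable and comparable to plain strings. A compact sketch:

    from enum import Enum
    from typing import Any, Optional
    from pydantic import BaseModel, field_validator

    class QueueEventSketch(str, Enum):                  # illustrative
        ITERATION_NEXT = 'iteration_next'

    class QueueEventModelSketch(BaseModel):
        event: QueueEventSketch = QueueEventSketch.ITERATION_NEXT  # annotation required in v2
        output: Optional[Any] = None                    # explicit default required in v2

        @field_validator('output', mode='before')
        @classmethod
        def set_output(cls, v):
            return v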

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.model_runtime.entities.llm_entities import LLMResult, LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
@@ -118,9 +118,7 @@ class ErrorStreamResponse(StreamResponse):
"""
event: StreamEvent = StreamEvent.ERROR
err: Exception
class Config:
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
class MessageStreamResponse(StreamResponse):
@@ -360,7 +358,7 @@ class IterationNodeNextStreamResponse(StreamResponse):
title: str
index: int
created_at: int
pre_iteration_output: Optional[Any]
pre_iteration_output: Optional[Any] = None
extras: dict = {}
event: StreamEvent = StreamEvent.ITERATION_NEXT
@@ -379,12 +377,12 @@ class IterationNodeCompletedStreamResponse(StreamResponse):
node_id: str
node_type: str
title: str
outputs: Optional[dict]
outputs: Optional[dict] = None
created_at: int
extras: dict = None
inputs: dict = None
status: WorkflowNodeExecutionStatus
error: Optional[str]
error: Optional[str] = None
elapsed_time: float
total_tokens: int
finished_at: int
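Same migration, one more rule: Pydantic v1 gave `Optional[X]` fields an implicit None default, while v2 treats `Optional[X]` as required-but-nullable, so the defaults must now be spelled out; the inner `class Config` likewise becomes `model_config = ConfigDict(...)`. Sketch:

    from typing import Optional
    from pydantic import BaseModel, ConfigDict

    class ErrorResponseSketch(BaseModel):                          # illustrative
        model_config = ConfigDict(arbitrary_types_allowed=True)   # replaces class Config
        err: Optional[Exception] = None   # non-pydantic type, allowed by the line above
        error: Optional[str] = None       # explicit '= None' now required for optionality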

View File

@@ -16,7 +16,7 @@ class HostingModerationFeature:
:param prompt_messages: prompt messages
:return:
"""
model_config = application_generate_entity.model_config
model_config = application_generate_entity.model_conf
text = ""
for prompt_message in prompt_messages:

View File

@@ -37,6 +37,7 @@ from core.app.entities.task_entities import (
)
from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline
from core.app.task_pipeline.message_cycle_manage import MessageCycleManage
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
@@ -84,7 +85,7 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline, MessageCycleMan
:param stream: stream
"""
super().__init__(application_generate_entity, queue_manager, user, stream)
self._model_config = application_generate_entity.model_config
self._model_config = application_generate_entity.model_conf
self._conversation = conversation
self._message = message
@@ -317,29 +318,30 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline, MessageCycleMan
"""
model_config = self._model_config
model = model_config.model
model_type_instance = model_config.provider_model_bundle.model_type_instance
model_type_instance = cast(LargeLanguageModel, model_type_instance)
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle,
model=model_config.model
)
# calculate num tokens
prompt_tokens = 0
if event.stopped_by != QueueStopEvent.StopBy.ANNOTATION_REPLY:
prompt_tokens = model_type_instance.get_num_tokens(
model,
model_config.credentials,
prompt_tokens = model_instance.get_llm_num_tokens(
self._task_state.llm_result.prompt_messages
)
completion_tokens = 0
if event.stopped_by == QueueStopEvent.StopBy.USER_MANUAL:
completion_tokens = model_type_instance.get_num_tokens(
model,
model_config.credentials,
completion_tokens = model_instance.get_llm_num_tokens(
[self._task_state.llm_result.message]
)
credentials = model_config.credentials
# transform usage
model_type_instance = model_config.provider_model_bundle.model_type_instance
model_type_instance = cast(LargeLanguageModel, model_type_instance)
self._task_state.llm_result.usage = model_type_instance._calc_response_usage(
model,
credentials,

View File

@@ -29,7 +29,7 @@ def print_text(
class DifyAgentCallbackHandler(BaseModel):
"""Callback Handler that prints to std out."""
color: Optional[str] = ''
current_loop = 1
current_loop: int = 1
def __init__(self, color: Optional[str] = None) -> None:
super().__init__()

View File

@@ -17,7 +17,7 @@ class PromptMessageFileType(enum.Enum):
class PromptMessageFile(BaseModel):
type: PromptMessageFileType
data: Any
data: Any = None
class ImagePromptMessageFile(PromptMessageFile):

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.model_entities import ModelType, ProviderModel
@@ -16,6 +16,7 @@ class ModelStatus(Enum):
NO_CONFIGURE = "no-configure"
QUOTA_EXCEEDED = "quota-exceeded"
NO_PERMISSION = "no-permission"
DISABLED = "disabled"
class SimpleModelProviderEntity(BaseModel):
@@ -43,12 +44,19 @@ class SimpleModelProviderEntity(BaseModel):
)
class ModelWithProviderEntity(ProviderModel):
class ProviderModelWithStatusEntity(ProviderModel):
"""
Model class for model response.
"""
status: ModelStatus
load_balancing_enabled: bool = False
class ModelWithProviderEntity(ProviderModelWithStatusEntity):
"""
Model with provider entity.
"""
provider: SimpleModelProviderEntity
status: ModelStatus
class DefaultModelProviderEntity(BaseModel):
@@ -69,3 +77,6 @@ class DefaultModelEntity(BaseModel):
model: str
model_type: ModelType
provider: DefaultModelProviderEntity
# pydantic configs
model_config = ConfigDict(protected_namespaces=())

View File

@@ -1,14 +1,20 @@
import datetime
import json
import logging
from collections import defaultdict
from collections.abc import Iterator
from json import JSONDecodeError
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.entities.model_entities import ModelStatus, ModelWithProviderEntity, SimpleModelProviderEntity
from core.entities.provider_entities import CustomConfiguration, SystemConfiguration, SystemConfigurationStatus
from core.entities.provider_entities import (
CustomConfiguration,
ModelSettings,
SystemConfiguration,
SystemConfigurationStatus,
)
from core.helper import encrypter
from core.helper.model_provider_cache import ProviderCredentialsCache, ProviderCredentialsCacheType
from core.model_runtime.entities.model_entities import FetchFrom, ModelType
@@ -22,7 +28,14 @@ from core.model_runtime.model_providers import model_provider_factory
from core.model_runtime.model_providers.__base.ai_model import AIModel
from core.model_runtime.model_providers.__base.model_provider import ModelProvider
from extensions.ext_database import db
from models.provider import Provider, ProviderModel, ProviderType, TenantPreferredModelProvider
from models.provider import (
LoadBalancingModelConfig,
Provider,
ProviderModel,
ProviderModelSetting,
ProviderType,
TenantPreferredModelProvider,
)
logger = logging.getLogger(__name__)
@@ -39,6 +52,10 @@ class ProviderConfiguration(BaseModel):
using_provider_type: ProviderType
system_configuration: SystemConfiguration
custom_configuration: CustomConfiguration
model_settings: list[ModelSettings]
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def __init__(self, **data):
super().__init__(**data)
@@ -62,6 +79,14 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
if self.model_settings:
# check if model is disabled by admin
for model_setting in self.model_settings:
if (model_setting.model_type == model_type
and model_setting.model == model):
if not model_setting.enabled:
raise ValueError(f'Model {model} is disabled.')
if self.using_provider_type == ProviderType.SYSTEM:
restrict_models = []
for quota_configuration in self.system_configuration.quota_configurations:
@@ -80,15 +105,17 @@ class ProviderConfiguration(BaseModel):
return copy_credentials
else:
credentials = None
if self.custom_configuration.models:
for model_configuration in self.custom_configuration.models:
if model_configuration.model_type == model_type and model_configuration.model == model:
return model_configuration.credentials
credentials = model_configuration.credentials
break
if self.custom_configuration.provider:
return self.custom_configuration.provider.credentials
else:
return None
credentials = self.custom_configuration.provider.credentials
return credentials
def get_system_configuration_status(self) -> SystemConfigurationStatus:
"""
@@ -130,7 +157,7 @@ class ProviderConfiguration(BaseModel):
return credentials
# Obfuscate credentials
return self._obfuscated_credentials(
return self.obfuscated_credentials(
credentials=credentials,
credential_form_schemas=self.provider.provider_credential_schema.credential_form_schemas
if self.provider.provider_credential_schema else []
@@ -151,7 +178,7 @@ class ProviderConfiguration(BaseModel):
).first()
# Get provider credential secret variables
provider_credential_secret_variables = self._extract_secret_variables(
provider_credential_secret_variables = self.extract_secret_variables(
self.provider.provider_credential_schema.credential_form_schemas
if self.provider.provider_credential_schema else []
)
@@ -274,7 +301,7 @@ class ProviderConfiguration(BaseModel):
return credentials
# Obfuscate credentials
return self._obfuscated_credentials(
return self.obfuscated_credentials(
credentials=credentials,
credential_form_schemas=self.provider.model_credential_schema.credential_form_schemas
if self.provider.model_credential_schema else []
@@ -302,7 +329,7 @@ class ProviderConfiguration(BaseModel):
).first()
# Get provider credential secret variables
provider_credential_secret_variables = self._extract_secret_variables(
provider_credential_secret_variables = self.extract_secret_variables(
self.provider.model_credential_schema.credential_form_schemas
if self.provider.model_credential_schema else []
)
@@ -402,6 +429,160 @@ class ProviderConfiguration(BaseModel):
provider_model_credentials_cache.delete()
def enable_model(self, model_type: ModelType, model: str) -> ProviderModelSetting:
"""
Enable model.
:param model_type: model type
:param model: model name
:return:
"""
model_setting = db.session.query(ProviderModelSetting) \
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model
).first()
if model_setting:
model_setting.enabled = True
model_setting.updated_at = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
db.session.commit()
else:
model_setting = ProviderModelSetting(
tenant_id=self.tenant_id,
provider_name=self.provider.provider,
model_type=model_type.to_origin_model_type(),
model_name=model,
enabled=True
)
db.session.add(model_setting)
db.session.commit()
return model_setting
def disable_model(self, model_type: ModelType, model: str) -> ProviderModelSetting:
"""
Disable model.
:param model_type: model type
:param model: model name
:return:
"""
model_setting = db.session.query(ProviderModelSetting) \
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model
).first()
if model_setting:
model_setting.enabled = False
model_setting.updated_at = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
db.session.commit()
else:
model_setting = ProviderModelSetting(
tenant_id=self.tenant_id,
provider_name=self.provider.provider,
model_type=model_type.to_origin_model_type(),
model_name=model,
enabled=False
)
db.session.add(model_setting)
db.session.commit()
return model_setting
def get_provider_model_setting(self, model_type: ModelType, model: str) -> Optional[ProviderModelSetting]:
"""
Get provider model setting.
:param model_type: model type
:param model: model name
:return:
"""
return db.session.query(ProviderModelSetting) \
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model
).first()
def enable_model_load_balancing(self, model_type: ModelType, model: str) -> ProviderModelSetting:
"""
Enable model load balancing.
:param model_type: model type
:param model: model name
:return:
"""
load_balancing_config_count = db.session.query(LoadBalancingModelConfig) \
.filter(
LoadBalancingModelConfig.tenant_id == self.tenant_id,
LoadBalancingModelConfig.provider_name == self.provider.provider,
LoadBalancingModelConfig.model_type == model_type.to_origin_model_type(),
LoadBalancingModelConfig.model_name == model
).count()
if load_balancing_config_count <= 1:
raise ValueError('Model load balancing requires more than one configuration.')
model_setting = db.session.query(ProviderModelSetting) \
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model
).first()
if model_setting:
model_setting.load_balancing_enabled = True
model_setting.updated_at = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
db.session.commit()
else:
model_setting = ProviderModelSetting(
tenant_id=self.tenant_id,
provider_name=self.provider.provider,
model_type=model_type.to_origin_model_type(),
model_name=model,
load_balancing_enabled=True
)
db.session.add(model_setting)
db.session.commit()
return model_setting
def disable_model_load_balancing(self, model_type: ModelType, model: str) -> ProviderModelSetting:
"""
Disable model load balancing.
:param model_type: model type
:param model: model name
:return:
"""
model_setting = db.session.query(ProviderModelSetting) \
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model
).first()
if model_setting:
model_setting.load_balancing_enabled = False
model_setting.updated_at = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
db.session.commit()
else:
model_setting = ProviderModelSetting(
tenant_id=self.tenant_id,
provider_name=self.provider.provider,
model_type=model_type.to_origin_model_type(),
model_name=model,
load_balancing_enabled=False
)
db.session.add(model_setting)
db.session.commit()
return model_setting
def get_provider_instance(self) -> ModelProvider:
"""
Get provider instance.
@@ -453,7 +634,7 @@ class ProviderConfiguration(BaseModel):
db.session.commit()
def _extract_secret_variables(self, credential_form_schemas: list[CredentialFormSchema]) -> list[str]:
def extract_secret_variables(self, credential_form_schemas: list[CredentialFormSchema]) -> list[str]:
"""
Extract secret input form variables.
@@ -467,7 +648,7 @@ class ProviderConfiguration(BaseModel):
return secret_input_form_variables
def _obfuscated_credentials(self, credentials: dict, credential_form_schemas: list[CredentialFormSchema]) -> dict:
def obfuscated_credentials(self, credentials: dict, credential_form_schemas: list[CredentialFormSchema]) -> dict:
"""
Obfuscated credentials.
@@ -476,7 +657,7 @@ class ProviderConfiguration(BaseModel):
:return:
"""
# Get provider credential secret variables
credential_secret_variables = self._extract_secret_variables(
credential_secret_variables = self.extract_secret_variables(
credential_form_schemas
)
@@ -522,15 +703,22 @@ class ProviderConfiguration(BaseModel):
else:
model_types = provider_instance.get_provider_schema().supported_model_types
# Group model settings by model type and model
model_setting_map = defaultdict(dict)
for model_setting in self.model_settings:
model_setting_map[model_setting.model_type][model_setting.model] = model_setting
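Building model_setting_map up front turns the later per-model status checks into two dictionary lookups instead of a scan per model. The same pattern in isolation:

    from collections import defaultdict

    # Hedged, self-contained sketch of the grouping and lookup above.
    rows = [('llm', 'gpt-4', False), ('llm', 'gpt-3.5-turbo', True)]   # (type, model, enabled)
    setting_map = defaultdict(dict)
    for model_type, model, enabled in rows:
        setting_map[model_type][model] = enabled

    status = 'active'
    if 'llm' in setting_map and 'gpt-4' in setting_map['llm']:
        if setting_map['llm']['gpt-4'] is False:
            status = 'disabled'                     # mirrors ModelStatus.DISABLED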
if self.using_provider_type == ProviderType.SYSTEM:
provider_models = self._get_system_provider_models(
model_types=model_types,
provider_instance=provider_instance
provider_instance=provider_instance,
model_setting_map=model_setting_map
)
else:
provider_models = self._get_custom_provider_models(
model_types=model_types,
provider_instance=provider_instance
provider_instance=provider_instance,
model_setting_map=model_setting_map
)
if only_active:
@@ -541,18 +729,27 @@ class ProviderConfiguration(BaseModel):
def _get_system_provider_models(self,
model_types: list[ModelType],
provider_instance: ModelProvider) -> list[ModelWithProviderEntity]:
provider_instance: ModelProvider,
model_setting_map: dict[ModelType, dict[str, ModelSettings]]) \
-> list[ModelWithProviderEntity]:
"""
Get system provider models.
:param model_types: model types
:param provider_instance: provider instance
:param model_setting_map: model setting map
:return:
"""
provider_models = []
for model_type in model_types:
provider_models.extend(
[
for m in provider_instance.models(model_type):
status = ModelStatus.ACTIVE
if m.model_type in model_setting_map and m.model in model_setting_map[m.model_type]:
model_setting = model_setting_map[m.model_type][m.model]
if model_setting.enabled is False:
status = ModelStatus.DISABLED
provider_models.append(
ModelWithProviderEntity(
model=m.model,
label=m.label,
@@ -562,11 +759,9 @@ class ProviderConfiguration(BaseModel):
model_properties=m.model_properties,
deprecated=m.deprecated,
provider=SimpleModelProviderEntity(self.provider),
status=ModelStatus.ACTIVE
status=status
)
for m in provider_instance.models(model_type)
]
)
)
if self.provider.provider not in original_provider_configurate_methods:
original_provider_configurate_methods[self.provider.provider] = []
@@ -586,7 +781,8 @@ class ProviderConfiguration(BaseModel):
break
if should_use_custom_model:
if original_provider_configurate_methods[self.provider.provider] == [ConfigurateMethod.CUSTOMIZABLE_MODEL]:
if original_provider_configurate_methods[self.provider.provider] == [
ConfigurateMethod.CUSTOMIZABLE_MODEL]:
# only customizable model
for restrict_model in restrict_models:
copy_credentials = self.system_configuration.credentials.copy()
@@ -611,6 +807,13 @@ class ProviderConfiguration(BaseModel):
if custom_model_schema.model_type not in model_types:
continue
status = ModelStatus.ACTIVE
if (custom_model_schema.model_type in model_setting_map
and custom_model_schema.model in model_setting_map[custom_model_schema.model_type]):
model_setting = model_setting_map[custom_model_schema.model_type][custom_model_schema.model]
if model_setting.enabled is False:
status = ModelStatus.DISABLED
provider_models.append(
ModelWithProviderEntity(
model=custom_model_schema.model,
@@ -621,7 +824,7 @@ class ProviderConfiguration(BaseModel):
model_properties=custom_model_schema.model_properties,
deprecated=custom_model_schema.deprecated,
provider=SimpleModelProviderEntity(self.provider),
status=ModelStatus.ACTIVE
status=status
)
)
@@ -632,16 +835,20 @@ class ProviderConfiguration(BaseModel):
m.status = ModelStatus.NO_PERMISSION
elif not quota_configuration.is_valid:
m.status = ModelStatus.QUOTA_EXCEEDED
return provider_models
def _get_custom_provider_models(self,
model_types: list[ModelType],
provider_instance: ModelProvider) -> list[ModelWithProviderEntity]:
provider_instance: ModelProvider,
model_setting_map: dict[ModelType, dict[str, ModelSettings]]) \
-> list[ModelWithProviderEntity]:
"""
Get custom provider models.
:param model_types: model types
:param provider_instance: provider instance
:param model_setting_map: model setting map
:return:
"""
provider_models = []
@@ -656,6 +863,16 @@ class ProviderConfiguration(BaseModel):
models = provider_instance.models(model_type)
for m in models:
status = ModelStatus.ACTIVE if credentials else ModelStatus.NO_CONFIGURE
load_balancing_enabled = False
if m.model_type in model_setting_map and m.model in model_setting_map[m.model_type]:
model_setting = model_setting_map[m.model_type][m.model]
if model_setting.enabled is False:
status = ModelStatus.DISABLED
if len(model_setting.load_balancing_configs) > 1:
load_balancing_enabled = True
provider_models.append(
ModelWithProviderEntity(
model=m.model,
@@ -666,7 +883,8 @@ class ProviderConfiguration(BaseModel):
model_properties=m.model_properties,
deprecated=m.deprecated,
provider=SimpleModelProviderEntity(self.provider),
status=ModelStatus.ACTIVE if credentials else ModelStatus.NO_CONFIGURE
status=status,
load_balancing_enabled=load_balancing_enabled
)
)
@@ -690,6 +908,17 @@ class ProviderConfiguration(BaseModel):
if not custom_model_schema:
continue
status = ModelStatus.ACTIVE
load_balancing_enabled = False
if (custom_model_schema.model_type in model_setting_map
and custom_model_schema.model in model_setting_map[custom_model_schema.model_type]):
model_setting = model_setting_map[custom_model_schema.model_type][custom_model_schema.model]
if model_setting.enabled is False:
status = ModelStatus.DISABLED
if len(model_setting.load_balancing_configs) > 1:
load_balancing_enabled = True
provider_models.append(
ModelWithProviderEntity(
model=custom_model_schema.model,
@@ -700,7 +929,8 @@ class ProviderConfiguration(BaseModel):
model_properties=custom_model_schema.model_properties,
deprecated=custom_model_schema.deprecated,
provider=SimpleModelProviderEntity(self.provider),
status=ModelStatus.ACTIVE
status=status,
load_balancing_enabled=load_balancing_enabled
)
)
@@ -792,7 +1022,6 @@ class ProviderModelBundle(BaseModel):
provider_instance: ModelProvider
model_type_instance: AIModel
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
# pydantic configs
model_config = ConfigDict(arbitrary_types_allowed=True,
protected_namespaces=())
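This hunk is one instance of the diff's broader pydantic v1-to-v2 migration: the nested `class Config` gives way to a `model_config = ConfigDict(...)` attribute. A minimal before/after sketch, assuming only pydantic v2:

```python
from pydantic import BaseModel, ConfigDict

# v1 style removed by this diff:
#   class Config:
#       arbitrary_types_allowed = True
#
# v2 style added by this diff. protected_namespaces=() also silences
# pydantic v2's warning for field names starting with "model_".
class Bundle(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True,
                              protected_namespaces=())
```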

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.model_runtime.entities.model_entities import ModelType
from models.provider import ProviderQuotaType
@@ -27,6 +27,9 @@ class RestrictModel(BaseModel):
base_model_name: Optional[str] = None
model_type: ModelType
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
class QuotaConfiguration(BaseModel):
"""
@@ -65,6 +68,9 @@ class CustomModelConfiguration(BaseModel):
model_type: ModelType
credentials: dict
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
class CustomConfiguration(BaseModel):
"""
@@ -72,3 +78,25 @@ class CustomConfiguration(BaseModel):
"""
provider: Optional[CustomProviderConfiguration] = None
models: list[CustomModelConfiguration] = []
class ModelLoadBalancingConfiguration(BaseModel):
"""
Class for model load balancing configuration.
"""
id: str
name: str
credentials: dict
class ModelSettings(BaseModel):
"""
Model class for model settings.
"""
model: str
model_type: ModelType
enabled: bool = True
load_balancing_configs: list[ModelLoadBalancingConfiguration] = []
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
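The two entities added above are plain pydantic models and can be constructed directly. A minimal sketch with placeholder ids and credentials, assuming both classes live in `core.entities.provider_entities` as the imports elsewhere in this diff indicate:

```python
from core.entities.provider_entities import (
    ModelLoadBalancingConfiguration,
    ModelSettings,
)
from core.model_runtime.entities.model_entities import ModelType

# Placeholder ids and credentials, purely illustrative.
lb_configs = [
    ModelLoadBalancingConfiguration(
        id='cfg-1', name='key-a', credentials={'api_key': 'sk-aaa'}),
    ModelLoadBalancingConfiguration(
        id='cfg-2', name='key-b', credentials={'api_key': 'sk-bbb'}),
]

settings = ModelSettings(
    model='gpt-4',
    model_type=ModelType.LLM,
    enabled=True,
    load_balancing_configs=lb_configs,
)
```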

View File

@@ -7,7 +7,7 @@ from typing import Any, Optional
from pydantic import BaseModel
from core.utils.position_helper import sort_to_dict_by_position_map
from core.helper.position_helper import sort_to_dict_by_position_map
class ExtensionModule(enum.Enum):
@@ -16,7 +16,7 @@ class ExtensionModule(enum.Enum):
class ModuleExtension(BaseModel):
extension_class: Any
extension_class: Any = None
name: str
label: Optional[dict] = None
form_schema: Optional[list] = None

View File

@@ -28,8 +28,8 @@ class CodeExecutionException(Exception):
class CodeExecutionResponse(BaseModel):
class Data(BaseModel):
stdout: Optional[str]
error: Optional[str]
stdout: Optional[str] = None
error: Optional[str] = None
code: int
message: str
@@ -88,7 +88,7 @@ class CodeExecutor:
}
if dependencies:
data['dependencies'] = [dependency.dict() for dependency in dependencies]
data['dependencies'] = [dependency.model_dump() for dependency in dependencies]
try:
response = post(str(url), json=data, headers=headers, timeout=CODE_EXECUTION_TIMEOUT)

View File

@@ -25,7 +25,7 @@ class CodeNodeProvider(BaseModel):
@classmethod
def get_default_available_packages(cls) -> list[dict]:
return [p.dict() for p in CodeExecutor.list_dependencies(cls.get_language())]
return [p.model_dump() for p in CodeExecutor.list_dependencies(cls.get_language())]
@classmethod
def get_default_config(cls) -> dict:

View File

@@ -4,12 +4,10 @@ from abc import ABC, abstractmethod
from base64 import b64encode
from typing import Optional
from pydantic import BaseModel
from core.helper.code_executor.entities import CodeDependency
class TemplateTransformer(ABC, BaseModel):
class TemplateTransformer(ABC):
_code_placeholder: str = '{{code}}'
_inputs_placeholder: str = '{{inputs}}'
_result_tag: str = '<<RESULT>>'

View File

@@ -9,6 +9,7 @@ from extensions.ext_redis import redis_client
class ProviderCredentialsCacheType(Enum):
PROVIDER = "provider"
MODEL = "provider_model"
LOAD_BALANCING_MODEL = "load_balancing_provider_model"
class ProviderCredentialsCache:

View File

@@ -12,7 +12,6 @@ from flask import Flask, current_app
from flask_login import current_user
from sqlalchemy.orm.exc import ObjectDeletedError
from core.docstore.dataset_docstore import DatasetDocumentStore
from core.errors.error import ProviderTokenNotInitError
from core.llm_generator.llm_generator import LLMGenerator
from core.model_manager import ModelInstance, ModelManager
@@ -20,12 +19,16 @@ from core.model_runtime.entities.model_entities import ModelType, PriceType
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
from core.rag.datasource.keyword.keyword_factory import Keyword
from core.rag.docstore.dataset_docstore import DatasetDocumentStore
from core.rag.extractor.entity.extract_setting import ExtractSetting
from core.rag.index_processor.index_processor_base import BaseIndexProcessor
from core.rag.index_processor.index_processor_factory import IndexProcessorFactory
from core.rag.models.document import Document
from core.splitter.fixed_text_splitter import EnhanceRecursiveCharacterTextSplitter, FixedRecursiveCharacterTextSplitter
from core.splitter.text_splitter import TextSplitter
from core.rag.splitter.fixed_text_splitter import (
EnhanceRecursiveCharacterTextSplitter,
FixedRecursiveCharacterTextSplitter,
)
from core.rag.splitter.text_splitter import TextSplitter
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from extensions.ext_storage import storage
@@ -283,11 +286,7 @@ class IndexingRunner:
if len(preview_texts) < 5:
preview_texts.append(document.page_content)
if indexing_technique == 'high_quality' or embedding_model_instance:
embedding_model_type_instance = embedding_model_instance.model_type_instance
embedding_model_type_instance = cast(TextEmbeddingModel, embedding_model_type_instance)
tokens += embedding_model_type_instance.get_num_tokens(
model=embedding_model_instance.model,
credentials=embedding_model_instance.credentials,
tokens += embedding_model_instance.get_text_embedding_num_tokens(
texts=[self.filter_string(document.page_content)]
)
@@ -340,7 +339,7 @@ class IndexingRunner:
def _extract(self, index_processor: BaseIndexProcessor, dataset_document: DatasetDocument, process_rule: dict) \
-> list[Document]:
# load file
if dataset_document.data_source_type not in ["upload_file", "notion_import"]:
if dataset_document.data_source_type not in ["upload_file", "notion_import", "website_crawl"]:
return []
data_source_info = dataset_document.data_source_info_dict
@@ -376,6 +375,23 @@ class IndexingRunner:
document_model=dataset_document.doc_form
)
text_docs = index_processor.extract(extract_setting, process_rule_mode=process_rule['mode'])
elif dataset_document.data_source_type == 'website_crawl':
if (not data_source_info or 'provider' not in data_source_info
or 'url' not in data_source_info or 'job_id' not in data_source_info):
raise ValueError("no website import info found")
extract_setting = ExtractSetting(
datasource_type="website_crawl",
website_info={
"provider": data_source_info['provider'],
"job_id": data_source_info['job_id'],
"tenant_id": dataset_document.tenant_id,
"url": data_source_info['url'],
"mode": data_source_info['mode'],
"only_main_content": data_source_info['only_main_content']
},
document_model=dataset_document.doc_form
)
text_docs = index_processor.extract(extract_setting, process_rule_mode=process_rule['mode'])
# update document status to splitting
self._update_document_index_status(
document_id=dataset_document.id,
@@ -551,7 +567,7 @@ class IndexingRunner:
document_qa_list = self.format_split_text(response)
qa_documents = []
for result in document_qa_list:
qa_document = Document(page_content=result['question'], metadata=document_node.metadata.copy())
qa_document = Document(page_content=result['question'], metadata=document_node.metadata.model_copy())
doc_id = str(uuid.uuid4())
hash = helper.generate_text_hash(result['question'])
qa_document.metadata['answer'] = result['answer']
@@ -655,10 +671,6 @@ class IndexingRunner:
tokens = 0
chunk_size = 10
embedding_model_type_instance = None
if embedding_model_instance:
embedding_model_type_instance = embedding_model_instance.model_type_instance
embedding_model_type_instance = cast(TextEmbeddingModel, embedding_model_type_instance)
# create keyword index
create_keyword_thread = threading.Thread(target=self._process_keyword_index,
args=(current_app._get_current_object(),
@@ -671,8 +683,7 @@ class IndexingRunner:
chunk_documents = documents[i:i + chunk_size]
futures.append(executor.submit(self._process_chunk, current_app._get_current_object(), index_processor,
chunk_documents, dataset,
dataset_document, embedding_model_instance,
embedding_model_type_instance))
dataset_document, embedding_model_instance))
for future in futures:
tokens += future.result()
@@ -713,7 +724,7 @@ class IndexingRunner:
db.session.commit()
def _process_chunk(self, flask_app, index_processor, chunk_documents, dataset, dataset_document,
embedding_model_instance, embedding_model_type_instance):
embedding_model_instance):
with flask_app.app_context():
# check document is paused
self._check_document_paused_status(dataset_document.id)
@@ -721,9 +732,7 @@ class IndexingRunner:
tokens = 0
if dataset.indexing_technique == 'high_quality' or embedding_model_type_instance:
tokens += sum(
embedding_model_type_instance.get_num_tokens(
embedding_model_instance.model,
embedding_model_instance.credentials,
embedding_model_instance.get_text_embedding_num_tokens(
[document.page_content]
)
for document in chunk_documents

View File

@@ -1,3 +1,5 @@
from typing import Optional
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.file.message_file_parser import MessageFileParser
from core.model_manager import ModelInstance
@@ -9,8 +11,6 @@ from core.model_runtime.entities.message_entities import (
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.model_providers import model_provider_factory
from extensions.ext_database import db
from models.model import AppMode, Conversation, Message
@@ -21,7 +21,7 @@ class TokenBufferMemory:
self.model_instance = model_instance
def get_history_prompt_messages(self, max_token_limit: int = 2000,
message_limit: int = 10) -> list[PromptMessage]:
message_limit: Optional[int] = None) -> list[PromptMessage]:
"""
Get history prompt messages.
:param max_token_limit: max token limit
@@ -30,10 +30,15 @@ class TokenBufferMemory:
app_record = self.conversation.app
# fetch limited messages, and return reversed
messages = db.session.query(Message).filter(
query = db.session.query(Message).filter(
Message.conversation_id == self.conversation.id,
Message.answer != ''
).order_by(Message.created_at.desc()).limit(message_limit).all()
).order_by(Message.created_at.desc())
if message_limit and message_limit > 0:
messages = query.limit(message_limit).all()
else:
messages = query.all()
messages = list(reversed(messages))
message_file_parser = MessageFileParser(
@@ -78,12 +83,7 @@ class TokenBufferMemory:
return []
# prune the chat message if it exceeds the max token limit
provider_instance = model_provider_factory.get_provider_instance(self.model_instance.provider)
model_type_instance = provider_instance.get_model_instance(ModelType.LLM)
curr_message_tokens = model_type_instance.get_num_tokens(
self.model_instance.model,
self.model_instance.credentials,
curr_message_tokens = self.model_instance.get_llm_num_tokens(
prompt_messages
)
@@ -91,9 +91,7 @@ class TokenBufferMemory:
pruned_memory = []
while curr_message_tokens > max_token_limit and prompt_messages:
pruned_memory.append(prompt_messages.pop(0))
curr_message_tokens = model_type_instance.get_num_tokens(
self.model_instance.model,
self.model_instance.credentials,
curr_message_tokens = self.model_instance.get_llm_num_tokens(
prompt_messages
)
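The pruning hunk above keeps dropping the oldest message until the history fits the token budget, re-counting through the model instance after each drop. The same pattern in isolation, as a minimal sketch where `count_tokens` stands in for `ModelInstance.get_llm_num_tokens`:

```python
# Sketch of the pruning loop; `count_tokens` is a stand-in callable,
# not part of the diff.
def prune_to_limit(prompt_messages: list, max_token_limit: int,
                   count_tokens) -> list:
    pruned_memory = []
    curr_tokens = count_tokens(prompt_messages)
    while curr_tokens > max_token_limit and prompt_messages:
        # oldest message goes first, then re-count what remains
        pruned_memory.append(prompt_messages.pop(0))
        curr_tokens = count_tokens(prompt_messages)
    return prompt_messages

print(prune_to_limit(['a' * 10, 'b' * 10, 'c' * 10], 2,
                     lambda msgs: len(msgs)))  # drops until 2 remain
```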
@@ -102,7 +100,7 @@ class TokenBufferMemory:
def get_history_prompt_text(self, human_prefix: str = "Human",
ai_prefix: str = "Assistant",
max_token_limit: int = 2000,
message_limit: int = 10) -> str:
message_limit: Optional[int] = None) -> str:
"""
Get history prompt text.
:param human_prefix: human prefix

View File

@@ -1,7 +1,10 @@
import logging
import os
from collections.abc import Generator
from typing import IO, Optional, Union, cast
from core.entities.provider_configuration import ProviderModelBundle
from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle
from core.entities.provider_entities import ModelLoadBalancingConfiguration
from core.errors.error import ProviderTokenNotInitError
from core.model_runtime.callbacks.base_callback import Callback
from core.model_runtime.entities.llm_entities import LLMResult
@@ -9,6 +12,7 @@ from core.model_runtime.entities.message_entities import PromptMessage, PromptMe
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.entities.rerank_entities import RerankResult
from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeConnectionError, InvokeRateLimitError
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.model_providers.__base.moderation_model import ModerationModel
from core.model_runtime.model_providers.__base.rerank_model import RerankModel
@@ -16,6 +20,10 @@ from core.model_runtime.model_providers.__base.speech2text_model import Speech2T
from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
from core.model_runtime.model_providers.__base.tts_model import TTSModel
from core.provider_manager import ProviderManager
from extensions.ext_redis import redis_client
from models.provider import ProviderType
logger = logging.getLogger(__name__)
class ModelInstance:
@@ -29,6 +37,12 @@ class ModelInstance:
self.provider = provider_model_bundle.configuration.provider.provider
self.credentials = self._fetch_credentials_from_bundle(provider_model_bundle, model)
self.model_type_instance = self.provider_model_bundle.model_type_instance
self.load_balancing_manager = self._get_load_balancing_manager(
configuration=provider_model_bundle.configuration,
model_type=provider_model_bundle.model_type_instance.model_type,
model=model,
credentials=self.credentials
)
def _fetch_credentials_from_bundle(self, provider_model_bundle: ProviderModelBundle, model: str) -> dict:
"""
@@ -37,8 +51,10 @@ class ModelInstance:
:param model: model name
:return:
"""
credentials = provider_model_bundle.configuration.get_current_credentials(
model_type=provider_model_bundle.model_type_instance.model_type,
configuration = provider_model_bundle.configuration
model_type = provider_model_bundle.model_type_instance.model_type
credentials = configuration.get_current_credentials(
model_type=model_type,
model=model
)
@@ -47,6 +63,43 @@ class ModelInstance:
return credentials
def _get_load_balancing_manager(self, configuration: ProviderConfiguration,
model_type: ModelType,
model: str,
credentials: dict) -> Optional["LBModelManager"]:
"""
Get load balancing model manager
:param configuration: provider configuration
:param model_type: model type
:param model: model name
:param credentials: model credentials
:return:
"""
if configuration.model_settings and configuration.using_provider_type == ProviderType.CUSTOM:
current_model_setting = None
# check if model is disabled by admin
for model_setting in configuration.model_settings:
if (model_setting.model_type == model_type
and model_setting.model == model):
current_model_setting = model_setting
break
# check if load balancing is enabled
if current_model_setting and current_model_setting.load_balancing_configs:
# use load balancing proxy to choose credentials
lb_model_manager = LBModelManager(
tenant_id=configuration.tenant_id,
provider=configuration.provider.provider,
model_type=model_type,
model=model,
load_balancing_configs=current_model_setting.load_balancing_configs,
managed_credentials=credentials if configuration.custom_configuration.provider else None
)
return lb_model_manager
return None
def invoke_llm(self, prompt_messages: list[PromptMessage], model_parameters: Optional[dict] = None,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None, callbacks: list[Callback] = None) \
@@ -67,7 +120,8 @@ class ModelInstance:
raise Exception("Model type instance is not LargeLanguageModel")
self.model_type_instance = cast(LargeLanguageModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
prompt_messages=prompt_messages,
@@ -79,6 +133,27 @@ class ModelInstance:
callbacks=callbacks
)
def get_llm_num_tokens(self, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for llm
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
if not isinstance(self.model_type_instance, LargeLanguageModel):
raise Exception("Model type instance is not LargeLanguageModel")
self.model_type_instance = cast(LargeLanguageModel, self.model_type_instance)
return self._round_robin_invoke(
function=self.model_type_instance.get_num_tokens,
model=self.model,
credentials=self.credentials,
prompt_messages=prompt_messages,
tools=tools
)
def invoke_text_embedding(self, texts: list[str], user: Optional[str] = None) \
-> TextEmbeddingResult:
"""
@@ -92,13 +167,32 @@ class ModelInstance:
raise Exception("Model type instance is not TextEmbeddingModel")
self.model_type_instance = cast(TextEmbeddingModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
texts=texts,
user=user
)
def get_text_embedding_num_tokens(self, texts: list[str]) -> int:
"""
Get number of tokens for text embedding
:param texts: texts to embed
:return:
"""
if not isinstance(self.model_type_instance, TextEmbeddingModel):
raise Exception("Model type instance is not TextEmbeddingModel")
self.model_type_instance = cast(TextEmbeddingModel, self.model_type_instance)
return self._round_robin_invoke(
function=self.model_type_instance.get_num_tokens,
model=self.model,
credentials=self.credentials,
texts=texts
)
def invoke_rerank(self, query: str, docs: list[str], score_threshold: Optional[float] = None,
top_n: Optional[int] = None,
user: Optional[str] = None) \
@@ -117,7 +211,8 @@ class ModelInstance:
raise Exception("Model type instance is not RerankModel")
self.model_type_instance = cast(RerankModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
query=query,
@@ -140,7 +235,8 @@ class ModelInstance:
raise Exception("Model type instance is not ModerationModel")
self.model_type_instance = cast(ModerationModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
text=text,
@@ -160,7 +256,8 @@ class ModelInstance:
raise Exception("Model type instance is not Speech2TextModel")
self.model_type_instance = cast(Speech2TextModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
file=file,
@@ -183,7 +280,8 @@ class ModelInstance:
raise Exception("Model type instance is not TTSModel")
self.model_type_instance = cast(TTSModel, self.model_type_instance)
return self.model_type_instance.invoke(
return self._round_robin_invoke(
function=self.model_type_instance.invoke,
model=self.model,
credentials=self.credentials,
content_text=content_text,
@@ -193,7 +291,44 @@ class ModelInstance:
streaming=streaming
)
def get_tts_voices(self, language: str) -> list:
def _round_robin_invoke(self, function: callable, *args, **kwargs):
"""
Round-robin invoke
:param function: function to invoke
:param args: function args
:param kwargs: function kwargs
:return:
"""
if not self.load_balancing_manager:
return function(*args, **kwargs)
last_exception = None
while True:
lb_config = self.load_balancing_manager.fetch_next()
if not lb_config:
if not last_exception:
raise ProviderTokenNotInitError("Model credentials are not initialized.")
else:
raise last_exception
try:
if 'credentials' in kwargs:
del kwargs['credentials']
return function(*args, **kwargs, credentials=lb_config.credentials)
except InvokeRateLimitError as e:
# expire in 60 seconds
self.load_balancing_manager.cooldown(lb_config, expire=60)
last_exception = e
continue
except (InvokeAuthorizationError, InvokeConnectionError) as e:
# expire in 10 seconds
self.load_balancing_manager.cooldown(lb_config, expire=10)
last_exception = e
continue
except Exception as e:
raise e
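The key trick in `_round_robin_invoke` is that it strips any `credentials` kwarg and re-injects the credentials of whichever load balancing config was drawn, cooling a config down when it errors. A condensed sketch of that failover loop; `manager` duck-types `LBModelManager` (`fetch_next`/`cooldown`) and `RateLimited` stands in for `InvokeRateLimitError`:

```python
# Condensed restatement of the failover loop above; the names here
# are stand-ins, not new API.
class RateLimited(Exception):
    pass

def invoke_with_failover(function, manager, **kwargs):
    last_exc = None
    while True:
        config = manager.fetch_next()
        if config is None:
            # every config is cooling down; surface the last failure
            raise last_exc or RuntimeError('no credentials available')
        kwargs.pop('credentials', None)  # the drawn config always wins
        try:
            return function(credentials=config.credentials, **kwargs)
        except RateLimited as exc:
            manager.cooldown(config, expire=60)  # longer back-off on 429s
            last_exc = exc
```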
def get_tts_voices(self, language: Optional[str] = None) -> list:
"""
Get TTS model voices
@@ -226,6 +361,7 @@ class ModelManager:
"""
if not provider:
return self.get_default_model_instance(tenant_id, model_type)
provider_model_bundle = self._provider_manager.get_provider_model_bundle(
tenant_id=tenant_id,
provider=provider,
@@ -255,3 +391,141 @@ class ModelManager:
model_type=model_type,
model=default_model_entity.model
)
class LBModelManager:
def __init__(self, tenant_id: str,
provider: str,
model_type: ModelType,
model: str,
load_balancing_configs: list[ModelLoadBalancingConfiguration],
managed_credentials: Optional[dict] = None) -> None:
"""
Load balancing model manager
:param load_balancing_configs: all load balancing configurations
:param managed_credentials: credentials to inherit when a load balancing configuration is named __inherit__
"""
self._tenant_id = tenant_id
self._provider = provider
self._model_type = model_type
self._model = model
self._load_balancing_configs = load_balancing_configs
for load_balancing_config in self._load_balancing_configs:
if load_balancing_config.name == "__inherit__":
if not managed_credentials:
# remove __inherit__ if managed credentials are not provided
self._load_balancing_configs.remove(load_balancing_config)
else:
load_balancing_config.credentials = managed_credentials
def fetch_next(self) -> Optional[ModelLoadBalancingConfiguration]:
"""
Get next model load balancing config
Strategy: Round Robin
:return:
"""
cache_key = "model_lb_index:{}:{}:{}:{}".format(
self._tenant_id,
self._provider,
self._model_type.value,
self._model
)
cooldown_load_balancing_configs = []
max_index = len(self._load_balancing_configs)
while True:
current_index = redis_client.incr(cache_key)
if current_index >= 10000000:
current_index = 1
redis_client.set(cache_key, current_index)
redis_client.expire(cache_key, 3600)
if current_index > max_index:
current_index = current_index % max_index
real_index = current_index - 1
if real_index > max_index:
real_index = 0
config = self._load_balancing_configs[real_index]
if self.in_cooldown(config):
cooldown_load_balancing_configs.append(config)
if len(cooldown_load_balancing_configs) >= len(self._load_balancing_configs):
# all configs are in cooldown
return None
continue
if bool(os.environ.get("DEBUG", 'False').lower() == 'true'):
logger.info(f"Model LB\nid: {config.id}\nname: {config.name}\n"
f"tenant_id: {self._tenant_id}\nprovider: {self._provider}\n"
f"model_type: {self._model_type.value}\nmodel: {self._model}")
return config
return None
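`fetch_next` leans on a single Redis `INCR` as the round-robin counter shared across workers; mapping the ever-growing counter onto a list index is the subtle part. A minimal sketch of just that arithmetic, assuming a reachable Redis and the `redis` package:

```python
# The key name mirrors the format built in fetch_next; the client
# setup and config list are illustrative.
import redis

r = redis.Redis()
configs = ['cfg-a', 'cfg-b', 'cfg-c']
cache_key = 'model_lb_index:tenant-1:openai:llm:gpt-4'

current_index = r.incr(cache_key)        # atomic across processes
if current_index >= 10_000_000:          # bound the counter's growth
    current_index = 1
    r.set(cache_key, current_index)
    r.expire(cache_key, 3600)
if current_index > len(configs):
    current_index = current_index % len(configs)
real_index = current_index - 1           # INCR starts counting at 1
# when the modulo lands on 0, real_index is -1 and Python's negative
# indexing wraps to the last config, so the rotation stays uniform
print(configs[real_index])
```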
def cooldown(self, config: ModelLoadBalancingConfiguration, expire: int = 60) -> None:
"""
Cooldown model load balancing config
:param config: model load balancing config
:param expire: cooldown time in seconds
:return:
"""
cooldown_cache_key = "model_lb_index:cooldown:{}:{}:{}:{}:{}".format(
self._tenant_id,
self._provider,
self._model_type.value,
self._model,
config.id
)
redis_client.setex(cooldown_cache_key, expire, 'true')
def in_cooldown(self, config: ModelLoadBalancingConfiguration) -> bool:
"""
Check if model load balancing config is in cooldown
:param config: model load balancing config
:return:
"""
cooldown_cache_key = "model_lb_index:cooldown:{}:{}:{}:{}:{}".format(
self._tenant_id,
self._provider,
self._model_type.value,
self._model,
config.id
)
return redis_client.exists(cooldown_cache_key)
@classmethod
def get_config_in_cooldown_and_ttl(cls, tenant_id: str,
provider: str,
model_type: ModelType,
model: str,
config_id: str) -> tuple[bool, int]:
"""
Check whether the model load balancing config is in cooldown and get the remaining TTL
:param tenant_id: workspace id
:param provider: provider name
:param model_type: model type
:param model: model name
:param config_id: model load balancing config id
:return:
"""
cooldown_cache_key = "model_lb_index:cooldown:{}:{}:{}:{}:{}".format(
tenant_id,
provider,
model_type.value,
model,
config_id
)
ttl = redis_client.ttl(cooldown_cache_key)
if ttl == -2:
return False, 0
return True, ttl
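A cooldown is nothing more than a Redis key with a TTL: `cooldown` writes it with `SETEX`, `in_cooldown` checks existence, and the classmethod above reads the remaining TTL (`-2` meaning the key is gone). A minimal sketch of the scheme, under the same Redis assumption:

```python
import redis

r = redis.Redis()
# Key parts (tenant, provider, model type, model, config id) are
# placeholders following the format string used above.
key = 'model_lb_index:cooldown:tenant-1:openai:llm:gpt-4:cfg-1'

r.setex(key, 60, 'true')       # put config cfg-1 on a 60-second cooldown

ttl = r.ttl(key)               # -2 once the key has expired or never existed
print(ttl != -2, max(ttl, 0))  # (in_cooldown, seconds remaining)
```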

View File

@@ -20,7 +20,7 @@ This module provides the interface for invoking and authenticating various model
![image-20231210143654461](./docs/en_US/images/index/image-20231210143654461.png)
Displays a list of all supported providers, including each provider's name, icon, supported model types, predefined models, configuration method, and credential form rules. For detailed rule design, see: [Schema](./schema.md).
Displays a list of all supported providers, including each provider's name, icon, supported model types, predefined models, configuration method, and credential form rules. For detailed rule design, see: [Schema](./docs/en_US/schema.md).
- Selectable model list display

View File

@@ -336,7 +336,7 @@ Inherit the `__base.text2speech_model.Text2SpeechModel` base class and implement
- Invoke Invocation
```python
def _invoke(elf, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
"""
Invoke large language model

View File

@@ -376,7 +376,7 @@ class XinferenceProvider(Provider):
- Invoke 调用
```python
def _invoke(elf, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
"""
Invoke large language model

View File

@@ -2,7 +2,7 @@ from abc import ABC
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, field_validator
class PromptMessageRole(Enum):
@@ -123,6 +123,13 @@ class AssistantPromptMessage(PromptMessage):
type: str
function: ToolCallFunction
@field_validator('id', mode='before')
def transform_id_to_str(cls, value) -> str:
if not isinstance(value, str):
return str(value)
else:
return value
role: PromptMessageRole = PromptMessageRole.ASSISTANT
tool_calls: list[ToolCall] = []
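The validator added here runs in `'before'` mode, i.e. ahead of pydantic's own type checking, so tool call ids that some providers return as integers are stringified instead of failing validation (pydantic v2 no longer coerces `int` to `str` on its own). A self-contained sketch of the same pattern, assuming only pydantic v2:

```python
from pydantic import BaseModel, field_validator

class ToolCall(BaseModel):
    id: str

    @field_validator('id', mode='before')
    @classmethod
    def transform_id_to_str(cls, value) -> str:
        # coerce numeric ids up front; strings pass through untouched
        return value if isinstance(value, str) else str(value)

print(ToolCall(id=12345).id)  # '12345' instead of a ValidationError
```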

View File

@@ -2,7 +2,7 @@ from decimal import Decimal
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.model_runtime.entities.common_entities import I18nObject
@@ -148,9 +148,7 @@ class ProviderModel(BaseModel):
fetch_from: FetchFrom
model_properties: dict[ModelPropertyKey, Any]
deprecated: bool = False
class Config:
protected_namespaces = ()
model_config = ConfigDict(protected_namespaces=())
class ParameterRule(BaseModel):

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.model_entities import AIModelEntity, ModelType, ProviderModel
@@ -122,8 +122,8 @@ class ProviderEntity(BaseModel):
provider_credential_schema: Optional[ProviderCredentialSchema] = None
model_credential_schema: Optional[ModelCredentialSchema] = None
class Config:
protected_namespaces = ()
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def to_simple_provider(self) -> SimpleProviderEntity:
"""

View File

@@ -3,6 +3,9 @@ import os
from abc import ABC, abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.helper.position_helper import get_position_map, sort_by_position_map
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.defaults import PARAMETER_RULE_TEMPLATE
from core.model_runtime.entities.model_entities import (
@@ -17,7 +20,6 @@ from core.model_runtime.entities.model_entities import (
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.model_runtime.model_providers.__base.tokenizers.gpt2_tokenzier import GPT2Tokenizer
from core.tools.utils.yaml_utils import load_yaml_file
from core.utils.position_helper import get_position_map, sort_by_position_map
class AIModel(ABC):
@@ -28,6 +30,9 @@ class AIModel(ABC):
model_schemas: list[AIModelEntity] = None
started_at: float = 0
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
@abstractmethod
def validate_credentials(self, model: str, credentials: dict) -> None:
"""

View File

@@ -6,12 +6,15 @@ from abc import abstractmethod
from collections.abc import Generator
from typing import Optional, Union
from pydantic import ConfigDict
from core.model_runtime.callbacks.base_callback import Callback
from core.model_runtime.callbacks.logging_callback import LoggingCallback
from core.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
PromptMessageContentType,
PromptMessageTool,
SystemPromptMessage,
UserPromptMessage,
@@ -34,6 +37,9 @@ class LargeLanguageModel(AIModel):
"""
model_type: ModelType = ModelType.LLM
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: Optional[dict] = None,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
@@ -200,8 +206,14 @@ if you are not sure about the structure.
))
if len(prompt_messages) > 0 and isinstance(prompt_messages[-1], UserPromptMessage):
# add ```JSON\n to the last message
prompt_messages[-1].content += f"\n```{code_block}\n"
# add ```JSON\n to the last text message
if isinstance(prompt_messages[-1].content, str):
prompt_messages[-1].content += f"\n```{code_block}\n"
elif isinstance(prompt_messages[-1].content, list):
for i in range(len(prompt_messages[-1].content) - 1, -1, -1):
if prompt_messages[-1].content[i].type == PromptMessageContentType.TEXT:
prompt_messages[-1].content[i].data += f"\n```{code_block}\n"
break
else:
# append a user message
prompt_messages.append(UserPromptMessage(

View File

@@ -1,11 +1,11 @@
import os
from abc import ABC, abstractmethod
from core.helper.module_import_helper import get_subclasses_from_module, import_module_from_source
from core.model_runtime.entities.model_entities import AIModelEntity, ModelType
from core.model_runtime.entities.provider_entities import ProviderEntity
from core.model_runtime.model_providers.__base.ai_model import AIModel
from core.tools.utils.yaml_utils import load_yaml_file
from core.utils.module_import_helper import get_subclasses_from_module, import_module_from_source
class ModelProvider(ABC):

View File

@@ -2,6 +2,8 @@ import time
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.model_providers.__base.ai_model import AIModel
@@ -12,6 +14,9 @@ class ModerationModel(AIModel):
"""
model_type: ModelType = ModelType.MODERATION
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, credentials: dict,
text: str, user: Optional[str] = None) \
-> bool:

View File

@@ -2,6 +2,8 @@ import os
from abc import abstractmethod
from typing import IO, Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.model_providers.__base.ai_model import AIModel
@@ -12,6 +14,9 @@ class Speech2TextModel(AIModel):
"""
model_type: ModelType = ModelType.SPEECH2TEXT
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, credentials: dict,
file: IO[bytes], user: Optional[str] = None) \
-> str:

View File

@@ -1,6 +1,8 @@
from abc import abstractmethod
from typing import IO, Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.model_providers.__base.ai_model import AIModel
@@ -11,6 +13,9 @@ class Text2ImageModel(AIModel):
"""
model_type: ModelType = ModelType.TEXT2IMG
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, credentials: dict, prompt: str,
model_parameters: dict, user: Optional[str] = None) \
-> list[IO[bytes]]:

View File

@@ -2,6 +2,8 @@ import time
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
from core.model_runtime.model_providers.__base.ai_model import AIModel
@@ -13,6 +15,9 @@ class TextEmbeddingModel(AIModel):
"""
model_type: ModelType = ModelType.TEXT_EMBEDDING
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, credentials: dict,
texts: list[str], user: Optional[str] = None) \
-> TextEmbeddingResult:

View File

@@ -4,6 +4,8 @@ import uuid
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.model_providers.__base.ai_model import AIModel
@@ -15,6 +17,9 @@ class TTSModel(AIModel):
"""
model_type: ModelType = ModelType.TTS
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
def invoke(self, model: str, tenant_id: str, credentials: dict, content_text: str, voice: str, streaming: bool,
user: Optional[str] = None):
"""

View File

@@ -4,6 +4,7 @@
- google
- vertex_ai
- nvidia
- nvidia_nim
- cohere
- bedrock
- togetherai
@@ -30,3 +31,5 @@
- volcengine_maas
- openai_api_compatible
- deepseek
- hunyuan
- siliconflow

View File

@@ -53,6 +53,15 @@ model_credential_schema:
type: select
required: true
options:
- label:
en_US: 2024-05-01-preview
value: 2024-05-01-preview
- label:
en_US: 2024-04-01-preview
value: 2024-04-01-preview
- label:
en_US: 2024-03-01-preview
value: 2024-03-01-preview
- label:
en_US: 2024-02-15-preview
value: 2024-02-15-preview

View File

@@ -6,7 +6,7 @@ features:
- agent-thought
model_properties:
mode: chat
context_size: 4000
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature

View File

@@ -6,7 +6,7 @@ features:
- agent-thought
model_properties:
mode: chat
context_size: 192000
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature

View File

@@ -0,0 +1,45 @@
model: baichuan3-turbo-128k
label:
en_US: Baichuan3-Turbo-128k
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 128000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
- name: with_search_enhance
label:
zh_Hans: 搜索增强
en_US: Search Enhance
type: boolean
help:
zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
en_US: Allow the model to perform external search to enhance the generation results.
required: false

Some files were not shown because too many files have changed in this diff