Compare commits


17 Commits

Author SHA1 Message Date
dependabot[bot]
121041d0e2 chore(deps): bump the github-actions-dependencies group across 1 directory with 2 updates
Bumps the github-actions-dependencies group with 2 updates in the / directory: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [codecov/codecov-action](https://github.com/codecov/codecov-action).


Updates `astral-sh/setup-uv` from 7.6.0 to 8.0.0
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](37802adc94...cec208311d)

Updates `codecov/codecov-action` from 5.5.3 to 6.0.0
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](1af58845a9...57e3a136b7)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 8.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: codecov/codecov-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-30 04:09:28 +00:00
yyh
905288423f chore(ci): simplify i18n translation workflow (#34238)
2026-03-30 03:57:23 +00:00
yyh
62376f507b chore(web): remove stale i18n check test (#34237) 2026-03-30 03:56:43 +00:00
Xu Haoran
51c8dad753 Docs: unify language switch links across root and localized README files (#34201) 2026-03-30 10:39:14 +08:00
-LAN-
540906fb8a chore(ci): tighten backend workflow path filters (#34217) 2026-03-29 21:55:05 +00:00
-LAN-
b642f5c3e5 chore(ci): split API unit and integration coverage reporting (#34211) 2026-03-29 21:51:51 +00:00
YBoy
b36b077d42 test: migrate workflow service tests to testcontainers (#34206) 2026-03-29 21:50:21 +00:00
dependabot[bot]
fe9c2b0e4b chore(deps-dev): bump happy-dom from 20.8.8 to 20.8.9 in /web (#34243)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-29 21:49:25 +00:00
Stephen Zhou
548cadacff test: init e2e (#34193)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-29 13:40:24 +00:00
Jasonfish
a1171877a4 fix: Fix docker-compose.yaml's ENV variables (#31101)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2026-03-28 15:37:51 +00:00
-LAN-
f06cc339cc chore(ci): remove duplicate pyrefly work from style lane (#34213) 2026-03-28 14:04:22 +00:00
-LAN-
6bf8982559 chore(ci): reduce web test shard fan-out (#34215) 2026-03-28 12:28:25 +00:00
Renzo
364d7ebc40 refactor: core/tools, agent, callback_handler, encrypter, llm_generator, plugin, inner_api (#34205)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
2026-03-28 10:14:43 +00:00
YBoy
7cc81e9a43 test: migrate workspace service tests to testcontainers (#34218) 2026-03-28 07:50:26 +00:00
YBoy
3409c519e2 test: migrate tag service tests to testcontainers (#34219) 2026-03-28 07:49:27 +00:00
YBoy
5851b42af3 test: migrate metadata service tests to testcontainers (#34220) 2026-03-28 07:48:48 +00:00
Maa-Lee | odeili
c5eae67ac9 refactor: use select for API key auth lookups (#34146)
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
2026-03-28 00:01:05 +00:00
88 changed files with 5364 additions and 4379 deletions


@@ -14,11 +14,11 @@ concurrency:
cancel-in-progress: true
jobs:
test:
name: API Tests
api-unit:
name: API Unit Tests
runs-on: ubuntu-latest
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
COVERAGE_FILE: coverage-unit
defaults:
run:
shell: bash
@@ -35,7 +35,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@@ -50,6 +50,52 @@ jobs:
- name: Run dify config tests
run: uv run --project api dev/pytest/pytest_config_tests.py
- name: Run Unit Tests
run: uv run --project api bash dev/pytest/pytest_unit_tests.sh
- name: Upload unit coverage data
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: api-coverage-unit
path: coverage-unit
retention-days: 1
api-integration:
name: API Integration Tests
runs-on: ubuntu-latest
env:
COVERAGE_FILE: coverage-integration
STORAGE_TYPE: opendal
OPENDAL_SCHEME: fs
OPENDAL_FS_ROOT: /tmp/dify-storage
defaults:
run:
shell: bash
strategy:
matrix:
python-version:
- "3.12"
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
cache-dependency-glob: api/uv.lock
- name: Check UV lockfile
run: uv lock --project api --check
- name: Install dependencies
run: uv sync --project api --dev
- name: Set up dotenvs
run: |
cp docker/.env.example docker/.env
@@ -73,23 +119,91 @@ jobs:
run: |
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
- name: Run API Tests
env:
STORAGE_TYPE: opendal
OPENDAL_SCHEME: fs
OPENDAL_FS_ROOT: /tmp/dify-storage
- name: Run Integration Tests
run: |
uv run --project api pytest \
-n auto \
--timeout "${PYTEST_TIMEOUT:-180}" \
api/tests/integration_tests/workflow \
api/tests/integration_tests/tools \
api/tests/test_containers_integration_tests \
api/tests/unit_tests
api/tests/test_containers_integration_tests
- name: Upload integration coverage data
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: api-coverage-integration
path: coverage-integration
retention-days: 1
api-coverage:
name: API Coverage
runs-on: ubuntu-latest
needs:
- api-unit
- api-integration
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
COVERAGE_FILE: .coverage
defaults:
run:
shell: bash
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
cache-dependency-glob: api/uv.lock
- name: Install dependencies
run: uv sync --project api --dev
- name: Download coverage data
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
path: coverage-data
pattern: api-coverage-*
merge-multiple: true
- name: Combine coverage
run: |
set -euo pipefail
echo "### API Coverage" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
echo "Merged backend coverage report generated for Codecov project status." >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
unit_coverage="$(find coverage-data -type f -name coverage-unit -print -quit)"
integration_coverage="$(find coverage-data -type f -name coverage-integration -print -quit)"
: "${unit_coverage:?coverage-unit artifact not found}"
: "${integration_coverage:?coverage-integration artifact not found}"
report_file="$(mktemp)"
uv run --project api coverage combine "$unit_coverage" "$integration_coverage"
uv run --project api coverage report --show-missing | tee "$report_file"
echo "Summary: \`$(tail -n 1 "$report_file")\`" >> "$GITHUB_STEP_SUMMARY"
{
echo ""
echo "<details><summary>Coverage report</summary>"
echo ""
echo '```'
cat "$report_file"
echo '```'
echo "</details>"
} >> "$GITHUB_STEP_SUMMARY"
uv run --project api coverage xml -o coverage.xml
- name: Report coverage
if: ${{ env.CODECOV_TOKEN != '' && matrix.python-version == '3.12' }}
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
if: ${{ env.CODECOV_TOKEN != '' }}
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
files: ./coverage.xml
disable_search: true
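
For orientation, here is the same merge the "Combine coverage" step performs, sketched with coverage.py's Python API instead of the CLI. The data-file names follow the workflow; the rest is illustrative and assumes those files exist locally.

```python
from coverage import Coverage

cov = Coverage(data_file=".coverage")
# Fold the per-job data files (downloaded as artifacts) into one database.
cov.combine(["coverage-unit", "coverage-integration"])
cov.save()
# Produce the same outputs the workflow publishes: a text report for the
# step summary and an XML file for the Codecov upload.
cov.report(show_missing=True)
cov.xml_report(outfile="coverage.xml")
```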


@@ -52,7 +52,7 @@ jobs:
python-version: "3.11"
- if: github.event_name != 'merge_group'
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
- name: Generate Docker Compose
if: github.event_name != 'merge_group' && steps.docker-compose-changes.outputs.any_changed == 'true'


@@ -19,7 +19,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
@@ -69,7 +69,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"


@@ -42,6 +42,7 @@ jobs:
runs-on: ubuntu-latest
outputs:
api-changed: ${{ steps.changes.outputs.api }}
e2e-changed: ${{ steps.changes.outputs.e2e }}
web-changed: ${{ steps.changes.outputs.web }}
vdb-changed: ${{ steps.changes.outputs.vdb }}
migration-changed: ${{ steps.changes.outputs.migration }}
@@ -53,21 +54,63 @@ jobs:
filters: |
api:
- 'api/**'
- 'docker/**'
- '.github/workflows/api-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/ssrf_proxy/**'
- 'docker/volumes/sandbox/conf/**'
web:
- 'web/**'
- '.github/workflows/web-tests.yml'
- '.github/actions/setup-web/**'
e2e:
- 'api/**'
- 'api/pyproject.toml'
- 'api/uv.lock'
- 'e2e/**'
- 'web/**'
- 'docker/docker-compose.middleware.yaml'
- 'docker/middleware.env.example'
- '.github/workflows/web-e2e.yml'
- '.github/actions/setup-web/**'
vdb:
- 'api/core/rag/datasource/**'
- 'docker/**'
- 'api/tests/integration_tests/vdb/**'
- '.github/workflows/vdb-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/certbot/**'
- 'docker/couchbase-server/**'
- 'docker/elasticsearch/**'
- 'docker/iris/**'
- 'docker/nginx/**'
- 'docker/pgvector/**'
- 'docker/ssrf_proxy/**'
- 'docker/startupscripts/**'
- 'docker/tidb/**'
- 'docker/volumes/**'
- 'api/uv.lock'
- 'api/pyproject.toml'
migration:
- 'api/migrations/**'
- 'api/.env.example'
- '.github/workflows/db-migration-test.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/ssrf_proxy/**'
- 'docker/volumes/sandbox/conf/**'
# Run tests in parallel while always emitting stable required checks.
api-tests-run:
@@ -190,6 +233,65 @@ jobs:
echo "Web tests were not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
web-e2e-run:
name: Run Web Full-Stack E2E
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.e2e-changed == 'true'
uses: ./.github/workflows/web-e2e.yml
web-e2e-skip:
name: Skip Web Full-Stack E2E
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.e2e-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped web full-stack e2e
run: echo "No E2E-related changes detected; skipping web full-stack E2E."
web-e2e:
name: Web Full-Stack E2E
if: ${{ always() }}
needs:
- pre_job
- check-changes
- web-e2e-run
- web-e2e-skip
runs-on: ubuntu-latest
steps:
- name: Finalize Web Full-Stack E2E status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.e2e-changed }}
RUN_RESULT: ${{ needs.web-e2e-run.result }}
SKIP_RESULT: ${{ needs.web-e2e-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "Web full-stack E2E was skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "Web full-stack E2E ran successfully."
exit 0
fi
echo "Web full-stack E2E was required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "Web full-stack E2E was skipped because no E2E-related files changed."
exit 0
fi
echo "Web full-stack E2E was not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
style-check:
name: Style Check
needs: pre_job


@@ -22,7 +22,7 @@ jobs:
fetch-depth: 0
- name: Setup Python & UV
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true


@@ -33,7 +33,7 @@ jobs:
- name: Setup UV and Python
if: steps.changed-files.outputs.any_changed == 'true'
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: false
python-version: "3.12"
@@ -49,7 +49,7 @@ jobs:
- name: Run Type Checks
if: steps.changed-files.outputs.any_changed == 'true'
run: make type-check
run: make type-check-core
- name: Dotenv check
if: steps.changed-files.outputs.any_changed == 'true'


@@ -1,12 +1,10 @@
name: Translate i18n Files with Claude Code
# Note: claude-code-action doesn't support push events directly.
# Push events are handled by trigger-i18n-sync.yml which sends repository_dispatch.
# See: https://github.com/langgenius/dify/issues/30743
on:
repository_dispatch:
types: [i18n-sync]
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.json'
workflow_dispatch:
inputs:
files:
@@ -18,9 +16,9 @@ on:
required: false
type: string
mode:
description: 'Sync mode: incremental (only changes) or full (re-check all keys)'
description: 'Sync mode: incremental (compare with previous en-US revision) or full (sync all keys in scope)'
required: false
default: 'incremental'
default: incremental
type: choice
options:
- incremental
@@ -30,6 +28,10 @@ permissions:
contents: write
pull-requests: write
concurrency:
group: translate-i18n-${{ github.ref }}
cancel-in-progress: true
jobs:
translate:
if: github.repository == 'langgenius/dify'
@@ -51,380 +53,132 @@ jobs:
- name: Setup web environment
uses: ./.github/actions/setup-web
- name: Detect changed files and generate diff
id: detect_changes
- name: Prepare sync context
id: context
shell: bash
run: |
if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
# Manual trigger
if [ -n "${{ github.event.inputs.files }}" ]; then
echo "CHANGED_FILES=${{ github.event.inputs.files }}" >> $GITHUB_OUTPUT
else
# Get all JSON files in en-US directory
files=$(ls web/i18n/en-US/*.json 2>/dev/null | xargs -n1 basename | sed 's/.json$//' | tr '\n' ' ')
echo "CHANGED_FILES=$files" >> $GITHUB_OUTPUT
fi
echo "TARGET_LANGS=${{ github.event.inputs.languages }}" >> $GITHUB_OUTPUT
echo "SYNC_MODE=${{ github.event.inputs.mode || 'incremental' }}" >> $GITHUB_OUTPUT
# For manual trigger with incremental mode, get diff from last commit
# For full mode, we'll do a complete check anyway
if [ "${{ github.event.inputs.mode }}" == "full" ]; then
echo "Full mode: will check all keys" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
else
git diff HEAD~1..HEAD -- 'web/i18n/en-US/*.json' > /tmp/i18n-diff.txt 2>/dev/null || echo "" > /tmp/i18n-diff.txt
if [ -s /tmp/i18n-diff.txt ]; then
echo "DIFF_AVAILABLE=true" >> $GITHUB_OUTPUT
else
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
fi
fi
elif [ "${{ github.event_name }}" == "repository_dispatch" ]; then
# Triggered by push via trigger-i18n-sync.yml workflow
# Validate required payload fields
if [ -z "${{ github.event.client_payload.changed_files }}" ]; then
echo "Error: repository_dispatch payload missing required 'changed_files' field" >&2
exit 1
fi
echo "CHANGED_FILES=${{ github.event.client_payload.changed_files }}" >> $GITHUB_OUTPUT
echo "TARGET_LANGS=" >> $GITHUB_OUTPUT
echo "SYNC_MODE=${{ github.event.client_payload.sync_mode || 'incremental' }}" >> $GITHUB_OUTPUT
# Decode the base64-encoded diff from the trigger workflow
if [ -n "${{ github.event.client_payload.diff_base64 }}" ]; then
if ! echo "${{ github.event.client_payload.diff_base64 }}" | base64 -d > /tmp/i18n-diff.txt 2>&1; then
echo "Warning: Failed to decode base64 diff payload" >&2
echo "" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
elif [ -s /tmp/i18n-diff.txt ]; then
echo "DIFF_AVAILABLE=true" >> $GITHUB_OUTPUT
else
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
fi
else
echo "" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
if [ "${{ github.event_name }}" = "push" ]; then
BASE_SHA="${{ github.event.before }}"
if [ -z "$BASE_SHA" ] || [ "$BASE_SHA" = "0000000000000000000000000000000000000000" ]; then
BASE_SHA=$(git rev-parse HEAD~1 2>/dev/null || true)
fi
HEAD_SHA="${{ github.sha }}"
CHANGED_FILES=$(git diff --name-only "$BASE_SHA" "$HEAD_SHA" -- 'web/i18n/en-US/*.json' 2>/dev/null | sed -n 's@^.*/@@p' | sed 's/\.json$//' | tr '\n' ' ' | sed 's/[[:space:]]*$//')
TARGET_LANGS=""
SYNC_MODE="incremental"
else
echo "Unsupported event type: ${{ github.event_name }}"
exit 1
HEAD_SHA=$(git rev-parse HEAD)
TARGET_LANGS="${{ github.event.inputs.languages }}"
SYNC_MODE="${{ github.event.inputs.mode || 'incremental' }}"
if [ -n "${{ github.event.inputs.files }}" ]; then
CHANGED_FILES="${{ github.event.inputs.files }}"
else
CHANGED_FILES=$(find web/i18n/en-US -maxdepth 1 -type f -name '*.json' -print | sed -n 's@^.*/@@p' | sed 's/\.json$//' | sort | tr '\n' ' ' | sed 's/[[:space:]]*$//')
fi
if [ "$SYNC_MODE" = "incremental" ]; then
BASE_SHA=$(git rev-parse HEAD~1 2>/dev/null || true)
else
BASE_SHA=""
fi
fi
# Truncate diff if too large (keep first 50KB)
if [ -f /tmp/i18n-diff.txt ]; then
head -c 50000 /tmp/i18n-diff.txt > /tmp/i18n-diff-truncated.txt
mv /tmp/i18n-diff-truncated.txt /tmp/i18n-diff.txt
FILE_ARGS=""
if [ -n "$CHANGED_FILES" ]; then
FILE_ARGS="--file $CHANGED_FILES"
fi
echo "Detected files: $(cat $GITHUB_OUTPUT | grep CHANGED_FILES || echo 'none')"
LANG_ARGS=""
if [ -n "$TARGET_LANGS" ]; then
LANG_ARGS="--lang $TARGET_LANGS"
fi
{
echo "BASE_SHA=$BASE_SHA"
echo "HEAD_SHA=$HEAD_SHA"
echo "CHANGED_FILES=$CHANGED_FILES"
echo "TARGET_LANGS=$TARGET_LANGS"
echo "SYNC_MODE=$SYNC_MODE"
echo "FILE_ARGS=$FILE_ARGS"
echo "LANG_ARGS=$LANG_ARGS"
} >> "$GITHUB_OUTPUT"
echo "Files: ${CHANGED_FILES:-<none>}"
echo "Languages: ${TARGET_LANGS:-<all supported>}"
echo "Mode: $SYNC_MODE"
- name: Run Claude Code for Translation Sync
if: steps.detect_changes.outputs.CHANGED_FILES != ''
uses: anthropics/claude-code-action@ff9acae5886d41a99ed4ec14b7dc147d55834722 # v1.0.77
if: steps.context.outputs.CHANGED_FILES != ''
uses: anthropics/claude-code-action@88c168b39e7e64da0286d812b6e9fbebb6708185 # v1.0.82
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
# Allow github-actions bot to trigger this workflow via repository_dispatch
# See: https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
allowed_bots: 'github-actions[bot]'
prompt: |
You are a professional i18n synchronization engineer for the Dify project.
Your task is to keep all language translations in sync with the English source (en-US).
You are the i18n sync agent for the Dify repository.
Your job is to keep translations synchronized with the English source files under `${{ github.workspace }}/web/i18n/en-US/`, then open a PR with the result.
## CRITICAL TOOL RESTRICTIONS
- Use **Read** tool to read files (NOT cat or bash)
- Use **Edit** tool to modify JSON files (NOT node, jq, or bash scripts)
- Use **Bash** ONLY for: git commands, gh commands, pnpm commands
- Run bash commands ONE BY ONE, never combine with && or ||
- NEVER use `$()` command substitution - it's not supported. Split into separate commands instead.
Use absolute paths at all times:
- Repo root: `${{ github.workspace }}`
- Web directory: `${{ github.workspace }}/web`
- Language config: `${{ github.workspace }}/web/i18n-config/languages.ts`
## WORKING DIRECTORY & ABSOLUTE PATHS
Claude Code sandbox working directory may vary. Always use absolute paths:
- For pnpm: `pnpm --dir ${{ github.workspace }}/web <command>`
- For git: `git -C ${{ github.workspace }} <command>`
- For gh: `gh --repo ${{ github.repository }} <command>`
- For file paths: `${{ github.workspace }}/web/i18n/`
Inputs:
- Files in scope: `${{ steps.context.outputs.CHANGED_FILES }}`
- Target languages: `${{ steps.context.outputs.TARGET_LANGS }}`
- Sync mode: `${{ steps.context.outputs.SYNC_MODE }}`
- Base SHA: `${{ steps.context.outputs.BASE_SHA }}`
- Head SHA: `${{ steps.context.outputs.HEAD_SHA }}`
- Scoped file args: `${{ steps.context.outputs.FILE_ARGS }}`
- Scoped language args: `${{ steps.context.outputs.LANG_ARGS }}`
## EFFICIENCY RULES
- **ONE Edit per language file** - batch all key additions into a single Edit
- Insert new keys at the beginning of JSON (after `{`), lint:fix will sort them
- Translate ALL keys for a language mentally first, then do ONE Edit
## Context
- Changed/target files: ${{ steps.detect_changes.outputs.CHANGED_FILES }}
- Target languages (empty means all supported): ${{ steps.detect_changes.outputs.TARGET_LANGS }}
- Sync mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}
- Translation files are located in: ${{ github.workspace }}/web/i18n/{locale}/{filename}.json
- Language configuration is in: ${{ github.workspace }}/web/i18n-config/languages.ts
- Git diff is available: ${{ steps.detect_changes.outputs.DIFF_AVAILABLE }}
## CRITICAL DESIGN: Verify First, Then Sync
You MUST follow this three-phase approach:
═══════════════════════════════════════════════════════════════
║ PHASE 1: VERIFY - Analyze and Generate Change Report ║
═══════════════════════════════════════════════════════════════
### Step 1.1: Analyze Git Diff (for incremental mode)
Use the Read tool to read `/tmp/i18n-diff.txt` to see the git diff.
Parse the diff to categorize changes:
- Lines with `+` (not `+++`): Added or modified values
- Lines with `-` (not `---`): Removed or old values
- Identify specific keys for each category:
* ADD: Keys that appear only in `+` lines (new keys)
* UPDATE: Keys that appear in both `-` and `+` lines (value changed)
* DELETE: Keys that appear only in `-` lines (removed keys)
### Step 1.2: Read Language Configuration
Use the Read tool to read `${{ github.workspace }}/web/i18n-config/languages.ts`.
Extract all languages with `supported: true`.
### Step 1.3: Run i18n:check for Each Language
```bash
pnpm --dir ${{ github.workspace }}/web install --frozen-lockfile
```
```bash
pnpm --dir ${{ github.workspace }}/web run i18n:check
```
This will report:
- Missing keys (need to ADD)
- Extra keys (need to DELETE)
### Step 1.4: Generate Change Report
Create a structured report identifying:
```
╔══════════════════════════════════════════════════════════════╗
║ I18N SYNC CHANGE REPORT ║
╠══════════════════════════════════════════════════════════════╣
║ Files to process: [list] ║
║ Languages to sync: [list] ║
╠══════════════════════════════════════════════════════════════╣
║ ADD (New Keys): ║
║ - [filename].[key]: "English value" ║
║ ... ║
╠══════════════════════════════════════════════════════════════╣
║ UPDATE (Modified Keys - MUST re-translate): ║
║ - [filename].[key]: "Old value" → "New value" ║
║ ... ║
╠══════════════════════════════════════════════════════════════╣
║ DELETE (Extra Keys): ║
║ - [language]/[filename].[key] ║
║ ... ║
╚══════════════════════════════════════════════════════════════╝
```
**IMPORTANT**: For UPDATE detection, compare git diff to find keys where
the English value changed. These MUST be re-translated even if target
language already has a translation (it's now stale!).
═══════════════════════════════════════════════════════════════
║ PHASE 2: SYNC - Execute Changes Based on Report ║
═══════════════════════════════════════════════════════════════
### Step 2.1: Process ADD Operations (BATCH per language file)
**CRITICAL WORKFLOW for efficiency:**
1. First, translate ALL new keys for ALL languages mentally
2. Then, for EACH language file, do ONE Edit operation:
- Read the file once
- Insert ALL new keys at the beginning (right after the opening `{`)
- Don't worry about alphabetical order - lint:fix will sort them later
Example Edit (adding 3 keys to zh-Hans/app.json):
```
old_string: '{\n "accessControl"'
new_string: '{\n "newKey1": "translation1",\n "newKey2": "translation2",\n "newKey3": "translation3",\n "accessControl"'
```
**IMPORTANT**:
- ONE Edit per language file (not one Edit per key!)
- Always use the Edit tool. NEVER use bash scripts, node, or jq.
### Step 2.2: Process UPDATE Operations
**IMPORTANT: Special handling for zh-Hans and ja-JP**
If zh-Hans or ja-JP files were ALSO modified in the same push:
- Run: `git -C ${{ github.workspace }} diff HEAD~1 --name-only` and check for zh-Hans or ja-JP files
- If found, it means someone manually translated them. Apply these rules:
1. **Missing keys**: Still ADD them (completeness required)
2. **Existing translations**: Compare with the NEW English value:
- If translation is **completely wrong** or **unrelated** → Update it
- If translation is **roughly correct** (captures the meaning) → Keep it, respect manual work
- When in doubt, **keep the manual translation**
Example:
- English changed: "Save" → "Save Changes"
- Manual translation: "保存更改" → Keep it (correct meaning)
- Manual translation: "删除" → Update it (completely wrong)
For other languages:
Use Edit tool to replace the old value with the new translation.
You can batch multiple updates in one Edit if they are adjacent.
### Step 2.3: Process DELETE Operations
For extra keys reported by i18n:check:
- Run: `pnpm --dir ${{ github.workspace }}/web run i18n:check --auto-remove`
- Or manually remove from target language JSON files
## Translation Guidelines
- PRESERVE all placeholders exactly as-is:
- `{{variable}}` - Mustache interpolation
- `${variable}` - Template literal
- `<tag>content</tag>` - HTML tags
- `_one`, `_other` - Pluralization suffixes (these are KEY suffixes, not values)
**CRITICAL: Variable names and tag names MUST stay in English - NEVER translate them**
✅ CORRECT examples:
- English: "{{count}} items" → Japanese: "{{count}} 個のアイテム"
- English: "{{name}} updated" → Korean: "{{name}} 업데이트됨"
- English: "<email>{{email}}</email>" → Chinese: "<email>{{email}}</email>"
- English: "<CustomLink>Marketplace</CustomLink>" → Japanese: "<CustomLink>マーケットプレイス</CustomLink>"
❌ WRONG examples (NEVER do this - will break the application):
- "{{count}}" → "{{カウント}}" ❌ (variable name translated to Japanese)
- "{{name}}" → "{{이름}}" ❌ (variable name translated to Korean)
- "{{email}}" → "{{邮箱}}" ❌ (variable name translated to Chinese)
- "<email>" → "<メール>" ❌ (tag name translated)
- "<CustomLink>" → "<自定义链接>" ❌ (component name translated)
- Use appropriate language register (formal/informal) based on existing translations
- Match existing translation style in each language
- Technical terms: check existing conventions per language
- For CJK languages: no spaces between characters unless necessary
- For RTL languages (ar-TN, fa-IR): ensure proper text handling
## Output Format Requirements
- Alphabetical key ordering (if original file uses it)
- 2-space indentation
- Trailing newline at end of file
- Valid JSON (use proper escaping for special characters)
═══════════════════════════════════════════════════════════════
║ PHASE 3: RE-VERIFY - Confirm All Issues Resolved ║
═══════════════════════════════════════════════════════════════
### Step 3.1: Run Lint Fix (IMPORTANT!)
```bash
pnpm --dir ${{ github.workspace }}/web lint:fix --quiet -- 'i18n/**/*.json'
```
This ensures:
- JSON keys are sorted alphabetically (jsonc/sort-keys rule)
- Valid i18n keys (dify-i18n/valid-i18n-keys rule)
- No extra keys (dify-i18n/no-extra-keys rule)
### Step 3.2: Run Final i18n Check
```bash
pnpm --dir ${{ github.workspace }}/web run i18n:check
```
### Step 3.3: Fix Any Remaining Issues
If check reports issues:
- Go back to PHASE 2 for unresolved items
- Repeat until check passes
### Step 3.4: Generate Final Summary
```
╔══════════════════════════════════════════════════════════════╗
║ SYNC COMPLETED SUMMARY ║
╠══════════════════════════════════════════════════════════════╣
║ Language │ Added │ Updated │ Deleted │ Status ║
╠══════════════════════════════════════════════════════════════╣
║ zh-Hans │ 5 │ 2 │ 1 │ ✓ Complete ║
║ ja-JP │ 5 │ 2 │ 1 │ ✓ Complete ║
║ ... │ ... │ ... │ ... │ ... ║
╠══════════════════════════════════════════════════════════════╣
║ i18n:check │ PASSED - All keys in sync ║
╚══════════════════════════════════════════════════════════════╝
```
## Mode-Specific Behavior
**SYNC_MODE = "incremental"** (default):
- Focus on keys identified from git diff
- Also check i18n:check output for any missing/extra keys
- Efficient for small changes
**SYNC_MODE = "full"**:
- Compare ALL keys between en-US and each language
- Run i18n:check to identify all discrepancies
- Use for first-time sync or fixing historical issues
## Important Notes
1. Always run i18n:check BEFORE and AFTER making changes
2. The check script is the source of truth for missing/extra keys
3. For UPDATE scenario: git diff is the source of truth for changed values
4. Create a single commit with all translation changes
5. If any translation fails, continue with others and report failures
═══════════════════════════════════════════════════════════════
║ PHASE 4: COMMIT AND CREATE PR ║
═══════════════════════════════════════════════════════════════
After all translations are complete and verified:
### Step 4.1: Check for changes
```bash
git -C ${{ github.workspace }} status --porcelain
```
If there are changes:
### Step 4.2: Create a new branch and commit
Run these git commands ONE BY ONE (not combined with &&).
**IMPORTANT**: Do NOT use `$()` command substitution. Use two separate commands:
1. First, get the timestamp:
```bash
date +%Y%m%d-%H%M%S
```
(Note the output, e.g., "20260115-143052")
2. Then create branch using the timestamp value:
```bash
git -C ${{ github.workspace }} checkout -b chore/i18n-sync-20260115-143052
```
(Replace "20260115-143052" with the actual timestamp from step 1)
3. Stage changes:
```bash
git -C ${{ github.workspace }} add web/i18n/
```
4. Commit:
```bash
git -C ${{ github.workspace }} commit -m "chore(i18n): sync translations with en-US - Mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}"
```
5. Push:
```bash
git -C ${{ github.workspace }} push origin HEAD
```
### Step 4.3: Create Pull Request
```bash
gh pr create --repo ${{ github.repository }} --title "chore(i18n): sync translations with en-US" --body "## Summary
This PR was automatically generated to sync i18n translation files.
### Changes
- Mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}
- Files processed: ${{ steps.detect_changes.outputs.CHANGED_FILES }}
### Verification
- [x] \`i18n:check\` passed
- [x] \`lint:fix\` applied
🤖 Generated with Claude Code GitHub Action" --base main
```
Tool rules:
- Use Read for repository files.
- Use Edit for JSON updates.
- Use Bash only for `git`, `gh`, `pnpm`, and `date`.
- Run Bash commands one by one. Do not combine commands with `&&`, `||`, pipes, or command substitution.
Required execution plan:
1. Resolve target languages.
- If no target languages were provided, read `${{ github.workspace }}/web/i18n-config/languages.ts` and use every language with `supported: true`.
2. Stay strictly in scope.
- Only process the files listed in `Files in scope`.
- Only process the resolved target languages.
- Do not touch unrelated i18n files.
3. Detect English changes per file.
- Read the current English JSON file for each file in scope.
- If sync mode is `incremental` and `Base SHA` is not empty, run:
`git -C ${{ github.workspace }} show <Base SHA>:web/i18n/en-US/<file>.json`
- If sync mode is `full` or `Base SHA` is empty, skip historical comparison and treat the current English file as the only source of truth for structural sync.
- If the file did not exist at Base SHA, treat all current keys as ADD.
- Compare previous and current English JSON to identify:
- ADD: key only in current
- UPDATE: key exists in both and the English value changed
- DELETE: key only in previous
- Do not rely on a truncated diff file.
4. Run a scoped pre-check before editing:
- `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}`
- Use this command as the source of truth for missing and extra keys inside the current scope.
5. Apply translations.
- For every target language and scoped file:
- ADD missing keys.
- UPDATE stale translations when the English value changed.
- DELETE removed keys. Prefer `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }} --auto-remove` for extra keys so deletions stay in scope.
- For `zh-Hans` and `ja-JP`, if the locale file also changed between Base SHA and Head SHA, preserve manual translations unless they are clearly wrong for the new English value. If in doubt, keep the manual translation.
- Preserve placeholders exactly: `{{variable}}`, `${variable}`, HTML tags, component tags, and variable names.
- Match the existing terminology and register used by each locale.
- Prefer one Edit per file when stable, but prioritize correctness over batching.
6. Verify only the edited files.
- Run `pnpm --dir ${{ github.workspace }}/web lint:fix --quiet -- <relative edited i18n file paths>`
- Run `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}`
- If verification fails, fix the remaining problems before continuing.
7. Create a PR only when there are changes in `web/i18n/`.
- Check `git -C ${{ github.workspace }} status --porcelain -- web/i18n/`
- Create branch `chore/i18n-sync-<timestamp>`
- Commit message: `chore(i18n): sync translations with en-US`
- Push the branch and open a PR against `main`
- PR title: `chore(i18n): sync translations with en-US`
- PR body: summarize files, languages, sync mode, and verification commands
8. If there are no translation changes after verification, do not create a branch, commit, or PR.
claude_args: |
--max-turns 150
--allowedTools "Read,Write,Edit,Bash(git *),Bash(git:*),Bash(gh *),Bash(gh:*),Bash(pnpm *),Bash(pnpm:*),Bash(date *),Bash(date:*),Glob,Grep"
--max-turns 80
--allowedTools "Read,Edit,Bash(git *),Bash(git:*),Bash(gh *),Bash(gh:*),Bash(pnpm *),Bash(pnpm:*),Bash(date *),Bash(date:*),Glob,Grep"
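
For orientation, a minimal sketch of the ADD/UPDATE/DELETE classification the prompt asks the agent to perform between two revisions of an en-US JSON file. The helper name and the flat-dictionary assumption are illustrative, not part of the workflow.

```python
import json

def classify_changes(previous: dict, current: dict):
    """Classify key-level changes between two en-US JSON revisions."""
    added = {k: v for k, v in current.items() if k not in previous}
    deleted = [k for k in previous if k not in current]
    updated = {
        k: (previous[k], current[k])
        for k in current
        if k in previous and previous[k] != current[k]
    }
    return added, updated, deleted

# UPDATE entries are the stale-translation cases: "Save" -> "Save Changes"
# must be re-translated even though a target translation already exists.
prev = json.loads('{"save": "Save", "removed": "Old"}')
curr = json.loads('{"save": "Save Changes", "brandNew": "New"}')
print(classify_changes(prev, curr))
```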


@@ -1,66 +0,0 @@
name: Trigger i18n Sync on Push
# This workflow bridges the push event to repository_dispatch
# because claude-code-action doesn't support push events directly.
# See: https://github.com/langgenius/dify/issues/30743
on:
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.json'
permissions:
contents: write
jobs:
trigger:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Detect changed files and generate diff
id: detect
run: |
BEFORE_SHA="${{ github.event.before }}"
# Handle edge case: force push may have null/zero SHA
if [ -z "$BEFORE_SHA" ] || [ "$BEFORE_SHA" = "0000000000000000000000000000000000000000" ]; then
BEFORE_SHA="HEAD~1"
fi
# Detect changed i18n files
changed=$(git diff --name-only "$BEFORE_SHA" "${{ github.sha }}" -- 'web/i18n/en-US/*.json' 2>/dev/null | xargs -n1 basename 2>/dev/null | sed 's/.json$//' | tr '\n' ' ' || echo "")
echo "changed_files=$changed" >> $GITHUB_OUTPUT
# Generate diff for context
git diff "$BEFORE_SHA" "${{ github.sha }}" -- 'web/i18n/en-US/*.json' > /tmp/i18n-diff.txt 2>/dev/null || echo "" > /tmp/i18n-diff.txt
# Truncate if too large (keep first 50KB to match receiving workflow)
head -c 50000 /tmp/i18n-diff.txt > /tmp/i18n-diff-truncated.txt
mv /tmp/i18n-diff-truncated.txt /tmp/i18n-diff.txt
# Base64 encode the diff for safe JSON transport (portable, single-line)
diff_base64=$(base64 < /tmp/i18n-diff.txt | tr -d '\n')
echo "diff_base64=$diff_base64" >> $GITHUB_OUTPUT
if [ -n "$changed" ]; then
echo "has_changes=true" >> $GITHUB_OUTPUT
echo "Detected changed files: $changed"
else
echo "has_changes=false" >> $GITHUB_OUTPUT
echo "No i18n changes detected"
fi
- name: Trigger i18n sync workflow
if: steps.detect.outputs.has_changes == 'true'
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.GITHUB_TOKEN }}
event-type: i18n-sync
client-payload: '{"changed_files": "${{ steps.detect.outputs.changed_files }}", "diff_base64": "${{ steps.detect.outputs.diff_base64 }}", "sync_mode": "incremental", "trigger_sha": "${{ github.sha }}"}'


@@ -30,7 +30,7 @@ jobs:
remove_tool_cache: true
- name: Setup UV and Python
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}

.github/workflows/web-e2e.yml (new file, 72 lines)

@@ -0,0 +1,72 @@
name: Web Full-Stack E2E
on:
workflow_call:
permissions:
contents: read
concurrency:
group: web-e2e-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
name: Web Full-Stack E2E
runs-on: ubuntu-latest
defaults:
run:
shell: bash
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Setup web dependencies
uses: ./.github/actions/setup-web
- name: Install E2E package dependencies
working-directory: ./e2e
run: pnpm install --frozen-lockfile
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
cache-dependency-glob: api/uv.lock
- name: Install API dependencies
run: uv sync --project api --dev
- name: Install Playwright browser
working-directory: ./e2e
run: pnpm run e2e:install
- name: Run isolated source-api and built-web Cucumber E2E tests
working-directory: ./e2e
env:
E2E_ADMIN_EMAIL: e2e-admin@example.com
E2E_ADMIN_NAME: E2E Admin
E2E_ADMIN_PASSWORD: E2eAdmin12345
E2E_FORCE_WEB_BUILD: "1"
E2E_INIT_PASSWORD: E2eInit12345
run: pnpm run e2e:full
- name: Upload Cucumber report
if: ${{ !cancelled() }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: cucumber-report
path: e2e/cucumber-report
retention-days: 7
- name: Upload E2E logs
if: ${{ !cancelled() }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: e2e-logs
path: e2e/.logs
retention-days: 7


@@ -22,8 +22,8 @@ jobs:
strategy:
fail-fast: false
matrix:
shardIndex: [1, 2, 3, 4, 5, 6]
shardTotal: [6]
shardIndex: [1, 2, 3, 4]
shardTotal: [4]
defaults:
run:
shell: bash
@@ -66,7 +66,6 @@ jobs:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup web environment
@@ -84,7 +83,7 @@ jobs:
- name: Report coverage
if: ${{ env.CODECOV_TOKEN != '' }}
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
directory: web/coverage
flags: web


@@ -74,6 +74,12 @@ type-check:
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Type checks complete"
type-check-core:
@echo "📝 Running core type checks (basedpyright + mypy)..."
@./dev/basedpyright-check $(PATH_TO_CHECK)
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Core type checks complete"
test:
@echo "🧪 Running backend unit tests..."
@if [ -n "$(TARGET_TESTS)" ]; then \
@@ -133,6 +139,7 @@ help:
@echo " make check - Check code with ruff"
@echo " make lint - Format, fix, and lint code (ruff, imports, dotenv)"
@echo " make type-check - Run type checks (basedpyright, pyrefly, mypy)"
@echo " make type-check-core - Run core type checks (basedpyright, mypy)"
@echo " make test - Run backend unit tests (or TARGET_TESTS=./api/tests/<target_tests>)"
@echo ""
@echo "Docker Build Targets:"


@@ -53,7 +53,11 @@
<a href="./docs/tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./docs/vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./docs/de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./docs/it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="./docs/pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="./docs/sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="./docs/bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="./docs/hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features (including [Opik](https://www.comet.com/docs/opik/integrations/dify), [Langfuse](https://docs.langfuse.com), and [Arize Phoenix](https://docs.arize.com/phoenix)) and more, letting you quickly go from prototype to production. Here's a list of the core features:


@@ -127,7 +127,8 @@ ALIYUN_OSS_AUTH_VERSION=v1
ALIYUN_OSS_REGION=your-region
# Don't start with '/'. OSS doesn't support leading slash in object names.
ALIYUN_OSS_PATH=your-path
ALIYUN_CLOUDBOX_ID=your-cloudbox-id
# Optional CloudBox ID for Aliyun OSS, DO NOT enable it if you are not using CloudBox.
#ALIYUN_CLOUDBOX_ID=your-cloudbox-id
# Google Storage configuration
GOOGLE_STORAGE_BUCKET_NAME=your-bucket-name


@@ -8,6 +8,7 @@ Go admin-api caller.
from flask import request
from flask_restx import Resource
from pydantic import BaseModel, Field
from sqlalchemy import select
from sqlalchemy.orm import Session
from controllers.common.schema import register_schema_model
@@ -87,7 +88,7 @@ class EnterpriseAppDSLExport(Resource):
"""Export an app's DSL as YAML."""
include_secret = request.args.get("include_secret", "false").lower() == "true"
app_model = db.session.query(App).filter_by(id=app_id).first()
app_model = db.session.get(App, app_id)
if not app_model:
return {"message": "app not found"}, 404
@@ -104,7 +105,7 @@ def _get_active_account(email: str) -> Account | None:
Workspace membership is already validated by the Go admin-api caller.
"""
account = db.session.query(Account).filter_by(email=email).first()
account = db.session.scalar(select(Account).where(Account.email == email).limit(1))
if account is None or account.status != AccountStatus.ACTIVE:
return None
return account
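
The two lookup styles above recur throughout this series of refactors (encrypter, tool_manager, and the dataset retriever tools below use the same idioms). A self-contained sketch of both, using a throwaway SQLite model rather than Dify's actual Account model:

```python
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Account(Base):
    __tablename__ = "accounts"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Account(id=1, email="a@example.com"))
    session.commit()

    # Primary-key lookup: session.get() consults the identity map and
    # replaces query(Account).filter_by(id=...).first().
    by_id = session.get(Account, 1)

    # Non-key lookup: select() + limit(1) executed via scalar() replaces
    # the legacy query(...).filter_by(...).first().
    by_email = session.scalar(
        select(Account).where(Account.email == "a@example.com").limit(1)
    )
    print(by_id is by_email)  # True: same identity-mapped instance
```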


@@ -18,7 +18,7 @@ from graphon.model_runtime.entities import (
from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from graphon.model_runtime.entities.model_entities import ModelFeature
from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from sqlalchemy import select
from sqlalchemy import func, select
from core.agent.entities import AgentEntity, AgentToolEntity
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
@@ -104,11 +104,14 @@ class BaseAgentRunner(AppRunner):
)
# get how many agent thoughts have been created
self.agent_thought_count = (
db.session.query(MessageAgentThought)
.where(
MessageAgentThought.message_id == self.message.id,
db.session.scalar(
select(func.count())
.select_from(MessageAgentThought)
.where(
MessageAgentThought.message_id == self.message.id,
)
)
.count()
or 0
)
db.session.close()
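
A runnable sketch of the counting idiom the diff adopts, with a stand-in model instead of MessageAgentThought:

```python
from sqlalchemy import String, create_engine, func, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Thought(Base):
    __tablename__ = "thoughts"
    id: Mapped[int] = mapped_column(primary_key=True)
    message_id: Mapped[str] = mapped_column(String(36))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Thought(message_id="m1"), Thought(message_id="m1")])
    session.commit()

    # COUNT(*) computed in SQL via select(func.count()), replacing the
    # legacy Query.count(); the `or 0` fallback mirrors the diff.
    count = session.scalar(
        select(func.count())
        .select_from(Thought)
        .where(Thought.message_id == "m1")
    ) or 0
    print(count)  # 2
```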


@@ -1,7 +1,7 @@
import logging
from collections.abc import Sequence
from sqlalchemy import select
from sqlalchemy import select, update
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.app_invoke_entities import InvokeFrom
@@ -70,23 +70,21 @@ class DatasetIndexToolCallbackHandler:
)
child_chunk = db.session.scalar(child_chunk_stmt)
if child_chunk:
_ = (
db.session.query(DocumentSegment)
db.session.execute(
update(DocumentSegment)
.where(DocumentSegment.id == child_chunk.segment_id)
.update(
{DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, synchronize_session=False
)
.values(hit_count=DocumentSegment.hit_count + 1)
)
else:
query = db.session.query(DocumentSegment).where(
DocumentSegment.index_node_id == document.metadata["doc_id"]
)
conditions = [DocumentSegment.index_node_id == document.metadata["doc_id"]]
if "dataset_id" in document.metadata:
query = query.where(DocumentSegment.dataset_id == document.metadata["dataset_id"])
conditions.append(DocumentSegment.dataset_id == document.metadata["dataset_id"])
# add hit count to document segment
query.update({DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, synchronize_session=False)
db.session.execute(
update(DocumentSegment).where(*conditions).values(hit_count=DocumentSegment.hit_count + 1)
)
db.session.commit()
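
A runnable sketch of the 2.0-style bulk UPDATE used above, with a stand-in model instead of DocumentSegment; the SQL-side increment keeps the counter correct across concurrent sessions:

```python
from sqlalchemy import create_engine, update
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Segment(Base):
    __tablename__ = "segments"
    id: Mapped[int] = mapped_column(primary_key=True)
    hit_count: Mapped[int] = mapped_column(default=0)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Segment(id=1, hit_count=3))
    session.commit()

    # One UPDATE statement with a SQL-side increment, replacing
    # Query.update({...}, synchronize_session=False).
    session.execute(
        update(Segment)
        .where(Segment.id == 1)
        .values(hit_count=Segment.hit_count + 1)
    )
    session.commit()
    print(session.get(Segment, 1).hit_count)  # 4
```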


@@ -19,7 +19,7 @@ def encrypt_token(tenant_id: str, token: str):
from extensions.ext_database import db
from models.account import Tenant
if not (tenant := db.session.query(Tenant).where(Tenant.id == tenant_id).first()):
if not (tenant := db.session.get(Tenant, tenant_id)):
raise ValueError(f"Tenant with id {tenant_id} not found")
assert tenant.encrypt_public_key is not None
encrypted_token = rsa.encrypt(token, tenant.encrypt_public_key)


@@ -10,6 +10,7 @@ from graphon.model_runtime.entities.llm_entities import LLMResult
from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage
from graphon.model_runtime.entities.model_entities import ModelType
from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from sqlalchemy import select
from core.app.app_config.entities import ModelConfig
from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload
@@ -410,8 +411,8 @@ class LLMGenerator:
model_config: ModelConfig,
ideal_output: str | None,
):
last_run: Message | None = (
db.session.query(Message).where(Message.app_id == flow_id).order_by(Message.created_at.desc()).first()
last_run: Message | None = db.session.scalar(
select(Message).where(Message.app_id == flow_id).order_by(Message.created_at.desc()).limit(1)
)
if not last_run:
return LLMGenerator.__instruction_modify_common(

View File

@@ -227,7 +227,7 @@ class PluginAppBackwardsInvocation(BaseBackwardsInvocation):
get app
"""
try:
-            app = db.session.query(App).where(App.id == app_id).where(App.tenant_id == tenant_id).first()
+            app = db.session.scalar(select(App).where(App.id == app_id, App.tenant_id == tenant_id).limit(1))
except Exception:
raise ValueError("app not found")

View File

@@ -1,4 +1,4 @@
-from sqlalchemy import select
+from sqlalchemy import delete, select
from core.tools.__base.tool_provider import ToolProviderController
from core.tools.builtin_tool.provider import BuiltinToolProviderController
@@ -31,7 +31,7 @@ class ToolLabelManager:
raise ValueError("Unsupported tool type")
# delete old labels
-        db.session.query(ToolLabelBinding).where(ToolLabelBinding.tool_id == provider_id).delete()
+        db.session.execute(delete(ToolLabelBinding).where(ToolLabelBinding.tool_id == provider_id))
# insert new labels
for label in labels:
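
Core delete() replaces the legacy Query.delete() bulk API. A minimal sketch of the delete-then-insert flow above, with a hypothetical LabelBinding model standing in for ToolLabelBinding:

from sqlalchemy import delete
from sqlalchemy.orm import Session

def replace_labels(session: Session, tool_id: str, labels: list[str]) -> None:
    # Issue the DELETE directly, then insert the deduplicated labels.
    session.execute(delete(LabelBinding).where(LabelBinding.tool_id == tool_id))
    session.add_all(LabelBinding(tool_id=tool_id, label_name=label) for label in set(labels))
    session.commit()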

View File

@@ -255,11 +255,11 @@ class ToolManager:
if builtin_provider is None:
raise ToolProviderNotFoundError(f"no default provider for {provider_id}")
else:
-            builtin_provider = (
-                db.session.query(BuiltinToolProvider)
+            builtin_provider = db.session.scalar(
+                select(BuiltinToolProvider)
.where(BuiltinToolProvider.tenant_id == tenant_id, (BuiltinToolProvider.provider == provider_id))
.order_by(BuiltinToolProvider.is_default.desc(), BuiltinToolProvider.created_at.asc())
-                .first()
+                .limit(1)
)
if builtin_provider is None:
@@ -818,13 +818,13 @@ class ToolManager:
:return: the provider controller, the credentials
"""
-        provider: ApiToolProvider | None = (
-            db.session.query(ApiToolProvider)
+        provider: ApiToolProvider | None = db.session.scalar(
+            select(ApiToolProvider)
.where(
ApiToolProvider.id == provider_id,
ApiToolProvider.tenant_id == tenant_id,
)
-            .first()
+            .limit(1)
)
if provider is None:
@@ -872,13 +872,13 @@ class ToolManager:
get api provider
"""
provider_name = provider
-        provider_obj: ApiToolProvider | None = (
-            db.session.query(ApiToolProvider)
+        provider_obj: ApiToolProvider | None = db.session.scalar(
+            select(ApiToolProvider)
.where(
ApiToolProvider.tenant_id == tenant_id,
ApiToolProvider.name == provider,
)
-            .first()
+            .limit(1)
)
if provider_obj is None:
@@ -964,10 +964,10 @@ class ToolManager:
@classmethod
def generate_workflow_tool_icon_url(cls, tenant_id: str, provider_id: str) -> EmojiIconDict:
try:
-            workflow_provider: WorkflowToolProvider | None = (
-                db.session.query(WorkflowToolProvider)
+            workflow_provider: WorkflowToolProvider | None = db.session.scalar(
+                select(WorkflowToolProvider)
.where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == provider_id)
-                .first()
+                .limit(1)
)
if workflow_provider is None:
@@ -981,10 +981,10 @@ class ToolManager:
@classmethod
def generate_api_tool_icon_url(cls, tenant_id: str, provider_id: str) -> EmojiIconDict:
try:
-            api_provider: ApiToolProvider | None = (
-                db.session.query(ApiToolProvider)
+            api_provider: ApiToolProvider | None = db.session.scalar(
+                select(ApiToolProvider)
.where(ApiToolProvider.tenant_id == tenant_id, ApiToolProvider.id == provider_id)
-                .first()
+                .limit(1)
)
if api_provider is None:

View File

@@ -110,7 +110,7 @@ class DatasetMultiRetrieverTool(DatasetRetrieverBaseTool):
context_list: list[RetrievalSourceMetadata] = []
resource_number = 1
for segment in sorted_segments:
-            dataset = db.session.query(Dataset).filter_by(id=segment.dataset_id).first()
+            dataset = db.session.get(Dataset, segment.dataset_id)
document_stmt = select(Document).where(
Document.id == segment.document_id,
Document.enabled == True,

View File

@@ -205,7 +205,7 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool):
if self.return_resource:
for record in records:
segment = record.segment
-                dataset = db.session.query(Dataset).filter_by(id=segment.dataset_id).first()
+                dataset = db.session.get(Dataset, segment.dataset_id)
dataset_document_stmt = select(DatasetDocument).where(
DatasetDocument.id == segment.document_id,
DatasetDocument.enabled == True,

View File

@@ -35,15 +35,13 @@ class ApiKeyAuthService:
@staticmethod
def get_auth_credentials(tenant_id: str, category: str, provider: str):
-        data_source_api_key_bindings = (
-            db.session.query(DataSourceApiKeyAuthBinding)
-            .where(
+        data_source_api_key_bindings = db.session.scalar(
+            select(DataSourceApiKeyAuthBinding).where(
DataSourceApiKeyAuthBinding.tenant_id == tenant_id,
DataSourceApiKeyAuthBinding.category == category,
DataSourceApiKeyAuthBinding.provider == provider,
DataSourceApiKeyAuthBinding.disabled.is_(False),
)
-            .first()
)
if not data_source_api_key_bindings:
return None
@@ -54,10 +52,11 @@ class ApiKeyAuthService:
@staticmethod
def delete_provider_auth(tenant_id: str, binding_id: str):
-        data_source_api_key_binding = (
-            db.session.query(DataSourceApiKeyAuthBinding)
-            .where(DataSourceApiKeyAuthBinding.tenant_id == tenant_id, DataSourceApiKeyAuthBinding.id == binding_id)
-            .first()
+        data_source_api_key_binding = db.session.scalar(
+            select(DataSourceApiKeyAuthBinding).where(
+                DataSourceApiKeyAuthBinding.tenant_id == tenant_id,
+                DataSourceApiKeyAuthBinding.id == binding_id,
+            )
)
if data_source_api_key_binding:
db.session.delete(data_source_api_key_binding)
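
Unlike the bulk delete() in ToolLabelManager, this path loads the row and removes it with Session.delete(), so ORM-level events and cascades still fire. A sketch, with a hypothetical Binding model:

from sqlalchemy import select
from sqlalchemy.orm import Session

def delete_binding(session: Session, tenant_id: str, binding_id: str) -> None:
    binding = session.scalar(
        select(Binding).where(Binding.tenant_id == tenant_id, Binding.id == binding_id)
    )
    if binding:
        session.delete(binding)  # ORM delete: unit-of-work cascades apply
        session.commit()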

View File

@@ -555,6 +555,124 @@ class TestWorkflowService:
assert len(result_workflows) == 2
assert all(wf.marked_name for wf in result_workflows)
def test_get_all_published_workflow_no_workflow_id(self, db_session_with_containers: Session):
"""Test that an app with no workflow_id returns empty results."""
# Arrange
fake = Faker()
app = self._create_test_app(db_session_with_containers, fake)
app.workflow_id = None
db_session_with_containers.commit()
workflow_service = WorkflowService()
# Act
result_workflows, has_more = workflow_service.get_all_published_workflow(
session=db_session_with_containers, app_model=app, page=1, limit=10, user_id=None
)
# Assert
assert result_workflows == []
assert has_more is False
def test_get_all_published_workflow_basic(self, db_session_with_containers: Session):
"""Test basic retrieval of published workflows."""
# Arrange
fake = Faker()
account = self._create_test_account(db_session_with_containers, fake)
app = self._create_test_app(db_session_with_containers, fake)
workflow1 = self._create_test_workflow(db_session_with_containers, app, account, fake)
workflow1.version = "2024.01.01.001"
workflow2 = self._create_test_workflow(db_session_with_containers, app, account, fake)
workflow2.version = "2024.01.02.001"
app.workflow_id = workflow1.id
db_session_with_containers.commit()
workflow_service = WorkflowService()
# Act
result_workflows, has_more = workflow_service.get_all_published_workflow(
session=db_session_with_containers, app_model=app, page=1, limit=10, user_id=None
)
# Assert
assert len(result_workflows) == 2
assert has_more is False
def test_get_all_published_workflow_combined_filters(self, db_session_with_containers: Session):
"""Test combined user_id and named_only filters."""
# Arrange
fake = Faker()
account1 = self._create_test_account(db_session_with_containers, fake)
account2 = self._create_test_account(db_session_with_containers, fake)
app = self._create_test_app(db_session_with_containers, fake)
# account1 named
wf1 = self._create_test_workflow(db_session_with_containers, app, account1, fake)
wf1.version = "2024.01.01.001"
wf1.marked_name = "Named by user1"
wf1.created_by = account1.id
# account1 unnamed
wf2 = self._create_test_workflow(db_session_with_containers, app, account1, fake)
wf2.version = "2024.01.02.001"
wf2.marked_name = ""
wf2.created_by = account1.id
# account2 named
wf3 = self._create_test_workflow(db_session_with_containers, app, account2, fake)
wf3.version = "2024.01.03.001"
wf3.marked_name = "Named by user2"
wf3.created_by = account2.id
app.workflow_id = wf1.id
db_session_with_containers.commit()
workflow_service = WorkflowService()
# Act - Filter by account1 + named_only
result_workflows, has_more = workflow_service.get_all_published_workflow(
session=db_session_with_containers,
app_model=app,
page=1,
limit=10,
user_id=account1.id,
named_only=True,
)
# Assert - Only wf1 matches (account1 + named)
assert len(result_workflows) == 1
assert result_workflows[0].marked_name == "Named by user1"
assert result_workflows[0].created_by == account1.id
def test_get_all_published_workflow_empty_result(self, db_session_with_containers: Session):
"""Test that querying with no matching workflows returns empty."""
# Arrange
fake = Faker()
account = self._create_test_account(db_session_with_containers, fake)
app = self._create_test_app(db_session_with_containers, fake)
# Create a draft workflow (no version set = draft)
workflow = self._create_test_workflow(db_session_with_containers, app, account, fake)
app.workflow_id = workflow.id
db_session_with_containers.commit()
workflow_service = WorkflowService()
# Act - Filter by a user that has no workflows
result_workflows, has_more = workflow_service.get_all_published_workflow(
session=db_session_with_containers,
app_model=app,
page=1,
limit=10,
user_id="00000000-0000-0000-0000-000000000000",
)
# Assert
assert result_workflows == []
assert has_more is False
def test_sync_draft_workflow_create_new(self, db_session_with_containers: Session):
"""
Test creating a new draft workflow through sync operation.

View File

@@ -1,4 +1,6 @@
-from unittest.mock import patch
+from __future__ import annotations
+
+from unittest.mock import MagicMock, patch
import pytest
from faker import Faker
@@ -534,3 +536,283 @@ class TestWorkspaceService:
# Verify database state
db_session_with_containers.refresh(tenant)
assert tenant.id is not None
def test_get_tenant_info_should_raise_assertion_when_join_missing(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""TenantAccountJoin must exist; missing join should raise AssertionError."""
fake = Faker()
account = Account(email=fake.email(), name=fake.name(), interface_language="en-US", status="active")
db_session_with_containers.add(account)
db_session_with_containers.commit()
tenant = Tenant(name=fake.company(), status="normal", plan="basic")
db_session_with_containers.add(tenant)
db_session_with_containers.commit()
# No TenantAccountJoin created
with patch("services.workspace_service.current_user", account):
with pytest.raises(AssertionError, match="TenantAccountJoin not found"):
WorkspaceService.get_tenant_info(tenant)
def test_get_tenant_info_should_set_replace_webapp_logo_to_none_when_flag_absent(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""replace_webapp_logo should be None when custom_config_dict does not have the key."""
import json
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
tenant.custom_config = json.dumps({})
db_session_with_containers.commit()
mock_external_service_dependencies["feature_service"].get_features.return_value.can_replace_logo = True
mock_external_service_dependencies["tenant_service"].has_roles.return_value = True
with patch("services.workspace_service.current_user", account):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["custom_config"]["replace_webapp_logo"] is None
def test_get_tenant_info_should_use_files_url_for_logo_url(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""The logo URL should use dify_config.FILES_URL as the base."""
import json
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
tenant.custom_config = json.dumps({"replace_webapp_logo": True})
db_session_with_containers.commit()
custom_base = "https://cdn.mycompany.io"
mock_external_service_dependencies["dify_config"].FILES_URL = custom_base
mock_external_service_dependencies["feature_service"].get_features.return_value.can_replace_logo = True
mock_external_service_dependencies["tenant_service"].has_roles.return_value = True
with patch("services.workspace_service.current_user", account):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["custom_config"]["replace_webapp_logo"].startswith(custom_base)
def test_get_tenant_info_should_not_include_cloud_fields_in_self_hosted(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""next_credit_reset_date and trial_credits should NOT appear in SELF_HOSTED mode."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "SELF_HOSTED"
mock_external_service_dependencies["feature_service"].get_features.return_value.can_replace_logo = False
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
with patch("services.workspace_service.current_user", account):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert "next_credit_reset_date" not in result
assert "trial_credits" not in result
assert "trial_credits_used" not in result
def test_get_tenant_info_cloud_credit_reset_date(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""next_credit_reset_date should be present in CLOUD edition."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", return_value=None),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["next_credit_reset_date"] == "2025-02-01"
def test_get_tenant_info_cloud_paid_pool_not_full(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""trial_credits come from paid pool when plan is not sandbox and pool is not full."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
paid_pool = MagicMock(quota_limit=1000, quota_used=200)
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", return_value=paid_pool),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["trial_credits"] == 1000
assert result["trial_credits_used"] == 200
def test_get_tenant_info_cloud_paid_pool_unlimited(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""quota_limit == -1 means unlimited; service should use paid pool."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
paid_pool = MagicMock(quota_limit=-1, quota_used=999)
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", side_effect=[paid_pool, None]),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["trial_credits"] == -1
assert result["trial_credits_used"] == 999
def test_get_tenant_info_cloud_fall_back_to_trial_when_paid_full(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""When paid pool is exhausted, switch to trial pool."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
paid_pool = MagicMock(quota_limit=500, quota_used=500)
trial_pool = MagicMock(quota_limit=100, quota_used=10)
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", side_effect=[paid_pool, trial_pool]),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["trial_credits"] == 100
assert result["trial_credits_used"] == 10
def test_get_tenant_info_cloud_fall_back_to_trial_when_paid_none(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""When paid_pool is None, fall back to trial pool."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
trial_pool = MagicMock(quota_limit=50, quota_used=5)
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", side_effect=[None, trial_pool]),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["trial_credits"] == 50
assert result["trial_credits_used"] == 5
def test_get_tenant_info_cloud_sandbox_uses_trial_pool(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""When plan is SANDBOX, skip paid pool and use trial pool."""
from enums.cloud_plan import CloudPlan
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = CloudPlan.SANDBOX
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
paid_pool = MagicMock(quota_limit=1000, quota_used=0)
trial_pool = MagicMock(quota_limit=200, quota_used=20)
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", side_effect=[paid_pool, trial_pool]),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert result["trial_credits"] == 200
assert result["trial_credits_used"] == 20
def test_get_tenant_info_cloud_both_pools_none(
self, db_session_with_containers: Session, mock_external_service_dependencies
):
"""When both paid and trial pools are absent, trial_credits should not be set."""
fake = Faker()
account, tenant = self._create_test_account_and_tenant(
db_session_with_containers, mock_external_service_dependencies
)
mock_external_service_dependencies["dify_config"].EDITION = "CLOUD"
feature = mock_external_service_dependencies["feature_service"].get_features.return_value
feature.can_replace_logo = False
feature.next_credit_reset_date = "2025-02-01"
feature.billing.subscription.plan = "professional"
mock_external_service_dependencies["tenant_service"].has_roles.return_value = False
with (
patch("services.workspace_service.current_user", account),
patch("services.credit_pool_service.CreditPoolService.get_pool", side_effect=[None, None]),
):
result = WorkspaceService.get_tenant_info(tenant)
assert result is not None
assert "trial_credits" not in result
assert "trial_credits_used" not in result
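
These tests lean on side_effect accepting a sequence: each call to the patched get_pool consumes the next element, so [paid_pool, trial_pool] encodes "paid pool checked first, trial pool second". A tiny runnable illustration:

from unittest.mock import MagicMock

pool_lookup = MagicMock(side_effect=["paid", "trial"])
assert pool_lookup() == "paid"   # first call returns the first element
assert pool_lookup() == "trial"  # second call returns the second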

View File

@@ -64,18 +64,18 @@ class TestGetActiveAccount:
def test_returns_active_account(self, mock_db):
mock_account = MagicMock()
mock_account.status = "active"
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = mock_account
+        mock_db.session.scalar.return_value = mock_account
result = _get_active_account("user@example.com")
assert result is mock_account
-        mock_db.session.query.return_value.filter_by.assert_called_once_with(email="user@example.com")
+        mock_db.session.scalar.assert_called_once()
@patch("controllers.inner_api.app.dsl.db")
def test_returns_none_for_inactive_account(self, mock_db):
mock_account = MagicMock()
mock_account.status = "banned"
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = mock_account
+        mock_db.session.scalar.return_value = mock_account
result = _get_active_account("banned@example.com")
@@ -83,7 +83,7 @@ class TestGetActiveAccount:
@patch("controllers.inner_api.app.dsl.db")
def test_returns_none_for_nonexistent_email(self, mock_db):
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
+        mock_db.session.scalar.return_value = None
result = _get_active_account("missing@example.com")
@@ -205,7 +205,7 @@ class TestEnterpriseAppDSLExport:
@patch("controllers.inner_api.app.dsl.db")
def test_export_success_returns_200(self, mock_db, mock_dsl_cls, api_instance, app: Flask):
mock_app = MagicMock()
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = mock_app
+        mock_db.session.get.return_value = mock_app
mock_dsl_cls.export_dsl.return_value = "version: 0.6.0\nkind: app\n"
unwrapped = inspect.unwrap(api_instance.get)
@@ -221,7 +221,7 @@ class TestEnterpriseAppDSLExport:
@patch("controllers.inner_api.app.dsl.db")
def test_export_with_secret(self, mock_db, mock_dsl_cls, api_instance, app: Flask):
mock_app = MagicMock()
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = mock_app
+        mock_db.session.get.return_value = mock_app
mock_dsl_cls.export_dsl.return_value = "yaml-data"
unwrapped = inspect.unwrap(api_instance.get)
@@ -234,7 +234,7 @@ class TestEnterpriseAppDSLExport:
@patch("controllers.inner_api.app.dsl.db")
def test_export_app_not_found_returns_404(self, mock_db, api_instance, app: Flask):
-        mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
+        mock_db.session.get.return_value = None
unwrapped = inspect.unwrap(api_instance.get)
with app.test_request_context("?include_secret=false"):

View File

@@ -621,7 +621,7 @@ class TestConvertDatasetRetrieverTool:
class TestBaseAgentRunnerInit:
def test_init_sets_stream_tool_call_and_files(self, mocker):
session = mocker.MagicMock()
-        session.query.return_value.where.return_value.count.return_value = 2
+        session.scalar.return_value = 2
mocker.patch.object(module.db, "session", session)
mocker.patch.object(BaseAgentRunner, "organize_agent_history", return_value=[])
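
Stubbing db.session.scalar directly is what keeps these tests decoupled from statement construction: only the session entry point is faked, not the select()/where() chain. Illustrative shape:

from unittest.mock import MagicMock

session = MagicMock()
session.scalar.return_value = 2  # whatever SELECT count(*) would yield
assert session.scalar("any statement") == 2
session.scalar.assert_called_once()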

View File

@@ -114,13 +114,9 @@ class TestOnToolEnd:
document = mocker.Mock()
document.metadata = {"document_id": "doc-1", "doc_id": "node-1"}
-        mock_query = mocker.Mock()
-        mock_db.session.query.return_value = mock_query
-        mock_query.where.return_value = mock_query
handler.on_tool_end([document])
-        mock_query.update.assert_called_once()
+        mock_db.session.execute.assert_called_once()
mock_db.session.commit.assert_called_once()
def test_on_tool_end_non_parent_child_index(self, handler, mocker):
@@ -138,13 +134,9 @@ class TestOnToolEnd:
"dataset_id": "dataset-1",
}
-        mock_query = mocker.Mock()
-        mock_db.session.query.return_value = mock_query
-        mock_query.where.return_value = mock_query
handler.on_tool_end([document])
-        mock_query.update.assert_called_once()
+        mock_db.session.execute.assert_called_once()
mock_db.session.commit.assert_called_once()
def test_on_tool_end_empty_documents(self, handler):

View File

@@ -38,13 +38,13 @@ class TestObfuscatedToken:
class TestEncryptToken:
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_successful_encryption(self, mock_encrypt, mock_query):
"""Test successful token encryption"""
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "mock_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
mock_encrypt.return_value = b"encrypted_data"
result = encrypt_token("tenant-123", "test_token")
@@ -52,10 +52,10 @@ class TestEncryptToken:
assert result == base64.b64encode(b"encrypted_data").decode()
mock_encrypt.assert_called_with("test_token", "mock_public_key")
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
def test_tenant_not_found(self, mock_query):
"""Test error when tenant doesn't exist"""
-        mock_query.return_value.where.return_value.first.return_value = None
+        mock_query.return_value = None
with pytest.raises(ValueError) as exc_info:
encrypt_token("invalid-tenant", "test_token")
@@ -119,7 +119,7 @@ class TestGetDecryptDecoding:
class TestEncryptDecryptIntegration:
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
@patch("libs.rsa.decrypt")
def test_should_encrypt_and_decrypt_consistently(self, mock_decrypt, mock_encrypt, mock_query):
@@ -127,7 +127,7 @@ class TestEncryptDecryptIntegration:
# Setup mock tenant
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "mock_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
# Setup mock encryption/decryption
original_token = "test_token_123"
@@ -146,14 +146,14 @@ class TestEncryptDecryptIntegration:
class TestSecurity:
"""Critical security tests for encryption system"""
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_cross_tenant_isolation(self, mock_encrypt, mock_query):
"""Ensure tokens encrypted for one tenant cannot be used by another"""
# Setup mock tenant
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "tenant1_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
mock_encrypt.return_value = b"encrypted_for_tenant1"
# Encrypt token for tenant1
@@ -181,12 +181,12 @@ class TestSecurity:
with pytest.raises(Exception, match="Decryption error"):
decrypt_token("tenant-123", tampered)
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_encryption_randomness(self, mock_encrypt, mock_query):
"""Ensure same plaintext produces different ciphertext"""
mock_tenant = MagicMock(encrypt_public_key="key")
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
# Different outputs for same input
mock_encrypt.side_effect = [b"enc1", b"enc2", b"enc3"]
@@ -205,13 +205,13 @@ class TestEdgeCases:
# Test empty string (which is a valid str type)
assert obfuscated_token("") == ""
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_should_handle_empty_token_encryption(self, mock_encrypt, mock_query):
"""Test encryption of empty token"""
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "mock_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
mock_encrypt.return_value = b"encrypted_empty"
result = encrypt_token("tenant-123", "")
@@ -219,13 +219,13 @@ class TestEdgeCases:
assert result == base64.b64encode(b"encrypted_empty").decode()
mock_encrypt.assert_called_with("", "mock_public_key")
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_should_handle_special_characters_in_token(self, mock_encrypt, mock_query):
"""Test tokens containing special/unicode characters"""
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "mock_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
mock_encrypt.return_value = b"encrypted_special"
# Test various special characters
@@ -242,13 +242,13 @@ class TestEdgeCases:
assert result == base64.b64encode(b"encrypted_special").decode()
mock_encrypt.assert_called_with(token, "mock_public_key")
-    @patch("models.engine.db.session.query")
+    @patch("extensions.ext_database.db.session.get")
@patch("libs.rsa.encrypt")
def test_should_handle_rsa_size_limits(self, mock_encrypt, mock_query):
"""Test behavior when token exceeds RSA encryption limits"""
mock_tenant = MagicMock()
mock_tenant.encrypt_public_key = "mock_public_key"
-        mock_query.return_value.where.return_value.first.return_value = mock_tenant
+        mock_query.return_value = mock_tenant
# RSA 2048-bit can only encrypt ~245 bytes
# The actual limit depends on padding scheme
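
One reading aid for these stacked decorators: @patch applies bottom-up, so the decorator nearest the function supplies the first mock argument (here mock_encrypt before mock_query, even though mock_query now patches session.get). A self-contained illustration:

from unittest.mock import patch
import json
import os

@patch("os.getcwd")   # outermost decorator -> last mock argument
@patch("json.dumps")  # innermost decorator -> first mock argument
def demo(mock_dumps, mock_getcwd):
    assert mock_dumps is json.dumps    # patched in place for the call
    assert mock_getcwd is os.getcwd

demo()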

View File

@@ -314,8 +314,8 @@ class TestLLMGenerator:
assert "An unexpected error occurred" in result["error"]
def test_instruction_modify_legacy_no_last_run(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
# Mock __instruction_modify_common call via invoke_llm
mock_response = MagicMock()
@@ -328,12 +328,12 @@ class TestLLMGenerator:
assert result == {"modified": "prompt"}
def test_instruction_modify_legacy_with_last_run(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
last_run = MagicMock()
last_run.query = "q"
last_run.answer = "a"
last_run.error = "e"
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = last_run
+            mock_scalar.return_value = last_run
mock_response = MagicMock()
mock_response.message.get_text_content.return_value = '{"modified": "prompt"}'
@@ -483,8 +483,8 @@ class TestLLMGenerator:
def test_instruction_modify_common_placeholders(self, mock_model_instance, model_config_entity):
# Testing placeholders replacement via instruction_modify_legacy for convenience
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_response = MagicMock()
mock_response.message.get_text_content.return_value = '{"ok": true}'
@@ -504,8 +504,8 @@ class TestLLMGenerator:
assert "current_val" in user_msg_dict["instruction"]
def test_instruction_modify_common_no_braces(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_response = MagicMock()
mock_response.message.get_text_content.return_value = "No braces here"
mock_model_instance.invoke_llm.return_value = mock_response
@@ -516,8 +516,8 @@ class TestLLMGenerator:
assert "Could not find a valid JSON object" in result["error"]
def test_instruction_modify_common_not_dict(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_response = MagicMock()
mock_response.message.get_text_content.return_value = "[1, 2, 3]"
mock_model_instance.invoke_llm.return_value = mock_response
@@ -556,8 +556,8 @@ class TestLLMGenerator:
)
def test_instruction_modify_common_invoke_error(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_model_instance.invoke_llm.side_effect = InvokeError("Invoke Failed")
result = LLMGenerator.instruction_modify_legacy(
@@ -566,8 +566,8 @@ class TestLLMGenerator:
assert "Failed to generate code" in result["error"]
def test_instruction_modify_common_exception(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_model_instance.invoke_llm.side_effect = Exception("Random error")
result = LLMGenerator.instruction_modify_legacy(
@@ -576,8 +576,8 @@ class TestLLMGenerator:
assert "An unexpected error occurred" in result["error"]
def test_instruction_modify_common_json_error(self, mock_model_instance, model_config_entity):
-        with patch("extensions.ext_database.db.session.query") as mock_query:
-            mock_query.return_value.where.return_value.order_by.return_value.first.return_value = None
+        with patch("extensions.ext_database.db.session.scalar") as mock_scalar:
+            mock_scalar.return_value = None
mock_response = MagicMock()
mock_response.message.get_text_content.return_value = "No JSON here"

View File

@@ -332,27 +332,21 @@ class TestPluginAppBackwardsInvocation:
PluginAppBackwardsInvocation._get_user("uid")
def test_get_app_returns_app(self, mocker):
-        query_chain = MagicMock()
-        query_chain.where.return_value = query_chain
         app_obj = MagicMock(id="app")
-        query_chain.first.return_value = app_obj
-        db = SimpleNamespace(session=MagicMock(query=MagicMock(return_value=query_chain)))
+        db = SimpleNamespace(session=MagicMock(scalar=MagicMock(return_value=app_obj)))
mocker.patch("core.plugin.backwards_invocation.app.db", db)
assert PluginAppBackwardsInvocation._get_app("app", "tenant") is app_obj
def test_get_app_raises_when_missing(self, mocker):
-        query_chain = MagicMock()
-        query_chain.where.return_value = query_chain
-        query_chain.first.return_value = None
-        db = SimpleNamespace(session=MagicMock(query=MagicMock(return_value=query_chain)))
+        db = SimpleNamespace(session=MagicMock(scalar=MagicMock(return_value=None)))
mocker.patch("core.plugin.backwards_invocation.app.db", db)
with pytest.raises(ValueError, match="app not found"):
PluginAppBackwardsInvocation._get_app("app", "tenant")
def test_get_app_raises_when_query_fails(self, mocker):
-        db = SimpleNamespace(session=MagicMock(query=MagicMock(side_effect=RuntimeError("db down"))))
+        db = SimpleNamespace(session=MagicMock(scalar=MagicMock(side_effect=RuntimeError("db down"))))
mocker.patch("core.plugin.backwards_invocation.app.db", db)
with pytest.raises(ValueError, match="app not found"):

View File

@@ -38,11 +38,9 @@ def test_tool_label_manager_filter_tool_labels():
def test_tool_label_manager_update_tool_labels_db():
controller = _api_controller("api-1")
with patch("core.tools.tool_label_manager.db") as mock_db:
-        delete_query = mock_db.session.query.return_value.where.return_value
-        delete_query.delete.return_value = None
         ToolLabelManager.update_tool_labels(controller, ["search", "search", "invalid"])
-        delete_query.delete.assert_called_once()
+        mock_db.session.execute.assert_called_once()
# only one valid unique label should be inserted.
assert mock_db.session.add.call_count == 1
mock_db.session.commit.assert_called_once()

View File

@@ -220,9 +220,7 @@ def test_get_tool_runtime_builtin_with_credentials_decrypts_and_forks():
with patch.object(ToolManager, "get_builtin_provider", return_value=controller):
with patch("core.helper.credential_utils.check_credential_policy_compliance"):
with patch("core.tools.tool_manager.db") as mock_db:
-                mock_db.session.query.return_value.where.return_value.order_by.return_value.first.return_value = (
-                    builtin_provider
-                )
+                mock_db.session.scalar.return_value = builtin_provider
encrypter = Mock()
encrypter.decrypt.return_value = {"api_key": "secret"}
cache = Mock()
@@ -274,7 +272,7 @@ def test_get_tool_runtime_builtin_refreshes_expired_oauth_credentials(
)
refreshed = SimpleNamespace(credentials={"token": "new"}, expires_at=123456)
-    mock_db.session.query.return_value.where.return_value.order_by.return_value.first.return_value = builtin_provider
+    mock_db.session.scalar.return_value = builtin_provider
encrypter = Mock()
encrypter.decrypt.return_value = {"token": "old"}
encrypter.encrypt.return_value = {"token": "encrypted"}
@@ -698,12 +696,10 @@ def test_get_api_provider_controller_returns_controller_and_credentials():
privacy_policy="privacy",
custom_disclaimer="disclaimer",
)
-    db_query = Mock()
-    db_query.where.return_value.first.return_value = provider
controller = Mock()
with patch("core.tools.tool_manager.db") as mock_db:
-        mock_db.session.query.return_value = db_query
+        mock_db.session.scalar.return_value = provider
with patch(
"core.tools.tool_manager.ApiToolProviderController.from_db", return_value=controller
) as mock_from_db:
@@ -730,12 +726,10 @@ def test_user_get_api_provider_masks_credentials_and_adds_labels():
privacy_policy="privacy",
custom_disclaimer="disclaimer",
)
-    db_query = Mock()
-    db_query.where.return_value.first.return_value = provider
controller = Mock()
with patch("core.tools.tool_manager.db") as mock_db:
-        mock_db.session.query.return_value = db_query
+        mock_db.session.scalar.return_value = provider
with patch("core.tools.tool_manager.ApiToolProviderController.from_db", return_value=controller):
encrypter = Mock()
encrypter.decrypt.return_value = {"api_key_value": "secret"}
@@ -750,7 +744,7 @@ def test_user_get_api_provider_masks_credentials_and_adds_labels():
def test_get_api_provider_controller_not_found_raises():
with patch("core.tools.tool_manager.db") as mock_db:
-        mock_db.session.query.return_value.where.return_value.first.return_value = None
+        mock_db.session.scalar.return_value = None
with pytest.raises(ToolProviderNotFoundError, match="api provider missing not found"):
ToolManager.get_api_provider_controller("tenant-1", "missing")
@@ -809,14 +803,14 @@ def test_generate_tool_icon_urls_for_workflow_and_api():
workflow_provider = SimpleNamespace(icon='{"background": "#222", "content": "W"}')
api_provider = SimpleNamespace(icon='{"background": "#333", "content": "A"}')
with patch("core.tools.tool_manager.db") as mock_db:
-        mock_db.session.query.return_value.where.return_value.first.side_effect = [workflow_provider, api_provider]
+        mock_db.session.scalar.side_effect = [workflow_provider, api_provider]
assert ToolManager.generate_workflow_tool_icon_url("tenant-1", "wf-1") == {"background": "#222", "content": "W"}
assert ToolManager.generate_api_tool_icon_url("tenant-1", "api-1") == {"background": "#333", "content": "A"}
def test_generate_tool_icon_urls_missing_workflow_and_api_use_default():
with patch("core.tools.tool_manager.db") as mock_db:
-        mock_db.session.query.return_value.where.return_value.first.return_value = None
+        mock_db.session.scalar.return_value = None
assert ToolManager.generate_workflow_tool_icon_url("tenant-1", "missing")["background"] == "#252525"
assert ToolManager.generate_api_tool_icon_url("tenant-1", "missing")["background"] == "#252525"

View File

@@ -263,7 +263,7 @@ def test_single_dataset_retriever_non_economy_run_sorts_context_and_resources():
)
db_session = Mock()
db_session.scalar.side_effect = [dataset, lookup_doc_low, lookup_doc_high]
-    db_session.query.return_value.filter_by.return_value.first.return_value = dataset
+    db_session.get.return_value = dataset
tool = SingleDatasetRetrieverTool(
tenant_id="tenant-1",
@@ -444,7 +444,7 @@ def test_multi_dataset_retriever_run_orders_segments_and_returns_resources():
)
db_session = Mock()
db_session.scalars.return_value.all.return_value = [segment_for_node_2, segment_for_node_1]
-    db_session.query.return_value.filter_by.return_value.first.side_effect = [
+    db_session.get.side_effect = [
SimpleNamespace(id="dataset-2", name="Dataset Two"),
SimpleNamespace(id="dataset-1", name="Dataset One"),
]

View File

@@ -1,558 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import UTC, datetime
from types import SimpleNamespace
from typing import Any, cast
from unittest.mock import MagicMock
import pytest
from pytest_mock import MockerFixture
from core.rag.index_processor.constant.built_in_field import BuiltInField, MetadataDataSource
from models.dataset import Dataset
from services.entities.knowledge_entities.knowledge_entities import (
DocumentMetadataOperation,
MetadataArgs,
MetadataDetail,
MetadataOperationData,
)
from services.metadata_service import MetadataService
@dataclass
class _DocumentStub:
id: str
name: str
uploader: str
upload_date: datetime
last_update_date: datetime
data_source_type: str
doc_metadata: dict[str, object] | None
@pytest.fixture
def mock_db(mocker: MockerFixture) -> MagicMock:
mocked_db = mocker.patch("services.metadata_service.db")
mocked_db.session = MagicMock()
return mocked_db
@pytest.fixture
def mock_redis_client(mocker: MockerFixture) -> MagicMock:
return mocker.patch("services.metadata_service.redis_client")
@pytest.fixture
def mock_current_account(mocker: MockerFixture) -> MagicMock:
mock_user = SimpleNamespace(id="user-1")
return mocker.patch("services.metadata_service.current_account_with_tenant", return_value=(mock_user, "tenant-1"))
def _build_document(document_id: str, doc_metadata: dict[str, object] | None = None) -> _DocumentStub:
now = datetime(2025, 1, 1, 10, 30, tzinfo=UTC)
return _DocumentStub(
id=document_id,
name=f"doc-{document_id}",
uploader="qa@example.com",
upload_date=now,
last_update_date=now,
data_source_type="upload_file",
doc_metadata=doc_metadata,
)
def _dataset(**kwargs: Any) -> Dataset:
return cast(Dataset, SimpleNamespace(**kwargs))
def test_create_metadata_should_raise_value_error_when_name_exceeds_limit() -> None:
# Arrange
metadata_args = MetadataArgs(type="string", name="x" * 256)
# Act + Assert
with pytest.raises(ValueError, match="cannot exceed 255"):
MetadataService.create_metadata("dataset-1", metadata_args)
def test_create_metadata_should_raise_value_error_when_metadata_name_already_exists(
mock_db: MagicMock,
mock_current_account: MagicMock,
) -> None:
# Arrange
metadata_args = MetadataArgs(type="string", name="priority")
mock_db.session.query.return_value.filter_by.return_value.first.return_value = object()
# Act + Assert
with pytest.raises(ValueError, match="already exists"):
MetadataService.create_metadata("dataset-1", metadata_args)
# Assert
mock_current_account.assert_called_once()
def test_create_metadata_should_raise_value_error_when_name_collides_with_builtin(
mock_db: MagicMock, mock_current_account: MagicMock
) -> None:
# Arrange
metadata_args = MetadataArgs(type="string", name=BuiltInField.document_name)
mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
# Act + Assert
with pytest.raises(ValueError, match="Built-in fields"):
MetadataService.create_metadata("dataset-1", metadata_args)
def test_create_metadata_should_persist_metadata_when_input_is_valid(
mock_db: MagicMock, mock_current_account: MagicMock
) -> None:
# Arrange
metadata_args = MetadataArgs(type="number", name="score")
mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
# Act
result = MetadataService.create_metadata("dataset-1", metadata_args)
# Assert
assert result.tenant_id == "tenant-1"
assert result.dataset_id == "dataset-1"
assert result.type == "number"
assert result.name == "score"
assert result.created_by == "user-1"
mock_db.session.add.assert_called_once_with(result)
mock_db.session.commit.assert_called_once()
mock_current_account.assert_called_once()
def test_update_metadata_name_should_raise_value_error_when_name_exceeds_limit() -> None:
# Arrange
too_long_name = "x" * 256
# Act + Assert
with pytest.raises(ValueError, match="cannot exceed 255"):
MetadataService.update_metadata_name("dataset-1", "metadata-1", too_long_name)
def test_update_metadata_name_should_raise_value_error_when_duplicate_name_exists(
mock_db: MagicMock, mock_current_account: MagicMock
) -> None:
# Arrange
mock_db.session.query.return_value.filter_by.return_value.first.return_value = object()
# Act + Assert
with pytest.raises(ValueError, match="already exists"):
MetadataService.update_metadata_name("dataset-1", "metadata-1", "duplicate")
# Assert
mock_current_account.assert_called_once()
def test_update_metadata_name_should_raise_value_error_when_name_collides_with_builtin(
mock_db: MagicMock,
mock_current_account: MagicMock,
) -> None:
# Arrange
mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
# Act + Assert
with pytest.raises(ValueError, match="Built-in fields"):
MetadataService.update_metadata_name("dataset-1", "metadata-1", BuiltInField.source)
# Assert
mock_current_account.assert_called_once()
def test_update_metadata_name_should_update_bound_documents_and_return_metadata(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mock_current_account: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
fixed_now = datetime(2025, 2, 1, 0, 0, tzinfo=UTC)
mocker.patch("services.metadata_service.naive_utc_now", return_value=fixed_now)
metadata = SimpleNamespace(id="metadata-1", name="old_name", updated_by=None, updated_at=None)
bindings = [SimpleNamespace(document_id="doc-1"), SimpleNamespace(document_id="doc-2")]
query_duplicate = MagicMock()
query_duplicate.filter_by.return_value.first.return_value = None
query_metadata = MagicMock()
query_metadata.filter_by.return_value.first.return_value = metadata
query_bindings = MagicMock()
query_bindings.filter_by.return_value.all.return_value = bindings
mock_db.session.query.side_effect = [query_duplicate, query_metadata, query_bindings]
doc_1 = _build_document("1", {"old_name": "value", "other": "keep"})
doc_2 = _build_document("2", None)
mock_get_documents = mocker.patch("services.metadata_service.DocumentService.get_document_by_ids")
mock_get_documents.return_value = [doc_1, doc_2]
# Act
result = MetadataService.update_metadata_name("dataset-1", "metadata-1", "new_name")
# Assert
assert result is metadata
assert metadata.name == "new_name"
assert metadata.updated_by == "user-1"
assert metadata.updated_at == fixed_now
assert doc_1.doc_metadata == {"other": "keep", "new_name": "value"}
assert doc_2.doc_metadata == {"new_name": None}
mock_get_documents.assert_called_once_with(["doc-1", "doc-2"])
mock_db.session.commit.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
mock_current_account.assert_called_once()
def test_update_metadata_name_should_return_none_when_metadata_does_not_exist(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mock_current_account: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
mock_logger = mocker.patch("services.metadata_service.logger")
query_duplicate = MagicMock()
query_duplicate.filter_by.return_value.first.return_value = None
query_metadata = MagicMock()
query_metadata.filter_by.return_value.first.return_value = None
mock_db.session.query.side_effect = [query_duplicate, query_metadata]
# Act
result = MetadataService.update_metadata_name("dataset-1", "missing-id", "new_name")
# Assert
assert result is None
mock_logger.exception.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
mock_current_account.assert_called_once()
def test_delete_metadata_should_remove_metadata_and_related_document_fields(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
metadata = SimpleNamespace(id="metadata-1", name="obsolete")
bindings = [SimpleNamespace(document_id="doc-1")]
query_metadata = MagicMock()
query_metadata.filter_by.return_value.first.return_value = metadata
query_bindings = MagicMock()
query_bindings.filter_by.return_value.all.return_value = bindings
mock_db.session.query.side_effect = [query_metadata, query_bindings]
document = _build_document("1", {"obsolete": "legacy", "remaining": "value"})
mocker.patch("services.metadata_service.DocumentService.get_document_by_ids", return_value=[document])
# Act
result = MetadataService.delete_metadata("dataset-1", "metadata-1")
# Assert
assert result is metadata
assert document.doc_metadata == {"remaining": "value"}
mock_db.session.delete.assert_called_once_with(metadata)
mock_db.session.commit.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
def test_delete_metadata_should_return_none_when_metadata_is_missing(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
mock_db.session.query.return_value.filter_by.return_value.first.return_value = None
mock_logger = mocker.patch("services.metadata_service.logger")
# Act
result = MetadataService.delete_metadata("dataset-1", "missing-id")
# Assert
assert result is None
mock_logger.exception.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
def test_get_built_in_fields_should_return_all_expected_fields() -> None:
# Arrange
expected_names = {
BuiltInField.document_name,
BuiltInField.uploader,
BuiltInField.upload_date,
BuiltInField.last_update_date,
BuiltInField.source,
}
# Act
result = MetadataService.get_built_in_fields()
# Assert
assert {item["name"] for item in result} == expected_names
assert [item["type"] for item in result] == ["string", "string", "time", "time", "string"]
def test_enable_built_in_field_should_return_immediately_when_already_enabled(
mock_db: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
dataset = _dataset(id="dataset-1", built_in_field_enabled=True)
get_docs = mocker.patch("services.metadata_service.DocumentService.get_working_documents_by_dataset_id")
# Act
MetadataService.enable_built_in_field(dataset)
# Assert
get_docs.assert_not_called()
mock_db.session.commit.assert_not_called()
def test_enable_built_in_field_should_populate_documents_and_enable_flag(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
dataset = _dataset(id="dataset-1", built_in_field_enabled=False)
doc_1 = _build_document("1", {"custom": "value"})
doc_2 = _build_document("2", None)
mocker.patch(
"services.metadata_service.DocumentService.get_working_documents_by_dataset_id",
return_value=[doc_1, doc_2],
)
# Act
MetadataService.enable_built_in_field(dataset)
# Assert
assert dataset.built_in_field_enabled is True
assert doc_1.doc_metadata is not None
assert doc_1.doc_metadata[BuiltInField.document_name] == "doc-1"
assert doc_1.doc_metadata[BuiltInField.source] == MetadataDataSource.upload_file
assert doc_2.doc_metadata is not None
assert doc_2.doc_metadata[BuiltInField.uploader] == "qa@example.com"
mock_db.session.commit.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
def test_disable_built_in_field_should_return_immediately_when_already_disabled(
mock_db: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
dataset = _dataset(id="dataset-1", built_in_field_enabled=False)
get_docs = mocker.patch("services.metadata_service.DocumentService.get_working_documents_by_dataset_id")
# Act
MetadataService.disable_built_in_field(dataset)
# Assert
get_docs.assert_not_called()
mock_db.session.commit.assert_not_called()
def test_disable_built_in_field_should_remove_builtin_keys_and_disable_flag(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
dataset = _dataset(id="dataset-1", built_in_field_enabled=True)
document = _build_document(
"1",
{
BuiltInField.document_name: "doc",
BuiltInField.uploader: "user",
BuiltInField.upload_date: 1.0,
BuiltInField.last_update_date: 2.0,
BuiltInField.source: MetadataDataSource.upload_file,
"custom": "keep",
},
)
mocker.patch(
"services.metadata_service.DocumentService.get_working_documents_by_dataset_id",
return_value=[document],
)
# Act
MetadataService.disable_built_in_field(dataset)
# Assert
assert dataset.built_in_field_enabled is False
assert document.doc_metadata == {"custom": "keep"}
mock_db.session.commit.assert_called_once()
mock_redis_client.delete.assert_called_once_with("dataset_metadata_lock_dataset-1")
def test_update_documents_metadata_should_replace_metadata_and_create_bindings_on_full_update(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mock_current_account: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
dataset = _dataset(id="dataset-1", built_in_field_enabled=False)
document = _build_document("1", {"legacy": "value"})
mocker.patch("services.metadata_service.DocumentService.get_document", return_value=document)
delete_chain = mock_db.session.query.return_value.filter_by.return_value
delete_chain.delete.return_value = 1
operation = DocumentMetadataOperation(
document_id="1",
metadata_list=[MetadataDetail(id="meta-1", name="priority", value="high")],
partial_update=False,
)
metadata_args = MetadataOperationData(operation_data=[operation])
# Act
MetadataService.update_documents_metadata(dataset, metadata_args)
# Assert
assert document.doc_metadata == {"priority": "high"}
delete_chain.delete.assert_called_once()
assert mock_db.session.commit.call_count == 1
mock_redis_client.delete.assert_called_once_with("document_metadata_lock_1")
mock_current_account.assert_called_once()
def test_update_documents_metadata_should_skip_existing_binding_and_preserve_existing_fields_on_partial_update(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mock_current_account: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
dataset = _dataset(id="dataset-1", built_in_field_enabled=True)
document = _build_document("1", {"existing": "value"})
mocker.patch("services.metadata_service.DocumentService.get_document", return_value=document)
mock_db.session.query.return_value.filter_by.return_value.first.return_value = object()
operation = DocumentMetadataOperation(
document_id="1",
metadata_list=[MetadataDetail(id="meta-1", name="new_key", value="new_value")],
partial_update=True,
)
metadata_args = MetadataOperationData(operation_data=[operation])
# Act
MetadataService.update_documents_metadata(dataset, metadata_args)
# Assert
assert document.doc_metadata is not None
assert document.doc_metadata["existing"] == "value"
assert document.doc_metadata["new_key"] == "new_value"
assert document.doc_metadata[BuiltInField.source] == MetadataDataSource.upload_file
assert mock_db.session.commit.call_count == 1
assert mock_db.session.add.call_count == 1
mock_redis_client.delete.assert_called_once_with("document_metadata_lock_1")
mock_current_account.assert_called_once()
def test_update_documents_metadata_should_raise_and_rollback_when_document_not_found(
mock_db: MagicMock,
mock_redis_client: MagicMock,
mocker: MockerFixture,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
dataset = _dataset(id="dataset-1", built_in_field_enabled=False)
mocker.patch("services.metadata_service.DocumentService.get_document", return_value=None)
operation = DocumentMetadataOperation(document_id="404", metadata_list=[], partial_update=True)
metadata_args = MetadataOperationData(operation_data=[operation])
# Act + Assert
with pytest.raises(ValueError, match="Document not found"):
MetadataService.update_documents_metadata(dataset, metadata_args)
# Assert
mock_db.session.rollback.assert_called_once()
mock_redis_client.delete.assert_called_once_with("document_metadata_lock_404")
@pytest.mark.parametrize(
("dataset_id", "document_id", "expected_key"),
[
("dataset-1", None, "dataset_metadata_lock_dataset-1"),
(None, "doc-1", "document_metadata_lock_doc-1"),
],
)
def test_knowledge_base_metadata_lock_check_should_set_lock_when_not_already_locked(
dataset_id: str | None,
document_id: str | None,
expected_key: str,
mock_redis_client: MagicMock,
) -> None:
# Arrange
mock_redis_client.get.return_value = None
# Act
MetadataService.knowledge_base_metadata_lock_check(dataset_id, document_id)
# Assert
mock_redis_client.set.assert_called_once_with(expected_key, 1, ex=3600)
def test_knowledge_base_metadata_lock_check_should_raise_when_dataset_lock_exists(
mock_redis_client: MagicMock,
) -> None:
# Arrange
mock_redis_client.get.return_value = 1
# Act + Assert
with pytest.raises(ValueError, match="knowledge base metadata operation is running"):
MetadataService.knowledge_base_metadata_lock_check("dataset-1", None)
def test_knowledge_base_metadata_lock_check_should_raise_when_document_lock_exists(
mock_redis_client: MagicMock,
) -> None:
# Arrange
mock_redis_client.get.return_value = 1
# Act + Assert
with pytest.raises(ValueError, match="document metadata operation is running"):
MetadataService.knowledge_base_metadata_lock_check(None, "doc-1")
def test_get_dataset_metadatas_should_exclude_builtin_and_include_binding_counts(mock_db: MagicMock) -> None:
# Arrange
dataset = _dataset(
id="dataset-1",
built_in_field_enabled=True,
doc_metadata=[
{"id": "meta-1", "name": "priority", "type": "string"},
{"id": "built-in", "name": "ignored", "type": "string"},
{"id": "meta-2", "name": "score", "type": "number"},
],
)
count_chain = mock_db.session.query.return_value.filter_by.return_value
count_chain.count.side_effect = [3, 1]
# Act
result = MetadataService.get_dataset_metadatas(dataset)
# Assert
assert result["built_in_field_enabled"] is True
assert result["doc_metadata"] == [
{"id": "meta-1", "name": "priority", "type": "string", "count": 3},
{"id": "meta-2", "name": "score", "type": "number", "count": 1},
]
def test_get_dataset_metadatas_should_return_empty_list_when_no_metadata(mock_db: MagicMock) -> None:
# Arrange
dataset = _dataset(id="dataset-1", built_in_field_enabled=False, doc_metadata=None)
# Act
result = MetadataService.get_dataset_metadatas(dataset)
# Assert
assert result == {"doc_metadata": [], "built_in_field_enabled": False}
mock_db.session.query.assert_not_called()
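
For orientation, a minimal sketch of the locking contract the lock-check tests above pin down, reconstructed only from the assertions (key naming, the 3600-second expiry, the error messages) rather than from the service source:

```python
# Hypothetical reconstruction, not the actual MetadataService code; the real
# service presumably uses a module-level Redis client rather than a parameter.
def knowledge_base_metadata_lock_check(redis_client, dataset_id: str | None, document_id: str | None) -> None:
    if dataset_id:
        key = f"dataset_metadata_lock_{dataset_id}"
        if redis_client.get(key):
            raise ValueError("A knowledge base metadata operation is running")
        redis_client.set(key, 1, ex=3600)  # one-hour lock, as asserted above
    elif document_id:
        key = f"document_metadata_lock_{document_id}"
        if redis_client.get(key):
            raise ValueError("A document metadata operation is running")
        redis_client.set(key, 1, ex=3600)
```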

File diff suppressed because it is too large.


@@ -1,576 +0,0 @@
from __future__ import annotations
from types import SimpleNamespace
from typing import Any, cast
from unittest.mock import MagicMock
import pytest
from pytest_mock import MockerFixture
from models.account import Tenant
# ---------------------------------------------------------------------------
# Constants used throughout the tests
# ---------------------------------------------------------------------------
TENANT_ID = "tenant-abc"
ACCOUNT_ID = "account-xyz"
FILES_BASE_URL = "https://files.example.com"
DB_PATH = "services.workspace_service.db"
FEATURE_SERVICE_PATH = "services.workspace_service.FeatureService.get_features"
TENANT_SERVICE_PATH = "services.workspace_service.TenantService.has_roles"
DIFY_CONFIG_PATH = "services.workspace_service.dify_config"
CURRENT_USER_PATH = "services.workspace_service.current_user"
CREDIT_POOL_SERVICE_PATH = "services.credit_pool_service.CreditPoolService.get_pool"
# ---------------------------------------------------------------------------
# Helpers / factories
# ---------------------------------------------------------------------------
def _make_tenant(
tenant_id: str = TENANT_ID,
name: str = "My Workspace",
plan: str = "sandbox",
status: str = "active",
custom_config: dict | None = None,
) -> Tenant:
"""Create a minimal Tenant-like namespace."""
return cast(
Tenant,
SimpleNamespace(
id=tenant_id,
name=name,
plan=plan,
status=status,
created_at="2024-01-01T00:00:00Z",
custom_config_dict=custom_config or {},
),
)
def _make_feature(
can_replace_logo: bool = False,
next_credit_reset_date: str | None = None,
billing_plan: str = "sandbox",
) -> MagicMock:
"""Create a feature namespace matching what FeatureService.get_features returns."""
feature = MagicMock()
feature.can_replace_logo = can_replace_logo
feature.next_credit_reset_date = next_credit_reset_date
feature.billing.subscription.plan = billing_plan
return feature
def _make_pool(quota_limit: int, quota_used: int) -> MagicMock:
pool = MagicMock()
pool.quota_limit = quota_limit
pool.quota_used = quota_used
return pool
def _make_tenant_account_join(role: str = "normal") -> SimpleNamespace:
return SimpleNamespace(role=role)
def _tenant_info(result: object) -> dict[str, Any] | None:
return cast(dict[str, Any] | None, result)
# ---------------------------------------------------------------------------
# Shared fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def mock_current_user() -> SimpleNamespace:
"""Return a lightweight current_user stand-in."""
return SimpleNamespace(id=ACCOUNT_ID)
@pytest.fixture
def basic_mocks(mocker: MockerFixture, mock_current_user: SimpleNamespace) -> dict:
"""
Patch the common external boundaries used by WorkspaceService.get_tenant_info.
Returns a dict of named mocks so individual tests can customise them.
"""
mocker.patch(CURRENT_USER_PATH, mock_current_user)
mock_db_session = mocker.patch(f"{DB_PATH}.session")
mock_query_chain = MagicMock()
mock_db_session.query.return_value = mock_query_chain
mock_query_chain.where.return_value = mock_query_chain
mock_query_chain.first.return_value = _make_tenant_account_join(role="owner")
mock_feature = mocker.patch(FEATURE_SERVICE_PATH, return_value=_make_feature())
mock_has_roles = mocker.patch(TENANT_SERVICE_PATH, return_value=False)
mock_config = mocker.patch(DIFY_CONFIG_PATH)
mock_config.EDITION = "SELF_HOSTED"
mock_config.FILES_URL = FILES_BASE_URL
return {
"db_session": mock_db_session,
"query_chain": mock_query_chain,
"get_features": mock_feature,
"has_roles": mock_has_roles,
"config": mock_config,
}
# ---------------------------------------------------------------------------
# 1. None Tenant Handling
# ---------------------------------------------------------------------------
def test_get_tenant_info_should_return_none_when_tenant_is_none() -> None:
"""get_tenant_info should short-circuit and return None for a falsy tenant."""
from services.workspace_service import WorkspaceService
# Arrange
tenant = None
# Act
result = WorkspaceService.get_tenant_info(cast(Tenant, tenant))
# Assert
assert result is None
def test_get_tenant_info_should_return_none_when_tenant_is_falsy() -> None:
"""get_tenant_info treats any falsy value as absent (e.g. empty string, 0)."""
from services.workspace_service import WorkspaceService
# Arrange / Act / Assert
assert WorkspaceService.get_tenant_info("") is None # type: ignore[arg-type]
# ---------------------------------------------------------------------------
# 2. Basic Tenant Info — happy path
# ---------------------------------------------------------------------------
def test_get_tenant_info_should_return_base_fields(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""get_tenant_info should always return the six base scalar fields."""
from services.workspace_service import WorkspaceService
# Arrange
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["id"] == TENANT_ID
assert result["name"] == "My Workspace"
assert result["plan"] == "sandbox"
assert result["status"] == "active"
assert result["created_at"] == "2024-01-01T00:00:00Z"
assert result["trial_end_reason"] is None
def test_get_tenant_info_should_populate_role_from_tenant_account_join(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""The 'role' field should be taken from TenantAccountJoin, not the default."""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["query_chain"].first.return_value = _make_tenant_account_join(role="admin")
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["role"] == "admin"
def test_get_tenant_info_should_raise_assertion_when_tenant_account_join_missing(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""
The service asserts that TenantAccountJoin exists.
Missing join should raise AssertionError.
"""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["query_chain"].first.return_value = None
tenant = _make_tenant()
# Act + Assert
with pytest.raises(AssertionError, match="TenantAccountJoin not found"):
WorkspaceService.get_tenant_info(tenant)
# ---------------------------------------------------------------------------
# 3. Logo Customisation
# ---------------------------------------------------------------------------
def test_get_tenant_info_should_include_custom_config_when_logo_allowed_and_admin(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""custom_config block should appear for OWNER/ADMIN when can_replace_logo is True."""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["get_features"].return_value = _make_feature(can_replace_logo=True)
basic_mocks["has_roles"].return_value = True
tenant = _make_tenant(
custom_config={
"replace_webapp_logo": True,
"remove_webapp_brand": True,
}
)
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert "custom_config" in result
assert result["custom_config"]["remove_webapp_brand"] is True
expected_logo_url = f"{FILES_BASE_URL}/files/workspaces/{TENANT_ID}/webapp-logo"
assert result["custom_config"]["replace_webapp_logo"] == expected_logo_url
def test_get_tenant_info_should_set_replace_webapp_logo_to_none_when_flag_absent(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""replace_webapp_logo should be None when custom_config_dict does not have the key."""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["get_features"].return_value = _make_feature(can_replace_logo=True)
basic_mocks["has_roles"].return_value = True
tenant = _make_tenant(custom_config={}) # no replace_webapp_logo key
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["custom_config"]["replace_webapp_logo"] is None
def test_get_tenant_info_should_not_include_custom_config_when_logo_not_allowed(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""custom_config should be absent when can_replace_logo is False."""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["get_features"].return_value = _make_feature(can_replace_logo=False)
basic_mocks["has_roles"].return_value = True
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert "custom_config" not in result
def test_get_tenant_info_should_not_include_custom_config_when_user_not_admin(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""custom_config block is gated on OWNER or ADMIN role."""
from services.workspace_service import WorkspaceService
# Arrange
basic_mocks["get_features"].return_value = _make_feature(can_replace_logo=True)
basic_mocks["has_roles"].return_value = False # regular member
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert "custom_config" not in result
def test_get_tenant_info_should_use_files_url_for_logo_url(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""The logo URL should use dify_config.FILES_URL as the base."""
from services.workspace_service import WorkspaceService
# Arrange
custom_base = "https://cdn.mycompany.io"
basic_mocks["config"].FILES_URL = custom_base
basic_mocks["get_features"].return_value = _make_feature(can_replace_logo=True)
basic_mocks["has_roles"].return_value = True
tenant = _make_tenant(custom_config={"replace_webapp_logo": True})
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["custom_config"]["replace_webapp_logo"].startswith(custom_base)
# ---------------------------------------------------------------------------
# 4. Cloud-Edition Credit Features
# ---------------------------------------------------------------------------
CLOUD_BILLING_PLAN_NON_SANDBOX = "professional" # any plan that is not SANDBOX
@pytest.fixture
def cloud_mocks(mocker: MockerFixture, mock_current_user: SimpleNamespace) -> dict:
"""Patches for CLOUD edition tests, billing plan = professional by default."""
mocker.patch(CURRENT_USER_PATH, mock_current_user)
mock_db_session = mocker.patch(f"{DB_PATH}.session")
mock_query_chain = MagicMock()
mock_db_session.query.return_value = mock_query_chain
mock_query_chain.where.return_value = mock_query_chain
mock_query_chain.first.return_value = _make_tenant_account_join(role="owner")
mock_feature = mocker.patch(
FEATURE_SERVICE_PATH,
return_value=_make_feature(
can_replace_logo=False,
next_credit_reset_date="2025-02-01",
billing_plan=CLOUD_BILLING_PLAN_NON_SANDBOX,
),
)
mocker.patch(TENANT_SERVICE_PATH, return_value=False)
mock_config = mocker.patch(DIFY_CONFIG_PATH)
mock_config.EDITION = "CLOUD"
mock_config.FILES_URL = FILES_BASE_URL
return {
"db_session": mock_db_session,
"query_chain": mock_query_chain,
"get_features": mock_feature,
"config": mock_config,
}
def test_get_tenant_info_should_add_next_credit_reset_date_in_cloud_edition(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""next_credit_reset_date should be present in CLOUD edition."""
from services.workspace_service import WorkspaceService
# Arrange
mocker.patch(
CREDIT_POOL_SERVICE_PATH,
side_effect=[None, None], # both paid and trial pools absent
)
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["next_credit_reset_date"] == "2025-02-01"
def test_get_tenant_info_should_use_paid_pool_when_plan_is_not_sandbox_and_pool_not_full(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""trial_credits/trial_credits_used come from the paid pool when conditions are met."""
from services.workspace_service import WorkspaceService
# Arrange
paid_pool = _make_pool(quota_limit=1000, quota_used=200)
mocker.patch(CREDIT_POOL_SERVICE_PATH, return_value=paid_pool)
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["trial_credits"] == 1000
assert result["trial_credits_used"] == 200
def test_get_tenant_info_should_use_paid_pool_when_quota_limit_is_infinite(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""quota_limit == -1 means unlimited; service should still use the paid pool."""
from services.workspace_service import WorkspaceService
# Arrange
paid_pool = _make_pool(quota_limit=-1, quota_used=999)
mocker.patch(CREDIT_POOL_SERVICE_PATH, side_effect=[paid_pool, None])
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["trial_credits"] == -1
assert result["trial_credits_used"] == 999
def test_get_tenant_info_should_fall_back_to_trial_pool_when_paid_pool_is_full(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""When paid pool is exhausted (used >= limit), switch to trial pool."""
from services.workspace_service import WorkspaceService
# Arrange
paid_pool = _make_pool(quota_limit=500, quota_used=500) # exactly full
trial_pool = _make_pool(quota_limit=100, quota_used=10)
mocker.patch(CREDIT_POOL_SERVICE_PATH, side_effect=[paid_pool, trial_pool])
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["trial_credits"] == 100
assert result["trial_credits_used"] == 10
def test_get_tenant_info_should_fall_back_to_trial_pool_when_paid_pool_is_none(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""When paid_pool is None, fall back to trial pool."""
from services.workspace_service import WorkspaceService
# Arrange
trial_pool = _make_pool(quota_limit=50, quota_used=5)
mocker.patch(CREDIT_POOL_SERVICE_PATH, side_effect=[None, trial_pool])
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["trial_credits"] == 50
assert result["trial_credits_used"] == 5
def test_get_tenant_info_should_fall_back_to_trial_pool_for_sandbox_plan(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""
When the subscription plan IS SANDBOX, the paid pool branch is skipped
entirely and we fall back to the trial pool.
"""
from enums.cloud_plan import CloudPlan
from services.workspace_service import WorkspaceService
# Arrange — override billing plan to SANDBOX
cloud_mocks["get_features"].return_value = _make_feature(
next_credit_reset_date="2025-02-01",
billing_plan=CloudPlan.SANDBOX,
)
paid_pool = _make_pool(quota_limit=1000, quota_used=0)
trial_pool = _make_pool(quota_limit=200, quota_used=20)
mocker.patch(CREDIT_POOL_SERVICE_PATH, side_effect=[paid_pool, trial_pool])
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert result["trial_credits"] == 200
assert result["trial_credits_used"] == 20
def test_get_tenant_info_should_omit_trial_credits_when_both_pools_are_none(
mocker: MockerFixture,
cloud_mocks: dict,
) -> None:
"""When both paid and trial pools are absent, trial_credits should not be set."""
from services.workspace_service import WorkspaceService
# Arrange
mocker.patch(CREDIT_POOL_SERVICE_PATH, side_effect=[None, None])
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert "trial_credits" not in result
assert "trial_credits_used" not in result
# ---------------------------------------------------------------------------
# 5. Self-hosted / Non-Cloud Edition
# ---------------------------------------------------------------------------
def test_get_tenant_info_should_not_include_cloud_fields_in_self_hosted(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""next_credit_reset_date and trial_credits should NOT appear in SELF_HOSTED mode."""
from services.workspace_service import WorkspaceService
# Arrange (basic_mocks already sets EDITION = "SELF_HOSTED")
tenant = _make_tenant()
# Act
result = _tenant_info(WorkspaceService.get_tenant_info(tenant))
# Assert
assert result is not None
assert "next_credit_reset_date" not in result
assert "trial_credits" not in result
assert "trial_credits_used" not in result
# ---------------------------------------------------------------------------
# 6. DB query integrity
# ---------------------------------------------------------------------------
def test_get_tenant_info_should_query_tenant_account_join_with_correct_ids(
mocker: MockerFixture,
basic_mocks: dict,
) -> None:
"""
The DB query for TenantAccountJoin must be scoped to the correct
tenant_id and current_user.id.
"""
from services.workspace_service import WorkspaceService
# Arrange
tenant = _make_tenant(tenant_id="my-special-tenant")
mock_current_user = mocker.patch(CURRENT_USER_PATH)
mock_current_user.id = "special-user-id"
# Act
WorkspaceService.get_tenant_info(tenant)
# Assert — db.session.query was invoked (at least once)
basic_mocks["db_session"].query.assert_called()


@@ -1,415 +0,0 @@
from contextlib import nullcontext
from types import SimpleNamespace
from unittest.mock import MagicMock
import pytest
from graphon.entities.graph_config import NodeConfigDictAdapter
from graphon.enums import BuiltinNodeTypes
from graphon.nodes.human_input.entities import FormInput, HumanInputNodeData, UserAction
from graphon.nodes.human_input.enums import FormInputType
from models.model import App
from models.workflow import Workflow
from services import workflow_service as workflow_service_module
from services.workflow_service import WorkflowService
class TestWorkflowService:
@pytest.fixture
def workflow_service(self):
mock_session_maker = MagicMock()
return WorkflowService(mock_session_maker)
@pytest.fixture
def mock_app(self):
app = MagicMock(spec=App)
app.id = "app-id-1"
app.workflow_id = "workflow-id-1"
app.tenant_id = "tenant-id-1"
return app
@pytest.fixture
def mock_workflows(self):
workflows = []
for i in range(5):
workflow = MagicMock(spec=Workflow)
workflow.id = f"workflow-id-{i}"
workflow.app_id = "app-id-1"
workflow.created_at = f"2023-01-0{5 - i}" # Descending date order
workflow.created_by = "user-id-1" if i % 2 == 0 else "user-id-2"
workflow.marked_name = f"Workflow {i}" if i % 2 == 0 else ""
workflows.append(workflow)
return workflows
@pytest.fixture
def dummy_session_cls(self):
class DummySession:
def __init__(self, *args, **kwargs):
self.commit = MagicMock()
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def begin(self):
return nullcontext()
return DummySession
def test_get_all_published_workflow_no_workflow_id(self, workflow_service, mock_app):
mock_app.workflow_id = None
mock_session = MagicMock()
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=10, user_id=None
)
assert workflows == []
assert has_more is False
mock_session.scalars.assert_not_called()
def test_get_all_published_workflow_basic(self, workflow_service, mock_app, mock_workflows):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
mock_scalar_result.all.return_value = mock_workflows[:3]
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=3, user_id=None
)
assert workflows == mock_workflows[:3]
assert has_more is False
mock_session.scalars.assert_called_once()
def test_get_all_published_workflow_pagination(self, workflow_service, mock_app, mock_workflows):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
# Return 4 items when limit is 3, which should indicate has_more=True
mock_scalar_result.all.return_value = mock_workflows[:4]
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=3, user_id=None
)
# Should return only the first 3 items
assert len(workflows) == 3
assert workflows == mock_workflows[:3]
assert has_more is True
# Test page 2
mock_scalar_result.all.return_value = mock_workflows[3:]
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=2, limit=3, user_id=None
)
assert len(workflows) == 2
assert has_more is False
def test_get_all_published_workflow_user_filter(self, workflow_service, mock_app, mock_workflows):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
# Filter workflows for user-id-1
filtered_workflows = [w for w in mock_workflows if w.created_by == "user-id-1"]
mock_scalar_result.all.return_value = filtered_workflows
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=10, user_id="user-id-1"
)
assert workflows == filtered_workflows
assert has_more is False
mock_session.scalars.assert_called_once()
# Verify that the select contains a user filter clause
args = mock_session.scalars.call_args[0][0]
assert "created_by" in str(args)
def test_get_all_published_workflow_named_only(self, workflow_service, mock_app, mock_workflows):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
# Filter workflows that have a marked_name
named_workflows = [w for w in mock_workflows if w.marked_name]
mock_scalar_result.all.return_value = named_workflows
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=10, user_id=None, named_only=True
)
assert workflows == named_workflows
assert has_more is False
mock_session.scalars.assert_called_once()
# Verify that the select contains a named_only filter clause
args = mock_session.scalars.call_args[0][0]
assert "marked_name !=" in str(args)
def test_get_all_published_workflow_combined_filters(self, workflow_service, mock_app, mock_workflows):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
# Combined filter: user-id-1 and has marked_name
filtered_workflows = [w for w in mock_workflows if w.created_by == "user-id-1" and w.marked_name]
mock_scalar_result.all.return_value = filtered_workflows
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=10, user_id="user-id-1", named_only=True
)
assert workflows == filtered_workflows
assert has_more is False
mock_session.scalars.assert_called_once()
# Verify that both filters are applied
args = mock_session.scalars.call_args[0][0]
assert "created_by" in str(args)
assert "marked_name !=" in str(args)
def test_get_all_published_workflow_empty_result(self, workflow_service, mock_app):
mock_session = MagicMock()
mock_scalar_result = MagicMock()
mock_scalar_result.all.return_value = []
mock_session.scalars.return_value = mock_scalar_result
workflows, has_more = workflow_service.get_all_published_workflow(
session=mock_session, app_model=mock_app, page=1, limit=10, user_id=None
)
assert workflows == []
assert has_more is False
mock_session.scalars.assert_called_once()
def test_submit_human_input_form_preview_uses_rendered_content(
self,
workflow_service: WorkflowService,
monkeypatch: pytest.MonkeyPatch,
dummy_session_cls,
) -> None:
service = workflow_service
node_data = HumanInputNodeData(
title="Human Input",
form_content="<p>{{#$output.name#}}</p>",
inputs=[FormInput(type=FormInputType.TEXT_INPUT, output_variable_name="name")],
user_actions=[UserAction(id="approve", title="Approve")],
)
node = MagicMock()
node.node_data = node_data
node.render_form_content_before_submission.return_value = "<p>preview</p>"
node.render_form_content_with_outputs.return_value = "<p>rendered</p>"
service._build_human_input_variable_pool = MagicMock(return_value=MagicMock()) # type: ignore[method-assign]
service._build_human_input_node = MagicMock(return_value=node) # type: ignore[method-assign]
workflow = MagicMock()
node_config = NodeConfigDictAdapter.validate_python(
{"id": "node-1", "data": {"type": BuiltinNodeTypes.HUMAN_INPUT}}
)
workflow.get_node_config_by_id.return_value = node_config
workflow.get_enclosing_node_type_and_id.return_value = None
service.get_draft_workflow = MagicMock(return_value=workflow) # type: ignore[method-assign]
saved_outputs: dict[str, object] = {}
class DummySaver:
def __init__(self, *args, **kwargs):
pass
def save(self, outputs, process_data):
saved_outputs.update(outputs)
monkeypatch.setattr(workflow_service_module, "Session", dummy_session_cls)
monkeypatch.setattr(workflow_service_module, "DraftVariableSaver", DummySaver)
monkeypatch.setattr(workflow_service_module, "db", SimpleNamespace(engine=MagicMock()))
app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1")
account = SimpleNamespace(id="account-1")
result = service.submit_human_input_form_preview(
app_model=app_model,
account=account,
node_id="node-1",
form_inputs={"name": "Ada", "extra": "ignored"},
inputs={"#node-0.result#": "LLM output"},
action="approve",
)
service._build_human_input_variable_pool.assert_called_once_with(
app_model=app_model,
workflow=workflow,
node_config=node_config,
manual_inputs={"#node-0.result#": "LLM output"},
user_id="account-1",
)
node.render_form_content_with_outputs.assert_called_once()
called_args = node.render_form_content_with_outputs.call_args.args
assert called_args[0] == "<p>preview</p>"
assert called_args[2] == node_data.outputs_field_names()
rendered_outputs = called_args[1]
assert rendered_outputs["name"] == "Ada"
assert rendered_outputs["extra"] == "ignored"
assert "extra" in saved_outputs
assert "extra" in result
assert saved_outputs["name"] == "Ada"
assert result["name"] == "Ada"
assert result["__action_id"] == "approve"
assert "__rendered_content" in result
def test_submit_human_input_form_preview_missing_inputs_message(self, workflow_service: WorkflowService) -> None:
service = workflow_service
node_data = HumanInputNodeData(
title="Human Input",
form_content="<p>{{#$output.name#}}</p>",
inputs=[FormInput(type=FormInputType.TEXT_INPUT, output_variable_name="name")],
user_actions=[UserAction(id="approve", title="Approve")],
)
node = MagicMock()
node.node_data = node_data
node.render_form_content_before_submission.return_value = "<p>preview</p>"
node.render_form_content_with_outputs.return_value = "<p>rendered</p>"
service._build_human_input_variable_pool = MagicMock(return_value=MagicMock()) # type: ignore[method-assign]
service._build_human_input_node = MagicMock(return_value=node) # type: ignore[method-assign]
workflow = MagicMock()
workflow.get_node_config_by_id.return_value = NodeConfigDictAdapter.validate_python(
{"id": "node-1", "data": {"type": BuiltinNodeTypes.HUMAN_INPUT}}
)
service.get_draft_workflow = MagicMock(return_value=workflow) # type: ignore[method-assign]
app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1")
account = SimpleNamespace(id="account-1")
with pytest.raises(ValueError) as exc_info:
service.submit_human_input_form_preview(
app_model=app_model,
account=account,
node_id="node-1",
form_inputs={},
inputs={},
action="approve",
)
assert "Missing required inputs" in str(exc_info.value)
def test_run_draft_workflow_node_successful_behavior(
self, workflow_service, mock_app, monkeypatch, dummy_session_cls
):
"""Behavior: When a basic workflow node runs, it correctly sets up context,
executes the node, and saves outputs."""
service = workflow_service
account = SimpleNamespace(id="account-1")
mock_workflow = MagicMock()
mock_workflow.id = "wf-1"
mock_workflow.tenant_id = "tenant-1"
mock_workflow.environment_variables = []
mock_workflow.conversation_variables = []
# Mock node config
mock_workflow.get_node_config_by_id.return_value = NodeConfigDictAdapter.validate_python(
{"id": "node-1", "data": {"type": BuiltinNodeTypes.LLM}}
)
mock_workflow.get_enclosing_node_type_and_id.return_value = None
# Mock class methods
monkeypatch.setattr(workflow_service_module, "WorkflowDraftVariableService", MagicMock())
monkeypatch.setattr(workflow_service_module, "DraftVarLoader", MagicMock())
# Mock workflow entry execution
mock_node_exec = MagicMock()
mock_node_exec.id = "exec-1"
mock_node_exec.process_data = {}
mock_run = MagicMock()
monkeypatch.setattr(workflow_service_module.WorkflowEntry, "single_step_run", mock_run)
# Mock execution handling
service._handle_single_step_result = MagicMock(return_value=mock_node_exec)
# Mock repository
mock_repo = MagicMock()
mock_repo.get_execution_by_id.return_value = mock_node_exec
mock_repo_factory = MagicMock(return_value=mock_repo)
monkeypatch.setattr(
workflow_service_module.DifyCoreRepositoryFactory,
"create_workflow_node_execution_repository",
mock_repo_factory,
)
service._node_execution_service_repo = mock_repo
# Set up node execution service repo mock to return our exec node
mock_node_exec.load_full_outputs.return_value = {"output_var": "result_value"}
mock_node_exec.node_id = "node-1"
mock_node_exec.node_type = "llm"
# Mock draft variable saver
mock_saver = MagicMock()
monkeypatch.setattr(workflow_service_module, "DraftVariableSaver", MagicMock(return_value=mock_saver))
# Mock DB
monkeypatch.setattr(workflow_service_module, "db", SimpleNamespace(engine=MagicMock()))
monkeypatch.setattr(workflow_service_module, "Session", dummy_session_cls)
# Act
result = service.run_draft_workflow_node(
app_model=mock_app,
draft_workflow=mock_workflow,
node_id="node-1",
user_inputs={"input_val": "test"},
account=account,
)
# Assert
assert result == mock_node_exec
service._handle_single_step_result.assert_called_once()
mock_repo.save.assert_called_once_with(mock_node_exec)
mock_saver.save.assert_called_once_with(process_data={}, outputs={"output_var": "result_value"})
def test_run_draft_workflow_node_failure_behavior(self, workflow_service, mock_app, monkeypatch, dummy_session_cls):
"""Behavior: If retrieving the saved execution fails, an appropriate error bubble matches expectations."""
service = workflow_service
account = SimpleNamespace(id="account-1")
mock_workflow = MagicMock()
mock_workflow.tenant_id = "tenant-1"
mock_workflow.environment_variables = []
mock_workflow.conversation_variables = []
mock_workflow.get_node_config_by_id.return_value = NodeConfigDictAdapter.validate_python(
{"id": "node-1", "data": {"type": BuiltinNodeTypes.LLM}}
)
mock_workflow.get_enclosing_node_type_and_id.return_value = None
monkeypatch.setattr(workflow_service_module, "WorkflowDraftVariableService", MagicMock())
monkeypatch.setattr(workflow_service_module, "DraftVarLoader", MagicMock())
monkeypatch.setattr(workflow_service_module.WorkflowEntry, "single_step_run", MagicMock())
mock_node_exec = MagicMock()
mock_node_exec.id = "exec-invalid"
service._handle_single_step_result = MagicMock(return_value=mock_node_exec)
mock_repo = MagicMock()
mock_repo_factory = MagicMock(return_value=mock_repo)
monkeypatch.setattr(
workflow_service_module.DifyCoreRepositoryFactory,
"create_workflow_node_execution_repository",
mock_repo_factory,
)
service._node_execution_service_repo = mock_repo
# Simulate failure to retrieve the saved execution
mock_repo.get_execution_by_id.return_value = None
monkeypatch.setattr(workflow_service_module, "db", SimpleNamespace(engine=MagicMock()))
monkeypatch.setattr(workflow_service_module, "Session", dummy_session_cls)
# Act & Assert
with pytest.raises(ValueError, match="WorkflowNodeExecution with id exec-invalid not found after saving"):
service.run_draft_workflow_node(
app_model=mock_app, draft_workflow=mock_workflow, node_id="node-1", user_inputs={}, account=account
)
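
The pagination tests earlier in this file (`test_get_all_published_workflow_pagination`) rely on the common fetch-one-extra idiom: query `limit + 1` rows, derive `has_more`, and trim. A minimal sketch of that idiom, assuming the service builds a SQLAlchemy select:

```python
def paginate(session, stmt, page: int, limit: int):
    # Fetch one extra row so has_more can be derived without a COUNT query.
    rows = list(session.scalars(stmt.limit(limit + 1).offset((page - 1) * limit)))
    return rows[:limit], len(rows) > limit
```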


@@ -13,4 +13,4 @@ PYTEST_XDIST_ARGS="${PYTEST_XDIST_ARGS:--n auto}"
pytest --timeout "${PYTEST_TIMEOUT}" ${PYTEST_XDIST_ARGS} api/tests/unit_tests --ignore=api/tests/unit_tests/controllers
# Run controller tests sequentially to avoid import race conditions
pytest --timeout "${PYTEST_TIMEOUT}" api/tests/unit_tests/controllers
pytest --timeout "${PYTEST_TIMEOUT}" --cov-append api/tests/unit_tests/controllers
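
Here `--cov-append` tells pytest-cov to add the controller run's results to the existing coverage data file instead of overwriting it, so the two invocations report as one combined coverage result (assuming coverage collection is enabled for both runs via the project configuration).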


@@ -488,7 +488,8 @@ ALIYUN_OSS_REGION=ap-southeast-1
ALIYUN_OSS_AUTH_VERSION=v4
# Don't start with '/'. OSS doesn't support leading slash in object names.
ALIYUN_OSS_PATH=your-path
ALIYUN_CLOUDBOX_ID=your-cloudbox-id
# Optional CloudBox ID for Aliyun OSS, DO NOT enable it if you are not using CloudBox.
#ALIYUN_CLOUDBOX_ID=your-cloudbox-id
# Tencent COS Configuration
#


@@ -275,6 +275,7 @@ services:
# Use the shared environment variables.
<<: *shared-api-worker-env
DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
DB_SSL_MODE: ${DB_SSL_MODE:-disable}
SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002}
SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}


@@ -127,6 +127,8 @@ services:
restart: always
env_file:
- ./middleware.env
extra_hosts:
- "host.docker.internal:host-gateway"
environment:
# Use the shared environment variables.
LOG_OUTPUT_FORMAT: ${LOG_OUTPUT_FORMAT:-text}


@@ -146,7 +146,6 @@ x-shared-env: &shared-api-worker-env
ALIYUN_OSS_REGION: ${ALIYUN_OSS_REGION:-ap-southeast-1}
ALIYUN_OSS_AUTH_VERSION: ${ALIYUN_OSS_AUTH_VERSION:-v4}
ALIYUN_OSS_PATH: ${ALIYUN_OSS_PATH:-your-path}
ALIYUN_CLOUDBOX_ID: ${ALIYUN_CLOUDBOX_ID:-your-cloudbox-id}
TENCENT_COS_BUCKET_NAME: ${TENCENT_COS_BUCKET_NAME:-your-bucket-name}
TENCENT_COS_SECRET_KEY: ${TENCENT_COS_SECRET_KEY:-your-secret-key}
TENCENT_COS_SECRET_ID: ${TENCENT_COS_SECRET_ID:-your-secret-id}
@@ -985,6 +984,7 @@ services:
# Use the shared environment variables.
<<: *shared-api-worker-env
DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
DB_SSL_MODE: ${DB_SSL_MODE:-disable}
SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002}
SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
<div style="text-align: right;">


@@ -57,7 +57,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
ডিফাই একটি ওপেন-সোর্স LLM অ্যাপ ডেভেলপমেন্ট প্ল্যাটফর্ম। এটি ইন্টুইটিভ ইন্টারফেস, এজেন্টিক AI ওয়ার্কফ্লো, RAG পাইপলাইন, এজেন্ট ক্যাপাবিলিটি, মডেল ম্যানেজমেন্ট, মনিটরিং সুবিধা এবং আরও অনেক কিছু একত্রিত করে, যা দ্রুত প্রোটোটাইপ থেকে প্রোডাকশন পর্যন্ত নিয়ে যেতে সহায়তা করে।


@@ -57,7 +57,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify ist eine Open-Source-Plattform zur Entwicklung von LLM-Anwendungen. Ihre intuitive Benutzeroberfläche vereint agentenbasierte KI-Workflows, RAG-Pipelines, Agentenfunktionen, Modellverwaltung, Überwachungsfunktionen und mehr, sodass Sie schnell von einem Prototyp in die Produktion übergehen können.


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
#


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
#


@@ -58,6 +58,8 @@
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>


@@ -58,7 +58,10 @@
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify è una piattaforma open-source per lo sviluppo di applicazioni LLM. La sua interfaccia intuitiva combina flussi di lavoro AI basati su agenti, pipeline RAG, funzionalità di agenti, gestione dei modelli, funzionalità di monitoraggio e altro ancora, permettendovi di passare rapidamente da un prototipo alla produzione.


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
#


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify는 오픈 소스 LLM 앱 개발 플랫폼입니다. 직관적인 인터페이스를 통해 AI 워크플로우, RAG 파이프라인, 에이전트 기능, 모델 관리, 관찰 기능 등을 결합하여 프로토타입에서 프로덕션까지 빠르게 전환할 수 있습니다. 주요 기능 목록은 다음과 같습니다:</br> </br>


@@ -58,7 +58,10 @@
<a href="../vi-VN/README.md"><img alt="README em Vietnamita" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português - BR" src="https://img.shields.io/badge/Portugu%C3%AAs-BR?style=flat&label=BR&color=d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify é uma plataforma de desenvolvimento de aplicativos LLM de código aberto. Sua interface intuitiva combina workflow de IA, pipeline RAG, capacidades de agente, gerenciamento de modelos, recursos de observabilidade e muito mais, permitindo que você vá rapidamente do protótipo à produção. Aqui está uma lista das principais funcionalidades:


@@ -53,9 +53,12 @@
<a href="../ar-SA/README.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify je odprtokodna platforma za razvoj aplikacij LLM. Njegov intuitivni vmesnik združuje agentski potek dela z umetno inteligenco, cevovod RAG, zmogljivosti agentov, upravljanje modelov, funkcije opazovanja in več, kar vam omogoča hiter prehod od prototipa do proizvodnje.


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
#


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify, açık kaynaklı bir LLM uygulama geliştirme platformudur. Sezgisel arayüzü, AI iş akışı, RAG pipeline'ı, ajan yetenekleri, model yönetimi, gözlemlenebilirlik özellikleri ve daha fazlasını birleştirerek, prototipten üretime hızlıca geçmenizi sağlar. İşte temel özelliklerin bir listesi:


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify là một nền tảng phát triển ứng dụng LLM mã nguồn mở. Giao diện trực quan kết hợp quy trình làm việc AI, mô hình RAG, khả năng tác nhân, quản lý mô hình, tính năng quan sát và hơn thế nữa, cho phép bạn nhanh chóng chuyển từ nguyên mẫu sang sản phẩm. Đây là danh sách các tính năng cốt lõi:


@@ -53,7 +53,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</div>
#


@@ -57,6 +57,11 @@
<a href="../tr-TR/README.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="../vi-VN/README.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="../de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="../it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="../pt-BR/README.md"><img alt="README em Português do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="../sl-SI/README.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="../bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="../hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
</p>
Dify 是一個開源的 LLM 應用程式開發平台。其直觀的界面結合了智能代理工作流程、RAG 管道、代理功能、模型管理、可觀察性功能等,讓您能夠快速從原型進展到生產環境。

e2e/.gitignore vendored Normal file

@@ -0,0 +1,6 @@
node_modules/
.auth/
playwright-report/
test-results/
cucumber-report/
.logs/

e2e/AGENTS.md Normal file

@@ -0,0 +1,164 @@
# E2E
This package contains the repository-level end-to-end tests for Dify.
This file is the canonical package guide for `e2e/`. Keep detailed workflow, architecture, debugging, and reporting documentation here. Keep `README.md` as a minimal pointer to this file so the two documents do not drift.
The suite uses Cucumber for scenario definitions and Playwright as the browser execution layer.
It tests:
- backend API started from source
- frontend served from the production artifact
- middleware services started from Docker
## Prerequisites
- Node.js `^22.22.1`
- `pnpm`
- `uv`
- Docker
Install Playwright browsers once:
```bash
cd e2e
pnpm install
pnpm e2e:install
pnpm check
```
Use `pnpm check` as the default local verification step after editing E2E TypeScript, Cucumber support code, or feature glue. It runs formatting, linting, and type checks for this package.
Common commands:
```bash
# authenticated-only regression (default excludes @fresh)
# expects backend API, frontend artifact, and middleware stack to already be running
pnpm e2e
# full reset + fresh install + authenticated scenarios
# starts required middleware/dependencies for you
pnpm e2e:full
# run a tagged subset
pnpm e2e -- --tags @smoke
# headed browser
pnpm e2e:headed -- --tags @smoke
# slow down browser actions for local debugging
E2E_SLOW_MO=500 pnpm e2e:headed -- --tags @smoke
```
Frontend artifact behavior:
- if `web/.next/BUILD_ID` exists, E2E reuses the existing build by default
- if you set `E2E_FORCE_WEB_BUILD=1`, E2E rebuilds the frontend before starting it (see the example below)
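
For example, to force a rebuild before a full run (the variable should apply equally to any entrypoint that starts the frontend):

```bash
# rebuild web/.next even when a previous BUILD_ID exists
E2E_FORCE_WEB_BUILD=1 pnpm e2e:full
```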
## Lifecycle
```mermaid
flowchart TD
A["Start E2E run"] --> B["run-cucumber.ts orchestrates setup/API/frontend"]
B --> C["support/web-server.ts starts or reuses frontend directly"]
C --> D["Cucumber loads config, steps, and support modules"]
D --> E["BeforeAll bootstraps shared auth state via /install"]
E --> F{"Which command is running?"}
F -->|`pnpm e2e`| G["Run config default tags: not @fresh and not @skip"]
F -->|`pnpm e2e:full*`| H["Override tags to not @skip"]
G --> I["Per-scenario BrowserContext from shared browser"]
H --> I
I --> J["Failure artifacts written to cucumber-report/artifacts"]
```
Ownership is split like this:
- `scripts/setup.ts` is the single environment entrypoint for reset, middleware, backend, and frontend startup
- `run-cucumber.ts` orchestrates the E2E run and Cucumber invocation
- `support/web-server.ts` manages frontend reuse, startup, readiness, and shutdown
- `features/support/hooks.ts` manages auth bootstrap, scenario lifecycle, and diagnostics
- `features/support/world.ts` owns per-scenario typed context
- `features/step-definitions/` holds domain-oriented glue so the official VS Code Cucumber plugin works with default conventions when `e2e/` is opened as the workspace root
Package layout:
- `features/`: Gherkin scenarios grouped by capability
- `features/step-definitions/`: domain-oriented step definitions
- `features/support/hooks.ts`: suite lifecycle, auth-state bootstrap, diagnostics
- `features/support/world.ts`: shared scenario context
- `support/web-server.ts`: typed frontend startup/reuse logic
- `scripts/setup.ts`: reset and service lifecycle commands
- `scripts/run-cucumber.ts`: Cucumber orchestration entrypoint
Auth bootstrap behavior depends on instance state:
- uninitialized instance: completes install and stores authenticated state
- initialized instance: signs in and reuses authenticated state
Because of that, the `@fresh` install scenario only runs in the `pnpm e2e:full*` flows. The default `pnpm e2e*` flows exclude `@fresh` via Cucumber config tags so they can be re-run against an already initialized instance.
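Because `cucumber.config.ts` only falls back to `not @fresh and not @skip` when `E2E_CUCUMBER_TAGS` is unset, the default flow can also be narrowed through the environment, for example:
```bash
# Equivalent to passing --tags, but without touching the Cucumber CLI invocation.
E2E_CUCUMBER_TAGS='@smoke and not @skip' pnpm e2e
```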
Reset all persisted E2E state:
```bash
pnpm e2e:reset
```
This removes:
- `docker/volumes/db/data`
- `docker/volumes/redis/data`
- `docker/volumes/weaviate`
- `docker/volumes/plugin_daemon`
- `e2e/.auth`
- `e2e/.logs`
- `e2e/cucumber-report`
Start the full middleware stack:
```bash
pnpm e2e:middleware:up
```
Stop the full middleware stack:
```bash
pnpm e2e:middleware:down
```
The middleware stack includes:
- PostgreSQL
- Redis
- Weaviate
- Sandbox
- SSRF proxy
- Plugin daemon
Fresh install verification:
```bash
pnpm e2e:full
```
Run the Cucumber suite against an already running middleware stack:
```bash
pnpm e2e:middleware:up
pnpm e2e
pnpm e2e:middleware:down
```
Artifacts and diagnostics:
- `cucumber-report/report.html`: HTML report
- `cucumber-report/report.json`: JSON report
- `cucumber-report/artifacts/`: failure screenshots and HTML captures
- `.logs/cucumber-api.log`: backend startup log
- `.logs/cucumber-web.log`: frontend startup log
Open the HTML report locally with:
```bash
open cucumber-report/report.html
```

3 e2e/README.md Normal file

@@ -0,0 +1,3 @@
# E2E
Canonical documentation for this package lives in [AGENTS.md](./AGENTS.md).

19 e2e/cucumber.config.ts Normal file

@@ -0,0 +1,19 @@
import type { IConfiguration } from '@cucumber/cucumber'
const config = {
format: [
'progress-bar',
'summary',
'html:./cucumber-report/report.html',
'json:./cucumber-report/report.json',
],
import: ['features/**/*.ts'],
parallel: 1,
paths: ['features/**/*.feature'],
tags: process.env.E2E_CUCUMBER_TAGS || 'not @fresh and not @skip',
timeout: 60_000,
} satisfies Partial<IConfiguration> & {
timeout: number
}
export default config


@@ -0,0 +1,10 @@
@apps @authenticated
Feature: Create app
Scenario: Create a new blank app and redirect to the editor
Given I am signed in as the default E2E admin
When I open the apps console
And I start creating a blank app
And I enter a unique E2E app name
And I confirm app creation
Then I should land on the app editor
And I should see the "Orchestrate" text


@@ -0,0 +1,8 @@
@smoke @authenticated
Feature: Authenticated app console
Scenario: Open the apps console with the shared authenticated state
Given I am signed in as the default E2E admin
When I open the apps console
Then I should stay on the apps console
And I should see the "Create from Blank" button
And I should not see the "Sign in" button


@@ -0,0 +1,7 @@
@smoke @fresh
Feature: Fresh installation bootstrap
Scenario: Complete the initial installation bootstrap on a fresh instance
Given the last authentication bootstrap came from a fresh install
When I open the apps console
Then I should stay on the apps console
And I should see the "Create from Blank" button


@@ -0,0 +1,29 @@
import { Then, When } from '@cucumber/cucumber'
import { expect } from '@playwright/test'
import type { DifyWorld } from '../../support/world'
When('I start creating a blank app', async function (this: DifyWorld) {
const page = this.getPage()
await expect(page.getByRole('button', { name: 'Create from Blank' })).toBeVisible()
await page.getByRole('button', { name: 'Create from Blank' }).click()
})
When('I enter a unique E2E app name', async function (this: DifyWorld) {
const appName = `E2E App ${Date.now()}`
await this.getPage().getByPlaceholder('Give your app a name').fill(appName)
})
When('I confirm app creation', async function (this: DifyWorld) {
const createButton = this.getPage()
.getByRole('button', { name: /^Create(?:\s|$)/ })
.last()
await expect(createButton).toBeEnabled()
await createButton.click()
})
Then('I should land on the app editor', async function (this: DifyWorld) {
await expect(this.getPage()).toHaveURL(/\/app\/[^/]+\/(workflow|configuration)(?:\?.*)?$/)
})


@@ -0,0 +1,11 @@
import { Given } from '@cucumber/cucumber'
import type { DifyWorld } from '../../support/world'
Given('I am signed in as the default E2E admin', async function (this: DifyWorld) {
const session = await this.getAuthSession()
this.attach(
`Authenticated as ${session.adminEmail} using ${session.mode} flow at ${session.baseURL}.`,
'text/plain',
)
})


@@ -0,0 +1,23 @@
import { Then, When } from '@cucumber/cucumber'
import { expect } from '@playwright/test'
import type { DifyWorld } from '../../support/world'
When('I open the apps console', async function (this: DifyWorld) {
await this.getPage().goto('/apps')
})
Then('I should stay on the apps console', async function (this: DifyWorld) {
await expect(this.getPage()).toHaveURL(/\/apps(?:\?.*)?$/)
})
Then('I should see the {string} button', async function (this: DifyWorld, label: string) {
await expect(this.getPage().getByRole('button', { name: label })).toBeVisible()
})
Then('I should not see the {string} button', async function (this: DifyWorld, label: string) {
await expect(this.getPage().getByRole('button', { name: label })).not.toBeVisible()
})
Then('I should see the {string} text', async function (this: DifyWorld, text: string) {
await expect(this.getPage().getByText(text)).toBeVisible({ timeout: 30_000 })
})


@@ -0,0 +1,12 @@
import { Given } from '@cucumber/cucumber'
import { expect } from '@playwright/test'
import type { DifyWorld } from '../../support/world'
Given(
'the last authentication bootstrap came from a fresh install',
async function (this: DifyWorld) {
const session = await this.getAuthSession()
expect(session.mode).toBe('install')
},
)


@@ -0,0 +1,90 @@
import { After, AfterAll, Before, BeforeAll, Status, setDefaultTimeout } from '@cucumber/cucumber'
import { chromium, type Browser } from '@playwright/test'
import { mkdir, writeFile } from 'node:fs/promises'
import path from 'node:path'
import { fileURLToPath } from 'node:url'
import { ensureAuthenticatedState } from '../../fixtures/auth'
import { baseURL, cucumberHeadless, cucumberSlowMo } from '../../test-env'
import type { DifyWorld } from './world'
const e2eRoot = fileURLToPath(new URL('../..', import.meta.url))
const artifactsDir = path.join(e2eRoot, 'cucumber-report', 'artifacts')
let browser: Browser | undefined
setDefaultTimeout(60_000)
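// Make scenario names filesystem-safe: collapse disallowed characters to '-' and trim edge dashes.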
const sanitizeForPath = (value: string) =>
value.replaceAll(/[^a-zA-Z0-9_-]+/g, '-').replaceAll(/^-+|-+$/g, '')
const writeArtifact = async (
scenarioName: string,
extension: 'html' | 'png',
contents: Buffer | string,
) => {
const artifactPath = path.join(
artifactsDir,
`${Date.now()}-${sanitizeForPath(scenarioName || 'scenario')}.${extension}`,
)
await writeFile(artifactPath, contents)
return artifactPath
}
BeforeAll(async () => {
await mkdir(artifactsDir, { recursive: true })
browser = await chromium.launch({
headless: cucumberHeadless,
slowMo: cucumberSlowMo,
})
console.log(`[e2e] session cache bootstrap against ${baseURL}`)
await ensureAuthenticatedState(browser, baseURL)
})
Before(async function (this: DifyWorld, { pickle }) {
if (!browser) throw new Error('Shared Playwright browser is not available.')
await this.startAuthenticatedSession(browser)
this.scenarioStartedAt = Date.now()
const tags = pickle.tags.map((tag) => tag.name).join(' ')
console.log(`[e2e] start ${pickle.name}${tags ? ` ${tags}` : ''}`)
})
After(async function (this: DifyWorld, { pickle, result }) {
const elapsedMs = this.scenarioStartedAt ? Date.now() - this.scenarioStartedAt : undefined
if (result?.status !== Status.PASSED && this.page) {
const screenshot = await this.page.screenshot({
fullPage: true,
})
const screenshotPath = await writeArtifact(pickle.name, 'png', screenshot)
this.attach(screenshot, 'image/png')
const html = await this.page.content()
const htmlPath = await writeArtifact(pickle.name, 'html', html)
this.attach(html, 'text/html')
if (this.consoleErrors.length > 0)
this.attach(`Console Errors:\n${this.consoleErrors.join('\n')}`, 'text/plain')
if (this.pageErrors.length > 0)
this.attach(`Page Errors:\n${this.pageErrors.join('\n')}`, 'text/plain')
this.attach(`Artifacts:\n${[screenshotPath, htmlPath].join('\n')}`, 'text/plain')
}
const status = result?.status || 'UNKNOWN'
console.log(
`[e2e] end ${pickle.name} status=${status}${elapsedMs ? ` durationMs=${elapsedMs}` : ''}`,
)
await this.closeSession()
})
AfterAll(async () => {
await browser?.close()
browser = undefined
})


@@ -0,0 +1,68 @@
import { type IWorldOptions, World, setWorldConstructor } from '@cucumber/cucumber'
import type { Browser, BrowserContext, ConsoleMessage, Page } from '@playwright/test'
import {
authStatePath,
readAuthSessionMetadata,
type AuthSessionMetadata,
} from '../../fixtures/auth'
import { baseURL, defaultLocale } from '../../test-env'
export class DifyWorld extends World {
context: BrowserContext | undefined
page: Page | undefined
consoleErrors: string[] = []
pageErrors: string[] = []
scenarioStartedAt: number | undefined
session: AuthSessionMetadata | undefined
constructor(options: IWorldOptions) {
super(options)
this.resetScenarioState()
}
resetScenarioState() {
this.consoleErrors = []
this.pageErrors = []
}
async startAuthenticatedSession(browser: Browser) {
this.resetScenarioState()
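// Each scenario gets a fresh, isolated context seeded with the shared authenticated storage state.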
this.context = await browser.newContext({
baseURL,
locale: defaultLocale,
storageState: authStatePath,
})
this.context.setDefaultTimeout(30_000)
this.page = await this.context.newPage()
this.page.setDefaultTimeout(30_000)
this.page.on('console', (message: ConsoleMessage) => {
if (message.type() === 'error') this.consoleErrors.push(message.text())
})
this.page.on('pageerror', (error) => {
this.pageErrors.push(error.message)
})
}
getPage() {
if (!this.page) throw new Error('Playwright page has not been initialized for this scenario.')
return this.page
}
async getAuthSession() {
this.session ??= await readAuthSessionMetadata()
return this.session
}
async closeSession() {
await this.context?.close()
this.context = undefined
this.page = undefined
this.session = undefined
this.scenarioStartedAt = undefined
this.resetScenarioState()
}
}
setWorldConstructor(DifyWorld)

148 e2e/fixtures/auth.ts Normal file

@@ -0,0 +1,148 @@
import type { Browser, Page } from '@playwright/test'
import { expect } from '@playwright/test'
import { mkdir, readFile, writeFile } from 'node:fs/promises'
import path from 'node:path'
import { fileURLToPath } from 'node:url'
import { defaultBaseURL, defaultLocale } from '../test-env'
export type AuthSessionMetadata = {
adminEmail: string
baseURL: string
mode: 'install' | 'login'
usedInitPassword: boolean
}
const WAIT_TIMEOUT_MS = 120_000
const e2eRoot = fileURLToPath(new URL('..', import.meta.url))
export const authDir = path.join(e2eRoot, '.auth')
export const authStatePath = path.join(authDir, 'admin.json')
export const authMetadataPath = path.join(authDir, 'session.json')
export const adminCredentials = {
email: process.env.E2E_ADMIN_EMAIL || 'e2e-admin@example.com',
name: process.env.E2E_ADMIN_NAME || 'E2E Admin',
password: process.env.E2E_ADMIN_PASSWORD || 'E2eAdmin12345',
}
const initPassword = process.env.E2E_INIT_PASSWORD || 'E2eInit12345'
export const resolveBaseURL = (configuredBaseURL?: string) =>
configuredBaseURL || process.env.E2E_BASE_URL || defaultBaseURL
export const readAuthSessionMetadata = async () => {
const content = await readFile(authMetadataPath, 'utf8')
return JSON.parse(content) as AuthSessionMetadata
}
const escapeRegex = (value: string) => value.replaceAll(/[.*+?^${}()|[\]\\]/g, '\\$&')
const appURL = (baseURL: string, pathname: string) => new URL(pathname, baseURL).toString()
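// Poll until the page reveals one of the known auth states: install wizard, sign-in form, or init-password gate.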
const waitForPageState = async (page: Page) => {
const installHeading = page.getByRole('heading', { name: 'Setting up an admin account' })
const signInButton = page.getByRole('button', { name: 'Sign in' })
const initPasswordField = page.getByLabel('Admin initialization password')
const deadline = Date.now() + WAIT_TIMEOUT_MS
while (Date.now() < deadline) {
if (await installHeading.isVisible().catch(() => false)) return 'install' as const
if (await signInButton.isVisible().catch(() => false)) return 'login' as const
if (await initPasswordField.isVisible().catch(() => false)) return 'init' as const
await page.waitForTimeout(1_000)
}
throw new Error(`Unable to determine auth page state for ${page.url()}`)
}
const completeInitPasswordIfNeeded = async (page: Page) => {
const initPasswordField = page.getByLabel('Admin initialization password')
if (!(await initPasswordField.isVisible({ timeout: 3_000 }).catch(() => false))) return false
await initPasswordField.fill(initPassword)
await page.getByRole('button', { name: 'Validate' }).click()
await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({
timeout: WAIT_TIMEOUT_MS,
})
return true
}
const completeInstall = async (page: Page, baseURL: string) => {
await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({
timeout: WAIT_TIMEOUT_MS,
})
await page.getByLabel('Email address').fill(adminCredentials.email)
await page.getByLabel('Username').fill(adminCredentials.name)
await page.getByLabel('Password').fill(adminCredentials.password)
await page.getByRole('button', { name: 'Set up' }).click()
await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), {
timeout: WAIT_TIMEOUT_MS,
})
}
const completeLogin = async (page: Page, baseURL: string) => {
await expect(page.getByRole('button', { name: 'Sign in' })).toBeVisible({
timeout: WAIT_TIMEOUT_MS,
})
await page.getByLabel('Email address').fill(adminCredentials.email)
await page.getByLabel('Password').fill(adminCredentials.password)
await page.getByRole('button', { name: 'Sign in' }).click()
await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), {
timeout: WAIT_TIMEOUT_MS,
})
}
export const ensureAuthenticatedState = async (browser: Browser, configuredBaseURL?: string) => {
const baseURL = resolveBaseURL(configuredBaseURL)
await mkdir(authDir, { recursive: true })
const context = await browser.newContext({
baseURL,
locale: defaultLocale,
})
const page = await context.newPage()
try {
await page.goto(appURL(baseURL, '/install'), { waitUntil: 'networkidle' })
let usedInitPassword = await completeInitPasswordIfNeeded(page)
let pageState = await waitForPageState(page)
while (pageState === 'init') {
const completedInitPassword = await completeInitPasswordIfNeeded(page)
if (!completedInitPassword)
throw new Error(`Unable to validate initialization password for ${page.url()}`)
usedInitPassword = true
pageState = await waitForPageState(page)
}
if (pageState === 'install') await completeInstall(page, baseURL)
else await completeLogin(page, baseURL)
await expect(page.getByRole('button', { name: 'Create from Blank' })).toBeVisible({
timeout: WAIT_TIMEOUT_MS,
})
await context.storageState({ path: authStatePath })
const metadata: AuthSessionMetadata = {
adminEmail: adminCredentials.email,
baseURL,
mode: pageState,
usedInitPassword,
}
await writeFile(authMetadataPath, `${JSON.stringify(metadata, null, 2)}\n`, 'utf8')
} finally {
await context.close()
}
}

34 e2e/package.json Normal file

@@ -0,0 +1,34 @@
{
"name": "dify-e2e",
"private": true,
"type": "module",
"scripts": {
"check": "vp check --fix",
"e2e": "tsx ./scripts/run-cucumber.ts",
"e2e:full": "tsx ./scripts/run-cucumber.ts --full",
"e2e:full:headed": "tsx ./scripts/run-cucumber.ts --full --headed",
"e2e:headed": "tsx ./scripts/run-cucumber.ts --headed",
"e2e:install": "playwright install --with-deps chromium",
"e2e:middleware:down": "tsx ./scripts/setup.ts middleware-down",
"e2e:middleware:up": "tsx ./scripts/setup.ts middleware-up",
"e2e:reset": "tsx ./scripts/setup.ts reset"
},
"devDependencies": {
"@cucumber/cucumber": "12.7.0",
"@playwright/test": "1.51.1",
"@types/node": "25.5.0",
"tsx": "4.21.0",
"typescript": "5.9.3",
"vite-plus": "latest"
},
"engines": {
"node": "^22.22.1"
},
"packageManager": "pnpm@10.32.1",
"pnpm": {
"overrides": {
"vite": "npm:@voidzero-dev/vite-plus-core@latest",
"vitest": "npm:@voidzero-dev/vite-plus-test@latest"
}
}
}

2632 e2e/pnpm-lock.yaml generated Normal file

File diff suppressed because it is too large.

242 e2e/scripts/common.ts Normal file

@@ -0,0 +1,242 @@
import { spawn, type ChildProcess } from 'node:child_process'
import { access, copyFile, readFile, writeFile } from 'node:fs/promises'
import net from 'node:net'
import path from 'node:path'
import { fileURLToPath, pathToFileURL } from 'node:url'
import { sleep } from '../support/process'
type RunCommandOptions = {
command: string
args: string[]
cwd: string
env?: NodeJS.ProcessEnv
stdio?: 'inherit' | 'pipe'
}
type RunCommandResult = {
exitCode: number
stdout: string
stderr: string
}
type ForegroundProcessOptions = {
command: string
args: string[]
cwd: string
env?: NodeJS.ProcessEnv
}
export const rootDir = fileURLToPath(new URL('../..', import.meta.url))
export const e2eDir = path.join(rootDir, 'e2e')
export const apiDir = path.join(rootDir, 'api')
export const dockerDir = path.join(rootDir, 'docker')
export const webDir = path.join(rootDir, 'web')
export const middlewareComposeFile = path.join(dockerDir, 'docker-compose.middleware.yaml')
export const middlewareEnvFile = path.join(dockerDir, 'middleware.env')
export const middlewareEnvExampleFile = path.join(dockerDir, 'middleware.env.example')
export const webEnvLocalFile = path.join(webDir, '.env.local')
export const webEnvExampleFile = path.join(webDir, '.env.example')
export const apiEnvExampleFile = path.join(apiDir, 'tests', 'integration_tests', '.env.example')
const formatCommand = (command: string, args: string[]) => [command, ...args].join(' ')
export const isMainModule = (metaUrl: string) => {
const entrypoint = process.argv[1]
if (!entrypoint) return false
return pathToFileURL(entrypoint).href === metaUrl
}
export const runCommand = async ({
command,
args,
cwd,
env,
stdio = 'inherit',
}: RunCommandOptions): Promise<RunCommandResult> => {
const childProcess = spawn(command, args, {
cwd,
env: {
...process.env,
...env,
},
stdio: stdio === 'inherit' ? 'inherit' : 'pipe',
})
let stdout = ''
let stderr = ''
if (stdio === 'pipe') {
childProcess.stdout?.on('data', (chunk: Buffer | string) => {
stdout += chunk.toString()
})
childProcess.stderr?.on('data', (chunk: Buffer | string) => {
stderr += chunk.toString()
})
}
return await new Promise<RunCommandResult>((resolve, reject) => {
childProcess.once('error', reject)
childProcess.once('exit', (code) => {
resolve({
exitCode: code ?? 1,
stdout,
stderr,
})
})
})
}
export const runCommandOrThrow = async (options: RunCommandOptions) => {
const result = await runCommand(options)
if (result.exitCode !== 0) {
throw new Error(
`Command failed (${result.exitCode}): ${formatCommand(options.command, options.args)}`,
)
}
return result
}
const forwardSignalsToChild = (childProcess: ChildProcess) => {
const handleSignal = (signal: NodeJS.Signals) => {
if (childProcess.exitCode === null) childProcess.kill(signal)
}
const onSigint = () => handleSignal('SIGINT')
const onSigterm = () => handleSignal('SIGTERM')
process.on('SIGINT', onSigint)
process.on('SIGTERM', onSigterm)
return () => {
process.off('SIGINT', onSigint)
process.off('SIGTERM', onSigterm)
}
}
export const runForegroundProcess = async ({
command,
args,
cwd,
env,
}: ForegroundProcessOptions) => {
const childProcess = spawn(command, args, {
cwd,
env: {
...process.env,
...env,
},
stdio: 'inherit',
})
const cleanupSignals = forwardSignalsToChild(childProcess)
const exitCode = await new Promise<number>((resolve, reject) => {
childProcess.once('error', reject)
childProcess.once('exit', (code) => {
resolve(code ?? 1)
})
})
cleanupSignals()
process.exit(exitCode)
}
export const ensureFileExists = async (filePath: string, exampleFilePath: string) => {
try {
await access(filePath)
} catch {
await copyFile(exampleFilePath, filePath)
}
}
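// Append the line once: skip when the exact line exists or when the same KEY= assignment is already present.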
export const ensureLineInFile = async (filePath: string, line: string) => {
const fileContent = await readFile(filePath, 'utf8')
const lines = fileContent.split(/\r?\n/)
const assignmentPrefix = line.includes('=') ? `${line.slice(0, line.indexOf('='))}=` : null
if (lines.includes(line)) return
if (assignmentPrefix && lines.some((existingLine) => existingLine.startsWith(assignmentPrefix)))
return
const normalizedContent = fileContent.endsWith('\n') ? fileContent : `${fileContent}\n`
await writeFile(filePath, `${normalizedContent}${line}\n`, 'utf8')
}
export const ensureWebEnvLocal = async () => {
await ensureFileExists(webEnvLocalFile, webEnvExampleFile)
const fileContent = await readFile(webEnvLocalFile, 'utf8')
const nextContent = fileContent.replaceAll('http://localhost:5001', 'http://127.0.0.1:5001')
if (nextContent !== fileContent) await writeFile(webEnvLocalFile, nextContent, 'utf8')
}
export const readSimpleDotenv = async (filePath: string) => {
const fileContent = await readFile(filePath, 'utf8')
const entries = fileContent
.split(/\r?\n/)
.map((line) => line.trim())
.filter((line) => line && !line.startsWith('#'))
.map<[string, string]>((line) => {
const separatorIndex = line.indexOf('=')
const key = separatorIndex === -1 ? line : line.slice(0, separatorIndex).trim()
const rawValue = separatorIndex === -1 ? '' : line.slice(separatorIndex + 1).trim()
if (
(rawValue.startsWith('"') && rawValue.endsWith('"')) ||
(rawValue.startsWith("'") && rawValue.endsWith("'"))
) {
return [key, rawValue.slice(1, -1)]
}
return [key, rawValue]
})
return Object.fromEntries(entries)
}
export const waitForCondition = async ({
check,
description,
intervalMs,
timeoutMs,
}: {
check: () => Promise<boolean> | boolean
description: string
intervalMs: number
timeoutMs: number
}) => {
const deadline = Date.now() + timeoutMs
while (Date.now() < deadline) {
if (await check()) return
await sleep(intervalMs)
}
throw new Error(`Timed out waiting for ${description} after ${timeoutMs}ms.`)
}
export const isTcpPortReachable = async (host: string, port: number, timeoutMs = 1_000) => {
return await new Promise<boolean>((resolve) => {
const socket = net.createConnection({
host,
port,
})
const finish = (result: boolean) => {
socket.removeAllListeners()
socket.destroy()
resolve(result)
}
socket.setTimeout(timeoutMs)
socket.once('connect', () => finish(true))
socket.once('timeout', () => finish(false))
socket.once('error', () => finish(false))
})
}

154 e2e/scripts/run-cucumber.ts Normal file

@@ -0,0 +1,154 @@
import { mkdir, rm } from 'node:fs/promises'
import path from 'node:path'
import { startWebServer, stopWebServer } from '../support/web-server'
import { waitForUrl, startLoggedProcess, stopManagedProcess } from '../support/process'
import { apiURL, baseURL, reuseExistingWebServer } from '../test-env'
import { e2eDir, isMainModule, runCommand } from './common'
import { resetState, startMiddleware, stopMiddleware } from './setup'
type RunOptions = {
forwardArgs: string[]
full: boolean
headed: boolean
}
const parseArgs = (argv: string[]): RunOptions => {
let full = false
let headed = false
const forwardArgs: string[] = []
for (let index = 0; index < argv.length; index += 1) {
const arg = argv[index]
if (arg === '--') {
forwardArgs.push(...argv.slice(index + 1))
break
}
if (arg === '--full') {
full = true
continue
}
if (arg === '--headed') {
headed = true
continue
}
forwardArgs.push(arg)
}
return {
forwardArgs,
full,
headed,
}
}
const hasCustomTags = (forwardArgs: string[]) =>
forwardArgs.some((arg) => arg === '--tags' || arg.startsWith('--tags='))
const main = async () => {
const { forwardArgs, full, headed } = parseArgs(process.argv.slice(2))
const startMiddlewareForRun = full
const resetStateForRun = full
if (resetStateForRun) await resetState()
if (startMiddlewareForRun) await startMiddleware()
const cucumberReportDir = path.join(e2eDir, 'cucumber-report')
const logDir = path.join(e2eDir, '.logs')
await rm(cucumberReportDir, { force: true, recursive: true })
await mkdir(logDir, { recursive: true })
const apiProcess = await startLoggedProcess({
command: 'npx',
args: ['tsx', './scripts/setup.ts', 'api'],
cwd: e2eDir,
label: 'api server',
logFilePath: path.join(logDir, 'cucumber-api.log'),
})
let cleanupPromise: Promise<void> | undefined
const cleanup = async () => {
if (!cleanupPromise) {
cleanupPromise = (async () => {
await stopWebServer()
await stopManagedProcess(apiProcess)
if (startMiddlewareForRun) {
try {
await stopMiddleware()
} catch {
// Cleanup should continue even if middleware shutdown fails.
}
}
})()
}
await cleanupPromise
}
const onTerminate = () => {
void cleanup().finally(() => {
process.exit(1)
})
}
process.once('SIGINT', onTerminate)
process.once('SIGTERM', onTerminate)
try {
try {
await waitForUrl(`${apiURL}/health`, 180_000, 1_000)
} catch {
throw new Error(`API did not become ready at ${apiURL}/health.`)
}
await startWebServer({
baseURL,
command: 'npx',
args: ['tsx', './scripts/setup.ts', 'web'],
cwd: e2eDir,
logFilePath: path.join(logDir, 'cucumber-web.log'),
reuseExistingServer: reuseExistingWebServer,
timeoutMs: 300_000,
})
const cucumberEnv: NodeJS.ProcessEnv = {
...process.env,
CUCUMBER_HEADLESS: headed ? '0' : '1',
}
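// Full runs widen the tag filter so @fresh scenarios are included, unless the caller passed explicit --tags.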
if (startMiddlewareForRun && !hasCustomTags(forwardArgs))
cucumberEnv.E2E_CUCUMBER_TAGS = 'not @skip'
const result = await runCommand({
command: 'npx',
args: [
'tsx',
'./node_modules/@cucumber/cucumber/bin/cucumber.js',
'--config',
'./cucumber.config.ts',
...forwardArgs,
],
cwd: e2eDir,
env: cucumberEnv,
})
process.exitCode = result.exitCode
} finally {
process.off('SIGINT', onTerminate)
process.off('SIGTERM', onTerminate)
await cleanup()
}
}
if (isMainModule(import.meta.url)) {
void main().catch((error) => {
console.error(error instanceof Error ? error.message : String(error))
process.exit(1)
})
}

306 e2e/scripts/setup.ts Normal file

@@ -0,0 +1,306 @@
import { access, mkdir, rm } from 'node:fs/promises'
import path from 'node:path'
import { waitForUrl } from '../support/process'
import {
apiDir,
apiEnvExampleFile,
dockerDir,
e2eDir,
ensureFileExists,
ensureLineInFile,
ensureWebEnvLocal,
isMainModule,
isTcpPortReachable,
middlewareComposeFile,
middlewareEnvExampleFile,
middlewareEnvFile,
readSimpleDotenv,
runCommand,
runCommandOrThrow,
runForegroundProcess,
waitForCondition,
webDir,
} from './common'
const buildIdPath = path.join(webDir, '.next', 'BUILD_ID')
const middlewareDataPaths = [
path.join(dockerDir, 'volumes', 'db', 'data'),
path.join(dockerDir, 'volumes', 'plugin_daemon'),
path.join(dockerDir, 'volumes', 'redis', 'data'),
path.join(dockerDir, 'volumes', 'weaviate'),
]
const e2eStatePaths = [
path.join(e2eDir, '.auth'),
path.join(e2eDir, 'cucumber-report'),
path.join(e2eDir, '.logs'),
path.join(e2eDir, 'playwright-report'),
path.join(e2eDir, 'test-results'),
]
const composeArgs = [
'compose',
'-f',
middlewareComposeFile,
'--profile',
'postgresql',
'--profile',
'weaviate',
]
const getApiEnvironment = async () => {
const envFromExample = await readSimpleDotenv(apiEnvExampleFile)
return {
...envFromExample,
FLASK_APP: 'app.py',
}
}
const getServiceContainerId = async (service: string) => {
const result = await runCommandOrThrow({
command: 'docker',
args: ['compose', '-f', middlewareComposeFile, 'ps', '-q', service],
cwd: dockerDir,
stdio: 'pipe',
})
return result.stdout.trim()
}
const getContainerHealth = async (containerId: string) => {
const result = await runCommand({
command: 'docker',
args: ['inspect', '-f', '{{.State.Health.Status}}', containerId],
cwd: dockerDir,
stdio: 'pipe',
})
if (result.exitCode !== 0) return ''
return result.stdout.trim()
}
const printComposeLogs = async (services: string[]) => {
await runCommand({
command: 'docker',
args: ['compose', '-f', middlewareComposeFile, 'logs', ...services],
cwd: dockerDir,
})
}
const waitForDependency = async ({
description,
services,
wait,
}: {
description: string
services: string[]
wait: () => Promise<void>
}) => {
console.log(`Waiting for ${description}...`)
try {
await wait()
} catch (error) {
await printComposeLogs(services)
throw error
}
}
export const ensureWebBuild = async () => {
await ensureWebEnvLocal()
if (process.env.E2E_FORCE_WEB_BUILD === '1') {
await runCommandOrThrow({
command: 'pnpm',
args: ['run', 'build'],
cwd: webDir,
})
return
}
try {
await access(buildIdPath)
console.log('Reusing existing web build artifact.')
} catch {
await runCommandOrThrow({
command: 'pnpm',
args: ['run', 'build'],
cwd: webDir,
})
}
}
export const startWeb = async () => {
await ensureWebBuild()
await runForegroundProcess({
command: 'pnpm',
args: ['run', 'start'],
cwd: webDir,
env: {
HOSTNAME: '127.0.0.1',
PORT: '3000',
},
})
}
export const startApi = async () => {
const env = await getApiEnvironment()
await runCommandOrThrow({
command: 'uv',
args: ['run', '--project', '.', 'flask', 'upgrade-db'],
cwd: apiDir,
env,
})
await runForegroundProcess({
command: 'uv',
args: ['run', '--project', '.', 'flask', 'run', '--host', '127.0.0.1', '--port', '5001'],
cwd: apiDir,
env,
})
}
export const stopMiddleware = async () => {
await runCommandOrThrow({
command: 'docker',
args: [...composeArgs, 'down', '--remove-orphans'],
cwd: dockerDir,
})
}
export const resetState = async () => {
console.log('Stopping middleware services...')
try {
await stopMiddleware()
} catch {
// Reset should continue even if middleware is already stopped.
}
console.log('Removing persisted middleware data...')
await Promise.all(
middlewareDataPaths.map(async (targetPath) => {
await rm(targetPath, { force: true, recursive: true })
await mkdir(targetPath, { recursive: true })
}),
)
console.log('Removing E2E local state...')
await Promise.all(
e2eStatePaths.map((targetPath) => rm(targetPath, { force: true, recursive: true })),
)
console.log('E2E state reset complete.')
}
export const startMiddleware = async () => {
await ensureFileExists(middlewareEnvFile, middlewareEnvExampleFile)
await ensureLineInFile(middlewareEnvFile, 'COMPOSE_PROFILES=postgresql,weaviate')
console.log('Starting middleware services...')
await runCommandOrThrow({
command: 'docker',
args: [
...composeArgs,
'up',
'-d',
'db_postgres',
'redis',
'weaviate',
'sandbox',
'ssrf_proxy',
'plugin_daemon',
],
cwd: dockerDir,
})
const [postgresContainerId, redisContainerId] = await Promise.all([
getServiceContainerId('db_postgres'),
getServiceContainerId('redis'),
])
await waitForDependency({
description: 'PostgreSQL and Redis health checks',
services: ['db_postgres', 'redis'],
wait: () =>
waitForCondition({
check: async () => {
const [postgresStatus, redisStatus] = await Promise.all([
getContainerHealth(postgresContainerId),
getContainerHealth(redisContainerId),
])
return postgresStatus === 'healthy' && redisStatus === 'healthy'
},
description: 'PostgreSQL and Redis health checks',
intervalMs: 2_000,
timeoutMs: 240_000,
}),
})
await waitForDependency({
description: 'Weaviate readiness',
services: ['weaviate'],
wait: () => waitForUrl('http://127.0.0.1:8080/v1/.well-known/ready', 120_000, 2_000),
})
await waitForDependency({
description: 'sandbox health',
services: ['sandbox', 'ssrf_proxy'],
wait: () => waitForUrl('http://127.0.0.1:8194/health', 120_000, 2_000),
})
await waitForDependency({
description: 'plugin daemon port',
services: ['plugin_daemon'],
wait: () =>
waitForCondition({
check: async () => isTcpPortReachable('127.0.0.1', 5002),
description: 'plugin daemon port',
intervalMs: 2_000,
timeoutMs: 120_000,
}),
})
console.log('Full middleware stack is ready.')
}
const printUsage = () => {
console.log('Usage: tsx ./scripts/setup.ts <reset|middleware-up|middleware-down|api|web>')
}
const main = async () => {
const command = process.argv[2]
switch (command) {
case 'api':
await startApi()
return
case 'middleware-down':
await stopMiddleware()
return
case 'middleware-up':
await startMiddleware()
return
case 'reset':
await resetState()
return
case 'web':
await startWeb()
return
default:
printUsage()
process.exitCode = 1
}
}
if (isMainModule(import.meta.url)) {
void main().catch((error) => {
console.error(error instanceof Error ? error.message : String(error))
process.exit(1)
})
}

178 e2e/support/process.ts Normal file

@@ -0,0 +1,178 @@
import type { ChildProcess } from 'node:child_process'
import { spawn } from 'node:child_process'
import { createWriteStream, type WriteStream } from 'node:fs'
import { mkdir } from 'node:fs/promises'
import net from 'node:net'
import { dirname } from 'node:path'
type ManagedProcessOptions = {
command: string
args?: string[]
cwd: string
env?: NodeJS.ProcessEnv
label: string
logFilePath: string
}
export type ManagedProcess = {
childProcess: ChildProcess
label: string
logFilePath: string
logStream: WriteStream
}
export const sleep = (ms: number) =>
new Promise<void>((resolve) => {
setTimeout(resolve, ms)
})
export const isPortReachable = async (host: string, port: number, timeoutMs = 1_000) => {
return await new Promise<boolean>((resolve) => {
const socket = net.createConnection({
host,
port,
})
const finish = (result: boolean) => {
socket.removeAllListeners()
socket.destroy()
resolve(result)
}
socket.setTimeout(timeoutMs)
socket.once('connect', () => finish(true))
socket.once('timeout', () => finish(false))
socket.once('error', () => finish(false))
})
}
export const waitForUrl = async (
url: string,
timeoutMs: number,
intervalMs = 1_000,
requestTimeoutMs = Math.max(intervalMs, 1_000),
) => {
const deadline = Date.now() + timeoutMs
while (Date.now() < deadline) {
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), requestTimeoutMs)
try {
const response = await fetch(url, {
signal: controller.signal,
})
if (response.ok) return
} finally {
clearTimeout(timeout)
}
} catch {
// Keep polling until timeout.
}
await sleep(intervalMs)
}
throw new Error(`Timed out waiting for ${url} after ${timeoutMs}ms.`)
}
export const startLoggedProcess = async ({
command,
args = [],
cwd,
env,
label,
logFilePath,
}: ManagedProcessOptions): Promise<ManagedProcess> => {
await mkdir(dirname(logFilePath), { recursive: true })
const logStream = createWriteStream(logFilePath, { flags: 'a' })
const childProcess = spawn(command, args, {
cwd,
env: {
...process.env,
...env,
},
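// Detach on POSIX so the whole process group can be signalled on shutdown (see signalManagedProcess).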
detached: process.platform !== 'win32',
stdio: ['ignore', 'pipe', 'pipe'],
})
const formattedCommand = [command, ...args].join(' ')
logStream.write(`[${new Date().toISOString()}] Starting ${label}: ${formattedCommand}\n`)
childProcess.stdout?.pipe(logStream, { end: false })
childProcess.stderr?.pipe(logStream, { end: false })
return {
childProcess,
label,
logFilePath,
logStream,
}
}
const waitForProcessExit = (childProcess: ChildProcess, timeoutMs: number) =>
new Promise<void>((resolve) => {
if (childProcess.exitCode !== null) {
resolve()
return
}
const timeout = setTimeout(() => {
cleanup()
resolve()
}, timeoutMs)
const onExit = () => {
cleanup()
resolve()
}
const cleanup = () => {
clearTimeout(timeout)
childProcess.off('exit', onExit)
}
childProcess.once('exit', onExit)
})
const signalManagedProcess = (childProcess: ChildProcess, signal: NodeJS.Signals) => {
const { pid } = childProcess
if (!pid) return
try {
if (process.platform !== 'win32') {
process.kill(-pid, signal)
return
}
childProcess.kill(signal)
} catch {
// Best-effort shutdown. Cleanup continues even when the process is already gone.
}
}
export const stopManagedProcess = async (managedProcess?: ManagedProcess) => {
if (!managedProcess) return
const { childProcess, logStream } = managedProcess
if (childProcess.exitCode === null) {
signalManagedProcess(childProcess, 'SIGTERM')
await waitForProcessExit(childProcess, 5_000)
}
if (childProcess.exitCode === null) {
signalManagedProcess(childProcess, 'SIGKILL')
await waitForProcessExit(childProcess, 5_000)
}
childProcess.stdout?.unpipe(logStream)
childProcess.stderr?.unpipe(logStream)
childProcess.stdout?.destroy()
childProcess.stderr?.destroy()
await new Promise<void>((resolve) => {
logStream.end(() => resolve())
})
}

83 e2e/support/web-server.ts Normal file

@@ -0,0 +1,83 @@
import type { ManagedProcess } from './process'
import { isPortReachable, startLoggedProcess, stopManagedProcess, waitForUrl } from './process'
type WebServerStartOptions = {
baseURL: string
command: string
args?: string[]
cwd: string
logFilePath: string
reuseExistingServer: boolean
timeoutMs: number
}
let activeProcess: ManagedProcess | undefined
const getUrlHostAndPort = (url: string) => {
const parsedUrl = new URL(url)
const isHttps = parsedUrl.protocol === 'https:'
return {
host: parsedUrl.hostname,
port: parsedUrl.port ? Number(parsedUrl.port) : isHttps ? 443 : 80,
}
}
export const startWebServer = async ({
baseURL,
command,
args = [],
cwd,
logFilePath,
reuseExistingServer,
timeoutMs,
}: WebServerStartOptions) => {
const { host, port } = getUrlHostAndPort(baseURL)
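// Honor a server that is already listening on the target port when reuse is enabled.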
if (reuseExistingServer && (await isPortReachable(host, port))) return
activeProcess = await startLoggedProcess({
command,
args,
cwd,
label: 'web server',
logFilePath,
})
let startupError: Error | undefined
activeProcess.childProcess.once('error', (error) => {
startupError = error
})
activeProcess.childProcess.once('exit', (code, signal) => {
if (startupError) return
startupError = new Error(
`Web server exited before readiness (code: ${code ?? 'unknown'}, signal: ${signal ?? 'none'}).`,
)
})
const deadline = Date.now() + timeoutMs
while (Date.now() < deadline) {
if (startupError) {
await stopManagedProcess(activeProcess)
activeProcess = undefined
throw startupError
}
try {
await waitForUrl(baseURL, 1_000, 250, 1_000)
return
} catch {
// Continue polling until timeout or child exit.
}
}
await stopManagedProcess(activeProcess)
activeProcess = undefined
throw new Error(`Timed out waiting for web server readiness at ${baseURL} after ${timeoutMs}ms.`)
}
export const stopWebServer = async () => {
await stopManagedProcess(activeProcess)
activeProcess = undefined
}

12 e2e/test-env.ts Normal file

@@ -0,0 +1,12 @@
export const defaultBaseURL = 'http://127.0.0.1:3000'
export const defaultApiURL = 'http://127.0.0.1:5001'
export const defaultLocale = 'en-US'
export const baseURL = process.env.E2E_BASE_URL || defaultBaseURL
export const apiURL = process.env.E2E_API_URL || defaultApiURL
export const cucumberHeadless = process.env.CUCUMBER_HEADLESS !== '0'
export const cucumberSlowMo = Number(process.env.E2E_SLOW_MO || 0)
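// Reuse a running web server when E2E_REUSE_WEB_SERVER is set (non-'0'); otherwise reuse locally but not in CI.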
export const reuseExistingWebServer = process.env.E2E_REUSE_WEB_SERVER
? process.env.E2E_REUSE_WEB_SERVER !== '0'
: !process.env.CI

25 e2e/tsconfig.json Normal file

@@ -0,0 +1,25 @@
{
"compilerOptions": {
"target": "ES2023",
"lib": ["ES2023", "DOM"],
"module": "ESNext",
"moduleResolution": "Bundler",
"allowJs": false,
"resolveJsonModule": true,
"noEmit": true,
"strict": true,
"skipLibCheck": true,
"types": ["node", "@playwright/test", "@cucumber/cucumber"],
"isolatedModules": true,
"verbatimModuleSyntax": true
},
"include": ["./**/*.ts"],
"exclude": [
"./node_modules",
"./playwright-report",
"./test-results",
"./.auth",
"./cucumber-report",
"./.logs"
]
}

15 e2e/vite.config.ts Normal file

@@ -0,0 +1,15 @@
import { defineConfig } from 'vite-plus'
export default defineConfig({
lint: {
options: {
typeAware: true,
typeCheck: true,
denyWarnings: true,
},
},
fmt: {
singleQuote: true,
semi: false,
},
})


@@ -1,860 +0,0 @@
import fs from 'node:fs'
import path from 'node:path'
import vm from 'node:vm'
import { transpile } from 'typescript'
describe('i18n:check script functionality', () => {
const testDir = path.join(__dirname, '../i18n-test')
const testEnDir = path.join(testDir, 'en-US')
const testZhDir = path.join(testDir, 'zh-Hans')
// Helper function that replicates the getKeysFromLanguage logic
async function getKeysFromLanguage(language: string, testPath = testDir): Promise<string[]> {
return new Promise((resolve, reject) => {
const folderPath = path.resolve(testPath, language)
const allKeys: string[] = []
if (!fs.existsSync(folderPath)) {
resolve([])
return
}
fs.readdir(folderPath, (err, files) => {
if (err) {
reject(err)
return
}
const translationFiles = files.filter(file => /\.(ts|js)$/.test(file))
translationFiles.forEach((file) => {
const filePath = path.join(folderPath, file)
const fileName = file.replace(/\.[^/.]+$/, '')
const camelCaseFileName = fileName.replace(/[-_](.)/g, (_, c) =>
c.toUpperCase())
try {
const content = fs.readFileSync(filePath, 'utf8')
const moduleExports = {}
const context = {
exports: moduleExports,
module: { exports: moduleExports },
require,
console,
__filename: filePath,
__dirname: folderPath,
}
vm.runInNewContext(transpile(content), context)
const translationObj = (context.module.exports as any).default || context.module.exports
if (!translationObj || typeof translationObj !== 'object')
throw new Error(`Error parsing file: ${filePath}`)
const nestedKeys: string[] = []
const iterateKeys = (obj: any, prefix = '') => {
for (const key in obj) {
const nestedKey = prefix ? `${prefix}.${key}` : key
if (typeof obj[key] === 'object' && obj[key] !== null && !Array.isArray(obj[key])) {
// This is an object (but not array), recurse into it but don't add it as a key
iterateKeys(obj[key], nestedKey)
}
else {
// This is a leaf node (string, number, boolean, array, etc.), add it as a key
nestedKeys.push(nestedKey)
}
}
}
iterateKeys(translationObj)
const fileKeys = nestedKeys.map(key => `${camelCaseFileName}.${key}`)
allKeys.push(...fileKeys)
}
catch (error) {
reject(error)
}
})
resolve(allKeys)
})
})
}
beforeEach(() => {
// Clean up and create test directories
if (fs.existsSync(testDir))
fs.rmSync(testDir, { recursive: true })
fs.mkdirSync(testDir, { recursive: true })
fs.mkdirSync(testEnDir, { recursive: true })
fs.mkdirSync(testZhDir, { recursive: true })
})
afterEach(() => {
// Clean up test files
if (fs.existsSync(testDir))
fs.rmSync(testDir, { recursive: true })
})
describe('Key extraction logic', () => {
it('should extract only leaf node keys, not intermediate objects', async () => {
const testContent = `const translation = {
simple: 'Simple Value',
nested: {
level1: 'Level 1 Value',
deep: {
level2: 'Level 2 Value'
}
},
array: ['not extracted'],
number: 42,
boolean: true
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'test.ts'), testContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toEqual([
'test.simple',
'test.nested.level1',
'test.nested.deep.level2',
'test.array',
'test.number',
'test.boolean',
])
// Should not include intermediate object keys
expect(keys).not.toContain('test.nested')
expect(keys).not.toContain('test.nested.deep')
})
it('should handle camelCase file name conversion correctly', async () => {
const testContent = `const translation = {
key: 'value'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'app-debug.ts'), testContent)
fs.writeFileSync(path.join(testEnDir, 'user_profile.ts'), testContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('appDebug.key')
expect(keys).toContain('userProfile.key')
})
})
describe('Missing keys detection', () => {
it('should detect missing keys in target language', async () => {
const enContent = `const translation = {
common: {
save: 'Save',
cancel: 'Cancel',
delete: 'Delete'
},
app: {
title: 'My App',
version: '1.0'
}
}
export default translation
`
const zhContent = `const translation = {
common: {
save: '保存',
cancel: '取消'
// missing 'delete'
},
app: {
title: '我的应用'
// missing 'version'
}
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'test.ts'), enContent)
fs.writeFileSync(path.join(testZhDir, 'test.ts'), zhContent)
const enKeys = await getKeysFromLanguage('en-US')
const zhKeys = await getKeysFromLanguage('zh-Hans')
const missingKeys = enKeys.filter(key => !zhKeys.includes(key))
expect(missingKeys).toContain('test.common.delete')
expect(missingKeys).toContain('test.app.version')
expect(missingKeys).toHaveLength(2)
})
})
describe('Extra keys detection', () => {
it('should detect extra keys in target language', async () => {
const enContent = `const translation = {
common: {
save: 'Save',
cancel: 'Cancel'
}
}
export default translation
`
const zhContent = `const translation = {
common: {
save: '保存',
cancel: '取消',
delete: '删除', // extra key
extra: '额外的' // another extra key
},
newSection: {
someKey: '某个值' // extra section
}
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'test.ts'), enContent)
fs.writeFileSync(path.join(testZhDir, 'test.ts'), zhContent)
const enKeys = await getKeysFromLanguage('en-US')
const zhKeys = await getKeysFromLanguage('zh-Hans')
const extraKeys = zhKeys.filter(key => !enKeys.includes(key))
expect(extraKeys).toContain('test.common.delete')
expect(extraKeys).toContain('test.common.extra')
expect(extraKeys).toContain('test.newSection.someKey')
expect(extraKeys).toHaveLength(3)
})
})
describe('File filtering logic', () => {
it('should filter keys by specific file correctly', async () => {
// Create multiple files
const file1Content = `const translation = {
button: 'Button',
text: 'Text'
}
export default translation
`
const file2Content = `const translation = {
title: 'Title',
description: 'Description'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'components.ts'), file1Content)
fs.writeFileSync(path.join(testEnDir, 'pages.ts'), file2Content)
fs.writeFileSync(path.join(testZhDir, 'components.ts'), file1Content)
fs.writeFileSync(path.join(testZhDir, 'pages.ts'), file2Content)
const allEnKeys = await getKeysFromLanguage('en-US')
// Test file filtering logic
const targetFile = 'components'
const filteredEnKeys = allEnKeys.filter(key =>
key.startsWith(targetFile.replace(/[-_](.)/g, (_, c) => c.toUpperCase())),
)
expect(allEnKeys).toHaveLength(4) // 2 keys from each file
expect(filteredEnKeys).toHaveLength(2) // only components keys
expect(filteredEnKeys).toContain('components.button')
expect(filteredEnKeys).toContain('components.text')
expect(filteredEnKeys).not.toContain('pages.title')
expect(filteredEnKeys).not.toContain('pages.description')
})
})
describe('Complex nested structure handling', () => {
it('should handle deeply nested objects correctly', async () => {
const complexContent = `const translation = {
level1: {
level2: {
level3: {
level4: {
deepValue: 'Deep Value'
},
anotherValue: 'Another Value'
},
simpleValue: 'Simple Value'
},
directValue: 'Direct Value'
},
rootValue: 'Root Value'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'complex.ts'), complexContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('complex.level1.level2.level3.level4.deepValue')
expect(keys).toContain('complex.level1.level2.level3.anotherValue')
expect(keys).toContain('complex.level1.level2.simpleValue')
expect(keys).toContain('complex.level1.directValue')
expect(keys).toContain('complex.rootValue')
// Should not include intermediate objects
expect(keys).not.toContain('complex.level1')
expect(keys).not.toContain('complex.level1.level2')
expect(keys).not.toContain('complex.level1.level2.level3')
expect(keys).not.toContain('complex.level1.level2.level3.level4')
})
})
describe('Edge cases', () => {
it('should handle empty objects', async () => {
const emptyContent = `const translation = {
empty: {},
withValue: 'value'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'empty.ts'), emptyContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('empty.withValue')
expect(keys).not.toContain('empty.empty')
})
it('should handle special characters in keys', async () => {
const specialContent = `const translation = {
'key-with-dash': 'value1',
'key_with_underscore': 'value2',
'key.with.dots': 'value3',
normalKey: 'value4'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'special.ts'), specialContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('special.key-with-dash')
expect(keys).toContain('special.key_with_underscore')
expect(keys).toContain('special.key.with.dots')
expect(keys).toContain('special.normalKey')
})
it('should handle different value types', async () => {
const typesContent = `const translation = {
stringValue: 'string',
numberValue: 42,
booleanValue: true,
nullValue: null,
undefinedValue: undefined,
arrayValue: ['array', 'values'],
objectValue: {
nested: 'nested value'
}
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'types.ts'), typesContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('types.stringValue')
expect(keys).toContain('types.numberValue')
expect(keys).toContain('types.booleanValue')
expect(keys).toContain('types.nullValue')
expect(keys).toContain('types.undefinedValue')
expect(keys).toContain('types.arrayValue')
expect(keys).toContain('types.objectValue.nested')
expect(keys).not.toContain('types.objectValue')
})
})
describe('Real-world scenario tests', () => {
it('should handle app-debug structure like real files', async () => {
const appDebugEn = `const translation = {
pageTitle: {
line1: 'Prompt',
line2: 'Engineering'
},
operation: {
applyConfig: 'Publish',
resetConfig: 'Reset',
debugConfig: 'Debug'
},
generate: {
instruction: 'Instructions',
generate: 'Generate',
resTitle: 'Generated Prompt',
noDataLine1: 'Describe your use case on the left,',
noDataLine2: 'the orchestration preview will show here.'
}
}
export default translation
`
const appDebugZh = `const translation = {
pageTitle: {
line1: '提示词',
line2: '编排'
},
operation: {
applyConfig: '发布',
resetConfig: '重置',
debugConfig: '调试'
},
generate: {
instruction: '指令',
generate: '生成',
resTitle: '生成的提示词',
noData: '在左侧描述您的用例,编排预览将在此处显示。' // This is extra
}
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'app-debug.ts'), appDebugEn)
fs.writeFileSync(path.join(testZhDir, 'app-debug.ts'), appDebugZh)
const enKeys = await getKeysFromLanguage('en-US')
const zhKeys = await getKeysFromLanguage('zh-Hans')
const missingKeys = enKeys.filter(key => !zhKeys.includes(key))
const extraKeys = zhKeys.filter(key => !enKeys.includes(key))
expect(missingKeys).toContain('appDebug.generate.noDataLine1')
expect(missingKeys).toContain('appDebug.generate.noDataLine2')
expect(extraKeys).toContain('appDebug.generate.noData')
expect(missingKeys).toHaveLength(2)
expect(extraKeys).toHaveLength(1)
})
it('should handle time structure with operation nested keys', async () => {
const timeEn = `const translation = {
months: {
January: 'January',
February: 'February'
},
operation: {
now: 'Now',
ok: 'OK',
cancel: 'Cancel',
pickDate: 'Pick Date'
},
title: {
pickTime: 'Pick Time'
},
defaultPlaceholder: 'Pick a time...'
}
export default translation
`
const timeZh = `const translation = {
months: {
January: '一月',
February: '二月'
},
operation: {
now: '此刻',
ok: '确定',
cancel: '取消',
pickDate: '选择日期'
},
title: {
pickTime: '选择时间'
},
pickDate: '选择日期', // This is extra - duplicates operation.pickDate
defaultPlaceholder: '请选择时间...'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'time.ts'), timeEn)
fs.writeFileSync(path.join(testZhDir, 'time.ts'), timeZh)
const enKeys = await getKeysFromLanguage('en-US')
const zhKeys = await getKeysFromLanguage('zh-Hans')
const missingKeys = enKeys.filter(key => !zhKeys.includes(key))
const extraKeys = zhKeys.filter(key => !enKeys.includes(key))
expect(missingKeys).toHaveLength(0) // No missing keys
expect(extraKeys).toContain('time.pickDate') // Extra root-level pickDate
expect(extraKeys).toHaveLength(1)
// Should have both keys available
expect(zhKeys).toContain('time.operation.pickDate') // Correct nested key
expect(zhKeys).toContain('time.pickDate') // Extra duplicate key
})
})
describe('Statistics calculation', () => {
it('should calculate correct difference statistics', async () => {
const enContent = `const translation = {
key1: 'value1',
key2: 'value2',
key3: 'value3'
}
export default translation
`
const zhContentMissing = `const translation = {
key1: 'value1',
key2: 'value2'
// missing key3
}
export default translation
`
const zhContentExtra = `const translation = {
key1: 'value1',
key2: 'value2',
key3: 'value3',
key4: 'extra',
key5: 'extra2'
}
export default translation
`
fs.writeFileSync(path.join(testEnDir, 'stats.ts'), enContent)
// Test missing keys scenario
fs.writeFileSync(path.join(testZhDir, 'stats.ts'), zhContentMissing)
const enKeys = await getKeysFromLanguage('en-US')
const zhKeysMissing = await getKeysFromLanguage('zh-Hans')
expect(enKeys.length - zhKeysMissing.length).toBe(1) // +1 means 1 missing key
// Test extra keys scenario
fs.writeFileSync(path.join(testZhDir, 'stats.ts'), zhContentExtra)
const zhKeysExtra = await getKeysFromLanguage('zh-Hans')
expect(enKeys.length - zhKeysExtra.length).toBe(-2) // -2 means 2 extra keys
})
})
describe('Auto-remove multiline key-value pairs', () => {
// Helper function to simulate removeExtraKeysFromFile logic
function removeExtraKeysFromFile(content: string, keysToRemove: string[]): string {
const lines = content.split('\n')
const linesToRemove: number[] = []
for (const keyToRemove of keysToRemove) {
let targetLineIndex = -1
const linesToRemoveForKey: number[] = []
// Find the key line (simplified for single-level keys in test)
for (let i = 0; i < lines.length; i++) {
const line = lines[i]
const keyPattern = new RegExp(`^\\s*${keyToRemove}\\s*:`)
if (keyPattern.test(line)) {
targetLineIndex = i
break
}
}
if (targetLineIndex !== -1) {
linesToRemoveForKey.push(targetLineIndex)
// Check if this is a multiline key-value pair
const keyLine = lines[targetLineIndex]
const trimmedKeyLine = keyLine.trim()
// If key line ends with ":" (not complete value), it's likely multiline
if (trimmedKeyLine.endsWith(':') && !trimmedKeyLine.includes('{') && !/:\s*['"`]/.exec(trimmedKeyLine)) {
// Find the value lines that belong to this key
let currentLine = targetLineIndex + 1
let foundValue = false
while (currentLine < lines.length) {
const line = lines[currentLine]
const trimmed = line.trim()
// Skip empty lines
if (trimmed === '') {
currentLine++
continue
}
// Check if this line starts a new key (indicates end of current value)
if (/^\w+\s*:/.exec(trimmed))
break
// Check if this line is part of the value
if (trimmed.startsWith('\'') || trimmed.startsWith('"') || trimmed.startsWith('`') || foundValue) {
linesToRemoveForKey.push(currentLine)
foundValue = true
// Check if this line ends the value (ends with quote and comma/no comma)
if ((trimmed.endsWith('\',') || trimmed.endsWith('",') || trimmed.endsWith('`,')
|| trimmed.endsWith('\'') || trimmed.endsWith('"') || trimmed.endsWith('`'))
&& !trimmed.startsWith('//')) {
break
}
}
else {
break
}
currentLine++
}
}
linesToRemove.push(...linesToRemoveForKey)
}
}
// Remove duplicates and sort in reverse order
const uniqueLinesToRemove = [...new Set(linesToRemove)].sort((a, b) => b - a)
for (const lineIndex of uniqueLinesToRemove)
lines.splice(lineIndex, 1)
return lines.join('\n')
}
it('should remove single-line key-value pairs correctly', () => {
const content = `const translation = {
keepThis: 'This should stay',
removeThis: 'This should be removed',
alsoKeep: 'This should also stay',
}
export default translation`
const result = removeExtraKeysFromFile(content, ['removeThis'])
expect(result).toContain('keepThis: \'This should stay\'')
expect(result).toContain('alsoKeep: \'This should also stay\'')
expect(result).not.toContain('removeThis: \'This should be removed\'')
})
it('should remove multiline key-value pairs completely', () => {
const content = `const translation = {
keepThis: 'This should stay',
removeMultiline:
'This is a multiline value that should be removed completely',
alsoKeep: 'This should also stay',
}
export default translation`
const result = removeExtraKeysFromFile(content, ['removeMultiline'])
expect(result).toContain('keepThis: \'This should stay\'')
expect(result).toContain('alsoKeep: \'This should also stay\'')
expect(result).not.toContain('removeMultiline:')
expect(result).not.toContain('This is a multiline value that should be removed completely')
})
it('should handle mixed single-line and multiline removals', () => {
const content = `const translation = {
keepThis: 'Keep this',
removeSingle: 'Remove this single line',
removeMultiline:
'Remove this multiline value',
anotherMultiline:
'Another multiline that spans multiple lines',
keepAnother: 'Keep this too',
}
export default translation`
const result = removeExtraKeysFromFile(content, ['removeSingle', 'removeMultiline', 'anotherMultiline'])
expect(result).toContain('keepThis: \'Keep this\'')
expect(result).toContain('keepAnother: \'Keep this too\'')
expect(result).not.toContain('removeSingle:')
expect(result).not.toContain('removeMultiline:')
expect(result).not.toContain('anotherMultiline:')
expect(result).not.toContain('Remove this single line')
expect(result).not.toContain('Remove this multiline value')
expect(result).not.toContain('Another multiline that spans multiple lines')
})
it('should properly detect multiline vs single-line patterns', () => {
const multilineContent = `const translation = {
singleLine: 'This is single line',
multilineKey:
'This is multiline',
keyWithColon: 'Value with: colon inside',
objectKey: {
nested: 'value'
},
}
export default translation`
// Test that single line with colon in value is not treated as multiline
const result1 = removeExtraKeysFromFile(multilineContent, ['keyWithColon'])
expect(result1).not.toContain('keyWithColon:')
expect(result1).not.toContain('Value with: colon inside')
// Test that true multiline is handled correctly
const result2 = removeExtraKeysFromFile(multilineContent, ['multilineKey'])
expect(result2).not.toContain('multilineKey:')
expect(result2).not.toContain('This is multiline')
// Test that object key removal works (note: this is a simplified test)
// In real scenario, object removal would be more complex
const result3 = removeExtraKeysFromFile(multilineContent, ['objectKey'])
expect(result3).not.toContain('objectKey: {')
// Note: this simplified helper doesn't remove nested object bodies completely.
// That's acceptable here, since these tests target multiline string removal.
})
it('should handle real-world Polish translation structure', () => {
const polishContent = `const translation = {
createApp: 'UTWÓRZ APLIKACJĘ',
newApp: {
captionAppType: 'Jaki typ aplikacji chcesz stworzyć?',
chatbotDescription:
'Zbuduj aplikację opartą na czacie. Ta aplikacja używa formatu pytań i odpowiedzi.',
agentDescription:
'Zbuduj inteligentnego agenta, który może autonomicznie wybierać narzędzia.',
basic: 'Podstawowy',
},
}
export default translation`
const result = removeExtraKeysFromFile(polishContent, ['captionAppType', 'chatbotDescription', 'agentDescription'])
expect(result).toContain('createApp: \'UTWÓRZ APLIKACJĘ\'')
expect(result).toContain('basic: \'Podstawowy\'')
expect(result).not.toContain('captionAppType:')
expect(result).not.toContain('chatbotDescription:')
expect(result).not.toContain('agentDescription:')
expect(result).not.toContain('Jaki typ aplikacji')
expect(result).not.toContain('Zbuduj aplikację opartą na czacie')
expect(result).not.toContain('Zbuduj inteligentnego agenta')
})
})
describe('Performance and Scalability', () => {
it('should handle large translation files efficiently', async () => {
// Create a large translation file with 1000 keys
const largeContent = `const translation = {
${Array.from({ length: 1000 }, (_, i) => ` key${i}: 'value${i}',`).join('\n')}
}
export default translation`
fs.writeFileSync(path.join(testEnDir, 'large.ts'), largeContent)
const startTime = Date.now()
const keys = await getKeysFromLanguage('en-US')
const endTime = Date.now()
expect(keys.length).toBe(1000)
expect(endTime - startTime).toBeLessThan(10000)
})
it('should handle multiple translation files concurrently', async () => {
// Create multiple files
for (let i = 0; i < 10; i++) {
const content = `const translation = {
key${i}: 'value${i}',
nested${i}: {
subkey: 'subvalue'
}
}
export default translation`
fs.writeFileSync(path.join(testEnDir, `file${i}.ts`), content)
}
const startTime = Date.now()
const keys = await getKeysFromLanguage('en-US')
const endTime = Date.now()
expect(keys.length).toBe(20) // 10 files * 2 keys each
expect(endTime - startTime).toBeLessThan(10000)
})
})
describe('Unicode and Internationalization', () => {
it('should handle Unicode characters in keys and values', async () => {
const unicodeContent = `const translation = {
'中文键': '中文值',
'العربية': 'قيمة',
'emoji_😀': 'value with emoji 🎉',
'mixed_中文_English': 'mixed value'
}
export default translation`
fs.writeFileSync(path.join(testEnDir, 'unicode.ts'), unicodeContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('unicode.中文键')
expect(keys).toContain('unicode.العربية')
expect(keys).toContain('unicode.emoji_😀')
expect(keys).toContain('unicode.mixed_中文_English')
})
it('should handle RTL language files', async () => {
const rtlContent = `const translation = {
مرحبا: 'Hello',
العالم: 'World',
nested: {
مفتاح: 'key'
}
}
export default translation`
fs.writeFileSync(path.join(testEnDir, 'rtl.ts'), rtlContent)
const keys = await getKeysFromLanguage('en-US')
expect(keys).toContain('rtl.مرحبا')
expect(keys).toContain('rtl.العالم')
expect(keys).toContain('rtl.nested.مفتاح')
})
})
describe('Error Recovery', () => {
it('should handle syntax errors in translation files gracefully', async () => {
const invalidContent = `const translation = {
validKey: 'valid value',
invalidKey: 'missing quote,
anotherKey: 'another value'
}
export default translation`
fs.writeFileSync(path.join(testEnDir, 'invalid.ts'), invalidContent)
await expect(getKeysFromLanguage('en-US')).rejects.toThrow()
})
})
})

@@ -220,7 +220,7 @@
"eslint-plugin-react-refresh": "0.5.2",
"eslint-plugin-sonarjs": "4.0.2",
"eslint-plugin-storybook": "10.3.1",
"happy-dom": "20.8.8",
"happy-dom": "20.8.9",
"hono": "4.12.8",
"husky": "9.1.7",
"iconify-import-svg": "0.1.2",

web/pnpm-lock.yaml (generated, 104 changes)

@@ -371,7 +371,7 @@ importers:
devDependencies:
'@antfu/eslint-config':
specifier: 7.7.3
version: 7.7.3(@eslint-react/eslint-plugin@3.0.0(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@next/eslint-plugin-next@16.2.1)(@typescript-eslint/rule-tester@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@typescript-eslint/typescript-estree@8.57.1(typescript@5.9.3))(@typescript-eslint/utils@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(@vue/compiler-sfc@3.5.30)(eslint-plugin-react-hooks@7.0.1(eslint@10.1.0(jiti@1.21.7)))(eslint-plugin-react-refresh@0.5.2(eslint@10.1.0(jiti@1.21.7)))(eslint@10.1.0(jiti@1.21.7))(oxlint@1.56.0(oxlint-tsgolint@0.17.1))(typescript@5.9.3)
version: 7.7.3(@eslint-react/eslint-plugin@3.0.0(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@next/eslint-plugin-next@16.2.1)(@typescript-eslint/rule-tester@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@typescript-eslint/typescript-estree@8.57.1(typescript@5.9.3))(@typescript-eslint/utils@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(@vue/compiler-sfc@3.5.30)(eslint-plugin-react-hooks@7.0.1(eslint@10.1.0(jiti@1.21.7)))(eslint-plugin-react-refresh@0.5.2(eslint@10.1.0(jiti@1.21.7)))(eslint@10.1.0(jiti@1.21.7))(oxlint@1.56.0(oxlint-tsgolint@0.17.1))(typescript@5.9.3)
'@chromatic-com/storybook':
specifier: 5.0.2
version: 5.0.2(storybook@10.3.1(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))
@@ -506,7 +506,7 @@ importers:
version: 0.5.21(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.4)
'@vitest/coverage-v8':
specifier: 4.1.0
version: 4.1.0(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))
version: 4.1.0(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))
agentation:
specifier: 2.3.3
version: 2.3.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4)
@@ -547,8 +547,8 @@ importers:
specifier: 10.3.1
version: 10.3.1(eslint@10.1.0(jiti@1.21.7))(storybook@10.3.1(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@5.9.3)
happy-dom:
specifier: 20.8.8
version: 20.8.8
specifier: 20.8.9
version: 20.8.9
hono:
specifier: 4.12.8
version: 4.12.8
@@ -605,13 +605,13 @@ importers:
version: 11.3.3(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))
vite-plus:
specifier: 0.1.13
version: 0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)
version: 0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)
vitest:
specifier: npm:@voidzero-dev/vite-plus-test@0.1.13
version: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
version: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
vitest-canvas-mock:
specifier: 1.1.3
version: 1.1.3(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))
version: 1.1.3(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))
packages:
@@ -768,12 +768,12 @@ packages:
'@antfu/utils@8.1.1':
resolution: {integrity: sha512-Mex9nXf9vR6AhcXmMrlz/HVgYYZpVGJ6YlPgwl7UnaFpnshXs6EK/oa5Gpf3CzENMjkvEx2tQtntGnb7UtSTOQ==}
'@asamuzakjp/css-color@5.0.1':
resolution: {integrity: sha512-2SZFvqMyvboVV1d15lMf7XiI3m7SDqXUuKaTymJYLN6dSGadqp+fVojqJlVoMlbZnlTmu3S0TLwLTJpvBMO1Aw==}
'@asamuzakjp/css-color@5.1.1':
resolution: {integrity: sha512-iGWN8E45Ws0XWx3D44Q1t6vX2LqhCKcwfmwBYCDsFrYFS6m4q/Ks61L2veETaLv+ckDC6+dTETJoaAAb7VjLiw==}
engines: {node: ^20.19.0 || ^22.12.0 || >=24.0.0}
'@asamuzakjp/dom-selector@7.0.3':
resolution: {integrity: sha512-Q6mU0Z6bfj6YvnX2k9n0JxiIwrCFN59x/nWmYQnAqP000ruX/yV+5bp/GRcF5T8ncvfwJQ7fgfP74DlpKExILA==}
'@asamuzakjp/dom-selector@7.0.4':
resolution: {integrity: sha512-jXR6x4AcT3eIrS2fSNAwJpwirOkGcd+E7F7CP3zjdTqz9B/2huHOL8YJZBgekKwLML+u7qB/6P1LXQuMScsx0w==}
engines: {node: ^20.19.0 || ^22.12.0 || >=24.0.0}
'@asamuzakjp/nwsapi@2.3.9':
@@ -957,8 +957,8 @@ packages:
peerDependencies:
'@csstools/css-tokenizer': ^4.0.0
'@csstools/css-syntax-patches-for-csstree@1.1.1':
resolution: {integrity: sha512-BvqN0AMWNAnLk9G8jnUT77D+mUbY/H2b3uDTvg2isJkHaOufUE2R3AOwxWo7VBQKT1lOdwdvorddo2B/lk64+w==}
'@csstools/css-syntax-patches-for-csstree@1.1.2':
resolution: {integrity: sha512-5GkLzz4prTIpoyeUiIu3iV6CSG3Plo7xRVOFPKI7FVEJ3mZ0A8SwK0XU3Gl7xAkiQ+mDyam+NNp875/C5y+jSA==}
peerDependencies:
css-tree: ^3.2.1
peerDependenciesMeta:
@@ -5356,8 +5356,8 @@ packages:
hachure-fill@0.5.2:
resolution: {integrity: sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==}
happy-dom@20.8.8:
resolution: {integrity: sha512-5/F8wxkNxYtsN0bXfMwIyNLZ9WYsoOYPbmoluqVJqv8KBUbcyKZawJ7uYK4WTX8IHBLYv+VXIwfeNDPy1oKMwQ==}
happy-dom@20.8.9:
resolution: {integrity: sha512-Tz23LR9T9jOGVZm2x1EPdXqwA37G/owYMxRwU0E4miurAtFsPMQ1d2Jc2okUaSjZqAFz2oEn3FLXC5a0a+siyA==}
engines: {node: '>=20.0.0'}
has-flag@4.0.0:
@@ -7313,6 +7313,10 @@ packages:
resolution: {integrity: sha512-g9ljZiwki/LfxmQADO3dEY1CbpmXT5Hm2fJ+QaGKwSXUylMybePR7/67YW7jOrrvjEgL1Fmz5kzyAjWVWLlucg==}
engines: {node: '>=6'}
tapable@2.3.2:
resolution: {integrity: sha512-1MOpMXuhGzGL5TTCZFItxCc0AARf1EZFQkGqMm7ERKj8+Hgr5oLvJOVFcC+lRmR8hCe2S3jC4T5D7Vg/d7/fhA==}
engines: {node: '>=6'}
tar-fs@2.1.4:
resolution: {integrity: sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==}
@@ -7532,8 +7536,8 @@ packages:
resolution: {integrity: sha512-jxytwMHhsbdpBXxLAcuu0fzlQeXCNnWdDyRHpvWsUl8vd98UwYdl9YTyn8/HcpcJPC3pwUveefsa3zTxyD/ERg==}
engines: {node: '>=20.18.1'}
undici@7.24.5:
resolution: {integrity: sha512-3IWdCpjgxp15CbJnsi/Y9TCDE7HWVN19j1hmzVhoAkY/+CJx449tVxT5wZc1Gwg8J+P0LWvzlBzxYRnHJ+1i7Q==}
undici@7.24.6:
resolution: {integrity: sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA==}
engines: {node: '>=20.18.1'}
unicode-trie@2.0.0:
@@ -7680,7 +7684,7 @@ packages:
resolution: {integrity: sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==}
vinext@https://pkg.pr.new/vinext@b6a2cac:
resolution: {integrity: sha512-/Jm507qqC1dCOhCaorb9H8/I5JEqkcsiUJw0Wgprg7Znym4eyLUvcWcRLVyM9z22Tm0+O1PugcSDA8oNvbqPuQ==, tarball: https://pkg.pr.new/vinext@b6a2cac}
resolution: {tarball: https://pkg.pr.new/vinext@b6a2cac}
version: 0.0.5
engines: {node: '>=22'}
hasBin: true
@@ -7882,6 +7886,18 @@ packages:
utf-8-validate:
optional: true
ws@8.20.0:
resolution: {integrity: sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==}
engines: {node: '>=10.0.0'}
peerDependencies:
bufferutil: ^4.0.1
utf-8-validate: '>=5.0.2'
peerDependenciesMeta:
bufferutil:
optional: true
utf-8-validate:
optional: true
wsl-utils@0.1.0:
resolution: {integrity: sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw==}
engines: {node: '>=18'}
@@ -8141,7 +8157,7 @@ snapshots:
idb: 8.0.0
tslib: 2.8.1
'@antfu/eslint-config@7.7.3(@eslint-react/eslint-plugin@3.0.0(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@next/eslint-plugin-next@16.2.1)(@typescript-eslint/rule-tester@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@typescript-eslint/typescript-estree@8.57.1(typescript@5.9.3))(@typescript-eslint/utils@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(@vue/compiler-sfc@3.5.30)(eslint-plugin-react-hooks@7.0.1(eslint@10.1.0(jiti@1.21.7)))(eslint-plugin-react-refresh@0.5.2(eslint@10.1.0(jiti@1.21.7)))(eslint@10.1.0(jiti@1.21.7))(oxlint@1.56.0(oxlint-tsgolint@0.17.1))(typescript@5.9.3)':
'@antfu/eslint-config@7.7.3(@eslint-react/eslint-plugin@3.0.0(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@next/eslint-plugin-next@16.2.1)(@typescript-eslint/rule-tester@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@typescript-eslint/typescript-estree@8.57.1(typescript@5.9.3))(@typescript-eslint/utils@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(@vue/compiler-sfc@3.5.30)(eslint-plugin-react-hooks@7.0.1(eslint@10.1.0(jiti@1.21.7)))(eslint-plugin-react-refresh@0.5.2(eslint@10.1.0(jiti@1.21.7)))(eslint@10.1.0(jiti@1.21.7))(oxlint@1.56.0(oxlint-tsgolint@0.17.1))(typescript@5.9.3)':
dependencies:
'@antfu/install-pkg': 1.1.0
'@clack/prompts': 1.1.0
@@ -8151,7 +8167,7 @@ snapshots:
'@stylistic/eslint-plugin': 5.10.0(eslint@10.1.0(jiti@1.21.7))
'@typescript-eslint/eslint-plugin': 8.57.1(@typescript-eslint/parser@8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3))(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)
'@typescript-eslint/parser': 8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)
'@vitest/eslint-plugin': 1.6.12(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)
'@vitest/eslint-plugin': 1.6.12(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)
ansis: 4.2.0
cac: 7.0.0
eslint: 10.1.0(jiti@1.21.7)
@@ -8211,7 +8227,7 @@ snapshots:
'@antfu/utils@8.1.1': {}
'@asamuzakjp/css-color@5.0.1':
'@asamuzakjp/css-color@5.1.1':
dependencies:
'@csstools/css-calc': 3.1.1(@csstools/css-parser-algorithms@4.0.0(@csstools/css-tokenizer@4.0.0))(@csstools/css-tokenizer@4.0.0)
'@csstools/css-color-parser': 4.0.2(@csstools/css-parser-algorithms@4.0.0(@csstools/css-tokenizer@4.0.0))(@csstools/css-tokenizer@4.0.0)
@@ -8220,7 +8236,7 @@ snapshots:
lru-cache: 11.2.7
optional: true
'@asamuzakjp/dom-selector@7.0.3':
'@asamuzakjp/dom-selector@7.0.4':
dependencies:
'@asamuzakjp/nwsapi': 2.3.9
bidi-js: 1.0.3
@@ -8480,7 +8496,7 @@ snapshots:
'@csstools/css-tokenizer': 4.0.0
optional: true
'@csstools/css-syntax-patches-for-csstree@1.1.1(css-tree@3.2.1)':
'@csstools/css-syntax-patches-for-csstree@1.1.2(css-tree@3.2.1)':
optionalDependencies:
css-tree: 3.2.1
optional: true
@@ -10368,7 +10384,7 @@ snapshots:
'@tanstack/devtools-event-bus@0.4.1':
dependencies:
ws: 8.19.0
ws: 8.20.0
transitivePeerDependencies:
- bufferutil
- utf-8-validate
@@ -11037,7 +11053,7 @@ snapshots:
optionalDependencies:
react-server-dom-webpack: 19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))
'@vitest/coverage-v8@4.1.0(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))':
'@vitest/coverage-v8@4.1.0(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))':
dependencies:
'@bcoe/v8-coverage': 1.0.2
'@vitest/utils': 4.1.0
@@ -11049,16 +11065,16 @@ snapshots:
obug: 2.1.1
std-env: 4.0.0
tinyrainbow: 3.1.0
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
'@vitest/eslint-plugin@1.6.12(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)':
'@vitest/eslint-plugin@1.6.12(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)':
dependencies:
'@typescript-eslint/scope-manager': 8.57.1
'@typescript-eslint/utils': 8.57.1(eslint@10.1.0(jiti@1.21.7))(typescript@5.9.3)
eslint: 10.1.0(jiti@1.21.7)
optionalDependencies:
typescript: 5.9.3
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
transitivePeerDependencies:
- supports-color
@@ -11123,7 +11139,7 @@ snapshots:
'@voidzero-dev/vite-plus-linux-x64-gnu@0.1.13':
optional: true
'@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)':
'@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)':
dependencies:
'@standard-schema/spec': 1.1.0
'@types/chai': 5.2.3
@@ -11141,7 +11157,7 @@ snapshots:
ws: 8.19.0
optionalDependencies:
'@types/node': 25.5.0
happy-dom: 20.8.8
happy-dom: 20.8.9
jsdom: 29.0.1(canvas@3.2.2)
transitivePeerDependencies:
- '@arethetypeswrong/core'
@@ -12915,14 +12931,14 @@ snapshots:
hachure-fill@0.5.2: {}
happy-dom@20.8.8:
happy-dom@20.8.9:
dependencies:
'@types/node': 25.5.0
'@types/whatwg-mimetype': 3.0.2
'@types/ws': 8.18.1
entities: 7.0.1
whatwg-mimetype: 3.0.0
ws: 8.19.0
ws: 8.20.0
transitivePeerDependencies:
- bufferutil
- utf-8-validate
@@ -13295,10 +13311,10 @@ snapshots:
jsdom@29.0.1(canvas@3.2.2):
dependencies:
'@asamuzakjp/css-color': 5.0.1
'@asamuzakjp/dom-selector': 7.0.3
'@asamuzakjp/css-color': 5.1.1
'@asamuzakjp/dom-selector': 7.0.4
'@bramus/specificity': 2.4.2
'@csstools/css-syntax-patches-for-csstree': 1.1.1(css-tree@3.2.1)
'@csstools/css-syntax-patches-for-csstree': 1.1.2(css-tree@3.2.1)
'@exodus/bytes': 1.15.0
css-tree: 3.2.1
data-urls: 7.0.0
@@ -13310,7 +13326,7 @@ snapshots:
saxes: 6.0.0
symbol-tree: 3.2.4
tough-cookie: 6.0.1
undici: 7.24.5
undici: 7.24.6
w3c-xmlserializer: 5.0.0
webidl-conversions: 8.0.1
whatwg-mimetype: 5.0.0
@@ -15474,6 +15490,8 @@ snapshots:
tapable@2.3.0: {}
tapable@2.3.2: {}
tar-fs@2.1.4:
dependencies:
chownr: 1.1.4
@@ -15688,7 +15706,7 @@ snapshots:
undici@7.24.0: {}
undici@7.24.5:
undici@7.24.6:
optional: true
unicode-trie@2.0.0:
@@ -15913,11 +15931,11 @@ snapshots:
- supports-color
- typescript
vite-plus@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3):
vite-plus@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3):
dependencies:
'@oxc-project/types': 0.120.0
'@voidzero-dev/vite-plus-core': 0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)
'@voidzero-dev/vite-plus-test': 0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)
'@voidzero-dev/vite-plus-test': 0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)
cac: 7.0.0
cross-spawn: 7.0.6
oxfmt: 0.41.0
@@ -15984,11 +16002,11 @@ snapshots:
optionalDependencies:
vite: '@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
vitest-canvas-mock@1.1.3(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)):
vitest-canvas-mock@1.1.3(@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)):
dependencies:
cssfontparser: 1.2.1
moo-color: 1.0.3
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.8)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
vitest: '@voidzero-dev/vite-plus-test@0.1.13(@types/node@25.5.0)(@voidzero-dev/vite-plus-core@0.1.13(@types/node@25.5.0)(esbuild@0.27.2)(jiti@1.21.7)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.8.9)(jiti@1.21.7)(jsdom@29.0.1(canvas@3.2.2))(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.3)'
void-elements@3.1.0: {}
@@ -16067,7 +16085,7 @@ snapshots:
mime-types: 2.1.35
neo-async: 2.6.2
schema-utils: 4.3.3
tapable: 2.3.0
tapable: 2.3.2
terser-webpack-plugin: 5.4.0(esbuild@0.27.2)(uglify-js@3.19.3)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))
watchpack: 2.5.1
webpack-sources: 3.3.4
@@ -16112,6 +16130,8 @@ snapshots:
ws@8.19.0: {}
ws@8.20.0: {}
wsl-utils@0.1.0:
dependencies:
is-wsl: 3.1.1