Compare commits

...

17 Commits

Author SHA1 Message Date
-LAN-
57dc7e0b2c chore: bump version to 1.10.1-fix.1 (#29176)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-12-09 11:05:56 +08:00
Wu Tianwei
cbcdecf9a8 refactor: update useNodes import to use reactflow across multiple components (#29195) 2025-12-05 16:47:03 +08:00
kenwoodjw
7b514b1147 fix: bump pyarrow to 17.0.0, werkzeug to 3.1.4, urllib3 to 2.5.0 (#29089)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-12-05 12:08:25 +08:00
NFish
37d4371901 chore: upgrade React to 19.2.1,fix cve-2025-55182 (#29121)
Co-authored-by: zhsama <torvalds@linux.do>
2025-12-05 12:06:39 +08:00
NFish
0eb5b8a4eb chore: update Next.js dev dependencies to 15.5.7 (#29120) 2025-12-05 12:05:54 +08:00
dependabot[bot]
7ee57f34ce chore(deps): bump next from 15.5.6 to 15.5.7 in /web (#29105)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 12:05:00 +08:00
zyssyz123
0f99a7e3f1 Fix/app list compatible (#29123) 2025-12-04 14:56:15 +08:00
-LAN-
b353a126d8 chore: bump version to 1.10.1 (#28696) 2025-11-26 18:32:10 +08:00
Joel
ef0e1031b0 pref: reduce the times of useNodes reRender (#28682)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 16:52:47 +08:00
Eric Guo
d7010f582f Fix 500 error in knowledge base, select weightedScore and click retrieve. (#28586)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 16:44:00 +08:00
-LAN-
d696b9f35e Use pnpm dev in dev/start-web (#28684) 2025-11-26 16:24:01 +08:00
Ethan Lee
665d49d375 Fixes session scope bug in FileService.delete_file (#27911)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2025-11-26 16:21:33 +08:00
-LAN-
26a1c84881 chore: upgrade system libraries and Python dependencies (#28624)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Xiyuan Chen <52963600+GareArc@users.noreply.github.com>
2025-11-26 15:25:28 +08:00
Coding On Star
dbecba710b frontend auto testing rules (#28679)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: 姜涵煦 <hanxujiang@jianghanxudeMacBook-Pro.local>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 15:18:07 +08:00
CrabSAMA
591414307a fix: fixed workflow as tool files field return empty problem (#27925)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-11-26 14:00:36 +08:00
非法操作
1241cab113 chore: enhance the hint when the user triggers an invalid webhook request (#28671) 2025-11-26 14:00:16 +08:00
wangxiaolei
490b7ac43c fix: fix feedback like or dislike not display in logs (#28652) 2025-11-26 13:59:47 +08:00
52 changed files with 4722 additions and 1070 deletions

.cursorrules Normal file (6 lines added)
View File

@@ -0,0 +1,6 @@
# Cursor Rules for Dify Project
## Automated Test Generation
- Use `web/testing/testing.md` as the canonical instruction set for generating frontend automated tests.
- When proposing or saving tests, re-read that document and follow every requirement.

.github/copilot-instructions.md vendored Normal file (12 lines added)
View File

@@ -0,0 +1,12 @@
# Copilot Instructions
GitHub Copilot must follow the unified frontend testing requirements documented in `web/testing/testing.md`.
Key reminders:
- Generate tests using the mandated tech stack, naming, and code style (AAA pattern, `fireEvent`, descriptive test names, mock cleanup).
- Cover rendering, prop combinations, and edge cases by default; extend coverage for hooks, routing, async flows, and domain-specific components when applicable.
- Target >95% line and branch coverage and 100% function/statement coverage.
- Apply the project's mocking conventions for i18n, toast notifications, and Next.js utilities.
Any suggestions from Copilot that conflict with `web/testing/testing.md` should be revised before acceptance.

View File

@@ -0,0 +1,5 @@
# Windsurf Testing Rules
- Use `web/testing/testing.md` as the single source of truth for frontend automated testing.
- Honor every requirement in that document when generating or accepting tests.
- When proposing or saving tests, re-read that document and follow every requirement.

View File

@@ -77,6 +77,8 @@ How we prioritize:
For setting up the frontend service, please refer to our comprehensive [guide](https://github.com/langgenius/dify/blob/main/web/README.md) in the `web/README.md` file. This document provides detailed instructions to help you set up the frontend environment properly.
**Testing**: All React components must have comprehensive test coverage. See [web/testing/testing.md](https://github.com/langgenius/dify/blob/main/web/testing/testing.md) for the canonical frontend testing guidelines and follow every requirement described there.
#### Backend
For setting up the backend service, kindly refer to our detailed [instructions](https://github.com/langgenius/dify/blob/main/api/README.md) in the `api/README.md` file. This document contains step-by-step guidance to help you get the backend up and running smoothly.

View File

@@ -57,7 +57,7 @@ RUN \
# for gmpy2 \
libgmp-dev libmpfr-dev libmpc-dev \
# For Security
expat libldap-2.5-0 perl libsqlite3-0 zlib1g \
expat libldap-2.5-0=2.5.13+dfsg-5 perl libsqlite3-0=3.40.1-2+deb12u2 zlib1g=1:1.2.13.dfsg-1 \
# install fonts to support the use of tools like pypdfium2
fonts-noto-cjk \
# install a package to improve the accuracy of guessing mime type and file extension

View File

@@ -242,10 +242,13 @@ class AppListApi(Resource):
NodeType.TRIGGER_PLUGIN,
}
for workflow in draft_workflows:
for _, node_data in workflow.walk_nodes():
if node_data.get("type") in trigger_node_types:
draft_trigger_app_ids.add(str(workflow.app_id))
break
try:
for _, node_data in workflow.walk_nodes():
if node_data.get("type") in trigger_node_types:
draft_trigger_app_ids.add(str(workflow.app_id))
break
except Exception:
continue
for app in app_pagination.items:
app.has_draft_trigger = str(app.id) in draft_trigger_app_ids

View File

@@ -369,6 +369,58 @@ class MessageSuggestedQuestionApi(Resource):
return {"data": questions}
# Shared parser for feedback export (used for both documentation and runtime parsing)
feedback_export_parser = (
console_ns.parser()
.add_argument("from_source", type=str, choices=["user", "admin"], location="args", help="Filter by feedback source")
.add_argument("rating", type=str, choices=["like", "dislike"], location="args", help="Filter by rating")
.add_argument("has_comment", type=bool, location="args", help="Only include feedback with comments")
.add_argument("start_date", type=str, location="args", help="Start date (YYYY-MM-DD)")
.add_argument("end_date", type=str, location="args", help="End date (YYYY-MM-DD)")
.add_argument("format", type=str, choices=["csv", "json"], default="csv", location="args", help="Export format")
)
@console_ns.route("/apps/<uuid:app_id>/feedbacks/export")
class MessageFeedbackExportApi(Resource):
@console_ns.doc("export_feedbacks")
@console_ns.doc(description="Export user feedback data for Google Sheets")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(feedback_export_parser)
@console_ns.response(200, "Feedback data exported successfully")
@console_ns.response(400, "Invalid parameters")
@console_ns.response(500, "Internal server error")
@get_app_model
@setup_required
@login_required
@account_initialization_required
def get(self, app_model):
args = feedback_export_parser.parse_args()
# Import the service function
from services.feedback_service import FeedbackService
try:
export_data = FeedbackService.export_feedbacks(
app_id=app_model.id,
from_source=args.get("from_source"),
rating=args.get("rating"),
has_comment=args.get("has_comment"),
start_date=args.get("start_date"),
end_date=args.get("end_date"),
format_type=args.get("format", "csv"),
)
return export_data
except ValueError as e:
logger.exception("Parameter validation error in feedback export")
return {"error": f"Parameter validation error: {str(e)}"}, 400
except Exception as e:
logger.exception("Error exporting feedback data")
raise InternalServerError(str(e))
@console_ns.route("/apps/<uuid:app_id>/messages/<uuid:message_id>")
class MessageApi(Resource):
@console_ns.doc("get_message")

View File

@@ -1,7 +1,7 @@
import logging
import time
from flask import jsonify
from flask import jsonify, request
from werkzeug.exceptions import NotFound, RequestEntityTooLarge
from controllers.trigger import bp
@@ -28,8 +28,14 @@ def _prepare_webhook_execution(webhook_id: str, is_debug: bool = False):
webhook_data = WebhookService.extract_and_validate_webhook_data(webhook_trigger, node_config)
return webhook_trigger, workflow, node_config, webhook_data, None
except ValueError as e:
# Fall back to raw extraction for error reporting
webhook_data = WebhookService.extract_webhook_data(webhook_trigger)
# Provide minimal context for error reporting without risking another parse failure
webhook_data = {
"method": request.method,
"headers": dict(request.headers),
"query_params": dict(request.args),
"body": {},
"files": {},
}
return webhook_trigger, workflow, node_config, webhook_data, str(e)

View File

@@ -1,20 +1,110 @@
import re
from operator import itemgetter
from typing import cast
class JiebaKeywordTableHandler:
def __init__(self):
from core.rag.datasource.keyword.jieba.stopwords import STOPWORDS
tfidf = self._load_tfidf_extractor()
tfidf.stop_words = STOPWORDS # type: ignore[attr-defined]
self._tfidf = tfidf
def _load_tfidf_extractor(self):
"""
Load jieba TFIDF extractor with fallback strategy.
Loading Flow:
1. If jieba.analyse.default_tfidf already exists, return it.
2. Otherwise resolve a TFIDF class from jieba.analyse.TFIDF (or import it from jieba.analyse.tfidf); if found, instantiate it, cache it as default_tfidf, and return it.
3. If no TFIDF class is available, build the lightweight _SimpleTFIDF fallback.
"""
import jieba.analyse # type: ignore
tfidf = getattr(jieba.analyse, "default_tfidf", None)
if tfidf is not None:
return tfidf
tfidf_class = getattr(jieba.analyse, "TFIDF", None)
if tfidf_class is None:
try:
from jieba.analyse.tfidf import TFIDF # type: ignore
tfidf_class = TFIDF
except Exception:
tfidf_class = None
if tfidf_class is not None:
tfidf = tfidf_class()
jieba.analyse.default_tfidf = tfidf # type: ignore[attr-defined]
return tfidf
return self._build_fallback_tfidf()
@staticmethod
def _build_fallback_tfidf():
"""Fallback lightweight TFIDF for environments missing jieba's TFIDF."""
import jieba # type: ignore
from core.rag.datasource.keyword.jieba.stopwords import STOPWORDS
jieba.analyse.default_tfidf.stop_words = STOPWORDS # type: ignore
class _SimpleTFIDF:
def __init__(self):
self.stop_words = STOPWORDS
self._lcut = getattr(jieba, "lcut", None)
def extract_tags(self, sentence: str, top_k: int | None = 20, **kwargs):
# Basic frequency-based keyword extraction as a fallback when TF-IDF is unavailable.
top_k = kwargs.pop("topK", top_k)
cut = getattr(jieba, "cut", None)
if self._lcut:
tokens = self._lcut(sentence)
elif callable(cut):
tokens = list(cut(sentence))
else:
tokens = re.findall(r"\w+", sentence)
words = [w for w in tokens if w and w not in self.stop_words]
freq: dict[str, int] = {}
for w in words:
freq[w] = freq.get(w, 0) + 1
sorted_words = sorted(freq.items(), key=itemgetter(1), reverse=True)
if top_k is not None:
sorted_words = sorted_words[:top_k]
return [item[0] for item in sorted_words]
return _SimpleTFIDF()
def extract_keywords(self, text: str, max_keywords_per_chunk: int | None = 10) -> set[str]:
"""Extract keywords with JIEBA tfidf."""
import jieba.analyse # type: ignore
keywords = jieba.analyse.extract_tags(
keywords = self._tfidf.extract_tags(
sentence=text,
topK=max_keywords_per_chunk,
)

View File

@@ -329,7 +329,15 @@ class ToolNode(Node):
json.append(message.message.json_object)
elif message.type == ToolInvokeMessage.MessageType.LINK:
assert isinstance(message.message, ToolInvokeMessage.TextMessage)
stream_text = f"Link: {message.message.text}\n"
# Check if this LINK message is a file link
file_obj = (message.meta or {}).get("file")
if isinstance(file_obj, File):
files.append(file_obj)
stream_text = f"File: {message.message.text}\n"
else:
stream_text = f"Link: {message.message.text}\n"
text += stream_text
yield StreamChunkEvent(
selector=[node_id, "text"],

View File

@@ -112,7 +112,7 @@ class Storage:
def exists(self, filename):
return self.storage_runner.exists(filename)
def delete(self, filename):
def delete(self, filename: str):
return self.storage_runner.delete(filename)
def scan(self, path: str, files: bool = True, directories: bool = False) -> list[str]:

View File

@@ -1,6 +1,6 @@
[project]
name = "dify-api"
version = "1.10.0"
version = "1.10.1"
requires-python = ">=3.11,<3.13"
dependencies = [

View File

@@ -0,0 +1,185 @@
import csv
import io
import json
from datetime import datetime
from flask import Response
from sqlalchemy import or_
from extensions.ext_database import db
from models.model import Account, App, Conversation, Message, MessageFeedback
class FeedbackService:
@staticmethod
def export_feedbacks(
app_id: str,
from_source: str | None = None,
rating: str | None = None,
has_comment: bool | None = None,
start_date: str | None = None,
end_date: str | None = None,
format_type: str = "csv",
):
"""
Export feedback data with message details for analysis
Args:
app_id: Application ID
from_source: Filter by feedback source ('user' or 'admin')
rating: Filter by rating ('like' or 'dislike')
has_comment: Only include feedback with comments
start_date: Start date filter (YYYY-MM-DD)
end_date: End date filter (YYYY-MM-DD)
format_type: Export format ('csv' or 'json')
"""
# Validate format early to avoid hitting DB when unnecessary
fmt = (format_type or "csv").lower()
if fmt not in {"csv", "json"}:
raise ValueError(f"Unsupported format: {format_type}")
# Build base query
query = (
db.session.query(MessageFeedback, Message, Conversation, App, Account)
.join(Message, MessageFeedback.message_id == Message.id)
.join(Conversation, MessageFeedback.conversation_id == Conversation.id)
.join(App, MessageFeedback.app_id == App.id)
.outerjoin(Account, MessageFeedback.from_account_id == Account.id)
.where(MessageFeedback.app_id == app_id)
)
# Apply filters
if from_source:
query = query.filter(MessageFeedback.from_source == from_source)
if rating:
query = query.filter(MessageFeedback.rating == rating)
if has_comment is not None:
if has_comment:
query = query.filter(MessageFeedback.content.isnot(None), MessageFeedback.content != "")
else:
query = query.filter(or_(MessageFeedback.content.is_(None), MessageFeedback.content == ""))
if start_date:
try:
start_dt = datetime.strptime(start_date, "%Y-%m-%d")
query = query.filter(MessageFeedback.created_at >= start_dt)
except ValueError:
raise ValueError(f"Invalid start_date format: {start_date}. Use YYYY-MM-DD")
if end_date:
try:
end_dt = datetime.strptime(end_date, "%Y-%m-%d")
query = query.filter(MessageFeedback.created_at <= end_dt)
except ValueError:
raise ValueError(f"Invalid end_date format: {end_date}. Use YYYY-MM-DD")
# Order by creation date (newest first)
query = query.order_by(MessageFeedback.created_at.desc())
# Execute query
results = query.all()
# Prepare data for export
export_data = []
for feedback, message, conversation, app, account in results:
# Get the user query from the message
user_query = message.query or message.inputs.get("query", "") if message.inputs else ""
# Format the feedback data
feedback_record = {
"feedback_id": str(feedback.id),
"app_name": app.name,
"app_id": str(app.id),
"conversation_id": str(conversation.id),
"conversation_name": conversation.name or "",
"message_id": str(message.id),
"user_query": user_query,
"ai_response": message.answer[:500] + "..."
if len(message.answer) > 500
else message.answer, # Truncate long responses
"feedback_rating": "👍" if feedback.rating == "like" else "👎",
"feedback_rating_raw": feedback.rating,
"feedback_comment": feedback.content or "",
"feedback_source": feedback.from_source,
"feedback_date": feedback.created_at.strftime("%Y-%m-%d %H:%M:%S"),
"message_date": message.created_at.strftime("%Y-%m-%d %H:%M:%S"),
"from_account_name": account.name if account else "",
"from_end_user_id": str(feedback.from_end_user_id) if feedback.from_end_user_id else "",
"has_comment": "Yes" if feedback.content and feedback.content.strip() else "No",
}
export_data.append(feedback_record)
# Export based on format
if fmt == "csv":
return FeedbackService._export_csv(export_data, app_id)
else: # fmt == "json"
return FeedbackService._export_json(export_data, app_id)
@staticmethod
def _export_csv(data, app_id):
"""Export data as CSV"""
if not data:
pass # allow empty CSV with headers only
# Create CSV in memory
output = io.StringIO()
# Define headers
headers = [
"feedback_id",
"app_name",
"app_id",
"conversation_id",
"conversation_name",
"message_id",
"user_query",
"ai_response",
"feedback_rating",
"feedback_rating_raw",
"feedback_comment",
"feedback_source",
"feedback_date",
"message_date",
"from_account_name",
"from_end_user_id",
"has_comment",
]
writer = csv.DictWriter(output, fieldnames=headers)
writer.writeheader()
writer.writerows(data)
# Create response without requiring app context
response = Response(output.getvalue(), mimetype="text/csv; charset=utf-8-sig")
response.headers["Content-Disposition"] = (
f"attachment; filename=dify_feedback_export_{app_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
)
return response
@staticmethod
def _export_json(data, app_id):
"""Export data as JSON"""
response_data = {
"export_info": {
"app_id": app_id,
"export_date": datetime.now().isoformat(),
"total_records": len(data),
"data_source": "dify_feedback_export",
},
"feedback_data": data,
}
# Create response without requiring app context
response = Response(
json.dumps(response_data, ensure_ascii=False, indent=2),
mimetype="application/json; charset=utf-8",
)
response.headers["Content-Disposition"] = (
f"attachment; filename=dify_feedback_export_{app_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
)
return response
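For orientation, below is a minimal sketch of how the new export endpoint could be exercised from a script. The route and query parameters come from the diff above; the base URL, app ID, and bearer token are placeholders, not values from this changeset.

```python
import requests

# Placeholder values for illustration only.
BASE_URL = "http://localhost:5001"
APP_ID = "00000000-0000-0000-0000-000000000000"
HEADERS = {"Authorization": "Bearer <console-access-token>"}

# Request a CSV export of disliked feedback that includes comments.
resp = requests.get(
    f"{BASE_URL}/console/api/apps/{APP_ID}/feedbacks/export",
    headers=HEADERS,
    params={
        "rating": "dislike",
        "has_comment": "true",
        "start_date": "2024-01-01",
        "end_date": "2024-12-31",
        "format": "csv",
    },
    timeout=30,
)
resp.raise_for_status()

# The endpoint returns the export as an attachment; save it to disk.
with open("dify_feedback_export.csv", "wb") as f:
    f.write(resp.content)
```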

View File

@@ -3,8 +3,8 @@ import os
import uuid
from typing import Literal, Union
from sqlalchemy import Engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Engine, select
from sqlalchemy.orm import Session, sessionmaker
from werkzeug.exceptions import NotFound
from configs import dify_config
@@ -29,7 +29,7 @@ PREVIEW_WORDS_LIMIT = 3000
class FileService:
_session_maker: sessionmaker
_session_maker: sessionmaker[Session]
def __init__(self, session_factory: sessionmaker | Engine | None = None):
if isinstance(session_factory, Engine):
@@ -236,11 +236,10 @@ class FileService:
return content.decode("utf-8")
def delete_file(self, file_id: str):
with self._session_maker(expire_on_commit=False) as session:
upload_file: UploadFile | None = session.query(UploadFile).where(UploadFile.id == file_id).first()
with self._session_maker() as session, session.begin():
upload_file = session.scalar(select(UploadFile).where(UploadFile.id == file_id))
if not upload_file:
return
storage.delete(upload_file.key)
session.delete(upload_file)
session.commit()
if not upload_file:
return
storage.delete(upload_file.key)
session.delete(upload_file)
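The rewritten delete_file leans on SQLAlchemy's session.begin() context manager, which commits when the block exits normally and rolls back on error, so the explicit session.commit() is no longer needed. A standalone sketch of that pattern, assuming a throwaway in-memory engine and a placeholder UploadRecord model (neither is part of this change):

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, sessionmaker


class Base(DeclarativeBase):
    pass


class UploadRecord(Base):
    __tablename__ = "upload_record"
    id: Mapped[str] = mapped_column(primary_key=True)
    key: Mapped[str]


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session_maker: sessionmaker[Session] = sessionmaker(bind=engine)


def delete_record(record_id: str) -> None:
    # session.begin() opens a transaction that commits on normal exit
    # and rolls back if an exception propagates out of the block.
    with session_maker() as session, session.begin():
        record = session.scalar(select(UploadRecord).where(UploadRecord.id == record_id))
        if record is None:
            return
        session.delete(record)
```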

View File

@@ -5,6 +5,7 @@ import secrets
from collections.abc import Mapping
from typing import Any
import orjson
from flask import request
from pydantic import BaseModel
from sqlalchemy import select
@@ -169,7 +170,7 @@ class WebhookService:
- method: HTTP method
- headers: Request headers
- query_params: Query parameters as strings
- body: Request body (varies by content type)
- body: Request body (varies by content type; JSON parsing errors raise ValueError)
- files: Uploaded files (if any)
"""
cls._validate_content_length()
@@ -255,14 +256,21 @@ class WebhookService:
Returns:
tuple: (body_data, files_data) where:
- body_data: Parsed JSON content or empty dict if parsing fails
- body_data: Parsed JSON content
- files_data: Empty dict (JSON requests don't contain files)
Raises:
ValueError: If JSON parsing fails
"""
raw_body = request.get_data(cache=True)
if not raw_body or raw_body.strip() == b"":
return {}, {}
try:
body = request.get_json() or {}
except Exception:
logger.warning("Failed to parse JSON body")
body = {}
body = orjson.loads(raw_body)
except orjson.JSONDecodeError as exc:
logger.warning("Failed to parse JSON body: %s", exc)
raise ValueError(f"Invalid JSON body: {exc}") from exc
return body, {}
@classmethod

View File

@@ -0,0 +1,106 @@
"""Basic integration tests for Feedback API endpoints."""
import uuid
from flask.testing import FlaskClient
class TestFeedbackApiBasic:
"""Basic tests for feedback API endpoints."""
def test_feedback_export_endpoint_exists(self, test_client: FlaskClient, auth_header):
"""Test that feedback export endpoint exists and handles basic requests."""
app_id = str(uuid.uuid4())
# Test endpoint exists (even if it fails, it should return 500 or 403, not 404)
response = test_client.get(
f"/console/api/apps/{app_id}/feedbacks/export", headers=auth_header, query_string={"format": "csv"}
)
# Should not return 404 (endpoint exists)
assert response.status_code != 404
# Should return authentication or permission error
assert response.status_code in [401, 403, 500] # 500 if app doesn't exist, 403 if no permission
def test_feedback_summary_endpoint_exists(self, test_client: FlaskClient, auth_header):
"""Test that feedback summary endpoint exists and handles basic requests."""
app_id = str(uuid.uuid4())
# Test endpoint exists
response = test_client.get(f"/console/api/apps/{app_id}/feedbacks/summary", headers=auth_header)
# Should not return 404 (endpoint exists)
assert response.status_code != 404
# Should return authentication or permission error
assert response.status_code in [401, 403, 500]
def test_feedback_export_invalid_format(self, test_client: FlaskClient, auth_header):
"""Test feedback export endpoint with invalid format parameter."""
app_id = str(uuid.uuid4())
# Test with invalid format
response = test_client.get(
f"/console/api/apps/{app_id}/feedbacks/export",
headers=auth_header,
query_string={"format": "invalid_format"},
)
# Should not return 404
assert response.status_code != 404
def test_feedback_export_with_filters(self, test_client: FlaskClient, auth_header):
"""Test feedback export endpoint with various filter parameters."""
app_id = str(uuid.uuid4())
# Test with various filter combinations
filter_params = [
{"from_source": "user"},
{"rating": "like"},
{"has_comment": True},
{"start_date": "2024-01-01"},
{"end_date": "2024-12-31"},
{"format": "json"},
{
"from_source": "admin",
"rating": "dislike",
"has_comment": True,
"start_date": "2024-01-01",
"end_date": "2024-12-31",
"format": "csv",
},
]
for params in filter_params:
response = test_client.get(
f"/console/api/apps/{app_id}/feedbacks/export", headers=auth_header, query_string=params
)
# Should not return 404
assert response.status_code != 404
def test_feedback_export_invalid_dates(self, test_client: FlaskClient, auth_header):
"""Test feedback export endpoint with invalid date formats."""
app_id = str(uuid.uuid4())
# Test with invalid date formats
invalid_dates = [
{"start_date": "invalid-date"},
{"end_date": "not-a-date"},
{"start_date": "2024-13-01"}, # Invalid month
{"end_date": "2024-12-32"}, # Invalid day
]
for params in invalid_dates:
response = test_client.get(
f"/console/api/apps/{app_id}/feedbacks/export", headers=auth_header, query_string=params
)
# Should not return 404
assert response.status_code != 404

View File

@@ -0,0 +1,334 @@
"""Integration tests for Feedback Export API endpoints."""
import json
import uuid
from datetime import datetime
from types import SimpleNamespace
from unittest import mock
import pytest
from flask.testing import FlaskClient
from controllers.console.app import message as message_api
from controllers.console.app import wraps
from libs.datetime_utils import naive_utc_now
from models import App, Tenant
from models.account import Account, TenantAccountJoin, TenantAccountRole
from models.model import AppMode, MessageFeedback
from services.feedback_service import FeedbackService
class TestFeedbackExportApi:
"""Test feedback export API endpoints."""
@pytest.fixture
def mock_app_model(self):
"""Create a mock App model for testing."""
app = App()
app.id = str(uuid.uuid4())
app.mode = AppMode.CHAT
app.tenant_id = str(uuid.uuid4())
app.status = "normal"
app.name = "Test App"
return app
@pytest.fixture
def mock_account(self, monkeypatch: pytest.MonkeyPatch):
"""Create a mock Account for testing."""
account = Account(
name="Test User",
email="test@example.com",
)
account.last_active_at = naive_utc_now()
account.created_at = naive_utc_now()
account.updated_at = naive_utc_now()
account.id = str(uuid.uuid4())
# Create mock tenant
tenant = Tenant(name="Test Tenant")
tenant.id = str(uuid.uuid4())
mock_session_instance = mock.Mock()
mock_tenant_join = TenantAccountJoin(role=TenantAccountRole.OWNER)
monkeypatch.setattr(mock_session_instance, "scalar", mock.Mock(return_value=mock_tenant_join))
mock_scalars_result = mock.Mock()
mock_scalars_result.one.return_value = tenant
monkeypatch.setattr(mock_session_instance, "scalars", mock.Mock(return_value=mock_scalars_result))
mock_session_context = mock.Mock()
mock_session_context.__enter__.return_value = mock_session_instance
monkeypatch.setattr("models.account.Session", lambda _, expire_on_commit: mock_session_context)
account.current_tenant = tenant
return account
@pytest.fixture
def sample_feedback_data(self):
"""Create sample feedback data for testing."""
app_id = str(uuid.uuid4())
conversation_id = str(uuid.uuid4())
message_id = str(uuid.uuid4())
# Mock feedback data
user_feedback = MessageFeedback(
id=str(uuid.uuid4()),
app_id=app_id,
conversation_id=conversation_id,
message_id=message_id,
rating="like",
from_source="user",
content=None,
from_end_user_id=str(uuid.uuid4()),
from_account_id=None,
created_at=naive_utc_now(),
)
admin_feedback = MessageFeedback(
id=str(uuid.uuid4()),
app_id=app_id,
conversation_id=conversation_id,
message_id=message_id,
rating="dislike",
from_source="admin",
content="The response was not helpful",
from_end_user_id=None,
from_account_id=str(uuid.uuid4()),
created_at=naive_utc_now(),
)
# Mock message and conversation
mock_message = SimpleNamespace(
id=message_id,
conversation_id=conversation_id,
query="What is the weather today?",
answer="It's sunny and 25 degrees outside.",
inputs={"query": "What is the weather today?"},
created_at=naive_utc_now(),
)
mock_conversation = SimpleNamespace(id=conversation_id, name="Weather Conversation", app_id=app_id)
mock_app = SimpleNamespace(id=app_id, name="Weather App")
return {
"user_feedback": user_feedback,
"admin_feedback": admin_feedback,
"message": mock_message,
"conversation": mock_conversation,
"app": mock_app,
}
@pytest.mark.parametrize(
("role", "status"),
[
(TenantAccountRole.OWNER, 200),
(TenantAccountRole.ADMIN, 200),
(TenantAccountRole.EDITOR, 200),
(TenantAccountRole.NORMAL, 403),
(TenantAccountRole.DATASET_OPERATOR, 403),
],
)
def test_feedback_export_permissions(
self,
test_client: FlaskClient,
auth_header,
monkeypatch,
mock_app_model,
mock_account,
role: TenantAccountRole,
status: int,
):
"""Test feedback export endpoint permissions."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
mock_export_feedbacks = mock.Mock(return_value="mock csv response")
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
# Set user role
mock_account.role = role
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={"format": "csv"},
)
assert response.status_code == status
if status == 200:
mock_export_feedbacks.assert_called_once()
def test_feedback_export_csv_format(
self, test_client: FlaskClient, auth_header, monkeypatch, mock_app_model, mock_account, sample_feedback_data
):
"""Test feedback export in CSV format."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
# Create mock CSV response
mock_csv_content = (
"feedback_id,app_name,conversation_id,user_query,ai_response,feedback_rating,feedback_comment\n"
)
mock_csv_content += f"{sample_feedback_data['user_feedback'].id},{sample_feedback_data['app'].name},"
mock_csv_content += f"{sample_feedback_data['conversation'].id},{sample_feedback_data['message'].query},"
mock_csv_content += f"{sample_feedback_data['message'].answer},👍,\n"
mock_response = mock.Mock()
mock_response.headers = {"Content-Type": "text/csv; charset=utf-8-sig"}
mock_response.data = mock_csv_content.encode("utf-8")
mock_export_feedbacks = mock.Mock(return_value=mock_response)
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={"format": "csv", "from_source": "user"},
)
assert response.status_code == 200
assert "text/csv" in response.content_type
def test_feedback_export_json_format(
self, test_client: FlaskClient, auth_header, monkeypatch, mock_app_model, mock_account, sample_feedback_data
):
"""Test feedback export in JSON format."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
mock_json_response = {
"export_info": {
"app_id": mock_app_model.id,
"export_date": datetime.now().isoformat(),
"total_records": 2,
"data_source": "dify_feedback_export",
},
"feedback_data": [
{
"feedback_id": sample_feedback_data["user_feedback"].id,
"feedback_rating": "👍",
"feedback_rating_raw": "like",
"feedback_comment": "",
}
],
}
mock_response = mock.Mock()
mock_response.headers = {"Content-Type": "application/json; charset=utf-8"}
mock_response.data = json.dumps(mock_json_response).encode("utf-8")
mock_export_feedbacks = mock.Mock(return_value=mock_response)
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={"format": "json"},
)
assert response.status_code == 200
assert "application/json" in response.content_type
def test_feedback_export_with_filters(
self, test_client: FlaskClient, auth_header, monkeypatch, mock_app_model, mock_account
):
"""Test feedback export with various filters."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
mock_export_feedbacks = mock.Mock(return_value="mock filtered response")
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
# Test with multiple filters
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={
"from_source": "user",
"rating": "dislike",
"has_comment": True,
"start_date": "2024-01-01",
"end_date": "2024-12-31",
"format": "csv",
},
)
assert response.status_code == 200
# Verify service was called with correct parameters
mock_export_feedbacks.assert_called_once_with(
app_id=mock_app_model.id,
from_source="user",
rating="dislike",
has_comment=True,
start_date="2024-01-01",
end_date="2024-12-31",
format_type="csv",
)
def test_feedback_export_invalid_date_format(
self, test_client: FlaskClient, auth_header, monkeypatch, mock_app_model, mock_account
):
"""Test feedback export with invalid date format."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
# Mock the service to raise ValueError for invalid date
mock_export_feedbacks = mock.Mock(side_effect=ValueError("Invalid date format"))
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={"start_date": "invalid-date", "format": "csv"},
)
assert response.status_code == 400
response_json = response.get_json()
assert "Parameter validation error" in response_json["error"]
def test_feedback_export_server_error(
self, test_client: FlaskClient, auth_header, monkeypatch, mock_app_model, mock_account
):
"""Test feedback export with server error."""
# Setup mocks
mock_load_app_model = mock.Mock(return_value=mock_app_model)
monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model)
# Mock the service to raise an exception
mock_export_feedbacks = mock.Mock(side_effect=Exception("Database connection failed"))
monkeypatch.setattr(FeedbackService, "export_feedbacks", mock_export_feedbacks)
monkeypatch.setattr(message_api, "current_user", mock_account)
response = test_client.get(
f"/console/api/apps/{mock_app_model.id}/feedbacks/export",
headers=auth_header,
query_string={"format": "csv"},
)
assert response.status_code == 500

View File

@@ -0,0 +1,386 @@
"""Unit tests for FeedbackService."""
import json
from datetime import datetime
from types import SimpleNamespace
from unittest import mock
import pytest
from extensions.ext_database import db
from models.model import App, Conversation, Message
from services.feedback_service import FeedbackService
class TestFeedbackService:
"""Test FeedbackService methods."""
@pytest.fixture
def mock_db_session(self, monkeypatch):
"""Mock database session."""
mock_session = mock.Mock()
monkeypatch.setattr(db, "session", mock_session)
return mock_session
@pytest.fixture
def sample_data(self):
"""Create sample data for testing."""
app_id = "test-app-id"
# Create mock models
app = App(id=app_id, name="Test App")
conversation = Conversation(id="test-conversation-id", app_id=app_id, name="Test Conversation")
message = Message(
id="test-message-id",
conversation_id="test-conversation-id",
query="What is AI?",
answer="AI is artificial intelligence.",
inputs={"query": "What is AI?"},
created_at=datetime(2024, 1, 1, 10, 0, 0),
)
# Use SimpleNamespace to avoid ORM model constructor issues
user_feedback = SimpleNamespace(
id="user-feedback-id",
app_id=app_id,
conversation_id="test-conversation-id",
message_id="test-message-id",
rating="like",
from_source="user",
content="Great answer!",
from_end_user_id="user-123",
from_account_id=None,
from_account=None, # Mock account object
created_at=datetime(2024, 1, 1, 10, 5, 0),
)
admin_feedback = SimpleNamespace(
id="admin-feedback-id",
app_id=app_id,
conversation_id="test-conversation-id",
message_id="test-message-id",
rating="dislike",
from_source="admin",
content="Could be more detailed",
from_end_user_id=None,
from_account_id="admin-456",
from_account=SimpleNamespace(name="Admin User"), # Mock account object
created_at=datetime(2024, 1, 1, 10, 10, 0),
)
return {
"app": app,
"conversation": conversation,
"message": message,
"user_feedback": user_feedback,
"admin_feedback": admin_feedback,
}
def test_export_feedbacks_csv_format(self, mock_db_session, sample_data):
"""Test exporting feedback data in CSV format."""
# Setup mock query result
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
sample_data["user_feedback"],
sample_data["message"],
sample_data["conversation"],
sample_data["app"],
sample_data["user_feedback"].from_account,
)
]
mock_db_session.query.return_value = mock_query
# Test CSV export
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="csv")
# Verify response structure
assert hasattr(result, "headers")
assert "text/csv" in result.headers["Content-Type"]
assert "attachment" in result.headers["Content-Disposition"]
# Check CSV content
csv_content = result.get_data(as_text=True)
# Verify essential headers exist (order may include additional columns)
assert "feedback_id" in csv_content
assert "app_name" in csv_content
assert "conversation_id" in csv_content
assert sample_data["app"].name in csv_content
assert sample_data["message"].query in csv_content
def test_export_feedbacks_json_format(self, mock_db_session, sample_data):
"""Test exporting feedback data in JSON format."""
# Setup mock query result
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
sample_data["admin_feedback"],
sample_data["message"],
sample_data["conversation"],
sample_data["app"],
sample_data["admin_feedback"].from_account,
)
]
mock_db_session.query.return_value = mock_query
# Test JSON export
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="json")
# Verify response structure
assert hasattr(result, "headers")
assert "application/json" in result.headers["Content-Type"]
assert "attachment" in result.headers["Content-Disposition"]
# Check JSON content
json_content = json.loads(result.get_data(as_text=True))
assert "export_info" in json_content
assert "feedback_data" in json_content
assert json_content["export_info"]["app_id"] == sample_data["app"].id
assert json_content["export_info"]["total_records"] == 1
def test_export_feedbacks_with_filters(self, mock_db_session, sample_data):
"""Test exporting feedback with various filters."""
# Setup mock query result
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
sample_data["admin_feedback"],
sample_data["message"],
sample_data["conversation"],
sample_data["app"],
sample_data["admin_feedback"].from_account,
)
]
mock_db_session.query.return_value = mock_query
# Test with filters
result = FeedbackService.export_feedbacks(
app_id=sample_data["app"].id,
from_source="admin",
rating="dislike",
has_comment=True,
start_date="2024-01-01",
end_date="2024-12-31",
format_type="csv",
)
# Verify filters were applied
assert mock_query.filter.called
filter_calls = mock_query.filter.call_args_list
# At least three filter invocations are expected (source, rating, comment)
assert len(filter_calls) >= 3
def test_export_feedbacks_no_data(self, mock_db_session, sample_data):
"""Test exporting feedback when no data exists."""
# Setup mock query result with no data
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = []
mock_db_session.query.return_value = mock_query
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="csv")
# Should return an empty CSV with headers only
assert hasattr(result, "headers")
assert "text/csv" in result.headers["Content-Type"]
csv_content = result.get_data(as_text=True)
# Headers should exist (order can include additional columns)
assert "feedback_id" in csv_content
assert "app_name" in csv_content
assert "conversation_id" in csv_content
# No data rows expected
assert len([line for line in csv_content.strip().splitlines() if line.strip()]) == 1
def test_export_feedbacks_invalid_date_format(self, mock_db_session, sample_data):
"""Test exporting feedback with invalid date format."""
# Test with invalid start_date
with pytest.raises(ValueError, match="Invalid start_date format"):
FeedbackService.export_feedbacks(app_id=sample_data["app"].id, start_date="invalid-date-format")
# Test with invalid end_date
with pytest.raises(ValueError, match="Invalid end_date format"):
FeedbackService.export_feedbacks(app_id=sample_data["app"].id, end_date="invalid-date-format")
def test_export_feedbacks_invalid_format(self, mock_db_session, sample_data):
"""Test exporting feedback with unsupported format."""
with pytest.raises(ValueError, match="Unsupported format"):
FeedbackService.export_feedbacks(
app_id=sample_data["app"].id,
format_type="xml", # Unsupported format
)
def test_export_feedbacks_long_response_truncation(self, mock_db_session, sample_data):
"""Test that long AI responses are truncated in export."""
# Create message with long response
long_message = Message(
id="long-message-id",
conversation_id="test-conversation-id",
query="What is AI?",
answer="A" * 600, # 600 character response
inputs={"query": "What is AI?"},
created_at=datetime(2024, 1, 1, 10, 0, 0),
)
# Setup mock query result
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
sample_data["user_feedback"],
long_message,
sample_data["conversation"],
sample_data["app"],
sample_data["user_feedback"].from_account,
)
]
mock_db_session.query.return_value = mock_query
# Test export
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="json")
# Check JSON content
json_content = json.loads(result.get_data(as_text=True))
exported_answer = json_content["feedback_data"][0]["ai_response"]
# Should be truncated with ellipsis
assert len(exported_answer) <= 503 # 500 + "..."
assert exported_answer.endswith("...")
assert len(exported_answer) > 500 # Should be close to limit
def test_export_feedbacks_unicode_content(self, mock_db_session, sample_data):
"""Test exporting feedback with unicode content (Chinese characters)."""
# Create feedback with Chinese content (use SimpleNamespace to avoid ORM constructor constraints)
chinese_feedback = SimpleNamespace(
id="chinese-feedback-id",
app_id=sample_data["app"].id,
conversation_id="test-conversation-id",
message_id="test-message-id",
rating="dislike",
from_source="user",
content="回答不够详细,需要更多信息",
from_end_user_id="user-123",
from_account_id=None,
created_at=datetime(2024, 1, 1, 10, 5, 0),
)
# Create Chinese message
chinese_message = Message(
id="chinese-message-id",
conversation_id="test-conversation-id",
query="什么是人工智能?",
answer="人工智能是模拟人类智能的技术。",
inputs={"query": "什么是人工智能?"},
created_at=datetime(2024, 1, 1, 10, 0, 0),
)
# Setup mock query result
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
chinese_feedback,
chinese_message,
sample_data["conversation"],
sample_data["app"],
None, # No account for user feedback
)
]
mock_db_session.query.return_value = mock_query
# Test export
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="csv")
# Check that unicode content is preserved
csv_content = result.get_data(as_text=True)
assert "什么是人工智能?" in csv_content
assert "回答不够详细,需要更多信息" in csv_content
assert "人工智能是模拟人类智能的技术" in csv_content
def test_export_feedbacks_emoji_ratings(self, mock_db_session, sample_data):
"""Test that rating emojis are properly formatted in export."""
# Setup mock query result with both like and dislike feedback
mock_query = mock.Mock()
mock_query.join.return_value = mock_query
mock_query.outerjoin.return_value = mock_query
mock_query.where.return_value = mock_query
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = [
(
sample_data["user_feedback"],
sample_data["message"],
sample_data["conversation"],
sample_data["app"],
sample_data["user_feedback"].from_account,
),
(
sample_data["admin_feedback"],
sample_data["message"],
sample_data["conversation"],
sample_data["app"],
sample_data["admin_feedback"].from_account,
),
]
mock_db_session.query.return_value = mock_query
# Test export
result = FeedbackService.export_feedbacks(app_id=sample_data["app"].id, format_type="json")
# Check JSON content for emoji ratings
json_content = json.loads(result.get_data(as_text=True))
feedback_data = json_content["feedback_data"]
# Should have both feedback records
assert len(feedback_data) == 2
# Check that emojis are properly set
like_feedback = next(f for f in feedback_data if f["feedback_rating_raw"] == "like")
dislike_feedback = next(f for f in feedback_data if f["feedback_rating_raw"] == "dislike")
assert like_feedback["feedback_rating"] == "👍"
assert dislike_feedback["feedback_rating"] == "👎"

View File

@@ -0,0 +1,160 @@
import sys
import types
from collections.abc import Generator
from typing import TYPE_CHECKING, Any
from unittest.mock import MagicMock, patch
import pytest
from core.file import File, FileTransferMethod, FileType
from core.model_runtime.entities.llm_entities import LLMUsage
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.utils.message_transformer import ToolFileMessageTransformer
from core.variables.segments import ArrayFileSegment
from core.workflow.entities import GraphInitParams
from core.workflow.node_events import StreamChunkEvent, StreamCompletedEvent
from core.workflow.runtime import GraphRuntimeState, VariablePool
from core.workflow.system_variable import SystemVariable
if TYPE_CHECKING: # pragma: no cover - imported for type checking only
from core.workflow.nodes.tool.tool_node import ToolNode
@pytest.fixture
def tool_node(monkeypatch) -> "ToolNode":
module_name = "core.ops.ops_trace_manager"
if module_name not in sys.modules:
ops_stub = types.ModuleType(module_name)
ops_stub.TraceQueueManager = object # pragma: no cover - stub attribute
ops_stub.TraceTask = object # pragma: no cover - stub attribute
monkeypatch.setitem(sys.modules, module_name, ops_stub)
from core.workflow.nodes.tool.tool_node import ToolNode
graph_config: dict[str, Any] = {
"nodes": [
{
"id": "tool-node",
"data": {
"type": "tool",
"title": "Tool",
"desc": "",
"provider_id": "provider",
"provider_type": "builtin",
"provider_name": "provider",
"tool_name": "tool",
"tool_label": "tool",
"tool_configurations": {},
"tool_parameters": {},
},
}
],
"edges": [],
}
init_params = GraphInitParams(
tenant_id="tenant-id",
app_id="app-id",
workflow_id="workflow-id",
graph_config=graph_config,
user_id="user-id",
user_from="account",
invoke_from="debugger",
call_depth=0,
)
variable_pool = VariablePool(system_variables=SystemVariable(user_id="user-id"))
graph_runtime_state = GraphRuntimeState(variable_pool=variable_pool, start_at=0.0)
config = graph_config["nodes"][0]
node = ToolNode(
id="node-instance",
config=config,
graph_init_params=init_params,
graph_runtime_state=graph_runtime_state,
)
node.init_node_data(config["data"])
return node
def _collect_events(generator: Generator) -> tuple[list[Any], LLMUsage]:
events: list[Any] = []
try:
while True:
events.append(next(generator))
except StopIteration as stop:
return events, stop.value
def _run_transform(tool_node: "ToolNode", message: ToolInvokeMessage) -> tuple[list[Any], LLMUsage]:
def _identity_transform(messages, *_args, **_kwargs):
return messages
tool_runtime = MagicMock()
with patch.object(ToolFileMessageTransformer, "transform_tool_invoke_messages", side_effect=_identity_transform):
generator = tool_node._transform_message(
messages=iter([message]),
tool_info={"provider_type": "builtin", "provider_id": "provider"},
parameters_for_log={},
user_id="user-id",
tenant_id="tenant-id",
node_id=tool_node._node_id,
tool_runtime=tool_runtime,
)
return _collect_events(generator)
def test_link_messages_with_file_populate_files_output(tool_node: "ToolNode"):
file_obj = File(
tenant_id="tenant-id",
type=FileType.DOCUMENT,
transfer_method=FileTransferMethod.TOOL_FILE,
related_id="file-id",
filename="demo.pdf",
extension=".pdf",
mime_type="application/pdf",
size=123,
storage_key="file-key",
)
message = ToolInvokeMessage(
type=ToolInvokeMessage.MessageType.LINK,
message=ToolInvokeMessage.TextMessage(text="/files/tools/file-id.pdf"),
meta={"file": file_obj},
)
events, usage = _run_transform(tool_node, message)
assert isinstance(usage, LLMUsage)
chunk_events = [event for event in events if isinstance(event, StreamChunkEvent)]
assert chunk_events
assert chunk_events[0].chunk == "File: /files/tools/file-id.pdf\n"
completed_events = [event for event in events if isinstance(event, StreamCompletedEvent)]
assert len(completed_events) == 1
outputs = completed_events[0].node_run_result.outputs
assert outputs["text"] == "File: /files/tools/file-id.pdf\n"
files_segment = outputs["files"]
assert isinstance(files_segment, ArrayFileSegment)
assert files_segment.value == [file_obj]
def test_plain_link_messages_remain_links(tool_node: "ToolNode"):
message = ToolInvokeMessage(
type=ToolInvokeMessage.MessageType.LINK,
message=ToolInvokeMessage.TextMessage(text="https://dify.ai"),
meta=None,
)
events, _ = _run_transform(tool_node, message)
chunk_events = [event for event in events if isinstance(event, StreamChunkEvent)]
assert chunk_events
assert chunk_events[0].chunk == "Link: https://dify.ai\n"
completed_events = [event for event in events if isinstance(event, StreamCompletedEvent)]
assert len(completed_events) == 1
files_segment = completed_events[0].node_run_result.outputs["files"]
assert isinstance(files_segment, ArrayFileSegment)
assert files_segment.value == []

View File

@@ -118,10 +118,8 @@ class TestWebhookServiceUnit:
"/webhook", method="POST", headers={"Content-Type": "application/json"}, data="invalid json"
):
webhook_trigger = MagicMock()
webhook_data = WebhookService.extract_webhook_data(webhook_trigger)
assert webhook_data["method"] == "POST"
assert webhook_data["body"] == {} # Should default to empty dict
with pytest.raises(ValueError, match="Invalid JSON body"):
WebhookService.extract_webhook_data(webhook_trigger)
def test_generate_webhook_response_default(self):
"""Test webhook response generation with default values."""
@@ -435,6 +433,27 @@ class TestWebhookServiceUnit:
assert result["body"]["message"] == "hello" # Already string
assert result["body"]["age"] == 25 # Already number
def test_extract_and_validate_webhook_data_invalid_json_error(self):
"""Invalid JSON should bubble up as a ValueError with details."""
app = Flask(__name__)
with app.test_request_context(
"/webhook",
method="POST",
headers={"Content-Type": "application/json"},
data='{"invalid": }',
):
webhook_trigger = MagicMock()
node_config = {
"data": {
"method": "post",
"content_type": "application/json",
}
}
with pytest.raises(ValueError, match="Invalid JSON body"):
WebhookService.extract_and_validate_webhook_data(webhook_trigger, node_config)
def test_extract_and_validate_webhook_data_validation_error(self):
"""Test unified data extraction with validation error."""
app = Flask(__name__)

api/uv.lock generated (751 changed lines)

File diff suppressed because it is too large.

View File

@@ -5,4 +5,4 @@ set -x
SCRIPT_DIR="$(dirname "$(realpath "$0")")"
cd "$SCRIPT_DIR/../web"
pnpm install && pnpm build && pnpm start
pnpm install && pnpm dev

View File

@@ -2,7 +2,7 @@ x-shared-env: &shared-api-worker-env
services:
# API service
api:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -41,7 +41,7 @@ services:
# worker service
# The Celery worker for processing all queues (dataset, workflow, mail, etc.)
worker:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -78,7 +78,7 @@ services:
# worker_beat service
# Celery beat for scheduling periodic tasks.
worker_beat:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -106,7 +106,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:1.10.0
image: langgenius/dify-web:1.10.1-fix.1
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
@@ -131,7 +131,7 @@ services:
ENABLE_WEBSITE_JINAREADER: ${ENABLE_WEBSITE_JINAREADER:-true}
ENABLE_WEBSITE_FIRECRAWL: ${ENABLE_WEBSITE_FIRECRAWL:-true}
ENABLE_WEBSITE_WATERCRAWL: ${ENABLE_WEBSITE_WATERCRAWL:-true}
# The PostgreSQL database.
db_postgres:
image: postgres:15-alpine
@@ -459,7 +459,7 @@ services:
timeout: 10s
# seekdb vector database
seekdb:
seekdb:
image: oceanbase/seekdb:latest
container_name: seekdb
profiles:
@@ -486,7 +486,7 @@ services:
# Qdrant vector store.
# (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.)
qdrant:
image: langgenius/qdrant:v1.7.3
image: langgenius/qdrant:v1.8.3
profiles:
- qdrant
restart: always

View File

@@ -636,7 +636,7 @@ x-shared-env: &shared-api-worker-env
services:
# API service
api:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -675,7 +675,7 @@ services:
# worker service
# The Celery worker for processing all queues (dataset, workflow, mail, etc.)
worker:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -712,7 +712,7 @@ services:
# worker_beat service
# Celery beat for scheduling periodic tasks.
worker_beat:
image: langgenius/dify-api:1.10.0
image: langgenius/dify-api:1.10.1-fix.1
restart: always
environment:
# Use the shared environment variables.
@@ -740,7 +740,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:1.10.0
image: langgenius/dify-web:1.10.1-fix.1
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
@@ -765,7 +765,7 @@ services:
ENABLE_WEBSITE_JINAREADER: ${ENABLE_WEBSITE_JINAREADER:-true}
ENABLE_WEBSITE_FIRECRAWL: ${ENABLE_WEBSITE_FIRECRAWL:-true}
ENABLE_WEBSITE_WATERCRAWL: ${ENABLE_WEBSITE_WATERCRAWL:-true}
# The PostgreSQL database.
db_postgres:
image: postgres:15-alpine
@@ -1093,7 +1093,7 @@ services:
timeout: 10s
# seekdb vector database
seekdb:
seekdb:
image: oceanbase/seekdb:latest
container_name: seekdb
profiles:
@@ -1120,7 +1120,7 @@ services:
# Qdrant vector store.
# (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.)
qdrant:
image: langgenius/qdrant:v1.7.3
image: langgenius/qdrant:v1.8.3
profiles:
- qdrant
restart: always

View File

@@ -99,9 +99,9 @@ If your IDE is VSCode, rename `web/.vscode/settings.example.json` to `web/.vscod
## Test
We start to use [Jest](https://jestjs.io/) and [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/) for Unit Testing.
We use [Jest](https://jestjs.io/) and [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/) for Unit Testing.
You can create a test file with a `.spec` suffix beside the file to be tested. For example, to test a file named `util.ts`, name the test file `util.spec.ts`.
**📖 Complete Testing Guide**: See [web/testing/testing.md](./testing/testing.md) for detailed testing specifications, best practices, and examples.
Run test:
@@ -109,10 +109,22 @@ Run test:
pnpm run test
```
If you are not familiar with writing tests, here is some code to refer to:
### Example Code
- [classnames.spec.ts](./utils/classnames.spec.ts)
- [index.spec.tsx](./app/components/base/button/index.spec.tsx)
If you are not familiar with writing tests, refer to:
- [classnames.spec.ts](./utils/classnames.spec.ts) - Utility function test example
- [index.spec.tsx](./app/components/base/button/index.spec.tsx) - Component test example
### Analyze Component Complexity
Before writing tests, use the script to analyze component complexity:
```bash
pnpm analyze-component app/components/your-component/index.tsx
```
This will help you determine the testing strategy. See [web/testing/testing.md](./testing/testing.md) for details.
## Documentation

View File

@@ -1,6 +1,24 @@
import { BlockEnum } from '@/app/components/workflow/types'
import { useWorkflowStore } from '@/app/components/workflow/store'
// Type for mocked store
type MockWorkflowStore = {
showOnboarding: boolean
setShowOnboarding: jest.Mock
hasShownOnboarding: boolean
setHasShownOnboarding: jest.Mock
hasSelectedStartNode: boolean
setHasSelectedStartNode: jest.Mock
setShouldAutoOpenStartNodeSelector: jest.Mock
notInitialWorkflow: boolean
}
// Type for mocked node
type MockNode = {
id: string
data: { type?: BlockEnum }
}
// Mock zustand store
jest.mock('@/app/components/workflow/store')
@@ -39,7 +57,7 @@ describe('Workflow Onboarding Integration Logic', () => {
describe('Onboarding State Management', () => {
it('should initialize onboarding state correctly', () => {
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
expect(store.showOnboarding).toBe(false)
expect(store.hasSelectedStartNode).toBe(false)
@@ -47,7 +65,7 @@ describe('Workflow Onboarding Integration Logic', () => {
})
it('should update onboarding visibility', () => {
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
store.setShowOnboarding(true)
expect(mockSetShowOnboarding).toHaveBeenCalledWith(true)
@@ -57,14 +75,14 @@ describe('Workflow Onboarding Integration Logic', () => {
})
it('should track node selection state', () => {
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
store.setHasSelectedStartNode(true)
expect(mockSetHasSelectedStartNode).toHaveBeenCalledWith(true)
})
it('should track onboarding show state', () => {
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
store.setHasShownOnboarding(true)
expect(mockSetHasShownOnboarding).toHaveBeenCalledWith(true)
@@ -205,60 +223,44 @@ describe('Workflow Onboarding Integration Logic', () => {
it('should auto-expand for TriggerSchedule in new workflow', () => {
const shouldAutoOpenStartNodeSelector = true
const nodeType = BlockEnum.TriggerSchedule
const nodeType: BlockEnum = BlockEnum.TriggerSchedule
const isChatMode = false
const validStartTypes = [BlockEnum.Start, BlockEnum.TriggerSchedule, BlockEnum.TriggerWebhook, BlockEnum.TriggerPlugin]
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && (
nodeType === BlockEnum.Start
|| nodeType === BlockEnum.TriggerSchedule
|| nodeType === BlockEnum.TriggerWebhook
|| nodeType === BlockEnum.TriggerPlugin
) && !isChatMode
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && validStartTypes.includes(nodeType) && !isChatMode
expect(shouldAutoExpand).toBe(true)
})
it('should auto-expand for TriggerWebhook in new workflow', () => {
const shouldAutoOpenStartNodeSelector = true
const nodeType = BlockEnum.TriggerWebhook
const nodeType: BlockEnum = BlockEnum.TriggerWebhook
const isChatMode = false
const validStartTypes = [BlockEnum.Start, BlockEnum.TriggerSchedule, BlockEnum.TriggerWebhook, BlockEnum.TriggerPlugin]
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && (
nodeType === BlockEnum.Start
|| nodeType === BlockEnum.TriggerSchedule
|| nodeType === BlockEnum.TriggerWebhook
|| nodeType === BlockEnum.TriggerPlugin
) && !isChatMode
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && validStartTypes.includes(nodeType) && !isChatMode
expect(shouldAutoExpand).toBe(true)
})
it('should auto-expand for TriggerPlugin in new workflow', () => {
const shouldAutoOpenStartNodeSelector = true
const nodeType = BlockEnum.TriggerPlugin
const nodeType: BlockEnum = BlockEnum.TriggerPlugin
const isChatMode = false
const validStartTypes = [BlockEnum.Start, BlockEnum.TriggerSchedule, BlockEnum.TriggerWebhook, BlockEnum.TriggerPlugin]
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && (
nodeType === BlockEnum.Start
|| nodeType === BlockEnum.TriggerSchedule
|| nodeType === BlockEnum.TriggerWebhook
|| nodeType === BlockEnum.TriggerPlugin
) && !isChatMode
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && validStartTypes.includes(nodeType) && !isChatMode
expect(shouldAutoExpand).toBe(true)
})
it('should not auto-expand for non-trigger nodes', () => {
const shouldAutoOpenStartNodeSelector = true
const nodeType = BlockEnum.LLM
const nodeType: BlockEnum = BlockEnum.LLM
const isChatMode = false
const validStartTypes = [BlockEnum.Start, BlockEnum.TriggerSchedule, BlockEnum.TriggerWebhook, BlockEnum.TriggerPlugin]
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && (
nodeType === BlockEnum.Start
|| nodeType === BlockEnum.TriggerSchedule
|| nodeType === BlockEnum.TriggerWebhook
|| nodeType === BlockEnum.TriggerPlugin
) && !isChatMode
const shouldAutoExpand = shouldAutoOpenStartNodeSelector && validStartTypes.includes(nodeType) && !isChatMode
expect(shouldAutoExpand).toBe(false)
})
@@ -321,7 +323,7 @@ describe('Workflow Onboarding Integration Logic', () => {
const nodeData = { type: BlockEnum.Start, title: 'Start' }
// Simulate node creation logic from workflow-children.tsx
const createdNodeData = {
const createdNodeData: Record<string, unknown> = {
...nodeData,
// Note: 'selected: true' should NOT be added
}
@@ -334,7 +336,7 @@ describe('Workflow Onboarding Integration Logic', () => {
const nodeData = { type: BlockEnum.TriggerWebhook, title: 'Webhook Trigger' }
const toolConfig = { webhook_url: 'https://example.com/webhook' }
const createdNodeData = {
const createdNodeData: Record<string, unknown> = {
...nodeData,
...toolConfig,
// Note: 'selected: true' should NOT be added
@@ -352,7 +354,7 @@ describe('Workflow Onboarding Integration Logic', () => {
config: { interval: '1h' },
}
const createdNodeData = {
const createdNodeData: Record<string, unknown> = {
...nodeData,
}
@@ -495,7 +497,7 @@ describe('Workflow Onboarding Integration Logic', () => {
BlockEnum.TriggerWebhook,
BlockEnum.TriggerPlugin,
]
const hasStartNode = nodes.some(node => startNodeTypes.includes(node.data?.type))
const hasStartNode = nodes.some((node: MockNode) => startNodeTypes.includes(node.data?.type as BlockEnum))
const isEmpty = nodes.length === 0 || !hasStartNode
expect(isEmpty).toBe(true)
@@ -516,7 +518,7 @@ describe('Workflow Onboarding Integration Logic', () => {
BlockEnum.TriggerWebhook,
BlockEnum.TriggerPlugin,
]
const hasStartNode = nodes.some(node => startNodeTypes.includes(node.data.type))
const hasStartNode = nodes.some((node: MockNode) => startNodeTypes.includes(node.data.type as BlockEnum))
const isEmpty = nodes.length === 0 || !hasStartNode
expect(isEmpty).toBe(true)
@@ -536,7 +538,7 @@ describe('Workflow Onboarding Integration Logic', () => {
BlockEnum.TriggerWebhook,
BlockEnum.TriggerPlugin,
]
const hasStartNode = nodes.some(node => startNodeTypes.includes(node.data.type))
const hasStartNode = nodes.some((node: MockNode) => startNodeTypes.includes(node.data.type as BlockEnum))
const isEmpty = nodes.length === 0 || !hasStartNode
expect(isEmpty).toBe(false)
@@ -571,7 +573,7 @@ describe('Workflow Onboarding Integration Logic', () => {
})
// Simulate the check logic with hasShownOnboarding = true
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
const shouldTrigger = !store.hasShownOnboarding && !store.showOnboarding && !store.notInitialWorkflow
expect(shouldTrigger).toBe(false)
@@ -605,7 +607,7 @@ describe('Workflow Onboarding Integration Logic', () => {
})
// Simulate the check logic with notInitialWorkflow = true
const store = useWorkflowStore()
const store = useWorkflowStore() as unknown as MockWorkflowStore
const shouldTrigger = !store.hasShownOnboarding && !store.showOnboarding && !store.notInitialWorkflow
expect(shouldTrigger).toBe(false)

View File

@@ -1,4 +1,5 @@
import { getWorkflowEntryNode } from '@/app/components/workflow/utils/workflow-entry'
import type { Node } from '@/app/components/workflow/types'
// Mock the getWorkflowEntryNode function
jest.mock('@/app/components/workflow/utils/workflow-entry', () => ({
@@ -7,6 +8,9 @@ jest.mock('@/app/components/workflow/utils/workflow-entry', () => ({
const mockGetWorkflowEntryNode = getWorkflowEntryNode as jest.MockedFunction<typeof getWorkflowEntryNode>
// Mock entry node for testing (truthy value)
const mockEntryNode = { id: 'start-node', data: { type: 'start' } } as Node
describe('App Card Toggle Logic', () => {
beforeEach(() => {
jest.clearAllMocks()
@@ -39,7 +43,7 @@ describe('App Card Toggle Logic', () => {
describe('Entry Node Detection Logic', () => {
it('should disable toggle when workflow missing entry node', () => {
mockGetWorkflowEntryNode.mockReturnValue(false)
mockGetWorkflowEntryNode.mockReturnValue(undefined)
const result = calculateToggleState(
'workflow',
@@ -55,7 +59,7 @@ describe('App Card Toggle Logic', () => {
})
it('should enable toggle when workflow has entry node', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const result = calculateToggleState(
'workflow',
@@ -101,7 +105,7 @@ describe('App Card Toggle Logic', () => {
})
it('should consider published state when workflow has graph', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const result = calculateToggleState(
'workflow',
@@ -117,7 +121,7 @@ describe('App Card Toggle Logic', () => {
describe('Permissions Logic', () => {
it('should disable webapp toggle when user lacks editor permissions', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const result = calculateToggleState(
'workflow',
@@ -132,7 +136,7 @@ describe('App Card Toggle Logic', () => {
})
it('should disable api toggle when user lacks manager permissions', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const result = calculateToggleState(
'workflow',
@@ -147,7 +151,7 @@ describe('App Card Toggle Logic', () => {
})
it('should enable toggle when user has proper permissions', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const webappResult = calculateToggleState(
'workflow',
@@ -172,7 +176,7 @@ describe('App Card Toggle Logic', () => {
describe('Combined Conditions Logic', () => {
it('should handle multiple disable conditions correctly', () => {
mockGetWorkflowEntryNode.mockReturnValue(false)
mockGetWorkflowEntryNode.mockReturnValue(undefined)
const result = calculateToggleState(
'workflow',
@@ -191,7 +195,7 @@ describe('App Card Toggle Logic', () => {
})
it('should enable when all conditions are satisfied', () => {
mockGetWorkflowEntryNode.mockReturnValue(true)
mockGetWorkflowEntryNode.mockReturnValue(mockEntryNode)
const result = calculateToggleState(
'workflow',

View File

@@ -67,6 +67,10 @@ const Operation: FC<OperationProps> = ({
agent_thoughts,
} = item
const [localFeedback, setLocalFeedback] = useState(config?.supportAnnotation ? adminFeedback : feedback)
const [adminLocalFeedback, setAdminLocalFeedback] = useState(adminFeedback)
// Separate feedback types for display
const userFeedback = feedback
const content = useMemo(() => {
if (agent_thoughts?.length)
@@ -81,6 +85,10 @@ const Operation: FC<OperationProps> = ({
await onFeedback?.(id, { rating, content })
setLocalFeedback({ rating })
// Update admin feedback state separately if annotation is supported
if (config?.supportAnnotation)
setAdminLocalFeedback(rating ? { rating } : undefined)
}
const handleThumbsDown = () => {
@@ -180,18 +188,53 @@ const Operation: FC<OperationProps> = ({
)}
</div>
)}
{!isOpeningStatement && config?.supportFeedback && localFeedback?.rating && onFeedback && (
{!isOpeningStatement && config?.supportFeedback && onFeedback && (
<div className='ml-1 flex items-center gap-0.5 rounded-[10px] border-[0.5px] border-components-actionbar-border bg-components-actionbar-bg p-0.5 shadow-md backdrop-blur-sm'>
{localFeedback?.rating === 'like' && (
<ActionButton state={ActionButtonState.Active} onClick={() => handleFeedback(null)}>
<RiThumbUpLine className='h-4 w-4' />
</ActionButton>
{/* User Feedback Display */}
{userFeedback?.rating && (
<div className='flex items-center'>
<span className='mr-1 text-xs text-text-tertiary'>User</span>
{userFeedback.rating === 'like' ? (
<ActionButton state={ActionButtonState.Active} title={userFeedback.content ? `User liked this response: ${userFeedback.content}` : 'User liked this response'}>
<RiThumbUpLine className='h-3 w-3' />
</ActionButton>
) : (
<ActionButton state={ActionButtonState.Destructive} title={userFeedback.content ? `User disliked this response: ${userFeedback.content}` : 'User disliked this response'}>
<RiThumbDownLine className='h-3 w-3' />
</ActionButton>
)}
</div>
)}
{localFeedback?.rating === 'dislike' && (
<ActionButton state={ActionButtonState.Destructive} onClick={() => handleFeedback(null)}>
<RiThumbDownLine className='h-4 w-4' />
</ActionButton>
{/* Admin Feedback Controls */}
{config?.supportAnnotation && (
<div className='flex items-center'>
{userFeedback?.rating && <div className='mx-1 h-3 w-[0.5px] bg-components-actionbar-border' />}
{!adminLocalFeedback?.rating ? (
<>
<ActionButton onClick={() => handleFeedback('like')}>
<RiThumbUpLine className='h-4 w-4' />
</ActionButton>
<ActionButton onClick={handleThumbsDown}>
<RiThumbDownLine className='h-4 w-4' />
</ActionButton>
</>
) : (
<>
{adminLocalFeedback.rating === 'like' ? (
<ActionButton state={ActionButtonState.Active} onClick={() => handleFeedback(null)}>
<RiThumbUpLine className='h-4 w-4' />
</ActionButton>
) : (
<ActionButton state={ActionButtonState.Destructive} onClick={() => handleFeedback(null)}>
<RiThumbDownLine className='h-4 w-4' />
</ActionButton>
)}
</>
)}
</div>
)}
</div>
)}
</div>

View File

@@ -0,0 +1,675 @@
import React from 'react'
import { fireEvent, render, screen } from '@testing-library/react'
import Drawer from './index'
import type { IDrawerProps } from './index'
// Capture dialog onClose for testing
let capturedDialogOnClose: (() => void) | null = null
// Mock react-i18next
jest.mock('react-i18next', () => ({
useTranslation: () => ({
t: (key: string) => key,
}),
}))
// Mock @headlessui/react
jest.mock('@headlessui/react', () => ({
Dialog: ({ children, open, onClose, className, unmount }: {
children: React.ReactNode
open: boolean
onClose: () => void
className: string
unmount: boolean
}) => {
capturedDialogOnClose = onClose
if (!open)
return null
return (
<div
data-testid="dialog"
data-open={open}
data-unmount={unmount}
className={className}
role="dialog"
>
{children}
</div>
)
},
DialogBackdrop: ({ children, className, onClick }: {
children?: React.ReactNode
className: string
onClick: () => void
}) => (
<div
data-testid="dialog-backdrop"
className={className}
onClick={onClick}
>
{children}
</div>
),
DialogTitle: ({ children, as: _as, className, ...props }: {
children: React.ReactNode
as?: string
className?: string
}) => (
<div data-testid="dialog-title" className={className} {...props}>
{children}
</div>
),
}))
// Mock XMarkIcon
jest.mock('@heroicons/react/24/outline', () => ({
XMarkIcon: ({ className, onClick }: { className: string; onClick?: () => void }) => (
<svg data-testid="close-icon" className={className} onClick={onClick} />
),
}))
// Helper function to render Drawer with default props
const defaultProps: IDrawerProps = {
isOpen: true,
onClose: jest.fn(),
children: <div data-testid="drawer-content">Content</div>,
}
const renderDrawer = (props: Partial<IDrawerProps> = {}) => {
const mergedProps = { ...defaultProps, ...props }
return render(<Drawer {...mergedProps} />)
}
describe('Drawer', () => {
beforeEach(() => {
jest.clearAllMocks()
capturedDialogOnClose = null
})
// Basic rendering tests
describe('Rendering', () => {
it('should render when isOpen is true', () => {
// Arrange & Act
renderDrawer({ isOpen: true })
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
expect(screen.getByTestId('drawer-content')).toBeInTheDocument()
})
it('should not render when isOpen is false', () => {
// Arrange & Act
renderDrawer({ isOpen: false })
// Assert
expect(screen.queryByRole('dialog')).not.toBeInTheDocument()
})
it('should render children content', () => {
// Arrange
const childContent = <p data-testid="custom-child">Custom Content</p>
// Act
renderDrawer({ children: childContent })
// Assert
expect(screen.getByTestId('custom-child')).toBeInTheDocument()
expect(screen.getByText('Custom Content')).toBeInTheDocument()
})
})
// Title and description tests
describe('Title and Description', () => {
it('should render title when provided', () => {
// Arrange & Act
renderDrawer({ title: 'Test Title' })
// Assert
expect(screen.getByText('Test Title')).toBeInTheDocument()
})
it('should not render title when not provided', () => {
// Arrange & Act
renderDrawer({ title: '' })
// Assert
const titles = screen.queryAllByTestId('dialog-title')
const titleWithText = titles.find(el => el.textContent !== '')
expect(titleWithText).toBeUndefined()
})
it('should render description when provided', () => {
// Arrange & Act
renderDrawer({ description: 'Test Description' })
// Assert
expect(screen.getByText('Test Description')).toBeInTheDocument()
})
it('should not render description when not provided', () => {
// Arrange & Act
renderDrawer({ description: '' })
// Assert
expect(screen.queryByText('Test Description')).not.toBeInTheDocument()
})
it('should render both title and description together', () => {
// Arrange & Act
renderDrawer({
title: 'My Title',
description: 'My Description',
})
// Assert
expect(screen.getByText('My Title')).toBeInTheDocument()
expect(screen.getByText('My Description')).toBeInTheDocument()
})
})
// Close button tests
describe('Close Button', () => {
it('should render close icon when showClose is true', () => {
// Arrange & Act
renderDrawer({ showClose: true })
// Assert
expect(screen.getByTestId('close-icon')).toBeInTheDocument()
})
it('should not render close icon when showClose is false', () => {
// Arrange & Act
renderDrawer({ showClose: false })
// Assert
expect(screen.queryByTestId('close-icon')).not.toBeInTheDocument()
})
it('should not render close icon by default', () => {
// Arrange & Act
renderDrawer({})
// Assert
expect(screen.queryByTestId('close-icon')).not.toBeInTheDocument()
})
it('should call onClose when close icon is clicked', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ showClose: true, onClose })
// Act
fireEvent.click(screen.getByTestId('close-icon'))
// Assert
expect(onClose).toHaveBeenCalledTimes(1)
})
})
// Backdrop/Mask tests
describe('Backdrop and Mask', () => {
it('should render backdrop when noOverlay is false', () => {
// Arrange & Act
renderDrawer({ noOverlay: false })
// Assert
expect(screen.getByTestId('dialog-backdrop')).toBeInTheDocument()
})
it('should not render backdrop when noOverlay is true', () => {
// Arrange & Act
renderDrawer({ noOverlay: true })
// Assert
expect(screen.queryByTestId('dialog-backdrop')).not.toBeInTheDocument()
})
it('should apply mask background when mask is true', () => {
// Arrange & Act
renderDrawer({ mask: true })
// Assert
const backdrop = screen.getByTestId('dialog-backdrop')
expect(backdrop.className).toContain('bg-black/30')
})
it('should not apply mask background when mask is false', () => {
// Arrange & Act
renderDrawer({ mask: false })
// Assert
const backdrop = screen.getByTestId('dialog-backdrop')
expect(backdrop.className).not.toContain('bg-black/30')
})
it('should call onClose when backdrop is clicked and clickOutsideNotOpen is false', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ onClose, clickOutsideNotOpen: false })
// Act
fireEvent.click(screen.getByTestId('dialog-backdrop'))
// Assert
expect(onClose).toHaveBeenCalledTimes(1)
})
it('should not call onClose when backdrop is clicked and clickOutsideNotOpen is true', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ onClose, clickOutsideNotOpen: true })
// Act
fireEvent.click(screen.getByTestId('dialog-backdrop'))
// Assert
expect(onClose).not.toHaveBeenCalled()
})
})
// Footer tests
describe('Footer', () => {
it('should render default footer with cancel and save buttons when footer is undefined', () => {
// Arrange & Act
renderDrawer({ footer: undefined })
// Assert
expect(screen.getByText('common.operation.cancel')).toBeInTheDocument()
expect(screen.getByText('common.operation.save')).toBeInTheDocument()
})
it('should not render footer when footer is null', () => {
// Arrange & Act
renderDrawer({ footer: null })
// Assert
expect(screen.queryByText('common.operation.cancel')).not.toBeInTheDocument()
expect(screen.queryByText('common.operation.save')).not.toBeInTheDocument()
})
it('should render custom footer when provided', () => {
// Arrange
const customFooter = <div data-testid="custom-footer">Custom Footer</div>
// Act
renderDrawer({ footer: customFooter })
// Assert
expect(screen.getByTestId('custom-footer')).toBeInTheDocument()
expect(screen.queryByText('common.operation.cancel')).not.toBeInTheDocument()
})
it('should call onCancel when cancel button is clicked', () => {
// Arrange
const onCancel = jest.fn()
renderDrawer({ onCancel })
// Act
const cancelButton = screen.getByText('common.operation.cancel')
fireEvent.click(cancelButton)
// Assert
expect(onCancel).toHaveBeenCalledTimes(1)
})
it('should call onOk when save button is clicked', () => {
// Arrange
const onOk = jest.fn()
renderDrawer({ onOk })
// Act
const saveButton = screen.getByText('common.operation.save')
fireEvent.click(saveButton)
// Assert
expect(onOk).toHaveBeenCalledTimes(1)
})
it('should not throw when onCancel is not provided and cancel is clicked', () => {
// Arrange
renderDrawer({ onCancel: undefined })
// Act & Assert
expect(() => {
fireEvent.click(screen.getByText('common.operation.cancel'))
}).not.toThrow()
})
it('should not throw when onOk is not provided and save is clicked', () => {
// Arrange
renderDrawer({ onOk: undefined })
// Act & Assert
expect(() => {
fireEvent.click(screen.getByText('common.operation.save'))
}).not.toThrow()
})
})
// Custom className tests
describe('Custom ClassNames', () => {
it('should apply custom dialogClassName', () => {
// Arrange & Act
renderDrawer({ dialogClassName: 'custom-dialog-class' })
// Assert
expect(screen.getByRole('dialog').className).toContain('custom-dialog-class')
})
it('should apply custom dialogBackdropClassName', () => {
// Arrange & Act
renderDrawer({ dialogBackdropClassName: 'custom-backdrop-class' })
// Assert
expect(screen.getByTestId('dialog-backdrop').className).toContain('custom-backdrop-class')
})
it('should apply custom containerClassName', () => {
// Arrange & Act
const { container } = renderDrawer({ containerClassName: 'custom-container-class' })
// Assert
const containerDiv = container.querySelector('.custom-container-class')
expect(containerDiv).toBeInTheDocument()
})
it('should apply custom panelClassName', () => {
// Arrange & Act
const { container } = renderDrawer({ panelClassName: 'custom-panel-class' })
// Assert
const panelDiv = container.querySelector('.custom-panel-class')
expect(panelDiv).toBeInTheDocument()
})
})
// Position tests
describe('Position', () => {
it('should apply center position class when positionCenter is true', () => {
// Arrange & Act
const { container } = renderDrawer({ positionCenter: true })
// Assert
const containerDiv = container.querySelector('.\\!justify-center')
expect(containerDiv).toBeInTheDocument()
})
it('should use end position by default when positionCenter is false', () => {
// Arrange & Act
const { container } = renderDrawer({ positionCenter: false })
// Assert
const containerDiv = container.querySelector('.justify-end')
expect(containerDiv).toBeInTheDocument()
})
})
// Unmount prop tests
describe('Unmount Prop', () => {
it('should pass unmount prop to Dialog component', () => {
// Arrange & Act
renderDrawer({ unmount: true })
// Assert
expect(screen.getByTestId('dialog').getAttribute('data-unmount')).toBe('true')
})
it('should default unmount to false', () => {
// Arrange & Act
renderDrawer({})
// Assert
expect(screen.getByTestId('dialog').getAttribute('data-unmount')).toBe('false')
})
})
// Edge cases
describe('Edge Cases', () => {
it('should handle empty string title', () => {
// Arrange & Act
renderDrawer({ title: '' })
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
})
it('should handle empty string description', () => {
// Arrange & Act
renderDrawer({ description: '' })
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
})
it('should handle special characters in title', () => {
// Arrange
const specialTitle = '<script>alert("xss")</script>'
// Act
renderDrawer({ title: specialTitle })
// Assert
expect(screen.getByText(specialTitle)).toBeInTheDocument()
})
it('should handle very long title', () => {
// Arrange
const longTitle = 'A'.repeat(500)
// Act
renderDrawer({ title: longTitle })
// Assert
expect(screen.getByText(longTitle)).toBeInTheDocument()
})
it('should handle complex children with multiple elements', () => {
// Arrange
const complexChildren = (
<div data-testid="complex-children">
<h1>Heading</h1>
<p>Paragraph</p>
<input data-testid="input-element" />
<button data-testid="button-element">Button</button>
</div>
)
// Act
renderDrawer({ children: complexChildren })
// Assert
expect(screen.getByTestId('complex-children')).toBeInTheDocument()
expect(screen.getByText('Heading')).toBeInTheDocument()
expect(screen.getByText('Paragraph')).toBeInTheDocument()
expect(screen.getByTestId('input-element')).toBeInTheDocument()
expect(screen.getByTestId('button-element')).toBeInTheDocument()
})
it('should handle null children gracefully', () => {
// Arrange & Act
renderDrawer({ children: null as unknown as React.ReactNode })
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
})
it('should handle undefined footer without crashing', () => {
// Arrange & Act
renderDrawer({ footer: undefined })
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
})
it('should handle rapid open/close toggles', () => {
// Arrange
const onClose = jest.fn()
const { rerender } = render(
<Drawer {...defaultProps} isOpen={true} onClose={onClose}>
<div>Content</div>
</Drawer>,
)
// Act - Toggle multiple times
rerender(
<Drawer {...defaultProps} isOpen={false} onClose={onClose}>
<div>Content</div>
</Drawer>,
)
rerender(
<Drawer {...defaultProps} isOpen={true} onClose={onClose}>
<div>Content</div>
</Drawer>,
)
rerender(
<Drawer {...defaultProps} isOpen={false} onClose={onClose}>
<div>Content</div>
</Drawer>,
)
// Assert
expect(screen.queryByRole('dialog')).not.toBeInTheDocument()
})
})
// Combined prop scenarios
describe('Combined Prop Scenarios', () => {
it('should render with all optional props', () => {
// Arrange & Act
renderDrawer({
title: 'Full Feature Title',
description: 'Full Feature Description',
dialogClassName: 'custom-dialog',
dialogBackdropClassName: 'custom-backdrop',
containerClassName: 'custom-container',
panelClassName: 'custom-panel',
showClose: true,
mask: true,
positionCenter: true,
unmount: true,
noOverlay: false,
footer: <div data-testid="custom-full-footer">Footer</div>,
})
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
expect(screen.getByText('Full Feature Title')).toBeInTheDocument()
expect(screen.getByText('Full Feature Description')).toBeInTheDocument()
expect(screen.getByTestId('close-icon')).toBeInTheDocument()
expect(screen.getByTestId('custom-full-footer')).toBeInTheDocument()
})
it('should render minimal drawer with only required props', () => {
// Arrange
const minimalProps: IDrawerProps = {
isOpen: true,
onClose: jest.fn(),
children: <div>Minimal Content</div>,
}
// Act
render(<Drawer {...minimalProps} />)
// Assert
expect(screen.getByRole('dialog')).toBeInTheDocument()
expect(screen.getByText('Minimal Content')).toBeInTheDocument()
})
it('should handle showClose with title simultaneously', () => {
// Arrange & Act
renderDrawer({
title: 'Title with Close',
showClose: true,
})
// Assert
expect(screen.getByText('Title with Close')).toBeInTheDocument()
expect(screen.getByTestId('close-icon')).toBeInTheDocument()
})
it('should handle noOverlay with clickOutsideNotOpen', () => {
// Arrange
const onClose = jest.fn()
// Act
renderDrawer({
noOverlay: true,
clickOutsideNotOpen: true,
onClose,
})
// Assert - backdrop should not exist
expect(screen.queryByTestId('dialog-backdrop')).not.toBeInTheDocument()
})
})
// Dialog onClose callback tests (e.g., Escape key)
describe('Dialog onClose Callback', () => {
it('should call onClose when Dialog triggers close and clickOutsideNotOpen is false', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ onClose, clickOutsideNotOpen: false })
// Act - Simulate Dialog's onClose (e.g., pressing Escape)
capturedDialogOnClose?.()
// Assert
expect(onClose).toHaveBeenCalledTimes(1)
})
it('should not call onClose when Dialog triggers close and clickOutsideNotOpen is true', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ onClose, clickOutsideNotOpen: true })
// Act - Simulate Dialog's onClose (e.g., pressing Escape)
capturedDialogOnClose?.()
// Assert
expect(onClose).not.toHaveBeenCalled()
})
it('should call onClose by default when Dialog triggers close', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ onClose })
// Act
capturedDialogOnClose?.()
// Assert
expect(onClose).toHaveBeenCalledTimes(1)
})
})
// Event handler interaction tests
describe('Event Handler Interactions', () => {
it('should handle multiple consecutive close icon clicks', () => {
// Arrange
const onClose = jest.fn()
renderDrawer({ showClose: true, onClose })
// Act
const closeIcon = screen.getByTestId('close-icon')
fireEvent.click(closeIcon)
fireEvent.click(closeIcon)
fireEvent.click(closeIcon)
// Assert
expect(onClose).toHaveBeenCalledTimes(3)
})
it('should handle onCancel and onOk being the same function', () => {
// Arrange
const handler = jest.fn()
renderDrawer({ onCancel: handler, onOk: handler })
// Act
fireEvent.click(screen.getByText('common.operation.cancel'))
fireEvent.click(screen.getByText('common.operation.save'))
// Assert
expect(handler).toHaveBeenCalledTimes(2)
})
})
})

View File

@@ -1,7 +1,7 @@
'use client'
import { useCallback, useEffect, useMemo } from 'react'
import { useNodes } from 'reactflow'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
import { useNodesInteractions } from '@/app/components/workflow/hooks/use-nodes-interactions'
import type { CommonNodeType } from '@/app/components/workflow/types'
import { ragPipelineNodesAction } from '@/app/components/goto-anything/actions/rag-pipeline-nodes'

View File

@@ -3,7 +3,7 @@ import {
useCallback,
useMemo,
} from 'react'
import { useEdges, useNodes } from 'reactflow'
import { useEdges } from 'reactflow'
import { RiApps2AddLine } from '@remixicon/react'
import { useTranslation } from 'react-i18next'
import {
@@ -22,7 +22,6 @@ import AppPublisher from '@/app/components/app/app-publisher'
import { useFeatures } from '@/app/components/base/features/hooks'
import type {
CommonEdgeType,
CommonNodeType,
Node,
} from '@/app/components/workflow/types'
import {
@@ -42,6 +41,7 @@ import { useIsChatMode } from '@/app/components/workflow/hooks'
import type { StartNodeType } from '@/app/components/workflow/nodes/start/types'
import { useProviderContext } from '@/context/provider-context'
import { Plan } from '@/app/components/billing/type'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
const FeaturesTrigger = () => {
const { t } = useTranslation()
@@ -58,7 +58,7 @@ const FeaturesTrigger = () => {
const toolPublished = useStore(s => s.toolPublished)
const lastPublishedHasUserInput = useStore(s => s.lastPublishedHasUserInput)
const nodes = useNodes<CommonNodeType>()
const nodes = useNodes()
const hasWorkflowNodes = nodes.length > 0
const startNode = nodes.find(node => node.data.type === BlockEnum.Start)
const startVariables = (startNode as Node<StartNodeType>)?.data?.variables

View File

@@ -2,10 +2,12 @@ import React, { useCallback } from 'react'
import { act, render } from '@testing-library/react'
import { useTriggerStatusStore } from '../store/trigger-status'
import { isTriggerNode } from '../types'
import type { BlockEnum } from '../types'
import type { EntryNodeStatus } from '../store/trigger-status'
// Mock the isTriggerNode function
// Mock the isTriggerNode function while preserving BlockEnum
jest.mock('../types', () => ({
...jest.requireActual('../types'),
isTriggerNode: jest.fn(),
}))
@@ -17,7 +19,7 @@ const TestTriggerNode: React.FC<{
nodeType: string
}> = ({ nodeId, nodeType }) => {
const triggerStatus = useTriggerStatusStore(state =>
mockIsTriggerNode(nodeType) ? (state.triggerStatuses[nodeId] || 'disabled') : 'enabled',
mockIsTriggerNode(nodeType as BlockEnum) ? (state.triggerStatuses[nodeId] || 'disabled') : 'enabled',
)
return (
@@ -271,7 +273,7 @@ describe('Trigger Status Synchronization Integration', () => {
nodeType: string
}> = ({ nodeId, nodeType }) => {
const triggerStatusSelector = useCallback((state: any) =>
mockIsTriggerNode(nodeType) ? (state.triggerStatuses[nodeId] || 'disabled') : 'enabled',
mockIsTriggerNode(nodeType as BlockEnum) ? (state.triggerStatuses[nodeId] || 'disabled') : 'enabled',
[nodeId, nodeType],
)
const triggerStatus = useTriggerStatusStore(triggerStatusSelector)
@@ -313,7 +315,7 @@ describe('Trigger Status Synchronization Integration', () => {
const TestComponent: React.FC<{ nodeType: string }> = ({ nodeType }) => {
const triggerStatusSelector = useCallback((state: any) =>
mockIsTriggerNode(nodeType) ? (state.triggerStatuses['test-node'] || 'disabled') : 'enabled',
mockIsTriggerNode(nodeType as BlockEnum) ? (state.triggerStatuses['test-node'] || 'disabled') : 'enabled',
['test-node', nodeType], // Dependencies should match implementation
)
const status = useTriggerStatusStore(triggerStatusSelector)

View File

@@ -9,7 +9,7 @@ import {
useState,
} from 'react'
import { useTranslation } from 'react-i18next'
import { useNodes } from 'reactflow'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
import type {
OffsetOptions,
Placement,

View File

@@ -4,7 +4,7 @@ import {
useEffect,
useMemo,
} from 'react'
import { useNodes } from 'reactflow'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
import { useTranslation } from 'react-i18next'
import BlockIcon from '../block-icon'
import type { BlockEnum, CommonNodeType } from '../types'

View File

@@ -5,7 +5,6 @@ import {
import { useTranslation } from 'react-i18next'
import {
useEdges,
useNodes,
} from 'reactflow'
import {
RiCloseLine,
@@ -19,7 +18,6 @@ import {
import type { ChecklistItem } from '../hooks/use-checklist'
import type {
CommonEdgeType,
CommonNodeType,
} from '../types'
import cn from '@/utils/classnames'
import {
@@ -32,7 +30,10 @@ import {
} from '@/app/components/base/icons/src/vender/line/general'
import { Warning } from '@/app/components/base/icons/src/vender/line/alertsAndFeedback'
import { IconR } from '@/app/components/base/icons/src/vender/line/arrows'
import type { BlockEnum } from '../types'
import type {
BlockEnum,
} from '../types'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
type WorkflowChecklistProps = {
disabled: boolean
@@ -42,8 +43,8 @@ const WorkflowChecklist = ({
}: WorkflowChecklistProps) => {
const { t } = useTranslation()
const [open, setOpen] = useState(false)
const nodes = useNodes<CommonNodeType>()
const edges = useEdges<CommonEdgeType>()
const nodes = useNodes()
const needWarningNodes = useChecklist(nodes, edges)
const { handleNodeSelect } = useNodesInteractions()

View File

@@ -4,7 +4,7 @@ import {
useRef,
} from 'react'
import { useTranslation } from 'react-i18next'
import { useEdges, useNodes, useStoreApi } from 'reactflow'
import { useEdges, useStoreApi } from 'reactflow'
import type {
CommonEdgeType,
CommonNodeType,
@@ -56,6 +56,7 @@ import {
} from '@/service/use-tools'
import { useStore as useAppStore } from '@/app/components/app/store'
import { AppModeEnum } from '@/types/app'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
export type ChecklistItem = {
id: string
@@ -407,7 +408,7 @@ export const useChecklistBeforePublish = () => {
export const useWorkflowRunValidation = () => {
const { t } = useTranslation()
const nodes = useNodes<CommonNodeType>()
const nodes = useNodes()
const edges = useEdges<CommonEdgeType>()
const needWarningNodes = useChecklist(nodes, edges)
const { notify } = useToastContext()

View File

@@ -1,5 +1,5 @@
import { useMemo } from 'react'
import { useNodes } from 'reactflow'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
import { useTranslation } from 'react-i18next'
import { BlockEnum, type CommonNodeType } from '../types'
import { getWorkflowEntryNode } from '../utils/workflow-entry'

View File

@@ -18,6 +18,7 @@ import ReactFlow, {
ReactFlowProvider,
SelectionMode,
useEdgesState,
useNodes,
useNodesState,
useOnViewportChange,
useReactFlow,
@@ -97,6 +98,7 @@ import {
useAllMCPTools,
useAllWorkflowTools,
} from '@/service/use-tools'
import { isEqual } from 'lodash-es'
const Confirm = dynamic(() => import('@/app/components/base/confirm'), {
ssr: false,
@@ -167,7 +169,24 @@ export const Workflow: FC<WorkflowProps> = memo(({
setShowConfirm,
setControlPromptEditorRerenderKey,
setSyncWorkflowDraftHash,
setNodes: setNodesInStore,
} = workflowStore.getState()
const currentNodes = useNodes()
const setNodesOnlyChangeWithData = useCallback((nodes: Node[]) => {
const nodesData = nodes.map(node => ({
id: node.id,
data: node.data,
}))
const oldData = workflowStore.getState().nodes.map(node => ({
id: node.id,
data: node.data,
}))
if (!isEqual(oldData, nodesData))
setNodesInStore(nodes)
}, [setNodesInStore, workflowStore])
useEffect(() => {
setNodesOnlyChangeWithData(currentNodes as Node[])
}, [currentNodes, setNodesOnlyChangeWithData])
const {
handleSyncWorkflowDraft,
syncWorkflowDraftWhenPageClose,

View File

@@ -4,7 +4,7 @@ import {
useRef,
} from 'react'
import { useClickAway } from 'ahooks'
import { useNodes } from 'reactflow'
import useNodes from '@/app/components/workflow/store/workflow/use-nodes'
import PanelOperatorPopup from './nodes/_base/components/panel-operator/panel-operator-popup'
import type { Node } from './types'
import { useStore } from './store'
@@ -16,7 +16,6 @@ const NodeContextmenu = () => {
const { handleNodeContextmenuCancel, handlePaneContextmenuCancel } = usePanelInteractions()
const nodeMenu = useStore(s => s.nodeMenu)
const currentNode = nodes.find(node => node.id === nodeMenu?.nodeId) as Node
useEffect(() => {
if (nodeMenu)
handlePaneContextmenuCancel()

View File

@@ -1,7 +1,5 @@
import { useCallback } from 'react'
import {
useNodes,
} from 'reactflow'
import { useNodes } from 'reactflow'
import { uniqBy } from 'lodash-es'
import {
useIsChatMode,

View File

@@ -46,7 +46,7 @@ const ConditionValue = ({
if (Array.isArray(value)) // transfer method
return value[0]
if(value === true || value === false)
if (value === true || value === false)
return value ? 'True' : 'False'
return value.replace(/{{#([^#]*)#}}/g, (a, b) => {

View File

@@ -1,6 +1,7 @@
import { isValidCronExpression, parseCronExpression } from './cron-parser'
import { getNextExecutionTime, getNextExecutionTimes } from './execution-time-calculator'
import type { ScheduleTriggerNodeType } from '../types'
import { BlockEnum } from '../../../types'
// Comprehensive integration tests for cron-parser and execution-time-calculator compatibility
describe('cron-parser + execution-time-calculator integration', () => {
@@ -14,13 +15,13 @@ describe('cron-parser + execution-time-calculator integration', () => {
})
const createCronData = (overrides: Partial<ScheduleTriggerNodeType> = {}): ScheduleTriggerNodeType => ({
id: 'test-cron',
type: 'schedule-trigger',
type: BlockEnum.TriggerSchedule,
title: 'test-schedule',
mode: 'cron',
frequency: 'daily',
timezone: 'UTC',
...overrides,
})
} as ScheduleTriggerNodeType)
describe('backward compatibility validation', () => {
it('maintains exact behavior for legacy cron expressions', () => {

View File

@@ -1,8 +1,9 @@
import { useCallback } from 'react'
import {
useNodes,
useStoreApi,
} from 'reactflow'
import { useNodes } from 'reactflow'
import { uniqBy } from 'lodash-es'
import { produce } from 'immer'
import {

View File

@@ -0,0 +1,7 @@
import {
useStore,
} from '@/app/components/workflow/store'
const useWorkflowNodes = () => useStore(s => s.nodes)
export default useWorkflowNodes

View File

@@ -23,6 +23,8 @@ export type WorkflowDraftSliceShape = {
setIsSyncingWorkflowDraft: (isSyncingWorkflowDraft: boolean) => void
isWorkflowDataLoaded: boolean
setIsWorkflowDataLoaded: (loaded: boolean) => void
nodes: Node[]
setNodes: (nodes: Node[]) => void
}
export const createWorkflowDraftSlice: StateCreator<WorkflowDraftSliceShape> = set => ({
@@ -37,4 +39,6 @@ export const createWorkflowDraftSlice: StateCreator<WorkflowDraftSliceShape> = s
setIsSyncingWorkflowDraft: isSyncingWorkflowDraft => set(() => ({ isSyncingWorkflowDraft })),
isWorkflowDataLoaded: false,
setIsWorkflowDataLoaded: loaded => set(() => ({ isWorkflowDataLoaded: loaded })),
nodes: [],
setNodes: nodes => set(() => ({ nodes })),
})

View File

@@ -43,13 +43,18 @@ jest.mock('@/app/components/billing/trigger-events-limit-modal', () => ({
}))
type DefaultPlanShape = typeof defaultPlan
type ResetShape = {
apiRateLimit: number | null
triggerEvents: number | null
}
type PlanShape = Omit<DefaultPlanShape, 'reset'> & { reset: ResetShape }
type PlanOverrides = Partial<Omit<DefaultPlanShape, 'usage' | 'total' | 'reset'>> & {
usage?: Partial<DefaultPlanShape['usage']>
total?: Partial<DefaultPlanShape['total']>
reset?: Partial<DefaultPlanShape['reset']>
reset?: Partial<ResetShape>
}
const createPlan = (overrides: PlanOverrides = {}): DefaultPlanShape => ({
const createPlan = (overrides: PlanOverrides = {}): PlanShape => ({
...defaultPlan,
...overrides,
usage: {

View File

@@ -1,6 +1,6 @@
{
"name": "dify-web",
"version": "1.10.0",
"version": "1.10.1",
"private": true,
"packageManager": "pnpm@10.22.0+sha512.bf049efe995b28f527fd2b41ae0474ce29186f7edcb3bf545087bd61fbbebb2bf75362d1307fda09c2d288e1e499787ac12d4fcb617a974718a6051f2eee741c",
"engines": {
@@ -37,6 +37,7 @@
"check:i18n-types": "node ./i18n-config/check-i18n-sync.js",
"test": "jest",
"test:watch": "jest --watch",
"analyze-component": "node testing/analyze-component.js",
"storybook": "storybook dev -p 6006",
"build-storybook": "storybook build",
"preinstall": "npx only-allow pnpm",
@@ -101,15 +102,15 @@
"mime": "^4.1.0",
"mitt": "^3.0.1",
"negotiator": "^1.0.0",
"next": "~15.5.6",
"next": "~15.5.7",
"next-pwa": "^5.6.0",
"next-themes": "^0.4.6",
"pinyin-pro": "^3.27.0",
"qrcode.react": "^4.2.0",
"qs": "^6.14.0",
"react": "19.1.1",
"react": "19.2.1",
"react-18-input-autosize": "^3.0.0",
"react-dom": "19.1.1",
"react-dom": "19.2.1",
"react-easy-crop": "^5.5.3",
"react-hook-form": "^7.65.0",
"react-hotkeys-hook": "^4.6.2",
@@ -150,9 +151,9 @@
"@happy-dom/jest-environment": "^20.0.8",
"@mdx-js/loader": "^3.1.1",
"@mdx-js/react": "^3.1.1",
"@next/bundle-analyzer": "15.5.4",
"@next/eslint-plugin-next": "15.5.4",
"@next/mdx": "15.5.4",
"@next/bundle-analyzer": "15.5.7",
"@next/eslint-plugin-next": "15.5.7",
"@next/mdx": "15.5.7",
"@rgrove/parse-xml": "^4.2.0",
"@storybook/addon-docs": "9.1.13",
"@storybook/addon-links": "9.1.13",
@@ -170,8 +171,8 @@
"@types/negotiator": "^0.6.4",
"@types/node": "18.15.0",
"@types/qs": "^6.14.0",
"@types/react": "~19.1.17",
"@types/react-dom": "~19.1.11",
"@types/react": "~19.2.7",
"@types/react-dom": "~19.2.3",
"@types/react-slider": "^1.3.6",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/react-window": "^1.8.8",
@@ -206,8 +207,8 @@
"uglify-js": "^3.19.3"
},
"resolutions": {
"@types/react": "~19.1.17",
"@types/react-dom": "~19.1.11",
"@types/react": "~19.2.7",
"@types/react-dom": "~19.2.3",
"string-width": "~4.2.3",
"@eslint/plugin-kit": "~0.3",
"canvas": "^3.2.0",

1059
web/pnpm-lock.yaml generated

File diff suppressed because it is too large

1057
web/testing/analyze-component.js Executable file

File diff suppressed because it is too large

432
web/testing/testing.md Normal file
View File

@@ -0,0 +1,432 @@
# Frontend Testing Guide
This document is the complete testing specification for the Dify frontend project.
Goal: Readable, change-friendly, reusable, and debuggable tests.
When writing, refactoring, or fixing tests, follow these rules by default.
## Tech Stack
- **Framework**: Next.js 15 + React 19 + TypeScript
- **Testing Tools**: Jest 29.7 + React Testing Library 16.0
- **Test Environment**: @happy-dom/jest-environment
- **File Naming**: `ComponentName.spec.tsx` (same directory as component)
## Running Tests
```bash
# Run all tests
pnpm test
# Watch mode
pnpm test -- --watch
# Generate coverage report
pnpm test -- --coverage
# Run specific file
pnpm test -- path/to/file.spec.tsx
```
## Project Test Setup
- **Configuration**: `jest.config.ts` loads the Testing Library presets, sets the `@happy-dom/jest-environment`, and respects our path aliases (`@/...`). Check this file before adding new transformers or module name mappers.
- **Global setup**: `jest.setup.ts` already imports `@testing-library/jest-dom` and runs `cleanup()` after every test. Add any environment-level mocks (for example `ResizeObserver`, `matchMedia`, `IntersectionObserver`, `TextEncoder`, `crypto`) here so they are shared consistently; see the sketch after this list.
- **Manual mocks**: Place reusable mocks inside `web/__mocks__/`. Use `jest.mock('module-name')` to point to these helpers rather than redefining mocks in every spec.
- **Script utilities**: `web/testing/analyze-component.js` analyzes component complexity and generates test prompts for AI assistants. Commands:
- `pnpm analyze-component <path>` - Analyze and generate test prompt
- `pnpm analyze-component <path> --json` - Output analysis as JSON
- `pnpm analyze-component <path> --review` - Generate test review prompt
- `pnpm analyze-component --help` - Show help
- **Integration suites**: Files in `web/__tests__/` exercise cross-component flows. Prefer adding new end-to-end style specs there rather than mixing them into component directories.
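As a minimal sketch (whether a given stub is needed depends on what `@happy-dom/jest-environment` already provides), an environment-level mock added to `jest.setup.ts` could look like this:

```typescript
// jest.setup.ts (sketch) - shared, environment-level stubs.
// Provide ResizeObserver if the test environment does not implement it,
// so components that observe element size can render in specs.
class ResizeObserverStub {
  observe() {}
  unobserve() {}
  disconnect() {}
}
if (!('ResizeObserver' in globalThis))
  (globalThis as Record<string, unknown>).ResizeObserver = ResizeObserverStub

// Stub matchMedia for components that read media queries during render.
Object.defineProperty(window, 'matchMedia', {
  writable: true,
  value: (query: string) => ({
    matches: false,
    media: query,
    onchange: null,
    addEventListener: () => {},
    removeEventListener: () => {},
    dispatchEvent: () => false,
  }),
})
```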
## Test Authoring Principles
- **Single behavior per test**: Each test verifies one user-observable behavior.
- **Black-box first**: Assert external behavior and observable outputs, avoid internal implementation details.
- **Semantic naming**: Use `should <behavior> when <condition>` and group related cases with `describe(<subject or scenario>)`.
- **AAA / Given-When-Then**: Separate Arrange, Act, and Assert clearly with code blocks or comments.
- **Minimal but sufficient assertions**: Keep only the expectations that express the essence of the behavior.
- **Reusable test data**: Prefer test data builders or factories over hard-coded masses of data.
- **De-flake**: Control time, randomness, network, concurrency, and ordering.
- **Fast & stable**: Keep unit tests running in milliseconds; reserve integration tests for cross-module behavior with isolation.
- **Structured describe blocks**: Organize tests with `describe` sections and add a brief comment before each block to explain the scenario it covers so readers can quickly understand the scope.
## Component Complexity Guidelines
Use `pnpm analyze-component <path>` to analyze component complexity and adopt different testing strategies based on the results.
### 🔴 Very Complex Components (Complexity > 50)
- **Refactor first**: Break component into smaller pieces
- **Integration tests**: Test complex workflows end-to-end
- **Data-driven tests**: Use `test.each()` for multiple scenarios
- **Performance benchmarks**: Add performance tests for critical paths
### ⚠️ Complex Components (Complexity 30-50)
- **Multiple describe blocks**: Group related test cases
- **Integration scenarios**: Test feature combinations
- **Organized structure**: Keep tests maintainable
### 📏 Large Components (500+ lines)
- **Consider refactoring**: Split into smaller components if possible
- **Section testing**: Test major sections separately
- **Helper functions**: Reduce test complexity with utilities
## Basic Guidelines
- ✅ AAA pattern: Arrange (setup) → Act (execute) → Assert (verify)
- ✅ Descriptive test names: `"should [behavior] when [condition]"`
- ✅ TypeScript: No `any` types
- **Cleanup**: `jest.clearAllMocks()` should be in `beforeEach()`, not `afterEach()`. This ensures mock call history is reset before each test, preventing test pollution when using assertions like `toHaveBeenCalledWith()` or `toHaveBeenCalledTimes()`.
**⚠️ Mock components must accurately reflect actual component behavior**, especially conditional rendering based on props or state.
**Rules**:
1. **Match actual conditional rendering**: If the real component returns `null` or doesn't render under certain conditions, the mock must do the same. Always check the actual component implementation before creating mocks.
1. **Use shared state variables when needed**: When mocking components that depend on shared context or state (e.g., `PortalToFollowElem` with `PortalToFollowElemContent`), use module-level variables to track state and reset them in `beforeEach`.
1. **Always reset shared mock state in beforeEach**: Module-level variables used in mocks must be reset in `beforeEach` to ensure test isolation, even if you set default values elsewhere.
1. **Use fake timers only when needed**: Only use `jest.useFakeTimers()` if:
- Testing components that use real `setTimeout`/`setInterval` (not mocked)
- Testing time-based behavior (delays, animations)
- If you mock all time-dependent functions, fake timers are unnecessary
1. **Prefer importing over mocking project components**: When tests need other components from the project, import them directly instead of mocking them. Only mock external dependencies, APIs, or complex context providers that are difficult to set up.
**Why this matters**: Mocks that don't match actual behavior can lead to:
- **False positives**: Tests pass but code would fail in production
- **Missed bugs**: Tests don't catch real conditional rendering issues
- **Maintenance burden**: Tests become misleading documentation
- **State leakage**: Tests interfere with each other when shared state isn't reset
## Testing Components with Dedicated Dependencies
When a component has dedicated dependencies (custom hooks, managers, utilities) that are **only used by that component**, use the following strategy to balance integration testing and unit testing; a brief sketch follows the checklist below.
### Summary Checklist
When testing components with dedicated dependencies:
- **Identify** which dependencies are dedicated vs. reusable
- **Write integration tests** for component + dedicated dependencies together
- **Write unit tests** for complex edge cases in dependencies
- **Avoid mocking** dedicated dependencies in integration tests
- **Use fake timers** if timing logic is involved
- **Test user behavior**, not implementation details
- **Document** the testing strategy in code comments
- **Ensure** integration tests cover 100% of user-facing scenarios
- **Reserve** unit tests for edge cases not practical in integration tests
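As an illustration only (the `Counter` component and its dedicated `useCounter` hook are hypothetical names), an integration test exercises the component and its dedicated hook together without mocking the hook:

```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import Counter from './index' // hypothetical component whose useCounter hook is used nowhere else

describe('Counter (integration with its dedicated hook)', () => {
  it('should increment the displayed value when the button is clicked', () => {
    // The dedicated hook is intentionally NOT mocked: the component and
    // the hook are verified together through user-observable behavior.
    render(<Counter initialValue={1} />)

    fireEvent.click(screen.getByRole('button', { name: /increment/i }))

    expect(screen.getByText('2')).toBeInTheDocument()
  })
})
```

Unit tests for the hook itself are then reserved for edge cases (overflow, reset semantics) that are awkward to reach through the UI.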
## Test Scenarios
Apply the following test scenarios based on component features:
### 1. Rendering Tests (REQUIRED - All Components)
**Key Points**:
- Verify component renders properly
- Check key elements exist
- Use semantic queries (getByRole, getByLabelText)
### 2. Props Testing (REQUIRED - All Components)
Exercise the prop combinations that change observable behavior. Show how required props gate functionality, how optional props fall back to their defaults, and how invalid combinations surface through user-facing safeguards. Let TypeScript catch structural issues; keep runtime assertions focused on what the component renders or triggers.
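A minimal sketch, assuming a hypothetical `Tag` component with a required `label` prop and an optional `closable` prop that defaults to `false`:

```typescript
import { render, screen } from '@testing-library/react'
import Tag from './index' // hypothetical component under test

describe('Tag props', () => {
  it('should render the required label', () => {
    render(<Tag label="Draft" />)
    expect(screen.getByText('Draft')).toBeInTheDocument()
  })

  it('should hide the close button by default', () => {
    render(<Tag label="Draft" />)
    // closable defaults to false, so no close control is rendered
    expect(screen.queryByRole('button', { name: /close/i })).not.toBeInTheDocument()
  })

  it('should show the close button when closable is true', () => {
    render(<Tag label="Draft" closable />)
    expect(screen.getByRole('button', { name: /close/i })).toBeInTheDocument()
  })
})
```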
### 3. State Management
Treat component state as part of the public behavior: confirm the initial render in context, execute the interactions or prop updates that move the state machine, and assert the resulting UI or side effects. Use `waitFor()`/async queries whenever transitions resolve asynchronously, and only check cleanup paths when they change what a user sees or experiences (duplicate events, lingering timers, etc.).
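A sketch of the pattern, assuming a hypothetical `Accordion` component whose panel content appears asynchronously after a toggle:

```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import Accordion from './index' // hypothetical component under test

describe('Accordion state', () => {
  it('should start collapsed and reveal content after toggling', async () => {
    // Arrange: confirm the initial render in context
    render(<Accordion title="Details">Hidden content</Accordion>)
    expect(screen.queryByText('Hidden content')).not.toBeInTheDocument()

    // Act: the interaction that moves the state machine
    fireEvent.click(screen.getByRole('button', { name: 'Details' }))

    // Assert: wait for the asynchronous transition to settle
    await waitFor(() => {
      expect(screen.getByText('Hidden content')).toBeInTheDocument()
    })
  })
})
```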
#### Context, Providers, and Stores
- ✅ Wrap components with the actual provider from `web/context` or `app/components/.../context` whenever practical.
- ✅ When creating lightweight provider stubs, mirror the real default values and surface helper builders (for example `createMockWorkflowContext`).
- ✅ Reset shared stores (React context, Zustand, TanStack Query cache) between tests to avoid leaking state. Prefer helper factory functions over module-level singletons in specs.
- ✅ For hooks that read from context, use `renderHook` with a custom wrapper that supplies required providers, as in the sketch after this list.
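For example (the `ThemeProvider`/`useTheme` module is an assumed stand-in for a real context from the project):

```typescript
import type { ReactNode } from 'react'
import { renderHook } from '@testing-library/react'
import { ThemeProvider, useTheme } from './theme-context' // hypothetical context module

describe('useTheme', () => {
  it('should expose the value supplied by the provider', () => {
    // The wrapper supplies the provider the hook depends on
    const wrapper = ({ children }: { children: ReactNode }) => (
      <ThemeProvider value="dark">{children}</ThemeProvider>
    )

    const { result } = renderHook(() => useTheme(), { wrapper })

    expect(result.current).toBe('dark')
  })
})
```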
### 4. Performance Optimization
Cover memoized callbacks or values only when they influence observable behavior—memoized children, subscription updates, expensive computations. Trigger realistic re-renders and assert the outcomes (avoided rerenders, reused results) instead of inspecting hook internals.
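A sketch of the idea, using a hypothetical memoized child and a module-level render counter to assert that a parent-only state change does not rerender it:

```typescript
import { memo, useState } from 'react'
import { render, screen, fireEvent } from '@testing-library/react'

let childRenderCount = 0

// Hypothetical memoized child: it should only rerender when its props change.
const ExpensiveChild = memo(({ label }: { label: string }) => {
  childRenderCount++
  return <div>{label}</div>
})

const Parent = () => {
  const [count, setCount] = useState(0)
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>parent renders: {count}</button>
      <ExpensiveChild label="static" />
    </div>
  )
}

describe('memoized child', () => {
  it('should not rerender when only the parent state changes', () => {
    childRenderCount = 0
    render(<Parent />)

    fireEvent.click(screen.getByRole('button'))

    expect(childRenderCount).toBe(1) // initial render only
  })
})
```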
### 5. Event Handlers
Simulate the interactions that matter to users—primary clicks, change events, submits, and relevant keyboard shortcuts—and confirm the resulting behavior. When handlers prevent defaults or rely on bubbling, cover the scenarios where that choice affects the UI or downstream flows.
### 6. API Calls and Async Operations
**Must Test**:
- ✅ Mock all API calls using `jest.mock`
- ✅ Test retry logic (if applicable)
- ✅ Verify error handling and user feedback
- ✅ Use `waitFor()` for async operations
- ✅ For `@tanstack/react-query`, instantiate a fresh `QueryClient` per spec and wrap with `QueryClientProvider`
- ✅ Clear timers, intervals, and pending promises between tests when using fake timers
**Guidelines**:
- Prefer spying on `global.fetch`/`axios`/`ky` and returning deterministic responses over reaching out to the network.
- Use MSW (`msw` is already installed) when you need declarative request handlers across multiple specs.
- Keep async assertions inside `await waitFor(...)` blocks or the async `findBy*` queries to avoid race conditions; a sketch follows this list.
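A hedged sketch (the `UserList` component, its endpoint, and its error copy are hypothetical), spying on `global.fetch` and asserting through async queries:

```typescript
import { render, screen, waitFor } from '@testing-library/react'
import UserList from './index' // hypothetical component that fetches users on mount

describe('UserList API calls', () => {
  afterEach(() => {
    jest.restoreAllMocks()
  })

  it('should render users returned by the API', async () => {
    // Deterministic response instead of a real network call
    jest.spyOn(global, 'fetch').mockResolvedValue({
      ok: true,
      json: async () => [{ id: '1', name: 'Ada' }],
    } as unknown as Response)

    render(<UserList />)

    // findBy* queries wait for the async update
    expect(await screen.findByText('Ada')).toBeInTheDocument()
  })

  it('should surface an error state when the request fails', async () => {
    jest.spyOn(global, 'fetch').mockRejectedValue(new Error('network down'))

    render(<UserList />)

    await waitFor(() => {
      expect(screen.getByText(/something went wrong/i)).toBeInTheDocument()
    })
  })
})
```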
### 7. Next.js Routing
Mock the specific Next.js navigation hooks your component consumes (`useRouter`, `usePathname`, `useSearchParams`) and drive realistic routing flows—query parameters, redirects, guarded routes, URL updates—while asserting the rendered outcome or navigation side effects.
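A minimal sketch, assuming a hypothetical `SearchPage` that reads `useRouter` and `useSearchParams` from `next/navigation`:

```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import SearchPage from './page' // hypothetical component under test

const mockPush = jest.fn()

jest.mock('next/navigation', () => ({
  useRouter: () => ({ push: mockPush }),
  usePathname: () => '/search',
  useSearchParams: () => new URLSearchParams('q=dify'),
}))

describe('SearchPage routing', () => {
  beforeEach(() => {
    jest.clearAllMocks()
  })

  it('should prefill the query from the URL and navigate on submit', () => {
    render(<SearchPage />)

    // Query parameter from the mocked search params drives the initial UI
    expect(screen.getByDisplayValue('dify')).toBeInTheDocument()

    fireEvent.click(screen.getByRole('button', { name: /search/i }))

    // The asserted outcome is the navigation side effect
    expect(mockPush).toHaveBeenCalledWith(expect.stringContaining('/search?q='))
  })
})
```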
### 8. Edge Cases (REQUIRED - All Components)
**Must Test**:
- ✅ null/undefined/empty values
- ✅ Boundary conditions
- ✅ Error states
- ✅ Loading states
- ✅ Unexpected inputs
### 9. Test Data Builders (Anti-hardcoding)
For complex inputs/entities, use Builders with solid defaults and chainable overrides.
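A minimal builder sketch (the `Document` shape here is illustrative, not the project's real type):

```typescript
// Illustrative entity shape; swap in the real type from the project.
type Document = {
  id: string
  name: string
  wordCount: number
  enabled: boolean
}

// Builder with solid defaults and chainable overrides.
class DocumentBuilder {
  private doc: Document = {
    id: 'doc-1',
    name: 'Untitled',
    wordCount: 0,
    enabled: true,
  }

  withName(name: string) {
    this.doc.name = name
    return this
  }

  disabled() {
    this.doc.enabled = false
    return this
  }

  build(): Document {
    return { ...this.doc }
  }
}

// In a spec, only the fields relevant to the behavior under test are overridden:
const archivedDoc = new DocumentBuilder().withName('Q3 report').disabled().build()
```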
### 10. Accessibility Testing (Optional)
- Test keyboard navigation
- Verify ARIA attributes
- Test focus management
- Ensure screen reader compatibility
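A minimal sketch, assuming `@testing-library/user-event` is available and a hypothetical `ToggleButton` that exposes `aria-pressed`:
```typescript
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
// Hypothetical component used purely for illustration.
import ToggleButton from './toggle-button'
it('is keyboard reachable and exposes its pressed state', async () => {
  const user = userEvent.setup()
  render(<ToggleButton label="Mute" />)
  const button = screen.getByRole('button', { name: 'Mute' })
  // Keyboard navigation: the first Tab lands on the toggle
  await user.tab()
  expect(button).toHaveFocus()
  // ARIA state reflects the interaction
  await user.click(button)
  expect(button).toHaveAttribute('aria-pressed', 'true')
})
```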
### 11. Snapshot Testing (Use Sparingly)
Reserve snapshots for static, deterministic fragments (icons, badges, layout chrome). Keep them tight, prefer explicit assertions for behavior, and review any snapshot updates deliberately instead of accepting them wholesale.
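A minimal sketch, assuming a hypothetical static `CheckIcon` component with deterministic output:
```typescript
import { render } from '@testing-library/react'
// Hypothetical static component: deterministic markup, a reasonable snapshot candidate.
import CheckIcon from './check-icon'
it('matches the stored snapshot', () => {
  const { container } = render(<CheckIcon />)
  expect(container.firstChild).toMatchSnapshot()
})
```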
**Note**: Dify's web console targets desktop browsers. **No need for** responsive/mobile testing.
## Code Style
### Example Structure
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import Component from './index'
// Mock dependencies
jest.mock('@/service/api')
// Shared state for mocks (if needed)
let mockSharedState = false
describe('ComponentName', () => {
beforeEach(() => {
jest.clearAllMocks() // ✅ Reset mocks before each test
mockSharedState = false // ✅ Reset shared state if used in mocks
})
describe('Rendering', () => {
it('should render without crashing', () => {
// Arrange
const props = { title: 'Test' }
// Act
render(<Component {...props} />)
// Assert
expect(screen.getByText('Test')).toBeInTheDocument()
})
})
describe('User Interactions', () => {
it('should handle click events', () => {
const handleClick = jest.fn()
render(<Component onClick={handleClick} />)
fireEvent.click(screen.getByRole('button'))
expect(handleClick).toHaveBeenCalledTimes(1)
})
})
describe('Edge Cases', () => {
it('should handle null data', () => {
render(<Component data={null} />)
expect(screen.getByText(/no data/i)).toBeInTheDocument()
})
})
})
```
## Dify-Specific Components
### General
1. **i18n**: Always return the key
```typescript
jest.mock('react-i18next', () => ({
useTranslation: () => ({
t: (key: string) => key,
}),
}))
```
2. **Forms**: Test validation logic thoroughly
3. **Example - Correct mock with conditional rendering**:
```typescript
// ✅ CORRECT: Matches actual component behavior
let mockPortalOpenState = false
jest.mock('@/app/components/base/portal-to-follow-elem', () => ({
PortalToFollowElem: ({ children, open, ...props }: any) => {
mockPortalOpenState = open || false // Update shared state
return <div data-open={open}>{children}</div>
},
PortalToFollowElemContent: ({ children }: any) => {
// ✅ Matches actual: returns null when open is false
if (!mockPortalOpenState) return null
return <div>{children}</div>
},
}))
describe('Component', () => {
beforeEach(() => {
jest.clearAllMocks() // ✅ Reset mock call history
mockPortalOpenState = false // ✅ Reset shared state
})
})
```
### Workflow Components (`workflow/`)
**Must Test**:
- ⚙️ **Node configuration**: Test all node configuration options
- ✔️ **Data validation**: Verify input/output validation rules
- 🔄 **Variable passing**: Test data flow between nodes
- 🔗 **Edge connections**: Test graph structure and connections
- ❌ **Error handling**: Verify invalid configuration handling
- 🧪 **Integration**: Test complete workflow execution paths
### Dataset Components (`dataset/`)
**Must Test**:
- 📤 **File upload**: Test file upload and validation
- 📄 **File types**: Verify supported format handling
- 📃 **Pagination**: Test data loading and pagination
- 🔍 **Search & filtering**: Test query functionality
- 📊 **Data format handling**: Test various data formats
- ⚠️ **Error states**: Test upload failures and invalid data
### Configuration Components (`app/configuration`, `config/`)
**Must Test**:
- ✅ **Form validation**: Test all validation rules thoroughly
- 💾 **Save/reset functionality**: Test data persistence
- 🔒 **Required vs optional fields**: Verify field validation
- 📌 **Configuration persistence**: Test state preservation
- 💬 **Error feedback**: Verify user error messages
- 🎯 **Default values**: Test initial configuration state
## Testing Strategy Quick Reference
### Required (All Components)
- ✅ Renders without crashing
- ✅ Props (required, optional, defaults)
- ✅ Edge cases (null, undefined, empty values)
### Conditional (When Present in Component)
- 🔄 **useState** → State initialization, transitions, cleanup
- ⚡ **useEffect** → Execution, dependencies, cleanup
- 🎯 **Event Handlers** → All onClick, onChange, onSubmit, keyboard events
- 🌐 **API Calls** → Loading, success, error states
- 🔀 **Routing** → Navigation, params, query strings
- 🚀 **useCallback/useMemo** → Referential equality, dependencies
- ⚙️ **Workflow** → Node config, data flow, validation
- 📚 **Dataset** → Upload, pagination, search
- 🎛️ **Configuration** → Form validation, persistence
### Complex Components (Complexity 30+)
- Group tests in multiple `describe` blocks
- Test integration scenarios
- Consider splitting component before testing
## Coverage Goals
### ⚠️ MANDATORY: Complete Coverage in Single Generation
Aim for the following coverage targets:
- ✅ 100% function coverage (every exported function/method tested)
- ✅ 100% statement coverage (every line executed)
- ✅ >95% branch coverage (every if/else, switch case, ternary tested)
- ✅ >95% line coverage
Generate comprehensive tests covering **all** code paths and scenarios.
## Debugging Tips
### View Rendered DOM
```typescript
import { screen } from '@testing-library/react'
// Print entire DOM
screen.debug()
// Print specific element
screen.debug(screen.getByRole('button'))
```
### Finding Elements
Priority order (recommended top to bottom):
1. `getByRole` - Most recommended, follows accessibility standards
2. `getByLabelText` - Form fields
3. `getByPlaceholderText` - Only when no label
4. `getByText` - Non-interactive elements
5. `getByDisplayValue` - Current form value
6. `getByAltText` - Images
7. `getByTitle` - Last choice
8. `getByTestId` - Only as last resort
### Async Debugging
```typescript
// Wait for element to appear
await waitFor(() => {
expect(screen.getByText('Loaded')).toBeInTheDocument()
})
// Wait for element to disappear
await waitFor(() => {
expect(screen.queryByText('Loading')).not.toBeInTheDocument()
})
// Find async element
const element = await screen.findByText('Async Content')
```
## Reference Examples
Test examples in the project:
- [classnames.spec.ts](../utils/classnames.spec.ts) - Utility function tests
- [index.spec.tsx](../app/components/base/button/index.spec.tsx) - Component tests
## Resources
- [Jest Documentation](https://jestjs.io/docs/getting-started)
- [React Testing Library Documentation](https://testing-library.com/docs/react-testing-library/intro/)
- [Testing Library Best Practices](https://kentcdodds.com/blog/common-mistakes-with-react-testing-library)
- [Jest Mock Functions](https://jestjs.io/docs/mock-functions)
______________________________________________________________________
**Remember**: Writing tests is not just about coverage, but ensuring code quality and maintainability. Good tests should be clear, concise, and meaningful.

View File

@@ -6,6 +6,7 @@
"dom.iterable",
"esnext"
],
"types": ["jest", "node", "@testing-library/jest-dom"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
@@ -40,11 +41,6 @@
"app/components/develop/Prose.jsx"
],
"exclude": [
"node_modules",
"**/*.test.ts",
"**/*.test.tsx",
"**/*.spec.ts",
"**/*.spec.tsx",
"__tests__/**"
"node_modules"
]
}

View File

@@ -2,6 +2,7 @@
* Test suite for app redirection utility functions
* Tests navigation path generation based on user permissions and app modes
*/
+import { AppModeEnum } from '@/types/app'
import { getRedirection, getRedirectionPath } from './app-redirection'
describe('app-redirection', () => {
@@ -12,44 +13,44 @@ describe('app-redirection', () => {
*/
describe('getRedirectionPath', () => {
test('returns overview path when user is not editor', () => {
-const app = { id: 'app-123', mode: 'chat' as const }
+const app = { id: 'app-123', mode: AppModeEnum.CHAT }
const result = getRedirectionPath(false, app)
expect(result).toBe('/app/app-123/overview')
})
test('returns workflow path for workflow mode when user is editor', () => {
-const app = { id: 'app-123', mode: 'workflow' as const }
+const app = { id: 'app-123', mode: AppModeEnum.WORKFLOW }
const result = getRedirectionPath(true, app)
expect(result).toBe('/app/app-123/workflow')
})
test('returns workflow path for advanced-chat mode when user is editor', () => {
-const app = { id: 'app-123', mode: 'advanced-chat' as const }
+const app = { id: 'app-123', mode: AppModeEnum.ADVANCED_CHAT }
const result = getRedirectionPath(true, app)
expect(result).toBe('/app/app-123/workflow')
})
test('returns configuration path for chat mode when user is editor', () => {
-const app = { id: 'app-123', mode: 'chat' as const }
+const app = { id: 'app-123', mode: AppModeEnum.CHAT }
const result = getRedirectionPath(true, app)
expect(result).toBe('/app/app-123/configuration')
})
test('returns configuration path for completion mode when user is editor', () => {
-const app = { id: 'app-123', mode: 'completion' as const }
+const app = { id: 'app-123', mode: AppModeEnum.COMPLETION }
const result = getRedirectionPath(true, app)
expect(result).toBe('/app/app-123/configuration')
})
test('returns configuration path for agent-chat mode when user is editor', () => {
-const app = { id: 'app-456', mode: 'agent-chat' as const }
+const app = { id: 'app-456', mode: AppModeEnum.AGENT_CHAT }
const result = getRedirectionPath(true, app)
expect(result).toBe('/app/app-456/configuration')
})
test('handles different app IDs', () => {
-const app1 = { id: 'abc-123', mode: 'chat' as const }
-const app2 = { id: 'xyz-789', mode: 'workflow' as const }
+const app1 = { id: 'abc-123', mode: AppModeEnum.CHAT }
+const app2 = { id: 'xyz-789', mode: AppModeEnum.WORKFLOW }
expect(getRedirectionPath(false, app1)).toBe('/app/abc-123/overview')
expect(getRedirectionPath(true, app2)).toBe('/app/xyz-789/workflow')
@@ -64,7 +65,7 @@ describe('app-redirection', () => {
* Tests that the redirection function is called with the correct path
*/
test('calls redirection function with correct path for non-editor', () => {
-const app = { id: 'app-123', mode: 'chat' as const }
+const app = { id: 'app-123', mode: AppModeEnum.CHAT }
const mockRedirect = jest.fn()
getRedirection(false, app, mockRedirect)
@@ -74,7 +75,7 @@ describe('app-redirection', () => {
})
test('calls redirection function with workflow path for editor', () => {
-const app = { id: 'app-123', mode: 'workflow' as const }
+const app = { id: 'app-123', mode: AppModeEnum.WORKFLOW }
const mockRedirect = jest.fn()
getRedirection(true, app, mockRedirect)
@@ -84,7 +85,7 @@ describe('app-redirection', () => {
})
test('calls redirection function with configuration path for chat mode editor', () => {
-const app = { id: 'app-123', mode: 'chat' as const }
+const app = { id: 'app-123', mode: AppModeEnum.CHAT }
const mockRedirect = jest.fn()
getRedirection(true, app, mockRedirect)
@@ -94,7 +95,7 @@ describe('app-redirection', () => {
})
test('works with different redirection functions', () => {
-const app = { id: 'app-123', mode: 'workflow' as const }
+const app = { id: 'app-123', mode: AppModeEnum.WORKFLOW }
const paths: string[] = []
const customRedirect = (path: string) => paths.push(path)