Compare commits


36 Commits

Author SHA1 Message Date
GareArc
f9731e56ca fix: coerce tenant_id to str to prevent Pydantic validation error on None 2026-03-08 20:35:06 -07:00
GareArc
ddf36b9fed fix: address bot review feedback — precedence bugs, logging, header normalization
- Fix operator precedence in trace_id_source expressions (enterprise_trace.py)
- Use logger.exception for error paths in ops_trace_task.py
- Normalize OTLP header keys to lowercase for gRPC compliance (exporter.py)
- Use resolved credential_id in emitted node metadata (ops_trace_manager.py)
- Fix README span_id derivation docs to match UUID-based implementation
2026-03-08 20:19:32 -07:00
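A minimal sketch of the header normalization mentioned in the commit above, assuming the exporter parses ENTERPRISE_OTLP_HEADERS as a comma-separated key=value string; gRPC metadata keys must be lowercase, so keys are lowered before being handed to the exporter. The helper name is illustrative.

```python
def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Parse 'key=value,key2=value2' into a dict with lowercased keys.

    gRPC metadata keys are required to be lowercase, so normalizing here keeps
    the same header string usable for both the HTTP and gRPC OTLP exporters.
    """
    headers: dict[str, str] = {}
    for pair in raw.split(","):
        if "=" not in pair:
            continue
        key, _, value = pair.partition("=")
        headers[key.strip().lower()] = value.strip()
    return headers


# parse_otlp_headers("Authorization=Bearer abc,X-Scope-OrgID=dify")
# -> {"authorization": "Bearer abc", "x-scope-orgid": "dify"}
```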
GareArc
ce5cfba2e5 fix: restore missing enum members and resolve mypy type errors
- Add PROMPT_TOKENS/COMPLETION_TOKENS back to WorkflowNodeExecutionMetadataKey
  (lost during upstream move from core.workflow.enums to dify_graph.enums)
- Add type: ignore[attr-defined] for SQLAlchemy ORM column access pattern
- Simplify draft_node_execution_trace narrowing for correct mypy inference
- Create enterprise/__init__.py to fix mypy duplicate module name resolution
2026-03-08 19:37:53 -07:00
autofix-ci[bot]
877ca99581 [autofix.ci] apply automated fixes 2026-03-09 02:25:57 +00:00
GareArc
2ab98a95f9 fix: rename _mock_ee* params to drop underscore prefix (PT019 lint) 2026-03-08 19:23:49 -07:00
GareArc
1dd2c728da fix: address CI failures and review feedback for enterprise otel exporter
- Fix import path: core.workflow.enums -> dify_graph.enums (2 files)
- Add module __getattr__ for lazy CASE_ROUTING/CASE_TO_TRACE_TASK exports
- Remove unnecessary cast() calls in id_generator.py
- Fix credential lookup: don't gate model-level lookup on provider credential
- Pass user_id to TraceTask in gateway emit
- Add ENTERPRISE_ENABLED check in ext_celery.py
- Auto-format fixes from ruff
2026-03-08 19:13:30 -07:00
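A sketch of the lazy-export pattern the commit above refers to (a module-level `__getattr__`, PEP 562). The attribute names come from the commit message; the exact wiring to the gateway's private getters is an assumption.

```python
# core/telemetry/__init__.py (illustrative)
def __getattr__(name: str):
    # Build enterprise routing tables only on first access, so importing this
    # package in CE never touches enterprise modules at import time.
    if name == "CASE_ROUTING":
        from core.telemetry.gateway import _get_case_routing

        return _get_case_routing()
    if name == "CASE_TO_TRACE_TASK":
        from core.telemetry.gateway import _get_case_to_trace_task

        return _get_case_to_trace_task()
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```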
GareArc
5417888717 Merge remote-tracking branch 'origin/main' into feat/otel-telemetry-ee
# Conflicts:
#	api/core/app/apps/advanced_chat/generate_task_pipeline.py
#	api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py
#	api/core/llm_generator/llm_generator.py
#	api/extensions/otel/parser/llm.py
#	api/extensions/otel/parser/retrieval.py
#	api/extensions/otel/parser/tool.py
#	api/services/account_service.py
#	api/services/workflow_service.py
#	api/tests/unit_tests/services/test_account_service.py
2026-03-08 17:08:52 -07:00
GareArc
2324a0cfc9 fix(telemetry): restore TRACE_TASK_TO_CASE lookup broken by CE safety refactor
The CE safety commit (8a3485454a) converted module-level dicts to lazy
functions but forgot to update __init__.py, which still imported the
now-deleted TRACE_TASK_TO_CASE constant, causing an ImportError at startup.

Add get_trace_task_to_case() to gateway.py as a lazy public wrapper
(inverse of _get_case_to_trace_task) and update __init__.py to call it.
2026-03-02 20:00:14 -08:00
GareArc
dfcf11e8b7 fix(telemetry): ensure CE safety for enterprise-only imports and DB lookups
- Move enqueue_draft_node_execution_trace import inside call site in workflow_service.py
- Make gateway.py enterprise type imports lazy (routing dicts built on first call)
- Restore typed ModelConfig in llm_generator method signatures (revert dict regression)
- Fix generate_structured_output using wrong key model_parameters -> completion_params
- Replace unsafe cast(str, msg.content) with get_text_content() across llm_generator
- Remove duplicated payload classes from generator.py, import from core.llm_generator.entities
- Gate _lookup_app_and_workspace_names and credential lookups in ops_trace_manager behind is_enterprise_telemetry_enabled()
2026-03-02 18:50:18 -08:00
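A sketch of the call-site-local import pattern used for CE safety in the first bullet above; the module path and surrounding function body are assumptions.

```python
def run_draft_node_with_trace(node_execution) -> None:
    # Import inside the call site so CE deployments never import the
    # enterprise module unless this code path actually runs.
    from enterprise.telemetry.draft_trace import (  # assumed module path
        enqueue_draft_node_execution_trace,
    )

    enqueue_draft_node_execution_trace(node_execution)
```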
GareArc
3e8ef464c8 feat(telemetry): promote gen_ai scalar fields from log-only to span attributes
Move gen_ai.usage.*, gen_ai.request.model, gen_ai.provider.name, and
gen_ai.user.id from companion-log-only to span attributes on workflow
and node execution spans.

These are small scalars with no size risk. Having them on spans enables
filtering and grouping in trace UIs (Tempo, Jaeger, Datadog) without
requiring a cross-signal join to companion logs.

Data dictionary updated: span tables gain the new fields; companion log
'additional attributes' tables trimmed to only list fields not already
covered by 'All span attributes'.
2026-03-02 15:56:41 -08:00
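A hedged sketch of what promoting these scalars to span attributes can look like with the standard OpenTelemetry API; the attribute keys follow the commit and the gen_ai semantic conventions, while the span name and values are illustrative.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("dify.workflow.run") as span:
    # Small scalars are safe as span attributes and become filterable in
    # Tempo/Jaeger/Datadog without a cross-signal join to companion logs.
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.usage.input_tokens", 1234)
    span.set_attribute("gen_ai.usage.output_tokens", 256)
    span.set_attribute("gen_ai.user.id", "end_user:abc123")
```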
GareArc
b0b9f56128 fix(telemetry): fix zero-value message and workflow duration histograms
Workflow RT: replace float(info.workflow_run_elapsed_time) with
(end_time - start_time).total_seconds() using workflow_run.created_at and
workflow_run.finished_at. The elapsed_time DB field defaults to 0 and can
be stale if the workflow_storage Celery task has not committed yet when the
trace fires. Wall-clock timestamps are more reliable; elapsed_time is kept
as fallback.

Message RT: change end_time from created_at + provider_response_latency to
message.updated_at when updated_at > created_at. The pipeline explicitly
sets message.updated_at = naive_utc_now() at the moment the LLM response
is complete, making it the canonical response-complete timestamp.
Falls back to the latency-based calculation for error/aborted messages.
2026-03-02 04:19:50 -08:00
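A condensed sketch of the workflow duration logic described above; the field names follow the commit message, and the exact ORM attribute names are assumptions.

```python
def workflow_elapsed_seconds(workflow_run) -> float:
    # Prefer wall-clock timestamps: the elapsed_time DB field defaults to 0
    # and may be stale if the workflow_storage task has not committed yet.
    if workflow_run.finished_at and workflow_run.created_at:
        return (workflow_run.finished_at - workflow_run.created_at).total_seconds()
    return float(workflow_run.elapsed_time or 0.0)  # fallback, per the commit
```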
GareArc
5fde8e3084 fix(telemetry): gate ObservabilityLayer content attrs behind ENTERPRISE_INCLUDE_CONTENT
Add should_include_content() helper to extensions/otel/parser/base.py that
returns True in CE (no behaviour change) and respects ENTERPRISE_INCLUDE_CONTENT
in EE. Gate all content-bearing span attributes in LLM, retrieval, tool, and
default node parsers so that gen_ai.completion, gen_ai.prompt, retrieval.document,
tool call arguments/results, and node input/output values are suppressed when
ENTERPRISE_ENABLED=True and ENTERPRISE_INCLUDE_CONTENT=False.
2026-03-02 04:19:50 -08:00
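A sketch of the should_include_content() gate described above: permissive in CE, controlled by ENTERPRISE_INCLUDE_CONTENT in EE. The config import path is an assumption.

```python
from configs import dify_config  # assumed import path


def should_include_content() -> bool:
    """Return False only when running EE with content capture disabled."""
    if not dify_config.ENTERPRISE_ENABLED:
        return True  # CE behaviour unchanged
    return bool(dify_config.ENTERPRISE_INCLUDE_CONTENT)


# Illustrative use in a parser:
# if should_include_content():
#     span.set_attribute("gen_ai.completion", completion_text)
```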
GareArc
b28670cf55 fix(telemetry): add credential_name lookup with async-safe fallback 2026-03-02 02:28:21 -08:00
GareArc
99ebc8a63f fix(telemetry): populate LLM credential info in node execution traces
- Add _lookup_llm_credential_info() to query Provider/ProviderModel tables
- Lookup LLM credentials when tool credential_id is null
- Fall back to provider-level credential if no model-specific credential
2026-03-02 01:54:28 -08:00
GareArc
091e65aafa docs(telemetry): add token consumption query patterns to data dictionary
Add token hierarchy diagram, common PromQL queries (totals, drill-down,
rates), and app name lookup via trace query.
2026-03-02 01:18:24 -08:00
GareArc
31f876f108 fix(telemetry): populate missing fields in node execution trace
- Add invoke_from to node execution trace metadata dict
- Add credential_id to node execution trace metadata dict
- Add conversation_id to metadata after message_id lookup
- Add tool_name to tool_info dict in tool node
2026-03-02 01:18:08 -08:00
GareArc
e0d3eb2fc4 fix(celery): register enterprise_telemetry_task in worker imports
Fixes a Celery worker error where the process_enterprise_telemetry task
was unregistered despite being dispatched from the app.

Added conditional import when ENTERPRISE_TELEMETRY_ENABLED=true
to ensure the task is available in the worker process.

Resolves: KeyError 'tasks.enterprise_telemetry_task.process_enterprise_telemetry'
2026-03-01 22:28:48 -08:00
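A sketch of the conditional task registration described above; dify_config, celery_app, and the task module path are assumed to exist as named in the real ext_celery module.

```python
# Illustrative excerpt from an ext_celery-style setup.
imports = [
    "tasks.ops_trace_task",
    # ...other worker task modules...
]

if dify_config.ENTERPRISE_TELEMETRY_ENABLED:
    # Makes tasks.enterprise_telemetry_task.process_enterprise_telemetry
    # resolvable in the worker process (fixes the KeyError above).
    imports.append("tasks.enterprise_telemetry_task")

celery_app.conf.update(imports=imports)
```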
GareArc
bb8d830fa5 refactor(telemetry): add resolved_parent_context property and fix edge cases
- Add resolved_parent_context property to BaseTraceInfo for reusable parent context extraction
- Refactor enterprise_trace.py to use property instead of duplicated dict plucking (~19 lines eliminated)
- Fix UUID validation in exporter.py with specific error logging for invalid trace correlation IDs
- Add error isolation in event_handlers.py to prevent telemetry failures from breaking user operations
- Replace pickle-based payload_fallback with JSON storage rehydration for security
- Update TelemetryEnvelope to use Pydantic v2 ConfigDict with extra='forbid'
- Update tests to reflect contract changes and new error handling behavior
2026-03-01 19:36:47 -08:00
yunlu.wen
808ab4bd75 fix token label 2026-03-02 11:35:14 +08:00
GareArc
876fafe1ca fix(telemetry): use URL scheme instead of API key for gRPC TLS detection
- Change insecure parameter from API key-based to URL scheme-based detection
- https:// endpoints now correctly use TLS (insecure=False)
- All other endpoints (http://, no scheme) use insecure=True
- Update tests to reflect URL scheme-based logic
- Remove incorrect documentation claiming API key controls TLS
2026-03-01 02:25:18 -08:00
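A sketch of the scheme-based detection above: https means TLS (insecure=False); http or a missing scheme falls back to insecure=True. The function name is illustrative.

```python
def grpc_insecure_from_endpoint(endpoint: str) -> bool:
    # TLS is decided by the URL scheme, not by whether an API key is set.
    return not endpoint.lower().startswith("https://")


assert grpc_insecure_from_endpoint("https://otel.example.com:4317") is False
assert grpc_insecure_from_endpoint("http://otel-collector:4317") is True
assert grpc_insecure_from_endpoint("otel-collector:4317") is True
```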
GareArc
fc1f08f593 fix: align integration files with main, apply complete OTEL additions from reference
Reset 6 integration files to main baseline, then surgically applied
all OTEL telemetry additions from the reference branch diff
(1.12.1-otel-ee vs release/e-1.12.1).

Files fixed:
- core/ops/entities/trace_entity.py: added new trace info classes,
  OperationType enum, resolved_trace_id, type annotation fixes
- core/ops/ops_trace_manager.py: added lookup helpers, new trace
  methods (prompt_generation, node_execution, draft_node_execution),
  metadata enrichment, enterprise telemetry gate, storage-id fallback
- core/llm_generator/llm_generator.py: replaced TraceQueueManager
  with telemetry_emit for generate_conversation_name
- core/app/apps/advanced_chat/generate_task_pipeline.py: added
  telemetry_emit in _save_message, trace_manager param threading
- core/app/apps/workflow/app_generator.py: added parent_trace_context
  extraction from args
- core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py:
  replaced TraceTask with telemetry_emit

All main business logic preserved (human input, pause/resume,
_record_files, etc). basedpyright: 0 errors.
2026-03-01 01:47:05 -08:00
GareArc
d9606507e0 fix: restore enterprise workspace joining feature from main
These are valid features from origin/main that were accidentally removed:
- Enterprise default workspace auto-join functionality
- Related tests for account and enterprise services

This ensures the OTEL branch = main + OTEL, not main - features + OTEL
2026-03-01 01:08:38 -08:00
GareArc
8c3ef87f90 fix(otel): resolve OTEL-related type errors
- Add trace_manager parameter to _handle_advanced_chat_message_end_event
- Add missing _lookup_app_and_workspace_names helper function
- Import Tenant model for workspace name lookup
2026-03-01 01:08:38 -08:00
GareArc
d0a373d534 feat(telemetry): add missing OTEL infrastructure files
- Add core telemetry events module
- Add enterprise telemetry contracts, draft trace, ID generator
- Add enterprise telemetry tasks and event handlers
- Add OTEL configuration (app_config, enterprise config)
- Add OTEL semantic conventions
- Add feedback events and app event updates
- Add comprehensive test coverage for all modules
2026-03-01 00:47:37 -08:00
GareArc
81f44c746f feat(telemetry): add missing ID fields for name attributes
- Add dify.credential.id to node execution events
- Add dify.event.id to all telemetry events (APP_CREATED, APP_UPDATED, APP_DELETED, FEEDBACK_CREATED)

This ensures all .name fields have corresponding .id fields for reliable aggregation and deduplication.
2026-03-01 00:45:30 -08:00
GareArc
20b36d984c fix(telemetry): add resolved_trace_id property to eliminate trace_id inconsistencies
Add computed property to BaseTraceInfo that provides intelligent fallback:
1. External trace_id (from X-Trace-Id header)
2. workflow_run_id (for workflow-related traces)
3. message_id (as final fallback)

This ensures the dify.trace_id attribute always matches the log-level trace_id,
eliminating cases where the attribute was null but the log-level value was set.

Changes:
- Add resolved_trace_id property to BaseTraceInfo (trace_entity.py)
- Replace 4 direct trace_id attribute assignments with resolved_trace_id
- Add trace_id_source parameter to 5 emit_metric_only_event calls

Fixes trace_id inconsistency found in MESSAGE_RUN, TOOL_EXECUTION,
MODERATION_CHECK, SUGGESTED_QUESTION_GENERATION, GENERATE_NAME_EXECUTION,
DATASET_RETRIEVAL, and PROMPT_GENERATION_EXECUTION events.

All 78 telemetry tests passing.
2026-03-01 00:44:36 -08:00
GareArc
5e8226fddd refactor(telemetry): move gateway to core as stateless module-level functions
Move routing table, emit(), and is_enterprise_telemetry_enabled() from
enterprise/telemetry/gateway.py into core/telemetry/gateway.py so both
CE and EE share one code path. The ce_eligible flag in CASE_ROUTING
controls which events flow in CE — flipping it is the only change needed
to enable an event in community edition.

- Delete enterprise/telemetry/gateway.py (class-based singleton)
- Create core/telemetry/gateway.py (stateless functions, no shared state)
- Simplify core/telemetry/__init__.py to thin facade over gateway
- Remove TelemetryGateway class and get_gateway() from ext_enterprise_telemetry
- Single-source is_enterprise_telemetry_enabled in core.telemetry.gateway
- Fix pre-existing test bugs (missing dify.event.id in metric handler tests)
- Update all imports and mock paths across 7 test files
2026-03-01 00:44:36 -08:00
GareArc
e1e599efc1 feat(telemetry): add model provider and name tags to all trace metrics
Add comprehensive model tracking across all OTEL metrics and logs:
- Node execution metrics now include model_name for LLM operations
- Suggested question metrics include model_provider and model_name
- Dataset retrieval captures both embedding and rerank model info
- Updated DATA_DICTIONARY.md with complete metric label documentation

This enables granular cost tracking, performance analysis, and usage monitoring per model across all operation types.
2026-03-01 00:44:22 -08:00
GareArc
366a889977 docs(enterprise): split telemetry docs into README and data dictionary
Separate background/configuration instructions from the data dictionary:
- README.md: Overview, configuration, correlation model, content gating
- DATA_DICTIONARY.md: Pure reference format with signals and attributes

The data dictionary is now concise (465 lines vs 911) and focuses on
attribute types and relationships without verbose explanations.
2026-03-01 00:44:10 -08:00
GareArc
e6a68123c7 docs(enterprise): add telemetry data dictionary for OTEL signals
- Comprehensive reference for all enterprise telemetry signals
- Documents 3 span types, 10 counters, 6 histograms, 13 log events
- Includes trace correlation model with ASCII diagrams
- Configuration reference for all 8 ENTERPRISE_* variables
- Per-emission-site label tables for metrics
- Full JSON schemas for structured log events
- Content gating behavior and token double-counting warnings
2026-03-01 00:44:10 -08:00
GareArc
f55a34c4a5 feat(enterprise-telemetry): wire bearer token auth and configurable insecure flag into OTEL exporter 2026-03-01 00:44:04 -08:00
GareArc
99722f3088 feat(telemetry): unify token metric label structure with Pydantic enforcement
- Add TokenMetricLabels BaseModel to enforce consistent label structure
- All dify.token.* metrics now use identical 6-label structure:
  * tenant_id, app_id, operation_type, model_provider, model_name, node_type
- Pydantic validation ensures runtime enforcement (extra='forbid', frozen=True)
- Enables filtering by operation_type to avoid double-counting:
  * workflow: aggregated workflow-level tokens
  * node_execution: individual node-level tokens
  * message: direct message tokens
  * rule_generate/code_generate: prompt generation tokens

Previously, inconsistent label cardinality made aggregation impossible:
- WORKFLOW: 3 labels
- NODE_EXECUTION: 6 labels
- MESSAGE: 5 labels
- PROMPT_GENERATION: 5 labels

Now all use the same 6-label structure for consistent querying.
2026-03-01 00:44:04 -08:00
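A sketch of the unified label model described above; the six field names come from the commit message, while the exact types, defaults, and how the dump is attached to counters are assumptions.

```python
from pydantic import BaseModel, ConfigDict


class TokenMetricLabels(BaseModel):
    """Single label shape shared by every dify.token.* metric emission."""

    model_config = ConfigDict(extra="forbid", frozen=True, protected_namespaces=())

    tenant_id: str
    app_id: str
    operation_type: str  # e.g. "workflow", "node_execution", "message"
    model_provider: str
    model_name: str
    node_type: str


labels = TokenMetricLabels(
    tenant_id="t-1",
    app_id="a-1",
    operation_type="node_execution",
    model_provider="openai",
    model_name="gpt-4o",
    node_type="llm",
)
counter_attributes = labels.model_dump()  # passed as OTEL metric attributes
```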
GareArc
85391d93d3 feat: add dedicated app event counters and convert event names to StrEnum
- Add APP_CREATED, APP_UPDATED, APP_DELETED counters to EnterpriseTelemetryCounter
- Create EnterpriseTelemetryEvent StrEnum for type-safe event names
- Update metric_handler to use new app-specific counters with labels (tenant_id, app_id, mode)
- Convert all event_name strings to EnterpriseTelemetryEvent enum values
- Update exporter to create OTEL meters for new app counters (dify.app.created.total, etc.)
- Update tests to verify new counter behavior and enum usage
2026-03-01 00:44:03 -08:00
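A sketch of the type-safe event name enum mentioned above; the member values are illustrative.

```python
from enum import StrEnum


class EnterpriseTelemetryEvent(StrEnum):
    # Values double as the event_name string attached to metric/log emissions.
    APP_CREATED = "app_created"
    APP_UPDATED = "app_updated"
    APP_DELETED = "app_deleted"
    FEEDBACK_CREATED = "feedback_created"
```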
GareArc
eb9f1c9780 feat(telemetry): add operation_type labels for token metrics
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-01 00:44:03 -08:00
GareArc
3daa6ea2f5 docs(telemetry): clarify enterprise_telemetry queue is EE-only 2026-03-01 00:43:46 -08:00
GareArc
945d01dd08 feat(telemetry): add enterprise OTEL telemetry with gateway, traces, metrics, and logs 2026-03-01 00:43:46 -08:00
119 changed files with 7397 additions and 6265 deletions

View File

@@ -42,7 +42,7 @@ The scripts resolve paths relative to their location, so you can run them from a
1. Set up your application by visiting `http://localhost:3000`.
1. Start the worker service (async and scheduler tasks, runs from `api`).
1. Optional: start the worker service (async tasks, runs from `api`).
```bash
./dev/start-worker
@@ -54,6 +54,87 @@ The scripts resolve paths relative to their location, so you can run them from a
./dev/start-beat
```
### Manual commands
<details>
<summary>Show manual setup and run steps</summary>
These commands assume you start from the repository root.
1. Start the docker-compose stack.
The backend requires middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
```bash
cp docker/middleware.env.example docker/middleware.env
# Use mysql or another vector database profile if you are not using postgres/weaviate.
docker compose -f docker/docker-compose.middleware.yaml --profile postgresql --profile weaviate -p dify up -d
```
1. Copy env files.
```bash
cp api/.env.example api/.env
cp web/.env.example web/.env.local
```
1. Install UV if needed.
```bash
pip install uv
# Or on macOS
brew install uv
```
1. Install API dependencies.
```bash
cd api
uv sync --group dev
```
1. Install web dependencies.
```bash
cd web
pnpm install
cd ..
```
1. Start backend (runs migrations first, in a new terminal).
```bash
cd api
uv run flask db upgrade
uv run flask run --host 0.0.0.0 --port=5001 --debug
```
1. Start Dify [web](../web) service (in a new terminal).
```bash
cd web
pnpm dev:inspect
```
1. Set up your application by visiting `http://localhost:3000`.
1. Optional: start the worker service (async tasks, in a new terminal).
```bash
cd api
# Note: enterprise_telemetry queue is only used in Enterprise Edition
uv run celery -A app.celery worker -P threads -c 2 --loglevel INFO -Q dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,enterprise_telemetry
```
1. Optional: start Celery Beat (scheduled tasks, in a new terminal).
```bash
cd api
uv run celery -A app.celery beat
```
</details>
### Environment notes
> [!IMPORTANT]

View File

@@ -81,6 +81,7 @@ def initialize_extensions(app: DifyApp):
ext_commands,
ext_compress,
ext_database,
ext_enterprise_telemetry,
ext_fastopenapi,
ext_forward_refs,
ext_hosting_provider,
@@ -131,6 +132,7 @@ def initialize_extensions(app: DifyApp):
ext_commands,
ext_fastopenapi,
ext_otel,
ext_enterprise_telemetry,
ext_request_logging,
ext_session_factory,
]

View File

@@ -8,7 +8,7 @@ from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, Settings
from libs.file_utils import search_file_upwards
from .deploy import DeploymentConfig
from .enterprise import EnterpriseFeatureConfig
from .enterprise import EnterpriseFeatureConfig, EnterpriseTelemetryConfig
from .extra import ExtraServiceConfig
from .feature import FeatureConfig
from .middleware import MiddlewareConfig
@@ -73,6 +73,8 @@ class DifyConfig(
# Enterprise feature configs
# **Before using, please contact business@dify.ai by email to inquire about licensing matters.**
EnterpriseFeatureConfig,
# Enterprise telemetry configs
EnterpriseTelemetryConfig,
):
model_config = SettingsConfigDict(
# read from dotenv format config file

View File

@@ -18,3 +18,49 @@ class EnterpriseFeatureConfig(BaseSettings):
description="Allow customization of the enterprise logo.",
default=False,
)
class EnterpriseTelemetryConfig(BaseSettings):
"""
Configuration for enterprise telemetry.
"""
ENTERPRISE_TELEMETRY_ENABLED: bool = Field(
description="Enable enterprise telemetry collection (also requires ENTERPRISE_ENABLED=true).",
default=False,
)
ENTERPRISE_OTLP_ENDPOINT: str = Field(
description="Enterprise OTEL collector endpoint.",
default="",
)
ENTERPRISE_OTLP_HEADERS: str = Field(
description="Auth headers for OTLP export (key=value,key2=value2).",
default="",
)
ENTERPRISE_OTLP_PROTOCOL: str = Field(
description="OTLP protocol: 'http' or 'grpc' (default: http).",
default="http",
)
ENTERPRISE_OTLP_API_KEY: str = Field(
description="Bearer token for enterprise OTLP export authentication.",
default="",
)
ENTERPRISE_INCLUDE_CONTENT: bool = Field(
description="Include input/output content in traces (privacy toggle).",
default=True,
)
ENTERPRISE_SERVICE_NAME: str = Field(
description="Service name for OTEL resource.",
default="dify",
)
ENTERPRISE_OTEL_SAMPLING_RATE: float = Field(
description="Sampling rate for enterprise traces (0.0 to 1.0, default 1.0 = 100%).",
default=1.0,
)
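A hedged sketch of how these settings might be combined when constructing the OTLP exporter (bearer token folded into the headers, scheme-based insecure flag for gRPC). The config import path and the exact wiring into the exporter are assumptions.

```python
from configs import dify_config  # assumed import path


def build_otlp_exporter_kwargs() -> dict:
    """Assemble exporter kwargs from the EnterpriseTelemetryConfig fields above."""
    raw = dify_config.ENTERPRISE_OTLP_HEADERS  # "key=value,key2=value2"
    headers = {
        k.strip().lower(): v.strip()
        for k, _, v in (pair.partition("=") for pair in raw.split(",") if "=" in pair)
    }
    if dify_config.ENTERPRISE_OTLP_API_KEY:
        headers["authorization"] = f"Bearer {dify_config.ENTERPRISE_OTLP_API_KEY}"

    kwargs: dict = {"endpoint": dify_config.ENTERPRISE_OTLP_ENDPOINT, "headers": headers}
    if dify_config.ENTERPRISE_OTLP_PROTOCOL == "grpc":
        # gRPC exporter takes an explicit insecure flag, decided by the URL scheme.
        kwargs["insecure"] = not kwargs["endpoint"].lower().startswith("https://")
    return kwargs
```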

View File

@@ -121,9 +121,3 @@ class NeedAddIdsError(BaseHTTPException):
error_code = "need_add_ids"
description = "Need to add ids."
code = 400
class VariableValidationError(BaseHTTPException):
error_code = "variable_validation_error"
description = "Variable validation failed."
code = 400

View File

@@ -11,12 +11,7 @@ from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services
from controllers.console import console_ns
from controllers.console.app.error import (
ConversationCompletedError,
DraftWorkflowNotExist,
DraftWorkflowNotSync,
VariableValidationError,
)
from controllers.console.app.error import ConversationCompletedError, DraftWorkflowNotExist, DraftWorkflowNotSync
from controllers.console.app.workflow_run import workflow_run_node_execution_model
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required
@@ -37,7 +32,6 @@ from dify_graph.enums import NodeType
from dify_graph.file.models import File
from dify_graph.graph_engine.manager import GraphEngineManager
from dify_graph.model_runtime.utils.encoders import jsonable_encoder
from dify_graph.variables.exc import VariableError
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from factories import file_factory, variable_factory
@@ -308,8 +302,6 @@ class DraftWorkflowApi(Resource):
)
except WorkflowHashNotEqualError:
raise DraftWorkflowNotSync()
except VariableError as e:
raise VariableValidationError(description=str(e))
return {
"result": "success",

View File

@@ -159,9 +159,12 @@ class WorkflowAppGenerator(BaseAppGenerator):
inputs: Mapping[str, Any] = args["inputs"]
extras = {
extras: dict[str, Any] = {
**extract_external_trace_id_from_args(args),
}
parent_trace_context = args.get("_parent_trace_context")
if parent_trace_context:
extras["parent_trace_context"] = parent_trace_context
workflow_run_id = str(workflow_run_id or uuid.uuid4())
# FIXME (Yeuoly): we need to remove the SKIP_PREPARE_USER_INPUTS_KEY from the args
# trigger shouldn't prepare user inputs

View File

@@ -1,6 +1,7 @@
import hashlib
import logging
from threading import Thread, Timer
import time
from threading import Thread
from typing import Union
from flask import Flask, current_app
@@ -95,9 +96,9 @@ class MessageCycleManager:
if auto_generate_conversation_name and is_first_message:
# start generate thread
# time.sleep not block other logic
thread = Timer(
1,
self._generate_conversation_name_worker,
time.sleep(1)
thread = Thread(
target=self._generate_conversation_name_worker,
kwargs={
"flask_app": current_app._get_current_object(), # type: ignore
"conversation_id": conversation_id,

View File

@@ -9,6 +9,7 @@ class RuleGeneratePayload(BaseModel):
instruction: str = Field(..., description="Rule generation instruction")
model_config_data: ModelConfig = Field(..., alias="model_config", description="Model configuration")
no_variable: bool = Field(default=False, description="Whether to exclude variables")
app_id: str | None = Field(default=None, description="App ID for prompt generation tracing")
class RuleCodeGeneratePayload(RuleGeneratePayload):
@@ -18,3 +19,4 @@ class RuleCodeGeneratePayload(RuleGeneratePayload):
class RuleStructuredOutputPayload(BaseModel):
instruction: str = Field(..., description="Structured output generation instruction")
model_config_data: ModelConfig = Field(..., alias="model_config", description="Model configuration")
app_id: str | None = Field(default=None, description="App ID for prompt generation tracing")

View File

@@ -9,8 +9,8 @@ from pydantic import BaseModel, ConfigDict, field_serializer, field_validator
class BaseTraceInfo(BaseModel):
message_id: str | None = None
message_data: Any | None = None
inputs: Union[str, dict[str, Any], list] | None = None
outputs: Union[str, dict[str, Any], list] | None = None
inputs: Union[str, dict[str, Any], list[Any]] | None = None
outputs: Union[str, dict[str, Any], list[Any]] | None = None
start_time: datetime | None = None
end_time: datetime | None = None
metadata: dict[str, Any]
@@ -18,7 +18,7 @@ class BaseTraceInfo(BaseModel):
@field_validator("inputs", "outputs")
@classmethod
def ensure_type(cls, v):
def ensure_type(cls, v: str | dict[str, Any] | list[Any] | None) -> str | dict[str, Any] | list[Any] | None:
if v is None:
return None
if isinstance(v, str | dict | list):
@@ -27,6 +27,48 @@ class BaseTraceInfo(BaseModel):
model_config = ConfigDict(protected_namespaces=())
@property
def resolved_trace_id(self) -> str | None:
"""Get trace_id with intelligent fallback.
Priority:
1. External trace_id (from X-Trace-Id header)
2. workflow_run_id (if this trace type has it)
3. message_id (as final fallback)
"""
if self.trace_id:
return self.trace_id
# Try workflow_run_id (only exists on workflow-related traces)
workflow_run_id = getattr(self, "workflow_run_id", None)
if workflow_run_id:
return workflow_run_id
# Final fallback to message_id
return str(self.message_id) if self.message_id else None
@property
def resolved_parent_context(self) -> tuple[str | None, str | None]:
"""Resolve cross-workflow parent linking from metadata.
Extracts typed parent IDs from the untyped ``parent_trace_context``
metadata dict (set by tool_node when invoking nested workflows).
Returns:
(trace_correlation_override, parent_span_id_source) where
trace_correlation_override is the outer workflow_run_id and
parent_span_id_source is the outer node_execution_id.
"""
parent_ctx = self.metadata.get("parent_trace_context")
if not isinstance(parent_ctx, dict):
return None, None
trace_override = parent_ctx.get("parent_workflow_run_id")
parent_span = parent_ctx.get("parent_node_execution_id")
return (
trace_override if isinstance(trace_override, str) else None,
parent_span if isinstance(parent_span, str) else None,
)
@field_serializer("start_time", "end_time")
def serialize_datetime(self, dt: datetime | None) -> str | None:
if dt is None:
@@ -48,7 +90,10 @@ class WorkflowTraceInfo(BaseTraceInfo):
workflow_run_version: str
error: str | None = None
total_tokens: int
prompt_tokens: int | None = None
completion_tokens: int | None = None
file_list: list[str]
invoked_by: str | None = None
query: str
metadata: dict[str, Any]
@@ -59,7 +104,7 @@ class MessageTraceInfo(BaseTraceInfo):
answer_tokens: int
total_tokens: int
error: str | None = None
file_list: Union[str, dict[str, Any], list] | None = None
file_list: Union[str, dict[str, Any], list[Any]] | None = None
message_file_data: Any | None = None
conversation_mode: str
gen_ai_server_time_to_first_token: float | None = None
@@ -106,7 +151,7 @@ class ToolTraceInfo(BaseTraceInfo):
tool_config: dict[str, Any]
time_cost: Union[int, float]
tool_parameters: dict[str, Any]
file_url: Union[str, None, list] = None
file_url: Union[str, None, list[str]] = None
class GenerateNameTraceInfo(BaseTraceInfo):
@@ -114,6 +159,79 @@ class GenerateNameTraceInfo(BaseTraceInfo):
tenant_id: str
class PromptGenerationTraceInfo(BaseTraceInfo):
"""Trace information for prompt generation operations (rule-generate, code-generate, etc.)."""
tenant_id: str
user_id: str
app_id: str | None = None
operation_type: str
instruction: str
prompt_tokens: int
completion_tokens: int
total_tokens: int
model_provider: str
model_name: str
latency: float
total_price: float | None = None
currency: str | None = None
error: str | None = None
model_config = ConfigDict(protected_namespaces=())
class WorkflowNodeTraceInfo(BaseTraceInfo):
workflow_id: str
workflow_run_id: str
tenant_id: str
node_execution_id: str
node_id: str
node_type: str
title: str
status: str
error: str | None = None
elapsed_time: float
index: int
predecessor_node_id: str | None = None
total_tokens: int = 0
total_price: float = 0.0
currency: str | None = None
model_provider: str | None = None
model_name: str | None = None
prompt_tokens: int | None = None
completion_tokens: int | None = None
tool_name: str | None = None
iteration_id: str | None = None
iteration_index: int | None = None
loop_id: str | None = None
loop_index: int | None = None
parallel_id: str | None = None
node_inputs: Mapping[str, Any] | None = None
node_outputs: Mapping[str, Any] | None = None
process_data: Mapping[str, Any] | None = None
invoked_by: str | None = None
model_config = ConfigDict(protected_namespaces=())
class DraftNodeExecutionTrace(WorkflowNodeTraceInfo):
pass
class TaskData(BaseModel):
app_id: str
trace_info_type: str
@@ -128,11 +246,31 @@ trace_info_info_map = {
"DatasetRetrievalTraceInfo": DatasetRetrievalTraceInfo,
"ToolTraceInfo": ToolTraceInfo,
"GenerateNameTraceInfo": GenerateNameTraceInfo,
"PromptGenerationTraceInfo": PromptGenerationTraceInfo,
"WorkflowNodeTraceInfo": WorkflowNodeTraceInfo,
"DraftNodeExecutionTrace": DraftNodeExecutionTrace,
}
class OperationType(StrEnum):
"""Operation type for token metric labels.
Used as a metric attribute on ``dify.tokens.input`` / ``dify.tokens.output``
counters so consumers can break down token usage by operation.
"""
WORKFLOW = "workflow"
NODE_EXECUTION = "node_execution"
MESSAGE = "message"
RULE_GENERATE = "rule_generate"
CODE_GENERATE = "code_generate"
STRUCTURED_OUTPUT = "structured_output"
INSTRUCTION_MODIFY = "instruction_modify"
class TraceTaskName(StrEnum):
CONVERSATION_TRACE = "conversation"
DRAFT_NODE_EXECUTION_TRACE = "draft_node_execution"
WORKFLOW_TRACE = "workflow"
MESSAGE_TRACE = "message"
MODERATION_TRACE = "moderation"
@@ -140,4 +278,6 @@ class TraceTaskName(StrEnum):
DATASET_RETRIEVAL_TRACE = "dataset_retrieval"
TOOL_TRACE = "tool"
GENERATE_NAME_TRACE = "generate_conversation_name"
PROMPT_GENERATION_TRACE = "prompt_generation"
NODE_EXECUTION_TRACE = "node_execution"
DATASOURCE_TRACE = "datasource"

View File

@@ -15,22 +15,32 @@ from sqlalchemy import select
from sqlalchemy.orm import Session, sessionmaker
from core.helper.encrypter import batch_decrypt_token, encrypt_token, obfuscated_token
from core.ops.entities.config_entity import OPS_FILE_PATH, TracingProviderEnum
from core.ops.entities.config_entity import (
OPS_FILE_PATH,
TracingProviderEnum,
)
from core.ops.entities.trace_entity import (
DatasetRetrievalTraceInfo,
DraftNodeExecutionTrace,
GenerateNameTraceInfo,
MessageTraceInfo,
ModerationTraceInfo,
PromptGenerationTraceInfo,
SuggestedQuestionTraceInfo,
TaskData,
ToolTraceInfo,
TraceTaskName,
WorkflowNodeTraceInfo,
WorkflowTraceInfo,
)
from core.ops.utils import get_message_data
from extensions.ext_database import db
from extensions.ext_storage import storage
from models.engine import db
from models.account import Tenant
from models.dataset import Dataset
from models.model import App, AppModelConfig, Conversation, Message, MessageFile, TraceAppConfig
from models.provider import Provider, ProviderCredential, ProviderModel, ProviderModelCredential, ProviderType
from models.tools import ApiToolProvider, BuiltinToolProvider, MCPToolProvider, WorkflowToolProvider
from models.workflow import WorkflowAppLog
from tasks.ops_trace_task import process_trace_tasks
@@ -40,9 +50,142 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
def _lookup_app_and_workspace_names(app_id: str | None, tenant_id: str | None) -> tuple[str, str]:
"""Return (app_name, workspace_name) for the given IDs. Falls back to empty strings."""
app_name = ""
workspace_name = ""
if not app_id and not tenant_id:
return app_name, workspace_name
with Session(db.engine) as session:
if app_id:
name = session.scalar(select(App.name).where(App.id == app_id))
if name:
app_name = name
if tenant_id:
name = session.scalar(select(Tenant.name).where(Tenant.id == tenant_id))
if name:
workspace_name = name
return app_name, workspace_name
_PROVIDER_TYPE_TO_MODEL: dict[str, type] = {
"builtin": BuiltinToolProvider,
"plugin": BuiltinToolProvider,
"api": ApiToolProvider,
"workflow": WorkflowToolProvider,
"mcp": MCPToolProvider,
}
def _lookup_credential_name(credential_id: str | None, provider_type: str | None) -> str:
if not credential_id:
return ""
model_cls = _PROVIDER_TYPE_TO_MODEL.get(provider_type or "")
if not model_cls:
return ""
with Session(db.engine) as session:
name = session.scalar(select(model_cls.name).where(model_cls.id == credential_id)) # type: ignore[attr-defined]
return str(name) if name else ""
def _lookup_llm_credential_info(
tenant_id: str | None, provider: str | None, model: str | None, model_type: str | None = "llm"
) -> tuple[str | None, str]:
"""
Lookup LLM credential ID and name for the given provider and model.
Returns (credential_id, credential_name).
Handles async timing issues gracefully - if credential is deleted between lookups,
returns the ID but empty name rather than failing.
"""
if not tenant_id or not provider:
return None, ""
try:
with Session(db.engine) as session:
# Try to find provider-level or model-level configuration
provider_record = session.scalar(
select(Provider).where(
Provider.tenant_id == tenant_id,
Provider.provider_name == provider,
Provider.provider_type == ProviderType.CUSTOM,
)
)
if not provider_record:
return None, ""
# Check if there's a model-specific config
credential_id = None
credential_name = ""
is_model_level = False
if model:
# Try model-level first
model_record = session.scalar(
select(ProviderModel).where(
ProviderModel.tenant_id == tenant_id,
ProviderModel.provider_name == provider,
ProviderModel.model_name == model,
ProviderModel.model_type == model_type,
)
)
if model_record and model_record.credential_id:
credential_id = model_record.credential_id
is_model_level = True
if not credential_id and provider_record.credential_id:
# Fall back to provider-level credential
credential_id = provider_record.credential_id
is_model_level = False
# Lookup credential_name if we have credential_id
if credential_id:
try:
if is_model_level:
# Query ProviderModelCredential
cred_name = session.scalar(
select(ProviderModelCredential.credential_name).where(
ProviderModelCredential.id == credential_id
)
)
else:
# Query ProviderCredential
cred_name = session.scalar(
select(ProviderCredential.credential_name).where(ProviderCredential.id == credential_id)
)
if cred_name:
credential_name = str(cred_name)
except Exception as e:
# Credential might have been deleted between lookups (async timing)
# Return ID but empty name rather than failing
logger.warning(
"Failed to lookup credential name for credential_id=%s (provider=%s, model=%s): %s",
credential_id,
provider,
model,
str(e),
)
return credential_id, credential_name
except Exception as e:
# Database query failed or other unexpected error
# Return empty rather than propagating error to telemetry emission
logger.warning(
"Failed to lookup LLM credential info for tenant_id=%s, provider=%s, model=%s: %s",
tenant_id,
provider,
model,
str(e),
)
return None, ""
class OpsTraceProviderConfigMap(collections.UserDict[str, dict[str, Any]]):
def __getitem__(self, key: str) -> dict[str, Any]:
match key:
def __getitem__(self, provider: str) -> dict[str, Any]:
match provider:
case TracingProviderEnum.LANGFUSE:
from core.ops.entities.config_entity import LangfuseConfig
from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace
@@ -149,7 +292,7 @@ class OpsTraceProviderConfigMap(collections.UserDict[str, dict[str, Any]]):
}
case _:
raise KeyError(f"Unsupported tracing provider: {key}")
raise KeyError(f"Unsupported tracing provider: {provider}")
provider_config_map = OpsTraceProviderConfigMap()
@@ -314,6 +457,10 @@ class OpsTraceManager:
if app_id is None:
return None
# Handle storage_id format (tenant-{uuid}) - not a real app_id
if isinstance(app_id, str) and app_id.startswith("tenant-"):
return None
app: App | None = db.session.query(App).where(App.id == app_id).first()
if app is None:
@@ -466,8 +613,6 @@ class TraceTask:
@classmethod
def _get_workflow_run_repo(cls):
from repositories.factory import DifyAPIRepositoryFactory
if cls._workflow_run_repo is None:
with cls._repo_lock:
if cls._workflow_run_repo is None:
@@ -478,6 +623,56 @@ class TraceTask:
cls._workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker)
return cls._workflow_run_repo
@classmethod
def _get_user_id_from_metadata(cls, metadata: dict[str, Any]) -> str:
"""Extract user ID from metadata, prioritizing end_user over account.
Returns the actual user ID (end_user or account) who invoked the workflow,
regardless of invoke_from context.
"""
# Priority 1: End user (external users via API/WebApp)
if user_id := metadata.get("from_end_user_id"):
return f"end_user:{user_id}"
# Priority 2: Account user (internal users via console/debugger)
if user_id := metadata.get("from_account_id"):
return f"account:{user_id}"
# Priority 3: User (internal users via console/debugger)
if user_id := metadata.get("user_id"):
return f"user:{user_id}"
return "anonymous"
@classmethod
def _calculate_workflow_token_split(cls, workflow_run_id: str, tenant_id: str) -> tuple[int, int]:
from dify_graph.enums import WorkflowNodeExecutionMetadataKey
from models.workflow import WorkflowNodeExecutionModel
with Session(db.engine) as session:
node_executions = session.scalars(
select(WorkflowNodeExecutionModel).where(
WorkflowNodeExecutionModel.tenant_id == tenant_id,
WorkflowNodeExecutionModel.workflow_run_id == workflow_run_id,
)
).all()
total_prompt = 0
total_completion = 0
for node_exec in node_executions:
metadata = node_exec.execution_metadata_dict
prompt = metadata.get(WorkflowNodeExecutionMetadataKey.PROMPT_TOKENS)
if prompt is not None:
total_prompt += prompt
completion = metadata.get(WorkflowNodeExecutionMetadataKey.COMPLETION_TOKENS)
if completion is not None:
total_completion += completion
return (total_prompt, total_completion)
def __init__(
self,
trace_type: Any,
@@ -498,6 +693,8 @@ class TraceTask:
self.app_id = None
self.trace_id = None
self.kwargs = kwargs
if user_id is not None and "user_id" not in self.kwargs:
self.kwargs["user_id"] = user_id
external_trace_id = kwargs.get("external_trace_id")
if external_trace_id:
self.trace_id = external_trace_id
@@ -511,7 +708,7 @@ class TraceTask:
TraceTaskName.WORKFLOW_TRACE: lambda: self.workflow_trace(
workflow_run_id=self.workflow_run_id, conversation_id=self.conversation_id, user_id=self.user_id
),
TraceTaskName.MESSAGE_TRACE: lambda: self.message_trace(message_id=self.message_id),
TraceTaskName.MESSAGE_TRACE: lambda: self.message_trace(message_id=self.message_id, **self.kwargs),
TraceTaskName.MODERATION_TRACE: lambda: self.moderation_trace(
message_id=self.message_id, timer=self.timer, **self.kwargs
),
@@ -527,6 +724,9 @@ class TraceTask:
TraceTaskName.GENERATE_NAME_TRACE: lambda: self.generate_name_trace(
conversation_id=self.conversation_id, timer=self.timer, **self.kwargs
),
TraceTaskName.PROMPT_GENERATION_TRACE: lambda: self.prompt_generation_trace(**self.kwargs),
TraceTaskName.NODE_EXECUTION_TRACE: lambda: self.node_execution_trace(**self.kwargs),
TraceTaskName.DRAFT_NODE_EXECUTION_TRACE: lambda: self.draft_node_execution_trace(**self.kwargs),
}
return preprocess_map.get(self.trace_type, lambda: None)()
@@ -562,6 +762,10 @@ class TraceTask:
total_tokens = workflow_run.total_tokens
prompt_tokens, completion_tokens = self._calculate_workflow_token_split(
workflow_run_id=workflow_run_id, tenant_id=tenant_id
)
file_list = workflow_run_inputs.get("sys.file") or []
query = workflow_run_inputs.get("query") or workflow_run_inputs.get("sys.query") or ""
@@ -582,7 +786,14 @@ class TraceTask:
)
message_id = session.scalar(message_data_stmt)
metadata = {
from core.telemetry.gateway import is_enterprise_telemetry_enabled
if is_enterprise_telemetry_enabled():
app_name, workspace_name = _lookup_app_and_workspace_names(workflow_run.app_id, tenant_id)
else:
app_name, workspace_name = "", ""
metadata: dict[str, Any] = {
"workflow_id": workflow_id,
"conversation_id": conversation_id,
"workflow_run_id": workflow_run_id,
@@ -595,8 +806,14 @@ class TraceTask:
"triggered_from": workflow_run.triggered_from,
"user_id": user_id,
"app_id": workflow_run.app_id,
"app_name": app_name,
"workspace_name": workspace_name,
}
parent_trace_context = self.kwargs.get("parent_trace_context")
if parent_trace_context:
metadata["parent_trace_context"] = parent_trace_context
workflow_trace_info = WorkflowTraceInfo(
trace_id=self.trace_id,
workflow_data=workflow_run.to_dict(),
@@ -611,6 +828,8 @@ class TraceTask:
workflow_run_version=workflow_run_version,
error=error,
total_tokens=total_tokens,
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
file_list=file_list,
query=query,
metadata=metadata,
@@ -618,10 +837,11 @@ class TraceTask:
message_id=message_id,
start_time=workflow_run.created_at,
end_time=workflow_run.finished_at,
invoked_by=self._get_user_id_from_metadata(metadata),
)
return workflow_trace_info
def message_trace(self, message_id: str | None):
def message_trace(self, message_id: str | None, **kwargs):
if not message_id:
return {}
message_data = get_message_data(message_id)
@@ -644,6 +864,19 @@ class TraceTask:
streaming_metrics = self._extract_streaming_metrics(message_data)
tenant_id = ""
with Session(db.engine) as session:
tid = session.scalar(select(App.tenant_id).where(App.id == message_data.app_id))
if tid:
tenant_id = str(tid)
from core.telemetry.gateway import is_enterprise_telemetry_enabled
if is_enterprise_telemetry_enabled():
app_name, workspace_name = _lookup_app_and_workspace_names(message_data.app_id, tenant_id)
else:
app_name, workspace_name = "", ""
metadata = {
"conversation_id": message_data.conversation_id,
"ls_provider": message_data.model_provider,
@@ -655,7 +888,14 @@ class TraceTask:
"workflow_run_id": message_data.workflow_run_id,
"from_source": message_data.from_source,
"message_id": message_id,
"tenant_id": tenant_id,
"app_id": message_data.app_id,
"user_id": message_data.from_end_user_id or message_data.from_account_id,
"app_name": app_name,
"workspace_name": workspace_name,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
message_tokens = message_data.message_tokens
@@ -672,7 +912,9 @@ class TraceTask:
outputs=message_data.answer,
file_list=file_list,
start_time=created_at,
end_time=created_at + timedelta(seconds=message_data.provider_response_latency),
end_time=message_data.updated_at
if message_data.updated_at and message_data.updated_at > created_at
else created_at + timedelta(seconds=message_data.provider_response_latency),
metadata=metadata,
message_file_data=message_file_data,
conversation_mode=conversation_mode,
@@ -697,6 +939,8 @@ class TraceTask:
"preset_response": moderation_result.preset_response,
"query": moderation_result.query,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
# get workflow_app_log_id
workflow_app_log_id = None
@@ -738,6 +982,8 @@ class TraceTask:
"workflow_run_id": message_data.workflow_run_id,
"from_source": message_data.from_source,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
# get workflow_app_log_id
workflow_app_log_id = None
@@ -777,6 +1023,52 @@ class TraceTask:
if not message_data:
return {}
tenant_id = ""
with Session(db.engine) as session:
tid = session.scalar(select(App.tenant_id).where(App.id == message_data.app_id))
if tid:
tenant_id = str(tid)
from core.telemetry.gateway import is_enterprise_telemetry_enabled
if is_enterprise_telemetry_enabled():
app_name, workspace_name = _lookup_app_and_workspace_names(message_data.app_id, tenant_id)
else:
app_name, workspace_name = "", ""
doc_list = [doc.model_dump() for doc in documents] if documents else []
dataset_ids: set[str] = set()
for doc in doc_list:
doc_meta = doc.get("metadata") or {}
did = doc_meta.get("dataset_id")
if did:
dataset_ids.add(did)
embedding_models: dict[str, dict[str, str]] = {}
if dataset_ids:
with Session(db.engine) as session:
rows = session.execute(
select(Dataset.id, Dataset.embedding_model, Dataset.embedding_model_provider).where(
Dataset.id.in_(list(dataset_ids))
)
).all()
for row in rows:
embedding_models[str(row[0])] = {
"embedding_model": row[1] or "",
"embedding_model_provider": row[2] or "",
}
# Extract rerank model info from retrieval_model kwargs
rerank_model_provider = ""
rerank_model_name = ""
if "retrieval_model" in kwargs:
retrieval_model = kwargs["retrieval_model"]
if isinstance(retrieval_model, dict):
reranking_model = retrieval_model.get("reranking_model")
if isinstance(reranking_model, dict):
rerank_model_provider = reranking_model.get("reranking_provider_name", "")
rerank_model_name = reranking_model.get("reranking_model_name", "")
metadata = {
"message_id": message_id,
"ls_provider": message_data.model_provider,
@@ -787,13 +1079,23 @@ class TraceTask:
"agent_based": message_data.agent_based,
"workflow_run_id": message_data.workflow_run_id,
"from_source": message_data.from_source,
"tenant_id": tenant_id,
"app_id": message_data.app_id,
"user_id": message_data.from_end_user_id or message_data.from_account_id,
"app_name": app_name,
"workspace_name": workspace_name,
"embedding_models": embedding_models,
"rerank_model_provider": rerank_model_provider,
"rerank_model_name": rerank_model_name,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
dataset_retrieval_trace_info = DatasetRetrievalTraceInfo(
trace_id=self.trace_id,
message_id=message_id,
inputs=message_data.query or message_data.inputs,
documents=[doc.model_dump() for doc in documents] if documents else [],
documents=doc_list,
start_time=timer.get("start"),
end_time=timer.get("end"),
metadata=metadata,
@@ -836,6 +1138,10 @@ class TraceTask:
"error": error,
"tool_parameters": tool_parameters,
}
if message_data.workflow_run_id:
metadata["workflow_run_id"] = message_data.workflow_run_id
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
file_url = ""
message_file_data = db.session.query(MessageFile).filter_by(message_id=message_id).first()
@@ -890,6 +1196,8 @@ class TraceTask:
"conversation_id": conversation_id,
"tenant_id": tenant_id,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
generate_name_trace_info = GenerateNameTraceInfo(
trace_id=self.trace_id,
@@ -904,6 +1212,182 @@ class TraceTask:
return generate_name_trace_info
def prompt_generation_trace(self, **kwargs) -> PromptGenerationTraceInfo | dict:
tenant_id = kwargs.get("tenant_id", "")
user_id = kwargs.get("user_id", "")
app_id = kwargs.get("app_id")
operation_type = kwargs.get("operation_type", "")
instruction = kwargs.get("instruction", "")
generated_output = kwargs.get("generated_output", "")
prompt_tokens = kwargs.get("prompt_tokens", 0)
completion_tokens = kwargs.get("completion_tokens", 0)
total_tokens = kwargs.get("total_tokens", 0)
model_provider = kwargs.get("model_provider", "")
model_name = kwargs.get("model_name", "")
latency = kwargs.get("latency", 0.0)
timer = kwargs.get("timer")
start_time = timer.get("start") if timer else None
end_time = timer.get("end") if timer else None
total_price = kwargs.get("total_price")
currency = kwargs.get("currency")
error = kwargs.get("error")
app_name = None
workspace_name = None
if app_id:
app_name, workspace_name = _lookup_app_and_workspace_names(app_id, tenant_id)
metadata = {
"tenant_id": tenant_id,
"user_id": user_id,
"app_id": app_id or "",
"app_name": app_name,
"workspace_name": workspace_name,
"operation_type": operation_type,
"model_provider": model_provider,
"model_name": model_name,
}
if node_execution_id := kwargs.get("node_execution_id"):
metadata["node_execution_id"] = node_execution_id
return PromptGenerationTraceInfo(
trace_id=self.trace_id,
inputs=instruction,
outputs=generated_output,
start_time=start_time,
end_time=end_time,
metadata=metadata,
tenant_id=tenant_id,
user_id=user_id,
app_id=app_id,
operation_type=operation_type,
instruction=instruction,
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
total_tokens=total_tokens,
model_provider=model_provider,
model_name=model_name,
latency=latency,
total_price=total_price,
currency=currency,
error=error,
)
def node_execution_trace(self, **kwargs) -> WorkflowNodeTraceInfo | dict:
node_data: dict = kwargs.get("node_execution_data", {})
if not node_data:
return {}
from core.telemetry.gateway import is_enterprise_telemetry_enabled
if is_enterprise_telemetry_enabled():
app_name, workspace_name = _lookup_app_and_workspace_names(
node_data.get("app_id"), node_data.get("tenant_id")
)
else:
app_name, workspace_name = "", ""
# Try tool credential lookup first
credential_id = node_data.get("credential_id")
if is_enterprise_telemetry_enabled():
credential_name = _lookup_credential_name(credential_id, node_data.get("credential_provider_type"))
# If no credential_id found (e.g., LLM nodes), try LLM credential lookup
if not credential_id:
llm_cred_id, llm_cred_name = _lookup_llm_credential_info(
tenant_id=node_data.get("tenant_id"),
provider=node_data.get("model_provider"),
model=node_data.get("model_name"),
model_type="llm",
)
if llm_cred_id:
credential_id = llm_cred_id
credential_name = llm_cred_name
else:
credential_name = ""
metadata: dict[str, Any] = {
"tenant_id": node_data.get("tenant_id"),
"app_id": node_data.get("app_id"),
"app_name": app_name,
"workspace_name": workspace_name,
"user_id": node_data.get("user_id"),
"invoke_from": node_data.get("invoke_from"),
"credential_id": credential_id,
"credential_name": credential_name,
"dataset_ids": node_data.get("dataset_ids"),
"dataset_names": node_data.get("dataset_names"),
"plugin_name": node_data.get("plugin_name"),
}
parent_trace_context = node_data.get("parent_trace_context")
if parent_trace_context:
metadata["parent_trace_context"] = parent_trace_context
message_id: str | None = None
conversation_id = node_data.get("conversation_id")
workflow_execution_id = node_data.get("workflow_execution_id")
if conversation_id and workflow_execution_id and not parent_trace_context:
with Session(db.engine) as session:
msg_id = session.scalar(
select(Message.id).where(
Message.conversation_id == conversation_id,
Message.workflow_run_id == workflow_execution_id,
)
)
if msg_id:
message_id = str(msg_id)
metadata["message_id"] = message_id
if conversation_id:
metadata["conversation_id"] = conversation_id
return WorkflowNodeTraceInfo(
trace_id=self.trace_id,
message_id=message_id,
start_time=node_data.get("created_at"),
end_time=node_data.get("finished_at"),
metadata=metadata,
workflow_id=node_data.get("workflow_id", ""),
workflow_run_id=node_data.get("workflow_execution_id", ""),
tenant_id=node_data.get("tenant_id", ""),
node_execution_id=node_data.get("node_execution_id", ""),
node_id=node_data.get("node_id", ""),
node_type=node_data.get("node_type", ""),
title=node_data.get("title", ""),
status=node_data.get("status", ""),
error=node_data.get("error"),
elapsed_time=node_data.get("elapsed_time", 0.0),
index=node_data.get("index", 0),
predecessor_node_id=node_data.get("predecessor_node_id"),
total_tokens=node_data.get("total_tokens", 0),
total_price=node_data.get("total_price", 0.0),
currency=node_data.get("currency"),
model_provider=node_data.get("model_provider"),
model_name=node_data.get("model_name"),
prompt_tokens=node_data.get("prompt_tokens"),
completion_tokens=node_data.get("completion_tokens"),
tool_name=node_data.get("tool_name"),
iteration_id=node_data.get("iteration_id"),
iteration_index=node_data.get("iteration_index"),
loop_id=node_data.get("loop_id"),
loop_index=node_data.get("loop_index"),
parallel_id=node_data.get("parallel_id"),
node_inputs=node_data.get("node_inputs"),
node_outputs=node_data.get("node_outputs"),
process_data=node_data.get("process_data"),
invoked_by=self._get_user_id_from_metadata(metadata),
)
def draft_node_execution_trace(self, **kwargs) -> DraftNodeExecutionTrace | dict:
node_trace = self.node_execution_trace(**kwargs)
if not isinstance(node_trace, WorkflowNodeTraceInfo):
return node_trace
return DraftNodeExecutionTrace(**node_trace.model_dump())
def _extract_streaming_metrics(self, message_data) -> dict:
if not message_data.message_metadata:
return {}
@@ -937,13 +1421,17 @@ class TraceQueueManager:
self.user_id = user_id
self.trace_instance = OpsTraceManager.get_ops_trace_instance(app_id)
self.flask_app = current_app._get_current_object() # type: ignore
from core.telemetry.gateway import is_enterprise_telemetry_enabled
self._enterprise_telemetry_enabled = is_enterprise_telemetry_enabled()
if trace_manager_timer is None:
self.start_timer()
def add_trace_task(self, trace_task: TraceTask):
global trace_manager_timer, trace_manager_queue
try:
if self.trace_instance:
if self._enterprise_telemetry_enabled or self.trace_instance:
trace_task.app_id = self.app_id
trace_manager_queue.put(trace_task)
except Exception:
@@ -979,20 +1467,27 @@ class TraceQueueManager:
def send_to_celery(self, tasks: list[TraceTask]):
with self.flask_app.app_context():
for task in tasks:
if task.app_id is None:
continue
storage_id = task.app_id
if storage_id is None:
tenant_id = task.kwargs.get("tenant_id")
if tenant_id:
storage_id = f"tenant-{tenant_id}"
else:
logger.warning("Skipping trace without app_id or tenant_id, trace_type: %s", task.trace_type)
continue
file_id = uuid4().hex
trace_info = task.execute()
task_data = TaskData(
app_id=task.app_id,
app_id=storage_id,
trace_info_type=type(trace_info).__name__,
trace_info=trace_info.model_dump() if trace_info else None,
)
file_path = f"{OPS_FILE_PATH}{task.app_id}/{file_id}.json"
file_path = f"{OPS_FILE_PATH}{storage_id}/{file_id}.json"
storage.save(file_path, task_data.model_dump_json().encode("utf-8"))
file_info = {
"file_id": file_id,
"app_id": task.app_id,
"app_id": storage_id,
}
process_trace_tasks.delay(file_info) # type: ignore

View File

@@ -0,0 +1,43 @@
"""Telemetry facade.
Thin public API for emitting telemetry events. All routing logic
lives in ``core.telemetry.gateway`` which is shared by both CE and EE.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from core.ops.entities.trace_entity import TraceTaskName
from core.telemetry.events import TelemetryContext, TelemetryEvent
from core.telemetry.gateway import emit as gateway_emit
from core.telemetry.gateway import get_trace_task_to_case
if TYPE_CHECKING:
from core.ops.ops_trace_manager import TraceQueueManager
def emit(event: TelemetryEvent, trace_manager: TraceQueueManager | None = None) -> None:
"""Emit a telemetry event.
Translates the ``TelemetryEvent`` (keyed by ``TraceTaskName``) into a
``TelemetryCase`` and delegates to ``core.telemetry.gateway.emit()``.
"""
case = get_trace_task_to_case().get(event.name)
if case is None:
return
context: dict[str, object] = {
"tenant_id": event.context.tenant_id,
"user_id": event.context.user_id,
"app_id": event.context.app_id,
}
gateway_emit(case, context, event.payload, trace_manager)
__all__ = [
"TelemetryContext",
"TelemetryEvent",
"TraceTaskName",
"emit",
]
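A usage sketch for this facade, built from the TelemetryEvent and TelemetryContext dataclasses shown in the next file; the payload keys are illustrative.

```python
from core.telemetry import TelemetryContext, TelemetryEvent, TraceTaskName, emit

event = TelemetryEvent(
    name=TraceTaskName.WORKFLOW_TRACE,
    context=TelemetryContext(tenant_id="t-1", user_id="u-1", app_id="a-1"),
    payload={"workflow_run_id": "run-123"},  # illustrative payload keys
)
# Routed via get_trace_task_to_case() and core.telemetry.gateway.emit();
# events whose case has no route are silently dropped.
emit(event)
```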

View File

@@ -0,0 +1,21 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from core.ops.entities.trace_entity import TraceTaskName
@dataclass(frozen=True)
class TelemetryContext:
tenant_id: str | None = None
user_id: str | None = None
app_id: str | None = None
@dataclass(frozen=True)
class TelemetryEvent:
name: TraceTaskName
context: TelemetryContext
payload: dict[str, Any]

View File

@@ -0,0 +1,238 @@
"""Telemetry gateway — single routing layer for all editions.
Maps ``TelemetryCase`` → ``CaseRoute`` and dispatches events to either
the CE/EE trace pipeline (``TraceQueueManager``) or the enterprise-only
metric/log Celery queue.
This module lives in ``core/`` so both CE and EE share one routing table
and one ``emit()`` entry point. No separate enterprise gateway module is
needed — enterprise-specific dispatch (Celery task, payload offloading)
is handled here behind lazy imports that no-op in CE.
"""
from __future__ import annotations
import json
import logging
import uuid
from typing import TYPE_CHECKING, Any
from core.ops.entities.trace_entity import TraceTaskName
from extensions.ext_storage import storage
if TYPE_CHECKING:
from core.ops.ops_trace_manager import TraceQueueManager
from enterprise.telemetry.contracts import TelemetryCase
logger = logging.getLogger(__name__)
PAYLOAD_SIZE_THRESHOLD_BYTES = 1 * 1024 * 1024
# ---------------------------------------------------------------------------
# Routing table — authoritative mapping for all editions
# ---------------------------------------------------------------------------
_case_to_trace_task: dict | None = None
_case_routing: dict | None = None
def _get_case_to_trace_task() -> dict:
global _case_to_trace_task
if _case_to_trace_task is None:
from enterprise.telemetry.contracts import TelemetryCase
_case_to_trace_task = {
TelemetryCase.WORKFLOW_RUN: TraceTaskName.WORKFLOW_TRACE,
TelemetryCase.MESSAGE_RUN: TraceTaskName.MESSAGE_TRACE,
TelemetryCase.NODE_EXECUTION: TraceTaskName.NODE_EXECUTION_TRACE,
TelemetryCase.DRAFT_NODE_EXECUTION: TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
TelemetryCase.PROMPT_GENERATION: TraceTaskName.PROMPT_GENERATION_TRACE,
TelemetryCase.TOOL_EXECUTION: TraceTaskName.TOOL_TRACE,
TelemetryCase.MODERATION_CHECK: TraceTaskName.MODERATION_TRACE,
TelemetryCase.SUGGESTED_QUESTION: TraceTaskName.SUGGESTED_QUESTION_TRACE,
TelemetryCase.DATASET_RETRIEVAL: TraceTaskName.DATASET_RETRIEVAL_TRACE,
TelemetryCase.GENERATE_NAME: TraceTaskName.GENERATE_NAME_TRACE,
}
return _case_to_trace_task
def get_trace_task_to_case() -> dict:
"""Return TraceTaskName → TelemetryCase (inverse of _get_case_to_trace_task)."""
return {v: k for k, v in _get_case_to_trace_task().items()}
def _get_case_routing() -> dict:
global _case_routing
if _case_routing is None:
from enterprise.telemetry.contracts import CaseRoute, SignalType, TelemetryCase
_case_routing = {
# TRACE — CE-eligible (flow in both CE and EE)
TelemetryCase.WORKFLOW_RUN: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.MESSAGE_RUN: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.TOOL_EXECUTION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.MODERATION_CHECK: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.SUGGESTED_QUESTION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.DATASET_RETRIEVAL: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
TelemetryCase.GENERATE_NAME: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
# TRACE — enterprise-only
TelemetryCase.NODE_EXECUTION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=False),
TelemetryCase.DRAFT_NODE_EXECUTION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=False),
TelemetryCase.PROMPT_GENERATION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=False),
# METRIC_LOG — enterprise-only (signal-driven, not trace)
TelemetryCase.APP_CREATED: CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False),
TelemetryCase.APP_UPDATED: CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False),
TelemetryCase.APP_DELETED: CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False),
TelemetryCase.FEEDBACK_CREATED: CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False),
}
return _case_routing
def __getattr__(name: str) -> dict:
"""Lazy module-level access to routing tables."""
if name == "CASE_ROUTING":
return _get_case_routing()
if name == "CASE_TO_TRACE_TASK":
return _get_case_to_trace_task()
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def is_enterprise_telemetry_enabled() -> bool:
try:
from enterprise.telemetry.exporter import is_enterprise_telemetry_enabled
return is_enterprise_telemetry_enabled()
except Exception:
return False
def _handle_payload_sizing(
payload: dict[str, Any],
tenant_id: str,
event_id: str,
) -> tuple[dict[str, Any], str | None]:
"""Inline or offload payload based on size.
Returns ``(payload_for_envelope, storage_key | None)``. Payloads
exceeding ``PAYLOAD_SIZE_THRESHOLD_BYTES`` are written to object
storage and replaced with an empty dict in the envelope.
"""
try:
payload_json = json.dumps(payload)
payload_size = len(payload_json.encode("utf-8"))
except (TypeError, ValueError):
logger.warning("Failed to serialize payload for sizing: event_id=%s", event_id)
return payload, None
if payload_size <= PAYLOAD_SIZE_THRESHOLD_BYTES:
return payload, None
storage_key = f"telemetry/{tenant_id}/{event_id}.json"
try:
storage.save(storage_key, payload_json.encode("utf-8"))
logger.debug("Stored large payload to storage: key=%s, size=%d", storage_key, payload_size)
return {}, storage_key
except Exception:
logger.warning("Failed to store large payload, inlining instead: event_id=%s", event_id, exc_info=True)
return payload, None
# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------
def emit(
case: TelemetryCase,
context: dict[str, Any],
payload: dict[str, Any],
trace_manager: TraceQueueManager | None = None,
) -> None:
"""Route a telemetry event to the correct pipeline.
TRACE events are enqueued into ``TraceQueueManager`` (works in both CE
and EE). Enterprise-only traces are silently dropped when EE is
disabled.
METRIC_LOG events are dispatched to the enterprise Celery queue;
silently dropped when enterprise telemetry is unavailable.
"""
route = _get_case_routing().get(case)
if route is None:
logger.warning("Unknown telemetry case: %s, dropping event", case)
return
if not route.ce_eligible and not is_enterprise_telemetry_enabled():
logger.debug("Dropping EE-only event: case=%s (EE disabled)", case)
return
if route.signal_type == "trace":
_emit_trace(case, context, payload, trace_manager)
else:
_emit_metric_log(case, context, payload)
def _emit_trace(
case: TelemetryCase,
context: dict[str, Any],
payload: dict[str, Any],
trace_manager: TraceQueueManager | None,
) -> None:
from core.ops.ops_trace_manager import TraceQueueManager as LocalTraceQueueManager
from core.ops.ops_trace_manager import TraceTask
trace_task_name = _get_case_to_trace_task().get(case)
if trace_task_name is None:
logger.warning("No TraceTaskName mapping for case: %s", case)
return
queue_manager = trace_manager or LocalTraceQueueManager(
app_id=context.get("app_id"),
user_id=context.get("user_id"),
)
queue_manager.add_trace_task(TraceTask(trace_task_name, user_id=context.get("user_id"), **payload))
logger.debug("Enqueued trace task: case=%s, app_id=%s", case, context.get("app_id"))
def _emit_metric_log(
case: TelemetryCase,
context: dict[str, Any],
payload: dict[str, Any],
) -> None:
"""Build envelope and dispatch to enterprise Celery queue.
No-ops when the enterprise telemetry task is not importable (CE mode).
"""
try:
from tasks.enterprise_telemetry_task import process_enterprise_telemetry
except ImportError:
logger.debug("Enterprise metric/log dispatch unavailable, dropping: case=%s", case)
return
tenant_id = context.get("tenant_id") or ""
event_id = str(uuid.uuid4())
payload_for_envelope, payload_ref = _handle_payload_sizing(payload, tenant_id, event_id)
from enterprise.telemetry.contracts import TelemetryEnvelope
envelope = TelemetryEnvelope(
case=case,
tenant_id=tenant_id,
event_id=event_id,
payload=payload_for_envelope,
metadata={"payload_ref": payload_ref} if payload_ref else None,
)
process_enterprise_telemetry.delay(envelope.model_dump_json())
logger.debug(
"Enqueued metric/log event: case=%s, tenant_id=%s, event_id=%s",
case,
tenant_id,
event_id,
)

View File

@@ -232,6 +232,8 @@ class WorkflowNodeExecutionMetadataKey(StrEnum):
"""
TOTAL_TOKENS = "total_tokens"
PROMPT_TOKENS = "prompt_tokens"
COMPLETION_TOKENS = "completion_tokens"
TOTAL_PRICE = "total_price"
CURRENCY = "currency"
TOOL_INFO = "tool_info"

View File

@@ -62,7 +62,9 @@ class ToolNode(Node[ToolNodeData]):
tool_info = {
"provider_type": self.node_data.provider_type.value,
"provider_id": self.node_data.provider_id,
"tool_name": self.node_data.tool_name,
"plugin_unique_identifier": self.node_data.plugin_unique_identifier,
"credential_id": self.node_data.credential_id,
}
# get tool runtime

View File

@@ -0,0 +1,525 @@
# Dify Enterprise Telemetry Data Dictionary
Quick reference for all telemetry signals emitted by Dify Enterprise. For configuration and architecture details, see [README.md](./README.md).
## Resource Attributes
Attached to every signal (Span, Metric, Log).
| Attribute | Type | Example |
|-----------|------|---------|
| `service.name` | string | `dify` |
| `host.name` | string | `dify-api-7f8b` |
## Traces (Spans)
### `dify.workflow.run`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.trace_id` | string | Business trace ID (Workflow Run ID) |
| `dify.tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.workflow.id` | string | Workflow definition ID |
| `dify.workflow.run_id` | string | Unique ID for this run |
| `dify.workflow.status` | string | `succeeded`, `failed`, `stopped`, etc. |
| `dify.workflow.error` | string | Error message if failed |
| `dify.workflow.elapsed_time` | float | Total execution time (seconds) |
| `dify.invoke_from` | string | `api`, `webapp`, `debug` |
| `dify.conversation.id` | string | Conversation ID (optional) |
| `dify.message.id` | string | Message ID (optional) |
| `dify.invoked_by` | string | User ID who triggered the run |
| `gen_ai.usage.total_tokens` | int | Total tokens across all nodes (optional) |
| `gen_ai.user.id` | string | End-user identifier (optional) |
| `dify.parent.trace_id` | string | Parent workflow trace ID (optional) |
| `dify.parent.workflow.run_id` | string | Parent workflow run ID (optional) |
| `dify.parent.node.execution_id` | string | Parent node execution ID (optional) |
| `dify.parent.app.id` | string | Parent app ID (optional) |
### `dify.node.execution`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.trace_id` | string | Business trace ID |
| `dify.tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.workflow.id` | string | Workflow definition ID |
| `dify.workflow.run_id` | string | Workflow Run ID |
| `dify.message.id` | string | Message ID (optional) |
| `dify.conversation.id` | string | Conversation ID (optional) |
| `dify.node.execution_id` | string | Unique node execution ID |
| `dify.node.id` | string | Node ID in workflow graph |
| `dify.node.type` | string | Node type (see appendix) |
| `dify.node.title` | string | Display title |
| `dify.node.status` | string | `succeeded`, `failed` |
| `dify.node.error` | string | Error message if failed |
| `dify.node.elapsed_time` | float | Execution time (seconds) |
| `dify.node.index` | int | Execution order index |
| `dify.node.predecessor_node_id` | string | Triggering node ID |
| `dify.node.iteration_id` | string | Iteration ID (optional) |
| `dify.node.loop_id` | string | Loop ID (optional) |
| `dify.node.parallel_id` | string | Parallel branch ID (optional) |
| `dify.node.invoked_by` | string | User ID who triggered execution |
| `gen_ai.usage.input_tokens` | int | Prompt tokens (LLM nodes only) |
| `gen_ai.usage.output_tokens` | int | Completion tokens (LLM nodes only) |
| `gen_ai.usage.total_tokens` | int | Total tokens (LLM nodes only) |
| `gen_ai.request.model` | string | LLM model name (LLM nodes only) |
| `gen_ai.provider.name` | string | LLM provider name (LLM nodes only) |
| `gen_ai.user.id` | string | End-user identifier (optional) |
### `dify.node.execution.draft`
Same attributes as `dify.node.execution`. Emitted during Preview/Debug runs.
## Counters
All counters are cumulative and emitted at 100% accuracy.
### Token Counters
| Metric | Unit | Description |
|--------|------|-------------|
| `dify.tokens.total` | `{token}` | Total tokens consumed |
| `dify.tokens.input` | `{token}` | Input (prompt) tokens |
| `dify.tokens.output` | `{token}` | Output (completion) tokens |
**Labels:**
- `tenant_id`, `app_id`, `operation_type`, `model_provider`, `model_name`, `node_type` (if node_execution)
⚠️ **Warning:** `dify.tokens.total` at workflow level includes all node tokens. Filter by `operation_type` to avoid double-counting.
#### Token Hierarchy & Query Patterns
Token metrics are emitted at multiple layers. Understanding the hierarchy prevents double-counting:
```
App-level total
├── workflow           ← sum of all node_execution tokens (DO NOT add both)
│   └── node_execution ← per-node breakdown
├── message            ← independent (non-workflow chat apps only)
├── rule_generate      ← independent helper LLM call
├── code_generate      ← independent helper LLM call
├── structured_output  ← independent helper LLM call
└── instruction_modify ← independent helper LLM call
```
**Key rule:** `workflow` tokens already include all `node_execution` tokens. Never sum both.
**Available labels on token metrics:** `tenant_id`, `app_id`, `operation_type`, `model_provider`, `model_name`, `node_type`.
App name is only available on span attributes (`dify.app.name`), not metric labels — use `app_id` for metric queries.
**Common queries** (PromQL):
```promql
# ── Totals ──────────────────────────────────────────────────
# App-level total (exclude node_execution to avoid double-counting)
sum by (app_id) (dify_tokens_total{operation_type!="node_execution"})
# Single app total
sum (dify_tokens_total{app_id="<app_id>", operation_type!="node_execution"})
# Per-tenant totals
sum by (tenant_id) (dify_tokens_total{operation_type!="node_execution"})
# ── Drill-down ──────────────────────────────────────────────
# Workflow-level tokens for an app
sum (dify_tokens_total{app_id="<app_id>", operation_type="workflow"})
# Node-level breakdown within an app
sum by (node_type) (dify_tokens_total{app_id="<app_id>", operation_type="node_execution"})
# Model breakdown for an app
sum by (model_provider, model_name) (dify_tokens_total{app_id="<app_id>"})
# Input vs output per model
sum by (model_name) (dify_tokens_input_total{app_id="<app_id>"})
sum by (model_name) (dify_tokens_output_total{app_id="<app_id>"})
# ── Rates ───────────────────────────────────────────────────
# Token consumption rate (per hour)
sum(rate(dify_tokens_total{operation_type!="node_execution"}[1h]))
# Per-app consumption rate
sum by (app_id) (rate(dify_tokens_total{operation_type!="node_execution"}[1h]))
```
**Finding `app_id` from app name** (trace query — Tempo / Jaeger):
```
{ resource.dify.app.name = "My Chatbot" } | select(resource.dify.app.id)
```
### Request Counters
| Metric | Unit | Description |
|--------|------|-------------|
| `dify.requests.total` | `{request}` | Total operations count |
**Labels by type:**
| `type` | Additional Labels |
|--------|-------------------|
| `workflow` | `tenant_id`, `app_id`, `status`, `invoke_from` |
| `node` | `tenant_id`, `app_id`, `node_type`, `model_provider`, `model_name`, `status` |
| `draft_node` | `tenant_id`, `app_id`, `node_type`, `model_provider`, `model_name`, `status` |
| `message` | `tenant_id`, `app_id`, `model_provider`, `model_name`, `status`, `invoke_from` |
| `tool` | `tenant_id`, `app_id`, `tool_name` |
| `moderation` | `tenant_id`, `app_id` |
| `suggested_question` | `tenant_id`, `app_id`, `model_provider`, `model_name` |
| `dataset_retrieval` | `tenant_id`, `app_id` |
| `generate_name` | `tenant_id`, `app_id` |
| `prompt_generation` | `tenant_id`, `app_id`, `operation_type`, `model_provider`, `model_name`, `status` |
### Error Counters
| Metric | Unit | Description |
|--------|------|-------------|
| `dify.errors.total` | `{error}` | Total failed operations |
**Labels by type:**
| `type` | Additional Labels |
|--------|-------------------|
| `workflow` | `tenant_id`, `app_id` |
| `node` | `tenant_id`, `app_id`, `node_type`, `model_provider`, `model_name` |
| `draft_node` | `tenant_id`, `app_id`, `node_type`, `model_provider`, `model_name` |
| `message` | `tenant_id`, `app_id`, `model_provider`, `model_name` |
| `tool` | `tenant_id`, `app_id`, `tool_name` |
| `prompt_generation` | `tenant_id`, `app_id`, `operation_type`, `model_provider`, `model_name` |
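Request and error counters can be combined into failure rates. For example (illustrative PromQL, following the same `dify_*_total` naming convention as the token queries above and assuming the `type` label listed in the tables):
```promql
# Workflow failure rate per app over the last hour
sum by (app_id) (rate(dify_errors_total{type="workflow"}[1h]))
/
sum by (app_id) (rate(dify_requests_total{type="workflow"}[1h]))
```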
### Other Counters
| Metric | Unit | Labels |
|--------|------|--------|
| `dify.feedback.total` | `{feedback}` | `tenant_id`, `app_id`, `rating` |
| `dify.dataset.retrievals.total` | `{retrieval}` | `tenant_id`, `app_id`, `dataset_id`, `embedding_model_provider`, `embedding_model`, `rerank_model_provider`, `rerank_model` |
| `dify.app.created.total` | `{app}` | `tenant_id`, `app_id`, `mode` |
| `dify.app.updated.total` | `{app}` | `tenant_id`, `app_id` |
| `dify.app.deleted.total` | `{app}` | `tenant_id`, `app_id` |
## Histograms
| Metric | Unit | Labels |
|--------|------|--------|
| `dify.workflow.duration` | `s` | `tenant_id`, `app_id`, `status` |
| `dify.node.duration` | `s` | `tenant_id`, `app_id`, `node_type`, `model_provider`, `model_name`, `plugin_name` |
| `dify.message.duration` | `s` | `tenant_id`, `app_id`, `model_provider`, `model_name` |
| `dify.message.time_to_first_token` | `s` | `tenant_id`, `app_id`, `model_provider`, `model_name` |
| `dify.tool.duration` | `s` | `tenant_id`, `app_id`, `tool_name` |
| `dify.prompt_generation.duration` | `s` | `tenant_id`, `app_id`, `operation_type`, `model_provider`, `model_name` |
## Structured Logs
### Span Companion Logs
Logs that accompany spans. Signal type: `span_detail`
#### `dify.workflow.run` Companion Log
**Common attributes:** All span attributes (see Traces section) plus:
| Additional Attribute | Type | Always Present | Description |
|---------------------|------|----------------|-------------|
| `dify.app.name` | string | No | Application display name |
| `dify.workspace.name` | string | No | Workspace display name |
| `dify.workflow.version` | string | Yes | Workflow definition version |
| `dify.workflow.inputs` | string/JSON | Yes | Input parameters (content-gated) |
| `dify.workflow.outputs` | string/JSON | Yes | Output results (content-gated) |
| `dify.workflow.query` | string | No | User query text (content-gated) |
**Event attributes:**
- `dify.event.name`: `"dify.workflow.run"`
- `dify.event.signal`: `"span_detail"`
- `trace_id`, `span_id`, `tenant_id`, `user_id`
#### `dify.node.execution` and `dify.node.execution.draft` Companion Logs
**Common attributes:** All span attributes (see Traces section) plus:
| Additional Attribute | Type | Always Present | Description |
|---------------------|------|----------------|-------------|
| `dify.app.name` | string | No | Application display name |
| `dify.workspace.name` | string | No | Workspace display name |
| `dify.invoke_from` | string | No | Invocation source |
| `gen_ai.tool.name` | string | No | Tool name (tool nodes only) |
| `dify.node.total_price` | float | No | Cost (LLM nodes only) |
| `dify.node.currency` | string | No | Currency code (LLM nodes only) |
| `dify.node.iteration_index` | int | No | Iteration index (iteration nodes) |
| `dify.node.loop_index` | int | No | Loop index (loop nodes) |
| `dify.plugin.name` | string | No | Plugin name (tool/knowledge nodes) |
| `dify.credential.name` | string | No | Credential name (plugin nodes) |
| `dify.credential.id` | string | No | Credential ID (plugin nodes) |
| `dify.dataset.ids` | JSON array | No | Dataset IDs (knowledge nodes) |
| `dify.dataset.names` | JSON array | No | Dataset names (knowledge nodes) |
| `dify.node.inputs` | string/JSON | Yes | Node inputs (content-gated) |
| `dify.node.outputs` | string/JSON | Yes | Node outputs (content-gated) |
| `dify.node.process_data` | string/JSON | No | Processing data (content-gated) |
**Event attributes:**
- `dify.event.name`: `"dify.node.execution"` or `"dify.node.execution.draft"`
- `dify.event.signal`: `"span_detail"`
- `trace_id`, `span_id`, `tenant_id`, `user_id`
### Standalone Logs
Logs without structural spans. Signal type: `metric_only`
#### `dify.message.run`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.message.run"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID (32-char hex) |
| `span_id` | string | OTEL span ID (16-char hex) |
| `tenant_id` | string | Tenant identifier |
| `user_id` | string | User identifier (optional) |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.conversation.id` | string | Conversation ID (optional) |
| `dify.workflow.run_id` | string | Workflow run ID (optional) |
| `dify.invoke_from` | string | `service-api`, `web-app`, `debugger`, `explore` |
| `gen_ai.provider.name` | string | LLM provider |
| `gen_ai.request.model` | string | LLM model |
| `gen_ai.usage.input_tokens` | int | Input tokens |
| `gen_ai.usage.output_tokens` | int | Output tokens |
| `gen_ai.usage.total_tokens` | int | Total tokens |
| `dify.message.status` | string | `succeeded`, `failed` |
| `dify.message.error` | string | Error message (if failed) |
| `dify.message.duration` | float | Duration (seconds) |
| `dify.message.time_to_first_token` | float | TTFT (seconds) |
| `dify.message.inputs` | string/JSON | Inputs (content-gated) |
| `dify.message.outputs` | string/JSON | Outputs (content-gated) |
#### `dify.tool.execution`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.tool.execution"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.tool.name` | string | Tool name |
| `dify.tool.duration` | float | Duration (seconds) |
| `dify.tool.status` | string | `succeeded`, `failed` |
| `dify.tool.error` | string | Error message (if failed) |
| `dify.tool.inputs` | string/JSON | Inputs (content-gated) |
| `dify.tool.outputs` | string/JSON | Outputs (content-gated) |
| `dify.tool.parameters` | string/JSON | Parameters (content-gated) |
| `dify.tool.config` | string/JSON | Configuration (content-gated) |
#### `dify.moderation.check`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.moderation.check"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.moderation.type` | string | `input`, `output` |
| `dify.moderation.action` | string | `pass`, `block`, `flag` |
| `dify.moderation.flagged` | boolean | Whether flagged |
| `dify.moderation.categories` | JSON array | Flagged categories |
| `dify.moderation.query` | string | Content (content-gated) |
#### `dify.suggested_question.generation`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.suggested_question.generation"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.suggested_question.count` | int | Number of questions |
| `dify.suggested_question.duration` | float | Duration (seconds) |
| `dify.suggested_question.status` | string | `succeeded`, `failed` |
| `dify.suggested_question.error` | string | Error message (if failed) |
| `dify.suggested_question.questions` | JSON array | Questions (content-gated) |
#### `dify.dataset.retrieval`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.dataset.retrieval"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.dataset.id` | string | Dataset identifier |
| `dify.dataset.name` | string | Dataset name |
| `dify.dataset.embedding_providers` | JSON array | Embedding model providers (one per dataset) |
| `dify.dataset.embedding_models` | JSON array | Embedding models (one per dataset) |
| `dify.retrieval.rerank_provider` | string | Rerank model provider |
| `dify.retrieval.rerank_model` | string | Rerank model name |
| `dify.retrieval.query` | string | Search query (content-gated) |
| `dify.retrieval.document_count` | int | Documents retrieved |
| `dify.retrieval.duration` | float | Duration (seconds) |
| `dify.retrieval.status` | string | `succeeded`, `failed` |
| `dify.retrieval.error` | string | Error message (if failed) |
| `dify.dataset.documents` | JSON array | Documents (content-gated) |
#### `dify.generate_name.execution`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.generate_name.execution"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.conversation.id` | string | Conversation identifier |
| `dify.generate_name.duration` | float | Duration (seconds) |
| `dify.generate_name.status` | string | `succeeded`, `failed` |
| `dify.generate_name.error` | string | Error message (if failed) |
| `dify.generate_name.inputs` | string/JSON | Inputs (content-gated) |
| `dify.generate_name.outputs` | string | Generated name (content-gated) |
#### `dify.prompt_generation.execution`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.prompt_generation.execution"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.prompt_generation.operation_type` | string | Operation type (see appendix) |
| `gen_ai.provider.name` | string | LLM provider |
| `gen_ai.request.model` | string | LLM model |
| `gen_ai.usage.input_tokens` | int | Input tokens |
| `gen_ai.usage.output_tokens` | int | Output tokens |
| `gen_ai.usage.total_tokens` | int | Total tokens |
| `dify.prompt_generation.duration` | float | Duration (seconds) |
| `dify.prompt_generation.status` | string | `succeeded`, `failed` |
| `dify.prompt_generation.error` | string | Error message (if failed) |
| `dify.prompt_generation.instruction` | string | Instruction (content-gated) |
| `dify.prompt_generation.output` | string/JSON | Output (content-gated) |
#### `dify.app.created`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.app.created"` |
| `dify.event.signal` | string | `"metric_only"` |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.app.mode` | string | `chat`, `completion`, `agent-chat`, `workflow` |
| `dify.app.created_at` | string | Timestamp (ISO 8601) |
#### `dify.app.updated`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.app.updated"` |
| `dify.event.signal` | string | `"metric_only"` |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.app.updated_at` | string | Timestamp (ISO 8601) |
#### `dify.app.deleted`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.app.deleted"` |
| `dify.event.signal` | string | `"metric_only"` |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.app.deleted_at` | string | Timestamp (ISO 8601) |
#### `dify.feedback.created`
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.feedback.created"` |
| `dify.event.signal` | string | `"metric_only"` |
| `trace_id` | string | OTEL trace ID |
| `span_id` | string | OTEL span ID |
| `tenant_id` | string | Tenant identifier |
| `dify.app_id` | string | Application identifier |
| `dify.message.id` | string | Message identifier |
| `dify.feedback.rating` | string | `like`, `dislike`, `null` |
| `dify.feedback.content` | string | Feedback text (content-gated) |
| `dify.feedback.created_at` | string | Timestamp (ISO 8601) |
#### `dify.telemetry.rehydration_failed`
Diagnostic event for telemetry system health monitoring.
| Attribute | Type | Description |
|-----------|------|-------------|
| `dify.event.name` | string | `"dify.telemetry.rehydration_failed"` |
| `dify.event.signal` | string | `"metric_only"` |
| `tenant_id` | string | Tenant identifier |
| `dify.telemetry.error` | string | Error message |
| `dify.telemetry.payload_type` | string | Payload type (see appendix) |
| `dify.telemetry.correlation_id` | string | Correlation ID |
## Content-Gated Attributes
When `ENTERPRISE_INCLUDE_CONTENT=false`, these attributes are replaced with reference strings (`ref:{id_type}={uuid}`).
| Attribute | Signal |
|-----------|--------|
| `dify.workflow.inputs` | `dify.workflow.run` |
| `dify.workflow.outputs` | `dify.workflow.run` |
| `dify.workflow.query` | `dify.workflow.run` |
| `dify.node.inputs` | `dify.node.execution` |
| `dify.node.outputs` | `dify.node.execution` |
| `dify.node.process_data` | `dify.node.execution` |
| `dify.message.inputs` | `dify.message.run` |
| `dify.message.outputs` | `dify.message.run` |
| `dify.tool.inputs` | `dify.tool.execution` |
| `dify.tool.outputs` | `dify.tool.execution` |
| `dify.tool.parameters` | `dify.tool.execution` |
| `dify.tool.config` | `dify.tool.execution` |
| `dify.moderation.query` | `dify.moderation.check` |
| `dify.suggested_question.questions` | `dify.suggested_question.generation` |
| `dify.retrieval.query` | `dify.dataset.retrieval` |
| `dify.dataset.documents` | `dify.dataset.retrieval` |
| `dify.generate_name.inputs` | `dify.generate_name.execution` |
| `dify.generate_name.outputs` | `dify.generate_name.execution` |
| `dify.prompt_generation.instruction` | `dify.prompt_generation.execution` |
| `dify.prompt_generation.output` | `dify.prompt_generation.execution` |
| `dify.feedback.content` | `dify.feedback.created` |
## Appendix
### Operation Types
- `workflow`, `node_execution`, `message`, `rule_generate`, `code_generate`, `structured_output`, `instruction_modify`
### Node Types
- `start`, `end`, `answer`, `llm`, `knowledge-retrieval`, `knowledge-index`, `if-else`, `code`, `template-transform`, `question-classifier`, `http-request`, `tool`, `datasource`, `variable-aggregator`, `loop`, `iteration`, `parameter-extractor`, `assigner`, `document-extractor`, `list-operator`, `agent`, `trigger-webhook`, `trigger-schedule`, `trigger-plugin`, `human-input`
### Workflow Statuses
- `running`, `succeeded`, `failed`, `stopped`, `partial-succeeded`, `paused`
### Payload Types
- `workflow`, `node`, `message`, `tool`, `moderation`, `suggested_question`, `dataset_retrieval`, `generate_name`, `prompt_generation`, `app`, `feedback`
### Null Value Behavior
**Spans:** Attributes with `null` values are omitted.
**Logs:** Attributes with `null` values appear as `null` in JSON.
**Content-Gated:** Replaced with reference strings, not set to `null`.

View File

@@ -0,0 +1,121 @@
# Dify Enterprise Telemetry
This document provides an overview of the Dify Enterprise OpenTelemetry (OTEL) exporter and how to configure it for integration with observability stacks like Prometheus, Grafana, Jaeger, or Honeycomb.
## Overview
Dify Enterprise uses a "slim span + rich companion log" architecture to provide high-fidelity observability without overwhelming trace storage.
- **Traces (Spans)**: Capture the structure, identity, and timing of high-level operations (Workflows and Nodes).
- **Structured Logs**: Provide deep context (inputs, outputs, metadata) for every event, correlated to spans via `trace_id` and `span_id`.
- **Metrics**: Provide 100% accurate counters and histograms for usage, performance, and error tracking.
### Signal Architecture
```mermaid
graph TD
A[Workflow Run] -->|Span| B(dify.workflow.run)
A -->|Log| C(dify.workflow.run detail)
B ---|trace_id| C
D[Node Execution] -->|Span| E(dify.node.execution)
D -->|Log| F(dify.node.execution detail)
E ---|span_id| F
G[Message/Tool/etc] -->|Log| H(dify.* event)
G -->|Metric| I(dify.* counter/histogram)
```
## Configuration
The Enterprise OTEL exporter is configured via environment variables.
| Variable | Description | Default |
|----------|-------------|---------|
| `ENTERPRISE_ENABLED` | Master switch for all enterprise features. | `false` |
| `ENTERPRISE_TELEMETRY_ENABLED` | Master switch for enterprise telemetry. | `false` |
| `ENTERPRISE_OTLP_ENDPOINT` | OTLP collector endpoint (e.g., `http://otel-collector:4318`). | - |
| `ENTERPRISE_OTLP_HEADERS` | Custom headers for OTLP requests (e.g., `x-scope-orgid=tenant1`). | - |
| `ENTERPRISE_OTLP_PROTOCOL` | OTLP transport protocol (`http` or `grpc`). | `http` |
| `ENTERPRISE_OTLP_API_KEY` | Bearer token for authentication. | - |
| `ENTERPRISE_INCLUDE_CONTENT` | Whether to include sensitive content (inputs/outputs) in logs. | `true` |
| `ENTERPRISE_SERVICE_NAME` | Service name reported to OTEL. | `dify` |
| `ENTERPRISE_OTEL_SAMPLING_RATE` | Sampling rate for traces (0.0 to 1.0). Metrics are always 100%. | `1.0` |
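For example, a minimal setup that exports to a local collector over HTTP with content gating enabled might look like this (values are illustrative):
```
ENTERPRISE_ENABLED=true
ENTERPRISE_TELEMETRY_ENABLED=true
ENTERPRISE_OTLP_ENDPOINT=http://otel-collector:4318
ENTERPRISE_OTLP_PROTOCOL=http
ENTERPRISE_INCLUDE_CONTENT=false
ENTERPRISE_OTEL_SAMPLING_RATE=0.5
```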
## Correlation Model
Dify uses deterministic ID generation to ensure signals are correlated across different services and asynchronous tasks.
### ID Generation Rules
- `trace_id`: Derived from the correlation ID (workflow_run_id or node_execution_id for drafts) using `int(UUID(correlation_id))`
- `span_id`: Derived from the source ID using the lower 64 bits of `UUID(source_id)` (see the sketch below)
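A minimal Python sketch of these two derivations (helper names are illustrative, not the exporter's actual API; the `hash(...)` shorthand in the scenario diagrams below refers to this span_id rule):
```python
import uuid

def derive_trace_id(correlation_id: str) -> int:
    # 128-bit OTEL trace_id: the correlation UUID's full integer value
    return uuid.UUID(correlation_id).int

def derive_span_id(source_id: str) -> int:
    # 64-bit OTEL span_id: the lower 64 bits of the source UUID
    return uuid.UUID(source_id).int & ((1 << 64) - 1)
```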
### Scenario A: Simple Workflow
A single workflow run with multiple nodes. All spans and logs share the same `trace_id` (derived from `workflow_run_id`).
```
trace_id = UUID(workflow_run_id)
├── [root span] dify.workflow.run (span_id = hash(workflow_run_id))
│ ├── [child] dify.node.execution - "Start" (span_id = hash(node_exec_id_1))
│ ├── [child] dify.node.execution - "LLM" (span_id = hash(node_exec_id_2))
│ └── [child] dify.node.execution - "End" (span_id = hash(node_exec_id_3))
```
### Scenario B: Nested Sub-Workflow
A workflow calling another workflow via a Tool or Sub-workflow node. The child workflow's spans are linked to the parent via `parent_span_id`. Both workflows share the same trace_id.
```
trace_id = UUID(outer_workflow_run_id) ← shared across both workflows
├── [root] dify.workflow.run (outer) (span_id = hash(outer_workflow_run_id))
│ ├── dify.node.execution - "Start Node"
│ ├── dify.node.execution - "Tool Node" (triggers sub-workflow)
│ │ └── [child] dify.workflow.run (inner) (span_id = hash(inner_workflow_run_id))
│ │ ├── dify.node.execution - "Inner Start"
│ │ └── dify.node.execution - "Inner End"
│ └── dify.node.execution - "End Node"
```
**Key attributes for nested workflows:**
- Inner workflow's `dify.parent.trace_id` = outer `workflow_run_id`
- Inner workflow's `dify.parent.node.execution_id` = tool node's `execution_id`
- Inner workflow's `dify.parent.workflow.run_id` = outer `workflow_run_id`
- Inner workflow's `dify.parent.app.id` = outer `app_id`
### Scenario C: Draft Node Execution
A single node run in isolation (debugger/preview mode). It creates its own trace where the node span is the root.
```
trace_id = UUID(node_execution_id) ← own trace, NOT part of any workflow
└── dify.node.execution.draft (span_id = hash(node_execution_id))
```
**Key difference:** Draft executions use `node_execution_id` as the correlation_id, so they are NOT children of any workflow trace.
## Content Gating
When `ENTERPRISE_INCLUDE_CONTENT` is set to `false`, sensitive content attributes (inputs, outputs, queries) are replaced with reference strings (e.g., `ref:workflow_run_id=...`) to prevent data leakage to the OTEL collector.
**Reference String Format:**
```
ref:{id_type}={uuid}
```
**Examples:**
```
ref:workflow_run_id=550e8400-e29b-41d4-a716-446655440000
ref:node_execution_id=660e8400-e29b-41d4-a716-446655440001
ref:message_id=770e8400-e29b-41d4-a716-446655440002
```
To retrieve actual content when gating is enabled, query the Dify database using the provided UUID.
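A minimal consumer-side sketch for resolving such a reference (hypothetical helper, not part of the Dify codebase):
```python
def parse_content_ref(ref: str) -> tuple[str, str]:
    """Split a content-gated reference like 'ref:workflow_run_id=<uuid>'
    into (id_type, uuid) so the caller can look the record up in Dify."""
    if not ref.startswith("ref:"):
        raise ValueError(f"not a content reference: {ref!r}")
    id_type, _, value = ref.removeprefix("ref:").partition("=")
    return id_type, value

# e.g. ("workflow_run_id", "550e8400-e29b-41d4-a716-446655440000")
id_type, record_id = parse_content_ref("ref:workflow_run_id=550e8400-e29b-41d4-a716-446655440000")
```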
## Reference
For a complete list of telemetry signals, attributes, and data structures, see [DATA_DICTIONARY.md](./DATA_DICTIONARY.md).

View File

@@ -0,0 +1,73 @@
"""Telemetry gateway contracts and data structures.
This module defines the envelope format for telemetry events and the routing
configuration that determines how each event type is processed.
"""
from __future__ import annotations
from enum import StrEnum
from typing import Any
from pydantic import BaseModel, ConfigDict
class TelemetryCase(StrEnum):
"""Enumeration of all known telemetry event cases."""
WORKFLOW_RUN = "workflow_run"
NODE_EXECUTION = "node_execution"
DRAFT_NODE_EXECUTION = "draft_node_execution"
MESSAGE_RUN = "message_run"
TOOL_EXECUTION = "tool_execution"
MODERATION_CHECK = "moderation_check"
SUGGESTED_QUESTION = "suggested_question"
DATASET_RETRIEVAL = "dataset_retrieval"
GENERATE_NAME = "generate_name"
PROMPT_GENERATION = "prompt_generation"
APP_CREATED = "app_created"
APP_UPDATED = "app_updated"
APP_DELETED = "app_deleted"
FEEDBACK_CREATED = "feedback_created"
class SignalType(StrEnum):
"""Signal routing type for telemetry cases."""
TRACE = "trace"
METRIC_LOG = "metric_log"
class CaseRoute(BaseModel):
"""Routing configuration for a telemetry case.
Attributes:
signal_type: The type of signal (trace or metric_log).
ce_eligible: Whether this case is eligible for community edition tracing.
"""
signal_type: SignalType
ce_eligible: bool
class TelemetryEnvelope(BaseModel):
"""Envelope for telemetry events.
Attributes:
case: The telemetry case type.
tenant_id: The tenant identifier.
event_id: Unique event identifier for deduplication.
payload: The main event payload (inline for small payloads,
empty when offloaded to storage via ``payload_ref``).
metadata: Optional metadata dictionary. When the gateway
offloads a large payload to object storage, this contains
``{"payload_ref": "<storage_key>"}``.
"""
model_config = ConfigDict(extra="forbid", use_enum_values=False)
case: TelemetryCase
tenant_id: str
event_id: str
payload: dict[str, Any]
metadata: dict[str, Any] | None = None

View File

@@ -0,0 +1,77 @@
from __future__ import annotations
from collections.abc import Mapping
from typing import Any
from core.telemetry import TelemetryContext, TelemetryEvent, TraceTaskName
from core.telemetry import emit as telemetry_emit
from dify_graph.enums import WorkflowNodeExecutionMetadataKey
from models.workflow import WorkflowNodeExecutionModel
def enqueue_draft_node_execution_trace(
*,
execution: WorkflowNodeExecutionModel,
outputs: Mapping[str, Any] | None,
workflow_execution_id: str | None,
user_id: str,
) -> None:
node_data = _build_node_execution_data(
execution=execution,
outputs=outputs,
workflow_execution_id=workflow_execution_id,
)
telemetry_emit(
TelemetryEvent(
name=TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
context=TelemetryContext(
tenant_id=execution.tenant_id,
user_id=user_id,
app_id=execution.app_id,
),
payload={"node_execution_data": node_data},
)
)
def _build_node_execution_data(
*,
execution: WorkflowNodeExecutionModel,
outputs: Mapping[str, Any] | None,
workflow_execution_id: str | None,
) -> dict[str, Any]:
metadata = execution.execution_metadata_dict
node_outputs = outputs if outputs is not None else execution.outputs_dict
execution_id = workflow_execution_id or execution.workflow_run_id or execution.id
return {
"workflow_id": execution.workflow_id,
"workflow_execution_id": execution_id,
"tenant_id": execution.tenant_id,
"app_id": execution.app_id,
"node_execution_id": execution.id,
"node_id": execution.node_id,
"node_type": execution.node_type,
"title": execution.title,
"status": execution.status,
"error": execution.error,
"elapsed_time": execution.elapsed_time,
"index": execution.index,
"predecessor_node_id": execution.predecessor_node_id,
"created_at": execution.created_at,
"finished_at": execution.finished_at,
"total_tokens": metadata.get(WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS, 0),
"total_price": metadata.get(WorkflowNodeExecutionMetadataKey.TOTAL_PRICE, 0.0),
"currency": metadata.get(WorkflowNodeExecutionMetadataKey.CURRENCY),
"tool_name": (metadata.get(WorkflowNodeExecutionMetadataKey.TOOL_INFO) or {}).get("tool_name")
if isinstance(metadata.get(WorkflowNodeExecutionMetadataKey.TOOL_INFO), dict)
else None,
"iteration_id": metadata.get(WorkflowNodeExecutionMetadataKey.ITERATION_ID),
"iteration_index": metadata.get(WorkflowNodeExecutionMetadataKey.ITERATION_INDEX),
"loop_id": metadata.get(WorkflowNodeExecutionMetadataKey.LOOP_ID),
"loop_index": metadata.get(WorkflowNodeExecutionMetadataKey.LOOP_INDEX),
"parallel_id": metadata.get(WorkflowNodeExecutionMetadataKey.PARALLEL_ID),
"node_inputs": execution.inputs_dict,
"node_outputs": node_outputs,
"process_data": execution.process_data_dict,
}

View File

@@ -0,0 +1,938 @@
"""Enterprise trace handler — duck-typed, NOT a BaseTraceInstance subclass.
Invoked directly in the Celery task, not through OpsTraceManager dispatch.
Only requires a matching ``trace(trace_info)`` method signature.
Signal strategy:
- **Traces (spans)**: workflow run, node execution, draft node execution only.
- **Metrics + structured logs**: all other event types.
Token metric labels (unified structure):
All token metrics (dify.tokens.input, dify.tokens.output, dify.tokens.total) use the
same label set for consistent filtering and aggregation:
- tenant_id: Tenant identifier
- app_id: Application identifier
- operation_type: Source of token usage (workflow | node_execution | message | rule_generate | etc.)
- model_provider: LLM provider name (empty string if not applicable)
- model_name: LLM model name (empty string if not applicable)
- node_type: Workflow node type (empty string if not node_execution)
This unified structure allows filtering by operation_type to separate:
- Workflow-level aggregates (operation_type=workflow)
- Individual node executions (operation_type=node_execution)
- Direct message calls (operation_type=message)
- Prompt generation operations (operation_type=rule_generate, code_generate, etc.)
Without this, tokens are double-counted when querying totals (workflow totals include
node totals, since workflow.total_tokens is the sum of all node tokens).
"""
from __future__ import annotations
import json
import logging
from typing import Any, cast
from opentelemetry.util.types import AttributeValue
from core.ops.entities.trace_entity import (
BaseTraceInfo,
DatasetRetrievalTraceInfo,
DraftNodeExecutionTrace,
GenerateNameTraceInfo,
MessageTraceInfo,
ModerationTraceInfo,
OperationType,
PromptGenerationTraceInfo,
SuggestedQuestionTraceInfo,
ToolTraceInfo,
WorkflowNodeTraceInfo,
WorkflowTraceInfo,
)
from enterprise.telemetry.entities import (
EnterpriseTelemetryCounter,
EnterpriseTelemetryEvent,
EnterpriseTelemetryHistogram,
EnterpriseTelemetrySpan,
TokenMetricLabels,
)
from enterprise.telemetry.telemetry_log import emit_metric_only_event, emit_telemetry_log
logger = logging.getLogger(__name__)
class EnterpriseOtelTrace:
"""Duck-typed enterprise trace handler.
``*_trace`` methods emit spans (workflow/node only) or structured logs
(all other events), plus metrics at 100% accuracy.
"""
def __init__(self) -> None:
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if exporter is None:
raise RuntimeError("EnterpriseOtelTrace instantiated but exporter is not initialized")
self._exporter = exporter
def trace(self, trace_info: BaseTraceInfo) -> None:
if isinstance(trace_info, WorkflowTraceInfo):
self._workflow_trace(trace_info)
elif isinstance(trace_info, MessageTraceInfo):
self._message_trace(trace_info)
elif isinstance(trace_info, ToolTraceInfo):
self._tool_trace(trace_info)
elif isinstance(trace_info, DraftNodeExecutionTrace):
self._draft_node_execution_trace(trace_info)
elif isinstance(trace_info, WorkflowNodeTraceInfo):
self._node_execution_trace(trace_info)
elif isinstance(trace_info, ModerationTraceInfo):
self._moderation_trace(trace_info)
elif isinstance(trace_info, SuggestedQuestionTraceInfo):
self._suggested_question_trace(trace_info)
elif isinstance(trace_info, DatasetRetrievalTraceInfo):
self._dataset_retrieval_trace(trace_info)
elif isinstance(trace_info, GenerateNameTraceInfo):
self._generate_name_trace(trace_info)
elif isinstance(trace_info, PromptGenerationTraceInfo):
self._prompt_generation_trace(trace_info)
def _common_attrs(self, trace_info: BaseTraceInfo) -> dict[str, Any]:
metadata = self._metadata(trace_info)
tenant_id, app_id, user_id = self._context_ids(trace_info, metadata)
return {
"dify.trace_id": trace_info.resolved_trace_id,
"dify.tenant_id": tenant_id,
"dify.app_id": app_id,
"dify.app.name": metadata.get("app_name"),
"dify.workspace.name": metadata.get("workspace_name"),
"gen_ai.user.id": user_id,
"dify.message.id": trace_info.message_id,
}
def _metadata(self, trace_info: BaseTraceInfo) -> dict[str, Any]:
return trace_info.metadata
def _context_ids(
self,
trace_info: BaseTraceInfo,
metadata: dict[str, Any],
) -> tuple[str | None, str | None, str | None]:
tenant_id = getattr(trace_info, "tenant_id", None) or metadata.get("tenant_id")
app_id = getattr(trace_info, "app_id", None) or metadata.get("app_id")
user_id = getattr(trace_info, "user_id", None) or metadata.get("user_id")
return tenant_id, app_id, user_id
def _labels(self, **values: AttributeValue) -> dict[str, AttributeValue]:
return dict(values)
def _safe_payload_value(self, value: Any) -> str | dict[str, Any] | list[object] | None:
if isinstance(value, str):
return value
if isinstance(value, dict):
return cast(dict[str, Any], value)
if isinstance(value, list):
items: list[object] = []
for item in cast(list[object], value):
items.append(item)
return items
return None
def _content_or_ref(self, value: Any, ref: str) -> Any:
if self._exporter.include_content:
return self._maybe_json(value)
return ref
def _maybe_json(self, value: Any) -> str | None:
if value is None:
return None
if isinstance(value, str):
return value
try:
return json.dumps(value, default=str)
except (TypeError, ValueError):
return str(value)
# ------------------------------------------------------------------
# SPAN-emitting handlers (workflow, node execution, draft node)
# ------------------------------------------------------------------
def _workflow_trace(self, info: WorkflowTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
# -- Span attrs: identity + structure + status + timing + gen_ai scalars --
span_attrs: dict[str, Any] = {
"dify.trace_id": info.resolved_trace_id,
"dify.tenant_id": tenant_id,
"dify.app_id": app_id,
"dify.workflow.id": info.workflow_id,
"dify.workflow.run_id": info.workflow_run_id,
"dify.workflow.status": info.workflow_run_status,
"dify.workflow.error": info.error,
"dify.workflow.elapsed_time": info.workflow_run_elapsed_time,
"dify.invoke_from": metadata.get("triggered_from"),
"dify.conversation.id": info.conversation_id,
"dify.message.id": info.message_id,
"dify.invoked_by": info.invoked_by,
"gen_ai.usage.total_tokens": info.total_tokens,
"gen_ai.user.id": user_id,
}
trace_correlation_override, parent_span_id_source = info.resolved_parent_context
parent_ctx = metadata.get("parent_trace_context")
if isinstance(parent_ctx, dict):
parent_ctx_dict = cast(dict[str, Any], parent_ctx)
span_attrs["dify.parent.trace_id"] = parent_ctx_dict.get("trace_id")
span_attrs["dify.parent.node.execution_id"] = parent_ctx_dict.get("parent_node_execution_id")
span_attrs["dify.parent.workflow.run_id"] = parent_ctx_dict.get("parent_workflow_run_id")
span_attrs["dify.parent.app.id"] = parent_ctx_dict.get("parent_app_id")
self._exporter.export_span(
EnterpriseTelemetrySpan.WORKFLOW_RUN,
span_attrs,
correlation_id=info.workflow_run_id,
span_id_source=info.workflow_run_id,
start_time=info.start_time,
end_time=info.end_time,
trace_correlation_override=trace_correlation_override,
parent_span_id_source=parent_span_id_source,
)
# -- Companion log: ALL attrs (span + detail) for full picture --
log_attrs: dict[str, Any] = {**span_attrs}
log_attrs.update(
{
"dify.app.name": metadata.get("app_name"),
"dify.workspace.name": metadata.get("workspace_name"),
"gen_ai.user.id": user_id,
"gen_ai.usage.total_tokens": info.total_tokens,
"dify.workflow.version": info.workflow_run_version,
}
)
ref = f"ref:workflow_run_id={info.workflow_run_id}"
log_attrs["dify.workflow.inputs"] = self._content_or_ref(info.workflow_run_inputs, ref)
log_attrs["dify.workflow.outputs"] = self._content_or_ref(info.workflow_run_outputs, ref)
log_attrs["dify.workflow.query"] = self._content_or_ref(info.query, ref)
emit_telemetry_log(
event_name=EnterpriseTelemetryEvent.WORKFLOW_RUN,
attributes=log_attrs,
signal="span_detail",
trace_id_source=info.workflow_run_id,
span_id_source=info.workflow_run_id,
tenant_id=tenant_id,
user_id=user_id,
)
# -- Metrics --
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
)
token_labels = TokenMetricLabels(
tenant_id=tenant_id or "",
app_id=app_id or "",
operation_type=OperationType.WORKFLOW,
model_provider="",
model_name="",
node_type="",
).to_dict()
self._exporter.increment_counter(EnterpriseTelemetryCounter.TOKENS, info.total_tokens, token_labels)
if info.prompt_tokens is not None and info.prompt_tokens > 0:
self._exporter.increment_counter(EnterpriseTelemetryCounter.INPUT_TOKENS, info.prompt_tokens, token_labels)
if info.completion_tokens is not None and info.completion_tokens > 0:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.OUTPUT_TOKENS, info.completion_tokens, token_labels
)
invoke_from = metadata.get("triggered_from", "")
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="workflow",
status=info.workflow_run_status,
invoke_from=invoke_from,
),
)
# Prefer wall-clock timestamps over the elapsed_time field: elapsed_time defaults
# to 0 in the DB and can be stale if the Celery write races with the trace task.
# start_time = workflow_run.created_at, end_time = workflow_run.finished_at.
if info.start_time and info.end_time:
workflow_duration = (info.end_time - info.start_time).total_seconds()
elif info.workflow_run_elapsed_time:
workflow_duration = float(info.workflow_run_elapsed_time)
else:
workflow_duration = 0.0
self._exporter.record_histogram(
EnterpriseTelemetryHistogram.WORKFLOW_DURATION,
workflow_duration,
self._labels(
**labels,
status=info.workflow_run_status,
),
)
if info.error:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.ERRORS,
1,
self._labels(
**labels,
type="workflow",
),
)
def _node_execution_trace(self, info: WorkflowNodeTraceInfo) -> None:
self._emit_node_execution_trace(info, EnterpriseTelemetrySpan.NODE_EXECUTION, "node")
def _draft_node_execution_trace(self, info: DraftNodeExecutionTrace) -> None:
self._emit_node_execution_trace(
info,
EnterpriseTelemetrySpan.DRAFT_NODE_EXECUTION,
"draft_node",
correlation_id_override=info.node_execution_id,
trace_correlation_override_param=info.workflow_run_id,
)
def _emit_node_execution_trace(
self,
info: WorkflowNodeTraceInfo,
span_name: EnterpriseTelemetrySpan,
request_type: str,
correlation_id_override: str | None = None,
trace_correlation_override_param: str | None = None,
) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
# -- Span attrs: identity + structure + status + timing + gen_ai scalars --
span_attrs: dict[str, Any] = {
"dify.trace_id": info.resolved_trace_id,
"dify.tenant_id": tenant_id,
"dify.app_id": app_id,
"dify.workflow.id": info.workflow_id,
"dify.workflow.run_id": info.workflow_run_id,
"dify.message.id": info.message_id,
"dify.conversation.id": metadata.get("conversation_id"),
"dify.node.execution_id": info.node_execution_id,
"dify.node.id": info.node_id,
"dify.node.type": info.node_type,
"dify.node.title": info.title,
"dify.node.status": info.status,
"dify.node.error": info.error,
"dify.node.elapsed_time": info.elapsed_time,
"dify.node.index": info.index,
"dify.node.predecessor_node_id": info.predecessor_node_id,
"dify.node.iteration_id": info.iteration_id,
"dify.node.loop_id": info.loop_id,
"dify.node.parallel_id": info.parallel_id,
"dify.node.invoked_by": info.invoked_by,
"gen_ai.usage.input_tokens": info.prompt_tokens,
"gen_ai.usage.output_tokens": info.completion_tokens,
"gen_ai.usage.total_tokens": info.total_tokens,
"gen_ai.request.model": info.model_name,
"gen_ai.provider.name": info.model_provider,
"gen_ai.user.id": user_id,
}
resolved_override, _ = info.resolved_parent_context
trace_correlation_override = trace_correlation_override_param or resolved_override
effective_correlation_id = correlation_id_override or info.workflow_run_id
self._exporter.export_span(
span_name,
span_attrs,
correlation_id=effective_correlation_id,
span_id_source=info.node_execution_id,
start_time=info.start_time,
end_time=info.end_time,
trace_correlation_override=trace_correlation_override,
)
# -- Companion log: ALL attrs (span + detail) --
log_attrs: dict[str, Any] = {**span_attrs}
log_attrs.update(
{
"dify.app.name": metadata.get("app_name"),
"dify.workspace.name": metadata.get("workspace_name"),
"dify.invoke_from": metadata.get("invoke_from"),
"gen_ai.user.id": user_id,
"gen_ai.usage.total_tokens": info.total_tokens,
"dify.node.total_price": info.total_price,
"dify.node.currency": info.currency,
"gen_ai.provider.name": info.model_provider,
"gen_ai.request.model": info.model_name,
"gen_ai.tool.name": info.tool_name,
"dify.node.iteration_index": info.iteration_index,
"dify.node.loop_index": info.loop_index,
"dify.plugin.name": metadata.get("plugin_name"),
"dify.credential.name": metadata.get("credential_name"),
"dify.credential.id": metadata.get("credential_id"),
"dify.dataset.ids": self._maybe_json(metadata.get("dataset_ids")),
"dify.dataset.names": self._maybe_json(metadata.get("dataset_names")),
}
)
ref = f"ref:node_execution_id={info.node_execution_id}"
log_attrs["dify.node.inputs"] = self._content_or_ref(info.node_inputs, ref)
log_attrs["dify.node.outputs"] = self._content_or_ref(info.node_outputs, ref)
log_attrs["dify.node.process_data"] = self._content_or_ref(info.process_data, ref)
emit_telemetry_log(
event_name=span_name.value,
attributes=log_attrs,
signal="span_detail",
trace_id_source=info.workflow_run_id,
span_id_source=info.node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
# -- Metrics --
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
node_type=info.node_type,
model_provider=info.model_provider or "",
)
if info.total_tokens:
token_labels = TokenMetricLabels(
tenant_id=tenant_id or "",
app_id=app_id or "",
operation_type=OperationType.NODE_EXECUTION,
model_provider=info.model_provider or "",
model_name=info.model_name or "",
node_type=info.node_type,
).to_dict()
self._exporter.increment_counter(EnterpriseTelemetryCounter.TOKENS, info.total_tokens, token_labels)
if info.prompt_tokens is not None and info.prompt_tokens > 0:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.INPUT_TOKENS, info.prompt_tokens, token_labels
)
if info.completion_tokens is not None and info.completion_tokens > 0:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.OUTPUT_TOKENS, info.completion_tokens, token_labels
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type=request_type,
status=info.status,
model_name=info.model_name or "",
),
)
duration_labels = dict(labels)
duration_labels["model_name"] = info.model_name or ""
plugin_name = metadata.get("plugin_name")
if plugin_name and info.node_type in {"tool", "knowledge-retrieval"}:
duration_labels["plugin_name"] = plugin_name
self._exporter.record_histogram(EnterpriseTelemetryHistogram.NODE_DURATION, info.elapsed_time, duration_labels)
if info.error:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.ERRORS,
1,
self._labels(
**labels,
type=request_type,
model_name=info.model_name or "",
),
)
# ------------------------------------------------------------------
# METRIC-ONLY handlers (structured log + counters/histograms)
# ------------------------------------------------------------------
def _message_trace(self, info: MessageTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs.update(
{
"dify.invoke_from": metadata.get("from_source"),
"dify.conversation.id": metadata.get("conversation_id"),
"dify.conversation.mode": info.conversation_mode,
"gen_ai.provider.name": metadata.get("ls_provider"),
"gen_ai.request.model": metadata.get("ls_model_name"),
"gen_ai.usage.input_tokens": info.message_tokens,
"gen_ai.usage.output_tokens": info.answer_tokens,
"gen_ai.usage.total_tokens": info.total_tokens,
"dify.message.status": metadata.get("status"),
"dify.message.error": info.error,
"dify.message.from_source": metadata.get("from_source"),
"dify.message.from_end_user_id": metadata.get("from_end_user_id"),
"dify.message.from_account_id": metadata.get("from_account_id"),
"dify.streaming": info.is_streaming_request,
"dify.message.time_to_first_token": info.gen_ai_server_time_to_first_token,
"dify.message.streaming_duration": info.llm_streaming_time_to_generate,
"dify.workflow.run_id": metadata.get("workflow_run_id"),
}
)
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
ref = f"ref:message_id={info.message_id}"
inputs = self._safe_payload_value(info.inputs)
outputs = self._safe_payload_value(info.outputs)
attrs["dify.message.inputs"] = self._content_or_ref(inputs, ref)
attrs["dify.message.outputs"] = self._content_or_ref(outputs, ref)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.MESSAGE_RUN,
attributes=attrs,
trace_id_source=metadata.get("workflow_run_id") or (str(info.message_id) if info.message_id else None),
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
model_provider=metadata.get("ls_provider") or "",
model_name=metadata.get("ls_model_name") or "",
)
token_labels = TokenMetricLabels(
tenant_id=tenant_id or "",
app_id=app_id or "",
operation_type=OperationType.MESSAGE,
model_provider=metadata.get("ls_provider") or "",
model_name=metadata.get("ls_model_name") or "",
node_type="",
).to_dict()
self._exporter.increment_counter(EnterpriseTelemetryCounter.TOKENS, info.total_tokens, token_labels)
if info.message_tokens > 0:
self._exporter.increment_counter(EnterpriseTelemetryCounter.INPUT_TOKENS, info.message_tokens, token_labels)
if info.answer_tokens > 0:
self._exporter.increment_counter(EnterpriseTelemetryCounter.OUTPUT_TOKENS, info.answer_tokens, token_labels)
invoke_from = metadata.get("from_source", "")
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="message",
status=metadata.get("status", ""),
invoke_from=invoke_from,
),
)
if info.start_time and info.end_time:
duration = (info.end_time - info.start_time).total_seconds()
self._exporter.record_histogram(EnterpriseTelemetryHistogram.MESSAGE_DURATION, duration, labels)
if info.gen_ai_server_time_to_first_token is not None:
self._exporter.record_histogram(
EnterpriseTelemetryHistogram.MESSAGE_TTFT, info.gen_ai_server_time_to_first_token, labels
)
if info.error:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.ERRORS,
1,
self._labels(
**labels,
type="message",
),
)
def _tool_trace(self, info: ToolTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs.update(
{
"gen_ai.tool.name": info.tool_name,
"dify.tool.time_cost": info.time_cost,
"dify.tool.error": info.error,
"dify.workflow.run_id": metadata.get("workflow_run_id"),
}
)
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
ref = f"ref:message_id={info.message_id}"
attrs["dify.tool.inputs"] = self._content_or_ref(info.tool_inputs, ref)
attrs["dify.tool.outputs"] = self._content_or_ref(info.tool_outputs, ref)
attrs["dify.tool.parameters"] = self._content_or_ref(info.tool_parameters, ref)
attrs["dify.tool.config"] = self._content_or_ref(info.tool_config, ref)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.TOOL_EXECUTION,
attributes=attrs,
trace_id_source=info.resolved_trace_id,
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
tool_name=info.tool_name,
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="tool",
),
)
self._exporter.record_histogram(EnterpriseTelemetryHistogram.TOOL_DURATION, float(info.time_cost), labels)
if info.error:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.ERRORS,
1,
self._labels(
**labels,
type="tool",
),
)
def _moderation_trace(self, info: ModerationTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs.update(
{
"dify.moderation.flagged": info.flagged,
"dify.moderation.action": info.action,
"dify.moderation.preset_response": info.preset_response,
"dify.workflow.run_id": metadata.get("workflow_run_id"),
}
)
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
attrs["dify.moderation.query"] = self._content_or_ref(
info.query,
f"ref:message_id={info.message_id}",
)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.MODERATION_CHECK,
attributes=attrs,
trace_id_source=info.resolved_trace_id,
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="moderation",
),
)
def _suggested_question_trace(self, info: SuggestedQuestionTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs.update(
{
"gen_ai.usage.total_tokens": info.total_tokens,
"dify.suggested_question.status": info.status,
"dify.suggested_question.error": info.error,
"gen_ai.provider.name": info.model_provider,
"gen_ai.request.model": info.model_id,
"dify.suggested_question.count": len(info.suggested_question),
"dify.workflow.run_id": metadata.get("workflow_run_id"),
}
)
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
attrs["dify.suggested_question.questions"] = self._content_or_ref(
info.suggested_question,
f"ref:message_id={info.message_id}",
)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.SUGGESTED_QUESTION_GENERATION,
attributes=attrs,
trace_id_source=info.resolved_trace_id,
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="suggested_question",
model_provider=info.model_provider or "",
model_name=info.model_id or "",
),
)
def _dataset_retrieval_trace(self, info: DatasetRetrievalTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs["dify.dataset.error"] = info.error
attrs["dify.workflow.run_id"] = metadata.get("workflow_run_id")
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
docs: list[dict[str, Any]] = []
documents_any: Any = info.documents
documents_list: list[Any] = cast(list[Any], documents_any) if isinstance(documents_any, list) else []
for entry in documents_list:
if isinstance(entry, dict):
entry_dict: dict[str, Any] = cast(dict[str, Any], entry)
docs.append(entry_dict)
dataset_ids: list[str] = []
dataset_names: list[str] = []
structured_docs: list[dict[str, Any]] = []
for doc in docs:
meta_raw = doc.get("metadata")
meta: dict[str, Any] = cast(dict[str, Any], meta_raw) if isinstance(meta_raw, dict) else {}
did = meta.get("dataset_id")
dname = meta.get("dataset_name")
if did and did not in dataset_ids:
dataset_ids.append(did)
if dname and dname not in dataset_names:
dataset_names.append(dname)
structured_docs.append(
{
"dataset_id": did,
"document_id": meta.get("document_id"),
"segment_id": meta.get("segment_id"),
"score": meta.get("score"),
}
)
attrs["dify.dataset.ids"] = self._maybe_json(dataset_ids)
attrs["dify.dataset.names"] = self._maybe_json(dataset_names)
attrs["dify.retrieval.document_count"] = len(docs)
embedding_models_raw: Any = metadata.get("embedding_models")
embedding_models: dict[str, Any] = (
cast(dict[str, Any], embedding_models_raw) if isinstance(embedding_models_raw, dict) else {}
)
if embedding_models:
providers: list[str] = []
models: list[str] = []
for ds_info in embedding_models.values():
if isinstance(ds_info, dict):
ds_info_dict: dict[str, Any] = cast(dict[str, Any], ds_info)
p = ds_info_dict.get("embedding_model_provider", "")
m = ds_info_dict.get("embedding_model", "")
if p and p not in providers:
providers.append(p)
if m and m not in models:
models.append(m)
attrs["dify.dataset.embedding_providers"] = self._maybe_json(providers)
attrs["dify.dataset.embedding_models"] = self._maybe_json(models)
# Add rerank model to logs
rerank_provider = metadata.get("rerank_model_provider", "")
rerank_model = metadata.get("rerank_model_name", "")
if rerank_provider or rerank_model:
attrs["dify.retrieval.rerank_provider"] = rerank_provider
attrs["dify.retrieval.rerank_model"] = rerank_model
ref = f"ref:message_id={info.message_id}"
retrieval_inputs = self._safe_payload_value(info.inputs)
attrs["dify.retrieval.query"] = self._content_or_ref(retrieval_inputs, ref)
attrs["dify.dataset.documents"] = self._content_or_ref(structured_docs, ref)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.DATASET_RETRIEVAL,
attributes=attrs,
trace_id_source=metadata.get("workflow_run_id") or (str(info.message_id) if info.message_id else None),
span_id_source=node_execution_id or (str(info.message_id) if info.message_id else None),
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="dataset_retrieval",
),
)
for did in dataset_ids:
# Get embedding model for this specific dataset
ds_embedding_info = embedding_models.get(did, {})
embedding_provider = ds_embedding_info.get("embedding_model_provider", "")
embedding_model = ds_embedding_info.get("embedding_model", "")
# Get rerank model (same for all datasets in this retrieval)
rerank_provider = metadata.get("rerank_model_provider", "")
rerank_model = metadata.get("rerank_model_name", "")
self._exporter.increment_counter(
EnterpriseTelemetryCounter.DATASET_RETRIEVALS,
1,
self._labels(
**labels,
dataset_id=did,
embedding_model_provider=embedding_provider,
embedding_model=embedding_model,
rerank_model_provider=rerank_provider,
rerank_model=rerank_model,
),
)
def _generate_name_trace(self, info: GenerateNameTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = self._common_attrs(info)
attrs["dify.conversation.id"] = info.conversation_id
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
ref = f"ref:conversation_id={info.conversation_id}"
inputs = self._safe_payload_value(info.inputs)
outputs = self._safe_payload_value(info.outputs)
attrs["dify.generate_name.inputs"] = self._content_or_ref(inputs, ref)
attrs["dify.generate_name.outputs"] = self._content_or_ref(outputs, ref)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.GENERATE_NAME_EXECUTION,
attributes=attrs,
trace_id_source=info.resolved_trace_id,
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
)
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="generate_name",
),
)
def _prompt_generation_trace(self, info: PromptGenerationTraceInfo) -> None:
metadata = self._metadata(info)
tenant_id, app_id, user_id = self._context_ids(info, metadata)
attrs = {
"dify.trace_id": info.resolved_trace_id,
"dify.tenant_id": tenant_id,
"dify.user.id": user_id,
"dify.app.id": app_id or "",
"dify.app.name": metadata.get("app_name"),
"dify.workspace.name": metadata.get("workspace_name"),
"dify.operation.type": info.operation_type,
"gen_ai.provider.name": info.model_provider,
"gen_ai.request.model": info.model_name,
"gen_ai.usage.input_tokens": info.prompt_tokens,
"gen_ai.usage.output_tokens": info.completion_tokens,
"gen_ai.usage.total_tokens": info.total_tokens,
"dify.prompt_generation.latency": info.latency,
"dify.prompt_generation.error": info.error,
}
node_execution_id = metadata.get("node_execution_id")
if node_execution_id:
attrs["dify.node.execution_id"] = node_execution_id
if info.total_price is not None:
attrs["dify.prompt_generation.total_price"] = info.total_price
attrs["dify.prompt_generation.currency"] = info.currency
ref = f"ref:trace_id={info.trace_id}"
outputs = self._safe_payload_value(info.outputs)
attrs["dify.prompt_generation.instruction"] = self._content_or_ref(info.instruction, ref)
attrs["dify.prompt_generation.output"] = self._content_or_ref(outputs, ref)
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.PROMPT_GENERATION_EXECUTION,
attributes=attrs,
trace_id_source=info.resolved_trace_id,
span_id_source=node_execution_id,
tenant_id=tenant_id,
user_id=user_id,
)
token_labels = TokenMetricLabels(
tenant_id=tenant_id or "",
app_id=app_id or "",
operation_type=info.operation_type,
model_provider=info.model_provider,
model_name=info.model_name,
node_type="",
).to_dict()
labels = self._labels(
tenant_id=tenant_id or "",
app_id=app_id or "",
operation_type=info.operation_type,
model_provider=info.model_provider,
model_name=info.model_name,
)
self._exporter.increment_counter(EnterpriseTelemetryCounter.TOKENS, info.total_tokens, token_labels)
if info.prompt_tokens > 0:
self._exporter.increment_counter(EnterpriseTelemetryCounter.INPUT_TOKENS, info.prompt_tokens, token_labels)
if info.completion_tokens > 0:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.OUTPUT_TOKENS, info.completion_tokens, token_labels
)
status = "failed" if info.error else "success"
self._exporter.increment_counter(
EnterpriseTelemetryCounter.REQUESTS,
1,
self._labels(
**labels,
type="prompt_generation",
status=status,
),
)
self._exporter.record_histogram(
EnterpriseTelemetryHistogram.PROMPT_GENERATION_DURATION,
info.latency,
labels,
)
if info.error:
self._exporter.increment_counter(
EnterpriseTelemetryCounter.ERRORS,
1,
self._labels(
**labels,
type="prompt_generation",
),
)

View File

@@ -0,0 +1,121 @@
from enum import StrEnum
from typing import cast
from opentelemetry.util.types import AttributeValue
from pydantic import BaseModel, ConfigDict
class EnterpriseTelemetrySpan(StrEnum):
WORKFLOW_RUN = "dify.workflow.run"
NODE_EXECUTION = "dify.node.execution"
DRAFT_NODE_EXECUTION = "dify.node.execution.draft"
class EnterpriseTelemetryEvent(StrEnum):
"""Event names for enterprise telemetry logs."""
APP_CREATED = "dify.app.created"
APP_UPDATED = "dify.app.updated"
APP_DELETED = "dify.app.deleted"
FEEDBACK_CREATED = "dify.feedback.created"
WORKFLOW_RUN = "dify.workflow.run"
MESSAGE_RUN = "dify.message.run"
TOOL_EXECUTION = "dify.tool.execution"
MODERATION_CHECK = "dify.moderation.check"
SUGGESTED_QUESTION_GENERATION = "dify.suggested_question.generation"
DATASET_RETRIEVAL = "dify.dataset.retrieval"
GENERATE_NAME_EXECUTION = "dify.generate_name.execution"
PROMPT_GENERATION_EXECUTION = "dify.prompt_generation.execution"
REHYDRATION_FAILED = "dify.telemetry.rehydration_failed"
class EnterpriseTelemetryCounter(StrEnum):
TOKENS = "tokens"
INPUT_TOKENS = "input_tokens"
OUTPUT_TOKENS = "output_tokens"
REQUESTS = "requests"
ERRORS = "errors"
FEEDBACK = "feedback"
DATASET_RETRIEVALS = "dataset_retrievals"
APP_CREATED = "app_created"
APP_UPDATED = "app_updated"
APP_DELETED = "app_deleted"
class EnterpriseTelemetryHistogram(StrEnum):
WORKFLOW_DURATION = "workflow_duration"
NODE_DURATION = "node_duration"
MESSAGE_DURATION = "message_duration"
MESSAGE_TTFT = "message_ttft"
TOOL_DURATION = "tool_duration"
PROMPT_GENERATION_DURATION = "prompt_generation_duration"
class TokenMetricLabels(BaseModel):
"""Unified label structure for all dify.token.* metrics.
All token counters (dify.tokens.input, dify.tokens.output, dify.tokens.total) MUST
use this exact label set to ensure consistent filtering and aggregation across
different operation types.
Attributes:
tenant_id: Tenant identifier.
app_id: Application identifier.
operation_type: Source of token usage (workflow | node_execution | message |
rule_generate | code_generate | structured_output | instruction_modify).
model_provider: LLM provider name. Empty string if not applicable (e.g., workflow-level).
model_name: LLM model name. Empty string if not applicable (e.g., workflow-level).
node_type: Workflow node type. Empty string unless operation_type=node_execution.
Usage:
labels = TokenMetricLabels(
tenant_id="tenant-123",
app_id="app-456",
operation_type=OperationType.WORKFLOW,
model_provider="",
model_name="",
node_type="",
)
exporter.increment_counter(
EnterpriseTelemetryCounter.INPUT_TOKENS,
100,
labels.to_dict()
)
Design rationale:
Without this unified structure, tokens get double-counted when querying totals
because workflow.total_tokens is already the sum of all node tokens. The
operation_type label allows filtering to separate workflow-level aggregates from
node-level detail, while keeping the same label cardinality for consistent queries.
"""
tenant_id: str
app_id: str
operation_type: str
model_provider: str
model_name: str
node_type: str
model_config = ConfigDict(extra="forbid", frozen=True)
def to_dict(self) -> dict[str, AttributeValue]:
return cast(
dict[str, AttributeValue],
{
"tenant_id": self.tenant_id,
"app_id": self.app_id,
"operation_type": self.operation_type,
"model_provider": self.model_provider,
"model_name": self.model_name,
"node_type": self.node_type,
},
)
__all__ = [
"EnterpriseTelemetryCounter",
"EnterpriseTelemetryEvent",
"EnterpriseTelemetryHistogram",
"EnterpriseTelemetrySpan",
"TokenMetricLabels",
]

View File

@@ -0,0 +1,99 @@
"""Blinker signal handlers for enterprise telemetry.
Registered at import time via ``@signal.connect`` decorators.
Import must happen during ``ext_enterprise_telemetry.init_app()`` to
ensure handlers fire. Each handler delegates to ``core.telemetry.gateway``
which handles routing, EE-gating, and dispatch.
All handlers are best-effort: exceptions are caught and logged so that
telemetry failures never break user-facing operations.
"""
from __future__ import annotations
import logging
from events.app_event import app_was_created, app_was_deleted, app_was_updated
from events.feedback_event import feedback_was_created
logger = logging.getLogger(__name__)
__all__ = [
"_handle_app_created",
"_handle_app_deleted",
"_handle_app_updated",
"_handle_feedback_created",
]
@app_was_created.connect
def _handle_app_created(sender: object, **kwargs: object) -> None:
try:
from core.telemetry.gateway import emit as gateway_emit
from enterprise.telemetry.contracts import TelemetryCase
gateway_emit(
case=TelemetryCase.APP_CREATED,
context={"tenant_id": str(getattr(sender, "tenant_id", "") or "")},
payload={
"app_id": getattr(sender, "id", None),
"mode": getattr(sender, "mode", None),
},
)
except Exception:
logger.warning("Failed to emit app_created telemetry", exc_info=True)
@app_was_deleted.connect
def _handle_app_deleted(sender: object, **kwargs: object) -> None:
try:
from core.telemetry.gateway import emit as gateway_emit
from enterprise.telemetry.contracts import TelemetryCase
gateway_emit(
case=TelemetryCase.APP_DELETED,
context={"tenant_id": str(getattr(sender, "tenant_id", "") or "")},
payload={"app_id": getattr(sender, "id", None)},
)
except Exception:
logger.warning("Failed to emit app_deleted telemetry", exc_info=True)
@app_was_updated.connect
def _handle_app_updated(sender: object, **kwargs: object) -> None:
try:
from core.telemetry.gateway import emit as gateway_emit
from enterprise.telemetry.contracts import TelemetryCase
gateway_emit(
case=TelemetryCase.APP_UPDATED,
context={"tenant_id": str(getattr(sender, "tenant_id", "") or "")},
payload={"app_id": getattr(sender, "id", None)},
)
except Exception:
logger.warning("Failed to emit app_updated telemetry", exc_info=True)
@feedback_was_created.connect
def _handle_feedback_created(sender: object, **kwargs: object) -> None:
try:
from core.telemetry.gateway import emit as gateway_emit
from enterprise.telemetry.contracts import TelemetryCase
tenant_id = str(kwargs.get("tenant_id", "") or "")
gateway_emit(
case=TelemetryCase.FEEDBACK_CREATED,
context={"tenant_id": tenant_id},
payload={
"message_id": getattr(sender, "message_id", None),
"app_id": getattr(sender, "app_id", None),
"conversation_id": getattr(sender, "conversation_id", None),
"from_end_user_id": getattr(sender, "from_end_user_id", None),
"from_account_id": getattr(sender, "from_account_id", None),
"rating": getattr(sender, "rating", None),
"from_source": getattr(sender, "from_source", None),
"content": getattr(sender, "content", None),
},
)
except Exception:
logger.warning("Failed to emit feedback_created telemetry", exc_info=True)

View File

@@ -0,0 +1,284 @@
"""Enterprise OTEL exporter — shared by EnterpriseOtelTrace, event handlers, and direct instrumentation.
Uses dedicated TracerProvider and MeterProvider instances (configurable sampling,
independent from ext_otel.py infrastructure).
Initialized once during Flask extension init (single-threaded via ext_enterprise_telemetry.py).
Accessed via ``ext_enterprise_telemetry.get_enterprise_exporter()`` from any thread/process.
"""
import logging
import socket
import uuid
from datetime import datetime
from typing import Any, cast
from opentelemetry import trace
from opentelemetry.context import Context
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter as GRPCMetricExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter as GRPCSpanExporter
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter as HTTPMetricExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter as HTTPSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio
from opentelemetry.semconv.resource import ResourceAttributes
from opentelemetry.trace import SpanContext, TraceFlags
from opentelemetry.util.types import Attributes, AttributeValue
from configs import dify_config
from enterprise.telemetry.entities import EnterpriseTelemetryCounter, EnterpriseTelemetryHistogram
from enterprise.telemetry.id_generator import (
CorrelationIdGenerator,
compute_deterministic_span_id,
set_correlation_id,
set_span_id_source,
)
logger = logging.getLogger(__name__)
def is_enterprise_telemetry_enabled() -> bool:
return bool(dify_config.ENTERPRISE_ENABLED and dify_config.ENTERPRISE_TELEMETRY_ENABLED)
def _parse_otlp_headers(raw: str) -> dict[str, str]:
"""Parse ``key=value,key2=value2`` into a dict."""
if not raw:
return {}
headers: dict[str, str] = {}
for pair in raw.split(","):
if "=" not in pair:
continue
k, v = pair.split("=", 1)
headers[k.strip().lower()] = v.strip()
return headers
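# Illustrative parse (hypothetical header string):
#   _parse_otlp_headers("Authorization=Bearer abc, X-Tenant = 1")
#   -> {"authorization": "Bearer abc", "x-tenant": "1"}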
def _datetime_to_ns(dt: datetime) -> int:
"""Convert a datetime to nanoseconds since epoch (OTEL convention)."""
return int(dt.timestamp() * 1_000_000_000)
class _ExporterFactory:
def __init__(self, protocol: str, endpoint: str, headers: dict[str, str], insecure: bool):
self._protocol = protocol
self._endpoint = endpoint
self._headers = headers
self._grpc_headers = tuple(headers.items()) if headers else None
self._http_headers = headers or None
self._insecure = insecure
def create_trace_exporter(self) -> HTTPSpanExporter | GRPCSpanExporter:
if self._protocol == "grpc":
return GRPCSpanExporter(
endpoint=self._endpoint or None,
headers=self._grpc_headers,
insecure=self._insecure,
)
trace_endpoint = f"{self._endpoint}/v1/traces" if self._endpoint else ""
return HTTPSpanExporter(endpoint=trace_endpoint or None, headers=self._http_headers)
def create_metric_exporter(self) -> HTTPMetricExporter | GRPCMetricExporter:
if self._protocol == "grpc":
return GRPCMetricExporter(
endpoint=self._endpoint or None,
headers=self._grpc_headers,
insecure=self._insecure,
)
metric_endpoint = f"{self._endpoint}/v1/metrics" if self._endpoint else ""
return HTTPMetricExporter(endpoint=metric_endpoint or None, headers=self._http_headers)
class EnterpriseExporter:
"""Shared OTEL exporter for all enterprise telemetry.
``export_span`` creates spans with optional real timestamps, deterministic
span/trace IDs, and cross-workflow parent linking.
``increment_counter`` / ``record_histogram`` emit OTEL metrics unconditionally; metrics are never sampled, unlike spans.
"""
def __init__(self, config: object) -> None:
endpoint: str = getattr(config, "ENTERPRISE_OTLP_ENDPOINT", "")
headers_raw: str = getattr(config, "ENTERPRISE_OTLP_HEADERS", "")
protocol: str = (getattr(config, "ENTERPRISE_OTLP_PROTOCOL", "http") or "http").lower()
service_name: str = getattr(config, "ENTERPRISE_SERVICE_NAME", "dify")
sampling_rate: float = getattr(config, "ENTERPRISE_OTEL_SAMPLING_RATE", 1.0)
self.include_content: bool = getattr(config, "ENTERPRISE_INCLUDE_CONTENT", True)
api_key: str = getattr(config, "ENTERPRISE_OTLP_API_KEY", "")
# Auto-detect TLS: https:// uses secure, everything else is insecure
insecure = not endpoint.startswith("https://")
resource = Resource(
attributes={
ResourceAttributes.SERVICE_NAME: service_name,
ResourceAttributes.HOST_NAME: socket.gethostname(),
}
)
sampler = ParentBasedTraceIdRatio(sampling_rate)
id_generator = CorrelationIdGenerator()
self._tracer_provider = TracerProvider(resource=resource, sampler=sampler, id_generator=id_generator)
headers = _parse_otlp_headers(headers_raw)
if api_key:
if "authorization" in headers:
logger.warning(
"ENTERPRISE_OTLP_API_KEY is set but ENTERPRISE_OTLP_HEADERS also contains "
"'authorization'; the API key will take precedence."
)
headers["authorization"] = f"Bearer {api_key}"
factory = _ExporterFactory(protocol, endpoint, headers, insecure=insecure)
trace_exporter = factory.create_trace_exporter()
self._tracer_provider.add_span_processor(BatchSpanProcessor(trace_exporter))
self._tracer = self._tracer_provider.get_tracer("dify.enterprise")
metric_exporter = factory.create_metric_exporter()
self._meter_provider = MeterProvider(
resource=resource,
metric_readers=[PeriodicExportingMetricReader(metric_exporter)],
)
meter = self._meter_provider.get_meter("dify.enterprise")
self._counters = {
EnterpriseTelemetryCounter.TOKENS: meter.create_counter("dify.tokens.total", unit="{token}"),
EnterpriseTelemetryCounter.INPUT_TOKENS: meter.create_counter("dify.tokens.input", unit="{token}"),
EnterpriseTelemetryCounter.OUTPUT_TOKENS: meter.create_counter("dify.tokens.output", unit="{token}"),
EnterpriseTelemetryCounter.REQUESTS: meter.create_counter("dify.requests.total", unit="{request}"),
EnterpriseTelemetryCounter.ERRORS: meter.create_counter("dify.errors.total", unit="{error}"),
EnterpriseTelemetryCounter.FEEDBACK: meter.create_counter("dify.feedback.total", unit="{feedback}"),
EnterpriseTelemetryCounter.DATASET_RETRIEVALS: meter.create_counter(
"dify.dataset.retrievals.total", unit="{retrieval}"
),
EnterpriseTelemetryCounter.APP_CREATED: meter.create_counter("dify.app.created.total", unit="{app}"),
EnterpriseTelemetryCounter.APP_UPDATED: meter.create_counter("dify.app.updated.total", unit="{app}"),
EnterpriseTelemetryCounter.APP_DELETED: meter.create_counter("dify.app.deleted.total", unit="{app}"),
}
self._histograms = {
EnterpriseTelemetryHistogram.WORKFLOW_DURATION: meter.create_histogram("dify.workflow.duration", unit="s"),
EnterpriseTelemetryHistogram.NODE_DURATION: meter.create_histogram("dify.node.duration", unit="s"),
EnterpriseTelemetryHistogram.MESSAGE_DURATION: meter.create_histogram("dify.message.duration", unit="s"),
EnterpriseTelemetryHistogram.MESSAGE_TTFT: meter.create_histogram(
"dify.message.time_to_first_token", unit="s"
),
EnterpriseTelemetryHistogram.TOOL_DURATION: meter.create_histogram("dify.tool.duration", unit="s"),
EnterpriseTelemetryHistogram.PROMPT_GENERATION_DURATION: meter.create_histogram(
"dify.prompt_generation.duration", unit="s"
),
}
def export_span(
self,
name: str,
attributes: dict[str, Any],
correlation_id: str | None = None,
span_id_source: str | None = None,
start_time: datetime | None = None,
end_time: datetime | None = None,
trace_correlation_override: str | None = None,
parent_span_id_source: str | None = None,
) -> None:
"""Export an OTEL span with optional deterministic IDs and real timestamps.
Args:
name: Span operation name.
attributes: Span attributes dict.
correlation_id: Source for trace_id derivation (groups spans in one trace).
span_id_source: Source for deterministic span_id (e.g. workflow_run_id or node_execution_id).
start_time: Real span start time. When None, uses current time.
end_time: Real span end time. When None, span ends immediately.
trace_correlation_override: Override trace_id source (for cross-workflow linking).
When set, trace_id is derived from this instead of ``correlation_id``.
parent_span_id_source: Override parent span_id source (for cross-workflow linking).
When set, parent span_id is derived from this value. When None and
``correlation_id`` is set, parent is the workflow root span.
"""
effective_trace_correlation = trace_correlation_override or correlation_id
set_correlation_id(effective_trace_correlation)
set_span_id_source(span_id_source)
try:
parent_context: Context | None = None
# A span is the "root" of its correlation group when span_id_source == correlation_id
# (i.e. a workflow root span). All other spans are children.
if parent_span_id_source:
# Cross-workflow linking: parent is an explicit span (e.g. tool node in outer workflow)
parent_span_id = compute_deterministic_span_id(parent_span_id_source)
try:
parent_trace_id = int(uuid.UUID(effective_trace_correlation)) if effective_trace_correlation else 0
except (ValueError, AttributeError):
logger.warning(
"Invalid trace correlation UUID for cross-workflow link: %s, span=%s",
effective_trace_correlation,
name,
)
parent_trace_id = 0
if parent_trace_id:
parent_span_context = SpanContext(
trace_id=parent_trace_id,
span_id=parent_span_id,
is_remote=True,
trace_flags=TraceFlags(TraceFlags.SAMPLED),
)
parent_context = trace.set_span_in_context(trace.NonRecordingSpan(parent_span_context))
elif correlation_id and correlation_id != span_id_source:
# Child span: parent is the correlation-group root (workflow root span)
parent_span_id = compute_deterministic_span_id(correlation_id)
try:
parent_trace_id = int(uuid.UUID(effective_trace_correlation or correlation_id))
except (ValueError, AttributeError):
logger.warning(
"Invalid trace correlation UUID for child span link: %s, span=%s",
effective_trace_correlation or correlation_id,
name,
)
parent_trace_id = 0
if parent_trace_id:
parent_span_context = SpanContext(
trace_id=parent_trace_id,
span_id=parent_span_id,
is_remote=True,
trace_flags=TraceFlags(TraceFlags.SAMPLED),
)
parent_context = trace.set_span_in_context(trace.NonRecordingSpan(parent_span_context))
span_start_time = _datetime_to_ns(start_time) if start_time is not None else None
span_end_on_exit = end_time is None
with self._tracer.start_as_current_span(
name,
context=parent_context,
start_time=span_start_time,
end_on_exit=span_end_on_exit,
) as span:
for key, value in attributes.items():
if value is not None:
span.set_attribute(key, value)
if end_time is not None:
span.end(end_time=_datetime_to_ns(end_time))
except Exception:
logger.exception("Failed to export span %s", name)
finally:
set_correlation_id(None)
set_span_id_source(None)
def increment_counter(
self, name: EnterpriseTelemetryCounter, value: int, labels: dict[str, AttributeValue]
) -> None:
counter = self._counters.get(name)
if counter:
counter.add(value, cast(Attributes, labels))
def record_histogram(
self, name: EnterpriseTelemetryHistogram, value: float, labels: dict[str, AttributeValue]
) -> None:
histogram = self._histograms.get(name)
if histogram:
histogram.record(value, cast(Attributes, labels))
def shutdown(self) -> None:
self._tracer_provider.shutdown()
self._meter_provider.shutdown()
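
A minimal call sketch, not part of the file above (it mirrors the node-trace handler earlier in this diff; workflow_run_id, node_execution_id and the timestamps are placeholders):

    from extensions.ext_enterprise_telemetry import get_enterprise_exporter

    exporter = get_enterprise_exporter()
    if exporter:
        exporter.export_span(
            "dify.node.execution",
            {"dify.node.id": "llm-1", "gen_ai.usage.total_tokens": 42},
            correlation_id=workflow_run_id,    # groups every node span of the run into one trace
            span_id_source=node_execution_id,  # deterministic span_id so logs can reference this span
            start_time=node_started_at,
            end_time=node_finished_at,
        )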

View File

@@ -0,0 +1,75 @@
"""Custom OTEL ID Generator for correlation-based trace/span ID derivation.
Uses contextvars for thread-safe correlation_id -> trace_id mapping.
When a span_id_source is set, the span_id is derived deterministically
from that value, enabling any span to reference another as parent
without depending on span creation order.
"""
import random
import uuid
from contextvars import ContextVar
from opentelemetry.sdk.trace.id_generator import IdGenerator
_correlation_id_context: ContextVar[str | None] = ContextVar("correlation_id", default=None)
_span_id_source_context: ContextVar[str | None] = ContextVar("span_id_source", default=None)
def set_correlation_id(correlation_id: str | None) -> None:
_correlation_id_context.set(correlation_id)
def get_correlation_id() -> str | None:
return _correlation_id_context.get()
def set_span_id_source(source_id: str | None) -> None:
"""Set the source for deterministic span_id generation.
When set, ``generate_span_id()`` derives the span_id from this value
(lower 64 bits of the UUID). Pass the ``workflow_run_id`` for workflow
root spans or ``node_execution_id`` for node spans.
"""
_span_id_source_context.set(source_id)
def compute_deterministic_span_id(source_id: str) -> int:
"""Derive a deterministic span_id from any UUID string.
Uses the lower 64 bits of the UUID, guaranteeing non-zero output
(OTEL requires span_id != 0).
"""
span_id = uuid.UUID(source_id).int & ((1 << 64) - 1)
return span_id if span_id != 0 else 1
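# Worked example (hypothetical UUID): for "0f0e0d0c-0b0a-0908-0706-050403020100" the 128-bit
# UUID integer is 0x0f0e0d0c0b0a09080706050403020100, so the derived span_id is its lower
# 64 bits, 0x0706050403020100.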
class CorrelationIdGenerator(IdGenerator):
"""ID generator that derives trace_id and optionally span_id from context.
- trace_id: always derived from correlation_id (groups all spans in one trace)
- span_id: derived from span_id_source when set (enables deterministic
parent-child linking), otherwise random
"""
def generate_trace_id(self) -> int:
correlation_id = _correlation_id_context.get()
if correlation_id:
try:
return uuid.UUID(correlation_id).int
except (ValueError, AttributeError):
pass
return random.getrandbits(128)
def generate_span_id(self) -> int:
source = _span_id_source_context.get()
if source:
try:
return compute_deterministic_span_id(source)
except (ValueError, AttributeError):
pass
span_id = random.getrandbits(64)
while span_id == 0:
span_id = random.getrandbits(64)
return span_id

View File

@@ -0,0 +1,381 @@
"""Enterprise metric/log event handler.
This module processes metric and log telemetry events after they've been
dequeued from the enterprise_telemetry Celery queue. It handles case routing,
idempotency checking, and payload rehydration.
"""
from __future__ import annotations
import json
import logging
from typing import Any
from enterprise.telemetry.contracts import TelemetryCase, TelemetryEnvelope
from extensions.ext_redis import redis_client
from extensions.ext_storage import storage
logger = logging.getLogger(__name__)
class EnterpriseMetricHandler:
"""Handler for enterprise metric and log telemetry events.
Processes envelopes from the enterprise_telemetry queue, routing each
case to the appropriate handler method. Implements idempotency checking
and payload rehydration with fallback.
"""
def _increment_diagnostic_counter(self, counter_name: str, labels: dict[str, str] | None = None) -> None:
"""Increment a diagnostic counter for operational monitoring.
Args:
counter_name: Name of the counter (e.g., 'processed_total', 'deduped_total').
labels: Optional labels for the counter.
"""
try:
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if not exporter:
return
full_counter_name = f"enterprise_telemetry.handler.{counter_name}"
logger.debug(
"Diagnostic counter: %s, labels=%s",
full_counter_name,
labels or {},
)
except Exception:
logger.debug("Failed to increment diagnostic counter: %s", counter_name, exc_info=True)
def handle(self, envelope: TelemetryEnvelope) -> None:
"""Main entry point for processing telemetry envelopes.
Args:
envelope: The telemetry envelope to process.
"""
# Check for duplicate events
if self._is_duplicate(envelope):
logger.debug(
"Skipping duplicate event: tenant_id=%s, event_id=%s",
envelope.tenant_id,
envelope.event_id,
)
self._increment_diagnostic_counter("deduped_total")
return
# Route to appropriate handler based on case
case = envelope.case
if case == TelemetryCase.APP_CREATED:
self._on_app_created(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "app_created"})
elif case == TelemetryCase.APP_UPDATED:
self._on_app_updated(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "app_updated"})
elif case == TelemetryCase.APP_DELETED:
self._on_app_deleted(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "app_deleted"})
elif case == TelemetryCase.FEEDBACK_CREATED:
self._on_feedback_created(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "feedback_created"})
elif case == TelemetryCase.MESSAGE_RUN:
self._on_message_run(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "message_run"})
elif case == TelemetryCase.TOOL_EXECUTION:
self._on_tool_execution(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "tool_execution"})
elif case == TelemetryCase.MODERATION_CHECK:
self._on_moderation_check(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "moderation_check"})
elif case == TelemetryCase.SUGGESTED_QUESTION:
self._on_suggested_question(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "suggested_question"})
elif case == TelemetryCase.DATASET_RETRIEVAL:
self._on_dataset_retrieval(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "dataset_retrieval"})
elif case == TelemetryCase.GENERATE_NAME:
self._on_generate_name(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "generate_name"})
elif case == TelemetryCase.PROMPT_GENERATION:
self._on_prompt_generation(envelope)
self._increment_diagnostic_counter("processed_total", {"case": "prompt_generation"})
else:
logger.warning(
"Unknown telemetry case: %s (tenant_id=%s, event_id=%s)",
case,
envelope.tenant_id,
envelope.event_id,
)
def _is_duplicate(self, envelope: TelemetryEnvelope) -> bool:
"""Check if this event has already been processed.
Uses Redis with TTL for deduplication. Returns True if duplicate,
False if first time seeing this event.
Args:
envelope: The telemetry envelope to check.
Returns:
True if this event_id has been seen before, False otherwise.
"""
dedup_key = f"telemetry:dedup:{envelope.tenant_id}:{envelope.event_id}"
try:
# Atomic set-if-not-exists with 1h TTL
# Returns True if key was set (first time), None if already exists (duplicate)
was_set = redis_client.set(dedup_key, b"1", nx=True, ex=3600)
return was_set is None
except Exception:
# Fail open: if Redis is unavailable, process the event
# (prefer occasional duplicate over lost data)
logger.warning(
"Redis unavailable for deduplication check, processing event anyway: %s",
envelope.event_id,
exc_info=True,
)
return False
def _rehydrate(self, envelope: TelemetryEnvelope) -> dict[str, Any]:
"""Rehydrate payload from storage reference or inline data.
If the envelope payload is empty and metadata contains a
``payload_ref``, the full payload is loaded from object storage
(where the gateway wrote it as JSON). When both the inline
payload and storage resolution fail, a degraded-event marker
is emitted so the gap is observable.
Args:
envelope: The telemetry envelope containing payload data.
Returns:
The rehydrated payload dictionary, or ``{}`` on total failure.
"""
payload = envelope.payload
# Resolve from object storage when the gateway offloaded a large payload.
if not payload and envelope.metadata:
payload_ref = envelope.metadata.get("payload_ref")
if payload_ref:
try:
payload_bytes = storage.load(payload_ref)
payload = json.loads(payload_bytes.decode("utf-8"))
logger.debug("Loaded payload from storage: key=%s", payload_ref)
except Exception:
logger.warning(
"Failed to load payload from storage: key=%s, event_id=%s",
payload_ref,
envelope.event_id,
exc_info=True,
)
if not payload:
# Storage resolution failed or no data available — emit degraded event.
logger.error(
"Payload rehydration failed for event_id=%s, tenant_id=%s, case=%s",
envelope.event_id,
envelope.tenant_id,
envelope.case,
)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
from enterprise.telemetry.telemetry_log import emit_metric_only_event
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.REHYDRATION_FAILED,
attributes={
"dify.tenant_id": envelope.tenant_id,
"dify.event_id": envelope.event_id,
"dify.case": envelope.case,
"rehydration_failed": True,
},
tenant_id=envelope.tenant_id,
)
self._increment_diagnostic_counter("rehydration_failed_total")
return {}
return payload
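# Illustrative offloaded envelope handled above (the storage key format is hypothetical):
#   TelemetryEnvelope(payload={}, metadata={"payload_ref": "telemetry/<tenant_id>/<event_id>.json"}, ...)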
# Stub methods for each metric/log case
# These will be implemented in later tasks with actual emission logic
def _on_app_created(self, envelope: TelemetryEnvelope) -> None:
"""Handle app created event."""
from enterprise.telemetry.entities import EnterpriseTelemetryCounter, EnterpriseTelemetryEvent
from enterprise.telemetry.telemetry_log import emit_metric_only_event
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if not exporter:
logger.debug("No exporter available for APP_CREATED: event_id=%s", envelope.event_id)
return
payload = self._rehydrate(envelope)
if not payload:
return
attrs = {
"dify.app.id": payload.get("app_id"),
"dify.tenant_id": envelope.tenant_id,
"dify.event.id": envelope.event_id,
"dify.app.mode": payload.get("mode"),
}
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.APP_CREATED,
attributes=attrs,
tenant_id=envelope.tenant_id,
)
exporter.increment_counter(
EnterpriseTelemetryCounter.APP_CREATED,
1,
{
"tenant_id": envelope.tenant_id,
"app_id": str(payload.get("app_id", "")),
"mode": str(payload.get("mode", "")),
},
)
def _on_app_updated(self, envelope: TelemetryEnvelope) -> None:
"""Handle app updated event."""
from enterprise.telemetry.entities import EnterpriseTelemetryCounter, EnterpriseTelemetryEvent
from enterprise.telemetry.telemetry_log import emit_metric_only_event
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if not exporter:
logger.debug("No exporter available for APP_UPDATED: event_id=%s", envelope.event_id)
return
payload = self._rehydrate(envelope)
if not payload:
return
attrs = {
"dify.app.id": payload.get("app_id"),
"dify.tenant_id": envelope.tenant_id,
"dify.event.id": envelope.event_id,
}
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.APP_UPDATED,
attributes=attrs,
tenant_id=envelope.tenant_id,
)
exporter.increment_counter(
EnterpriseTelemetryCounter.APP_UPDATED,
1,
{
"tenant_id": envelope.tenant_id,
"app_id": str(payload.get("app_id", "")),
},
)
def _on_app_deleted(self, envelope: TelemetryEnvelope) -> None:
"""Handle app deleted event."""
from enterprise.telemetry.entities import EnterpriseTelemetryCounter, EnterpriseTelemetryEvent
from enterprise.telemetry.telemetry_log import emit_metric_only_event
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if not exporter:
logger.debug("No exporter available for APP_DELETED: event_id=%s", envelope.event_id)
return
payload = self._rehydrate(envelope)
if not payload:
return
attrs = {
"dify.app.id": payload.get("app_id"),
"dify.tenant_id": envelope.tenant_id,
"dify.event.id": envelope.event_id,
}
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.APP_DELETED,
attributes=attrs,
tenant_id=envelope.tenant_id,
)
exporter.increment_counter(
EnterpriseTelemetryCounter.APP_DELETED,
1,
{
"tenant_id": envelope.tenant_id,
"app_id": str(payload.get("app_id", "")),
},
)
def _on_feedback_created(self, envelope: TelemetryEnvelope) -> None:
"""Handle feedback created event."""
from enterprise.telemetry.entities import EnterpriseTelemetryCounter, EnterpriseTelemetryEvent
from enterprise.telemetry.telemetry_log import emit_metric_only_event
from extensions.ext_enterprise_telemetry import get_enterprise_exporter
exporter = get_enterprise_exporter()
if not exporter:
logger.debug("No exporter available for FEEDBACK_CREATED: event_id=%s", envelope.event_id)
return
payload = self._rehydrate(envelope)
if not payload:
return
include_content = exporter.include_content
attrs: dict = {
"dify.message.id": payload.get("message_id"),
"dify.tenant_id": envelope.tenant_id,
"dify.event.id": envelope.event_id,
"dify.app_id": payload.get("app_id"),
"dify.conversation.id": payload.get("conversation_id"),
"gen_ai.user.id": payload.get("from_end_user_id") or payload.get("from_account_id"),
"dify.feedback.rating": payload.get("rating"),
"dify.feedback.from_source": payload.get("from_source"),
}
if include_content:
attrs["dify.feedback.content"] = payload.get("content")
user_id = payload.get("from_end_user_id") or payload.get("from_account_id")
emit_metric_only_event(
event_name=EnterpriseTelemetryEvent.FEEDBACK_CREATED,
attributes=attrs,
tenant_id=envelope.tenant_id,
user_id=str(user_id or ""),
)
exporter.increment_counter(
EnterpriseTelemetryCounter.FEEDBACK,
1,
{
"tenant_id": envelope.tenant_id,
"app_id": str(payload.get("app_id", "")),
"rating": str(payload.get("rating", "")),
},
)
def _on_message_run(self, envelope: TelemetryEnvelope) -> None:
"""Handle message run event (stub)."""
logger.debug("Processing MESSAGE_RUN: event_id=%s", envelope.event_id)
def _on_tool_execution(self, envelope: TelemetryEnvelope) -> None:
"""Handle tool execution event (stub)."""
logger.debug("Processing TOOL_EXECUTION: event_id=%s", envelope.event_id)
def _on_moderation_check(self, envelope: TelemetryEnvelope) -> None:
"""Handle moderation check event (stub)."""
logger.debug("Processing MODERATION_CHECK: event_id=%s", envelope.event_id)
def _on_suggested_question(self, envelope: TelemetryEnvelope) -> None:
"""Handle suggested question event (stub)."""
logger.debug("Processing SUGGESTED_QUESTION: event_id=%s", envelope.event_id)
def _on_dataset_retrieval(self, envelope: TelemetryEnvelope) -> None:
"""Handle dataset retrieval event (stub)."""
logger.debug("Processing DATASET_RETRIEVAL: event_id=%s", envelope.event_id)
def _on_generate_name(self, envelope: TelemetryEnvelope) -> None:
"""Handle generate name event (stub)."""
logger.debug("Processing GENERATE_NAME: event_id=%s", envelope.event_id)
def _on_prompt_generation(self, envelope: TelemetryEnvelope) -> None:
"""Handle prompt generation event (stub)."""
logger.debug("Processing PROMPT_GENERATION: event_id=%s", envelope.event_id)

View File

@@ -0,0 +1,122 @@
"""Structured-log emitter for enterprise telemetry events.
Emits structured JSON log lines correlated with OTEL traces via trace_id.
Picked up by ``StructuredJSONFormatter`` → stdout/Loki/Elastic.
"""
from __future__ import annotations
import logging
import uuid
from functools import lru_cache
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
logger = logging.getLogger("dify.telemetry")
@lru_cache(maxsize=4096)
def compute_trace_id_hex(uuid_str: str | None) -> str:
"""Convert a business UUID string to a 32-hex OTEL-compatible trace_id.
Returns empty string when *uuid_str* is ``None`` or invalid.
"""
if not uuid_str:
return ""
normalized = uuid_str.strip().lower()
if len(normalized) == 32 and all(ch in "0123456789abcdef" for ch in normalized):
return normalized
try:
return f"{uuid.UUID(normalized).int:032x}"
except (ValueError, AttributeError):
return ""
@lru_cache(maxsize=4096)
def compute_span_id_hex(uuid_str: str | None) -> str:
if not uuid_str:
return ""
normalized = uuid_str.strip().lower()
if len(normalized) == 16 and all(ch in "0123456789abcdef" for ch in normalized):
return normalized
try:
from enterprise.telemetry.id_generator import compute_deterministic_span_id
return f"{compute_deterministic_span_id(normalized):016x}"
except (ValueError, AttributeError):
return ""
def emit_telemetry_log(
*,
event_name: str | EnterpriseTelemetryEvent,
attributes: dict[str, Any],
signal: str = "metric_only",
trace_id_source: str | None = None,
span_id_source: str | None = None,
tenant_id: str | None = None,
user_id: str | None = None,
) -> None:
"""Emit a structured log line for a telemetry event.
Parameters
----------
event_name:
Canonical event name, e.g. ``"dify.workflow.run"``.
attributes:
All event-specific attributes (already built by the caller).
signal:
``"metric_only"`` for events with no span, ``"span_detail"``
for detail logs accompanying a slim span.
trace_id_source:
A UUID string (e.g. ``workflow_run_id``) used to derive a 32-hex
trace_id for cross-signal correlation.
span_id_source:
A UUID string (e.g. ``node_execution_id``) used to derive a 16-hex
span_id, matching the deterministic span IDs on exported spans.
tenant_id:
Tenant identifier (for the ``IdentityContextFilter``).
user_id:
User identifier (for the ``IdentityContextFilter``).
"""
if not logger.isEnabledFor(logging.INFO):
return
attrs = {
"dify.event.name": event_name,
"dify.event.signal": signal,
**attributes,
}
extra: dict[str, Any] = {"attributes": attrs}
trace_id_hex = compute_trace_id_hex(trace_id_source)
if trace_id_hex:
extra["trace_id"] = trace_id_hex
span_id_hex = compute_span_id_hex(span_id_source)
if span_id_hex:
extra["span_id"] = span_id_hex
if tenant_id:
extra["tenant_id"] = tenant_id
if user_id:
extra["user_id"] = user_id
logger.info("telemetry.%s", signal, extra=extra)
def emit_metric_only_event(
*,
event_name: str | EnterpriseTelemetryEvent,
attributes: dict[str, Any],
trace_id_source: str | None = None,
span_id_source: str | None = None,
tenant_id: str | None = None,
user_id: str | None = None,
) -> None:
emit_telemetry_log(
event_name=event_name,
attributes=attributes,
signal="metric_only",
trace_id_source=trace_id_source,
span_id_source=span_id_source,
tenant_id=tenant_id,
user_id=user_id,
)

View File

@@ -3,6 +3,12 @@ from blinker import signal
# sender: app
app_was_created = signal("app-was-created")
# sender: app
app_was_deleted = signal("app-was-deleted")
# sender: app
app_was_updated = signal("app-was-updated")
# sender: app, kwargs: app_model_config
app_model_config_was_updated = signal("app-model-config-was-updated")

View File

@@ -0,0 +1,4 @@
from blinker import signal
# sender: MessageFeedback, kwargs: tenant_id
feedback_was_created = signal("feedback-was-created")
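
A minimal send sketch, not part of the file above (the feedback instance and tenant id are placeholders; the handler in enterprise/telemetry/event_handlers.py reads ``tenant_id`` from kwargs):

    feedback_was_created.send(feedback, tenant_id=str(current_tenant_id))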

View File

@@ -204,6 +204,8 @@ def init_app(app: DifyApp) -> Celery:
"schedule": timedelta(minutes=dify_config.API_TOKEN_LAST_USED_UPDATE_INTERVAL),
}
if dify_config.ENTERPRISE_ENABLED and dify_config.ENTERPRISE_TELEMETRY_ENABLED:
imports.append("tasks.enterprise_telemetry_task")
celery_app.conf.update(beat_schedule=beat_schedule, imports=imports)
return celery_app
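
For context, a metric/log event could be enqueued onto this queue roughly as follows (illustrative only; the gateway call site is not part of this diff):

    from tasks.enterprise_telemetry_task import process_enterprise_telemetry

    process_enterprise_telemetry.delay(envelope.model_dump_json())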

View File

@@ -0,0 +1,50 @@
"""Flask extension for enterprise telemetry lifecycle management.
Initializes the EnterpriseExporter singleton during ``create_app()``
(single-threaded), registers blinker event handlers, and hooks atexit
for graceful shutdown.
Skipped entirely unless both ``ENTERPRISE_ENABLED`` and ``ENTERPRISE_TELEMETRY_ENABLED``
are true (``is_enabled()`` gate).
"""
from __future__ import annotations
import atexit
import logging
from typing import TYPE_CHECKING
from configs import dify_config
if TYPE_CHECKING:
from dify_app import DifyApp
from enterprise.telemetry.exporter import EnterpriseExporter
logger = logging.getLogger(__name__)
_exporter: EnterpriseExporter | None = None
def is_enabled() -> bool:
return bool(dify_config.ENTERPRISE_ENABLED and dify_config.ENTERPRISE_TELEMETRY_ENABLED)
def init_app(app: DifyApp) -> None:
global _exporter
if not is_enabled():
return
from enterprise.telemetry.exporter import EnterpriseExporter
_exporter = EnterpriseExporter(dify_config)
atexit.register(_exporter.shutdown)
# Import to trigger @signal.connect decorator registration
import enterprise.telemetry.event_handlers # noqa: F401 # type: ignore[reportUnusedImport]
logger.info("Enterprise telemetry initialized")
def get_enterprise_exporter() -> EnterpriseExporter | None:
return _exporter

View File

@@ -78,16 +78,24 @@ def init_app(app: DifyApp):
protocol = (dify_config.OTEL_EXPORTER_OTLP_PROTOCOL or "").lower()
if dify_config.OTEL_EXPORTER_TYPE == "otlp":
if protocol == "grpc":
# Auto-detect TLS: https:// uses secure, everything else is insecure
endpoint = dify_config.OTLP_BASE_ENDPOINT
insecure = not endpoint.startswith("https://")
exporter = GRPCSpanExporter(
endpoint=dify_config.OTLP_BASE_ENDPOINT,
endpoint=endpoint,
# Header field names must consist of lowercase letters, check RFC7540
headers=(("authorization", f"Bearer {dify_config.OTLP_API_KEY}"),),
insecure=True,
headers=(
(("authorization", f"Bearer {dify_config.OTLP_API_KEY}"),) if dify_config.OTLP_API_KEY else None
),
insecure=insecure,
)
metric_exporter = GRPCMetricExporter(
endpoint=dify_config.OTLP_BASE_ENDPOINT,
headers=(("authorization", f"Bearer {dify_config.OTLP_API_KEY}"),),
insecure=True,
endpoint=endpoint,
headers=(
(("authorization", f"Bearer {dify_config.OTLP_API_KEY}"),) if dify_config.OTLP_API_KEY else None
),
insecure=insecure,
)
else:
headers = {"Authorization": f"Bearer {dify_config.OTLP_API_KEY}"} if dify_config.OTLP_API_KEY else None

View File

@@ -5,7 +5,7 @@ This module provides parsers that extract node-specific metadata and set
OpenTelemetry span attributes according to semantic conventions.
"""
from extensions.otel.parser.base import DefaultNodeOTelParser, NodeOTelParser, safe_json_dumps
from extensions.otel.parser.base import DefaultNodeOTelParser, NodeOTelParser, safe_json_dumps, should_include_content
from extensions.otel.parser.llm import LLMNodeOTelParser
from extensions.otel.parser.retrieval import RetrievalNodeOTelParser
from extensions.otel.parser.tool import ToolNodeOTelParser
@@ -17,4 +17,5 @@ __all__ = [
"RetrievalNodeOTelParser",
"ToolNodeOTelParser",
"safe_json_dumps",
"should_include_content",
]

View File

@@ -1,5 +1,10 @@
"""
Base parser interface and utilities for OpenTelemetry node parsers.
Content gating: ``should_include_content()`` controls whether content-bearing
span attributes (inputs, outputs, prompts, completions, documents) are written.
Gate is only active in EE (``ENTERPRISE_ENABLED=True``) when
``ENTERPRISE_INCLUDE_CONTENT=False``; CE behaviour is unchanged.
"""
import json
@@ -9,6 +14,7 @@ from opentelemetry.trace import Span
from opentelemetry.trace.status import Status, StatusCode
from pydantic import BaseModel
from configs import dify_config
from dify_graph.enums import NodeType
from dify_graph.file.models import File
from dify_graph.graph_events import GraphNodeEventBase
@@ -17,6 +23,17 @@ from dify_graph.variables import Segment
from extensions.otel.semconv.gen_ai import ChainAttributes, GenAIAttributes
def should_include_content() -> bool:
"""Return True if content should be written to spans.
CE (ENTERPRISE_ENABLED=False): always True — no behaviour change.
EE: follows ENTERPRISE_INCLUDE_CONTENT (default True).
"""
if not dify_config.ENTERPRISE_ENABLED:
return True
return dify_config.ENTERPRISE_INCLUDE_CONTENT
def safe_json_dumps(obj: Any, ensure_ascii: bool = False) -> str:
"""
Safely serialize objects to JSON, handling non-serializable types.
@@ -105,10 +122,11 @@ class DefaultNodeOTelParser:
# Extract inputs and outputs from result_event
if result_event and result_event.node_run_result:
node_run_result = result_event.node_run_result
if node_run_result.inputs:
span.set_attribute(ChainAttributes.INPUT_VALUE, safe_json_dumps(node_run_result.inputs))
if node_run_result.outputs:
span.set_attribute(ChainAttributes.OUTPUT_VALUE, safe_json_dumps(node_run_result.outputs))
if should_include_content():
if node_run_result.inputs:
span.set_attribute(ChainAttributes.INPUT_VALUE, safe_json_dumps(node_run_result.inputs))
if node_run_result.outputs:
span.set_attribute(ChainAttributes.OUTPUT_VALUE, safe_json_dumps(node_run_result.outputs))
if error:
span.record_exception(error)
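A minimal pytest-style sketch of the gate's truth table, assuming dify_config attributes can be patched with monkeypatch as in the other unit tests in this PR (the test name is hypothetical):

# Hypothetical test sketching the content-gate behaviour described above.
from configs import dify_config
from extensions.otel.parser.base import should_include_content

def test_content_gate(monkeypatch):
    # CE: gate is always open, regardless of ENTERPRISE_INCLUDE_CONTENT
    monkeypatch.setattr(dify_config, "ENTERPRISE_ENABLED", False)
    monkeypatch.setattr(dify_config, "ENTERPRISE_INCLUDE_CONTENT", False)
    assert should_include_content() is True
    # EE: gate follows ENTERPRISE_INCLUDE_CONTENT
    monkeypatch.setattr(dify_config, "ENTERPRISE_ENABLED", True)
    assert should_include_content() is False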

View File

@@ -21,3 +21,15 @@ class DifySpanAttributes:
INVOKE_FROM = "dify.invoke_from"
"""Invocation source, e.g. SERVICE_API, WEB_APP, DEBUGGER."""
INVOKED_BY = "dify.invoked_by"
"""Invoked by, e.g. end_user, account, user."""
USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
"""Number of input tokens (prompt tokens) used."""
USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens"
"""Number of output tokens (completion tokens) generated."""
USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens"
"""Total number of tokens used."""

View File

@@ -72,18 +72,9 @@ SEGMENT_TO_VARIABLE_MAP = {
}
_MAX_VARIABLE_DESCRIPTION_LENGTH = 255
def build_conversation_variable_from_mapping(mapping: Mapping[str, Any], /) -> VariableBase:
if not mapping.get("name"):
raise VariableError("missing name")
description = mapping.get("description", "")
if len(description) > _MAX_VARIABLE_DESCRIPTION_LENGTH:
raise VariableError(
f"description of variable '{mapping['name']}' is too long"
f" (max {_MAX_VARIABLE_DESCRIPTION_LENGTH} characters)"
)
return _build_variable_from_mapping(mapping=mapping, selector=[CONVERSATION_VARIABLE_NODE_ID, mapping["name"]])

View File

@@ -1589,8 +1589,6 @@ class WorkflowDraftVariable(Base):
variable.file_id = file_id
variable._set_selector(list(variable_utils.to_selector(node_id, name)))
variable.node_execution_id = node_execution_id
variable.visible = True
variable.is_default_value = False
return variable
@classmethod

View File

@@ -700,8 +700,6 @@ def _model_to_insertion_dict(model: WorkflowDraftVariable) -> dict[str, Any]:
d["updated_at"] = model.updated_at
if model.description is not None:
d["description"] = model.description
if model.is_default_value is not None:
d["is_default_value"] = model.is_default_value
return d

View File

@@ -0,0 +1,52 @@
"""Celery worker for enterprise metric/log telemetry events.
This module defines the Celery task that processes telemetry envelopes
from the enterprise_telemetry queue. It deserializes envelopes and
dispatches them to the EnterpriseMetricHandler.
"""
import json
import logging
from celery import shared_task
from enterprise.telemetry.contracts import TelemetryEnvelope
from enterprise.telemetry.metric_handler import EnterpriseMetricHandler
logger = logging.getLogger(__name__)
@shared_task(queue="enterprise_telemetry")
def process_enterprise_telemetry(envelope_json: str) -> None:
"""Process enterprise metric/log telemetry envelope.
This task is enqueued by the TelemetryGateway for metric/log-only
events. It deserializes the envelope and dispatches to the handler.
Best-effort processing: logs errors but never raises, to avoid
failing user requests due to telemetry issues.
Args:
envelope_json: JSON-serialized TelemetryEnvelope.
"""
try:
# Deserialize envelope
envelope_dict = json.loads(envelope_json)
envelope = TelemetryEnvelope.model_validate(envelope_dict)
# Process through handler
handler = EnterpriseMetricHandler()
handler.handle(envelope)
logger.debug(
"Successfully processed telemetry envelope: tenant_id=%s, event_id=%s, case=%s",
envelope.tenant_id,
envelope.event_id,
envelope.case,
)
except Exception:
# Best-effort: log and drop on error, never fail user request
logger.warning(
"Failed to process enterprise telemetry envelope, dropping event",
exc_info=True,
)
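For context, a producer-side sketch of how an envelope reaches this worker; field names follow the TelemetryEnvelope contract and the task path matches the gateway tests later in this diff, but the literal values are illustrative.

# Hedged sketch: enqueueing an envelope for process_enterprise_telemetry.
from enterprise.telemetry.contracts import TelemetryCase, TelemetryEnvelope
from tasks.enterprise_telemetry_task import process_enterprise_telemetry

envelope = TelemetryEnvelope(
    case=TelemetryCase.APP_CREATED,
    tenant_id="tenant-123",
    event_id="event-456",
    payload={"app_id": "app-abc", "mode": "chat"},
)
# Serialized to JSON so the Celery payload stays a plain string.
process_enterprise_telemetry.delay(envelope.model_dump_json())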

View File

@@ -39,17 +39,36 @@ def process_trace_tasks(file_info):
trace_info["documents"] = [Document.model_validate(doc) for doc in trace_info["documents"]]
try:
trace_type = trace_info_info_map.get(trace_info_type)
if trace_type:
trace_info = trace_type(**trace_info)
from extensions.ext_enterprise_telemetry import is_enabled as is_ee_telemetry_enabled
if is_ee_telemetry_enabled():
from enterprise.telemetry.enterprise_trace import EnterpriseOtelTrace
try:
EnterpriseOtelTrace().trace(trace_info)
except Exception:
logger.exception("Enterprise trace failed for app_id: %s", app_id)
if trace_instance:
with current_app.app_context():
trace_type = trace_info_info_map.get(trace_info_type)
if trace_type:
trace_info = trace_type(**trace_info)
trace_instance.trace(trace_info)
logger.info("Processing trace tasks success, app_id: %s", app_id)
except Exception as e:
logger.info("error:\n\n\n%s\n\n\n\n", e)
logger.exception("Processing trace tasks failed, app_id: %s", app_id)
failed_key = f"{OPS_TRACE_FAILED_KEY}_{app_id}"
redis_client.incr(failed_key)
logger.info("Processing trace tasks failed, app_id: %s", app_id)
finally:
storage.delete(file_path)
try:
storage.delete(file_path)
except Exception as e:
logger.warning(
"Failed to delete trace file %s for app_id %s: %s",
file_path,
app_id,
e,
)

View File

@@ -1,70 +0,0 @@
from controllers.common.errors import (
BlockedFileExtensionError,
FilenameNotExistsError,
FileTooLargeError,
NoFileUploadedError,
RemoteFileUploadError,
TooManyFilesError,
UnsupportedFileTypeError,
)
class TestFilenameNotExistsError:
def test_defaults(self):
error = FilenameNotExistsError()
assert error.code == 400
assert error.description == "The specified filename does not exist."
class TestRemoteFileUploadError:
def test_defaults(self):
error = RemoteFileUploadError()
assert error.code == 400
assert error.description == "Error uploading remote file."
class TestFileTooLargeError:
def test_defaults(self):
error = FileTooLargeError()
assert error.code == 413
assert error.error_code == "file_too_large"
assert error.description == "File size exceeded. {message}"
class TestUnsupportedFileTypeError:
def test_defaults(self):
error = UnsupportedFileTypeError()
assert error.code == 415
assert error.error_code == "unsupported_file_type"
assert error.description == "File type not allowed."
class TestBlockedFileExtensionError:
def test_defaults(self):
error = BlockedFileExtensionError()
assert error.code == 400
assert error.error_code == "file_extension_blocked"
assert error.description == "The file extension is blocked for security reasons."
class TestTooManyFilesError:
def test_defaults(self):
error = TooManyFilesError()
assert error.code == 400
assert error.error_code == "too_many_files"
assert error.description == "Only one file is allowed."
class TestNoFileUploadedError:
def test_defaults(self):
error = NoFileUploadedError()
assert error.code == 400
assert error.error_code == "no_file_uploaded"
assert error.description == "Please upload your file."

View File

@@ -1,95 +1,22 @@
from flask import Response
from controllers.common.file_response import (
_normalize_mime_type,
enforce_download_for_html,
is_html_content,
)
from controllers.common.file_response import enforce_download_for_html, is_html_content
class TestNormalizeMimeType:
def test_returns_empty_string_for_none(self):
assert _normalize_mime_type(None) == ""
def test_returns_empty_string_for_empty_string(self):
assert _normalize_mime_type("") == ""
def test_normalizes_mime_type(self):
assert _normalize_mime_type("Text/HTML; Charset=UTF-8") == "text/html"
class TestIsHtmlContent:
def test_detects_html_via_mime_type(self):
class TestFileResponseHelpers:
def test_is_html_content_detects_mime_type(self):
mime_type = "text/html; charset=UTF-8"
result = is_html_content(
mime_type=mime_type,
filename="file.txt",
extension="txt",
)
result = is_html_content(mime_type, filename="file.txt", extension="txt")
assert result is True
def test_detects_html_via_extension_argument(self):
result = is_html_content(
mime_type="text/plain",
filename=None,
extension="html",
)
def test_is_html_content_detects_extension(self):
result = is_html_content("text/plain", filename="report.html", extension=None)
assert result is True
def test_detects_html_via_filename_extension(self):
result = is_html_content(
mime_type="text/plain",
filename="report.html",
extension=None,
)
assert result is True
def test_returns_false_when_no_html_detected_anywhere(self):
"""
Missing negative test:
- MIME type is not HTML
- filename has no HTML extension
- extension argument is not HTML
"""
result = is_html_content(
mime_type="application/json",
filename="data.json",
extension="json",
)
assert result is False
def test_returns_false_when_all_inputs_are_none(self):
result = is_html_content(
mime_type=None,
filename=None,
extension=None,
)
assert result is False
class TestEnforceDownloadForHtml:
def test_sets_attachment_when_filename_missing(self):
response = Response("payload", mimetype="text/html")
updated = enforce_download_for_html(
response,
mime_type="text/html",
filename=None,
extension="html",
)
assert updated is True
assert response.headers["Content-Disposition"] == "attachment"
assert response.headers["Content-Type"] == "application/octet-stream"
assert response.headers["X-Content-Type-Options"] == "nosniff"
def test_sets_headers_when_filename_present(self):
def test_enforce_download_for_html_sets_headers(self):
response = Response("payload", mimetype="text/html")
updated = enforce_download_for_html(
@@ -100,12 +27,11 @@ class TestEnforceDownloadForHtml:
)
assert updated is True
assert response.headers["Content-Disposition"].startswith("attachment")
assert "unsafe.html" in response.headers["Content-Disposition"]
assert "attachment" in response.headers["Content-Disposition"]
assert response.headers["Content-Type"] == "application/octet-stream"
assert response.headers["X-Content-Type-Options"] == "nosniff"
def test_does_not_modify_response_for_non_html_content(self):
def test_enforce_download_for_html_no_change_for_non_html(self):
response = Response("payload", mimetype="text/plain")
updated = enforce_download_for_html(

View File

@@ -1,188 +0,0 @@
from uuid import UUID
import httpx
import pytest
from controllers.common import helpers
from controllers.common.helpers import FileInfo, guess_file_info_from_response
def make_response(
url="https://example.com/file.txt",
headers=None,
content=None,
):
return httpx.Response(
200,
request=httpx.Request("GET", url),
headers=headers or {},
content=content or b"",
)
class TestGuessFileInfoFromResponse:
def test_filename_from_url(self):
response = make_response(
url="https://example.com/test.pdf",
content=b"Hello World",
)
info = guess_file_info_from_response(response)
assert info.filename == "test.pdf"
assert info.extension == ".pdf"
assert info.mimetype == "application/pdf"
def test_filename_from_content_disposition(self):
headers = {
"Content-Disposition": "attachment; filename=myfile.csv",
"Content-Type": "text/csv",
}
response = make_response(
url="https://example.com/",
headers=headers,
content=b"Hello World",
)
info = guess_file_info_from_response(response)
assert info.filename == "myfile.csv"
assert info.extension == ".csv"
assert info.mimetype == "text/csv"
@pytest.mark.parametrize(
("magic_available", "expected_ext"),
[
(True, "txt"),
(False, "bin"),
],
)
def test_generated_filename_when_missing(self, monkeypatch, magic_available, expected_ext):
if magic_available:
if helpers.magic is None:
pytest.skip("python-magic is not installed, cannot run 'magic_available=True' test variant")
else:
monkeypatch.setattr(helpers, "magic", None)
response = make_response(
url="https://example.com/",
content=b"Hello World",
)
info = guess_file_info_from_response(response)
name, ext = info.filename.split(".")
UUID(name)
assert ext == expected_ext
def test_mimetype_from_header_when_unknown(self):
headers = {"Content-Type": "application/json"}
response = make_response(
url="https://example.com/file.unknown",
headers=headers,
content=b'{"a": 1}',
)
info = guess_file_info_from_response(response)
assert info.mimetype == "application/json"
def test_extension_added_when_missing(self):
headers = {"Content-Type": "image/png"}
response = make_response(
url="https://example.com/image",
headers=headers,
content=b"fakepngdata",
)
info = guess_file_info_from_response(response)
assert info.extension == ".png"
assert info.filename.endswith(".png")
def test_content_length_used_as_size(self):
headers = {
"Content-Length": "1234",
"Content-Type": "text/plain",
}
response = make_response(
url="https://example.com/a.txt",
headers=headers,
content=b"a" * 1234,
)
info = guess_file_info_from_response(response)
assert info.size == 1234
def test_size_minus_one_when_header_missing(self):
response = make_response(url="https://example.com/a.txt")
info = guess_file_info_from_response(response)
assert info.size == -1
def test_fallback_to_bin_extension(self):
headers = {"Content-Type": "application/octet-stream"}
response = make_response(
url="https://example.com/download",
headers=headers,
content=b"\x00\x01\x02\x03",
)
info = guess_file_info_from_response(response)
assert info.extension == ".bin"
assert info.filename.endswith(".bin")
def test_return_type(self):
response = make_response()
info = guess_file_info_from_response(response)
assert isinstance(info, FileInfo)
class TestMagicImportWarnings:
@pytest.mark.parametrize(
("platform_name", "expected_message"),
[
("Windows", "pip install python-magic-bin"),
("Darwin", "brew install libmagic"),
("Linux", "sudo apt-get install libmagic1"),
("Other", "install `libmagic`"),
],
)
def test_magic_import_warning_per_platform(
self,
monkeypatch,
platform_name,
expected_message,
):
import builtins
import importlib
# Force ImportError when "magic" is imported
real_import = builtins.__import__
def fake_import(name, *args, **kwargs):
if name == "magic":
raise ImportError("No module named magic")
return real_import(name, *args, **kwargs)
monkeypatch.setattr(builtins, "__import__", fake_import)
monkeypatch.setattr("platform.system", lambda: platform_name)
# Remove helpers so it imports fresh
import sys
original_helpers = sys.modules.get(helpers.__name__)
sys.modules.pop(helpers.__name__, None)
try:
with pytest.warns(UserWarning, match="To use python-magic") as warning:
imported_helpers = importlib.import_module(helpers.__name__)
assert expected_message in str(warning[0].message)
finally:
if original_helpers is not None:
sys.modules[helpers.__name__] = original_helpers

View File

@@ -1,189 +0,0 @@
import sys
from enum import StrEnum
from unittest.mock import MagicMock, patch
import pytest
from flask_restx import Namespace
from pydantic import BaseModel
class UserModel(BaseModel):
id: int
name: str
class ProductModel(BaseModel):
id: int
price: float
@pytest.fixture(autouse=True)
def mock_console_ns():
"""Mock the console_ns to avoid circular imports during test collection."""
mock_ns = MagicMock(spec=Namespace)
mock_ns.models = {}
# Inject mock before importing schema module
with patch.dict(sys.modules, {"controllers.console": MagicMock(console_ns=mock_ns)}):
yield mock_ns
def test_default_ref_template_value():
from controllers.common.schema import DEFAULT_REF_TEMPLATE_SWAGGER_2_0
assert DEFAULT_REF_TEMPLATE_SWAGGER_2_0 == "#/definitions/{model}"
def test_register_schema_model_calls_namespace_schema_model():
from controllers.common.schema import register_schema_model
namespace = MagicMock(spec=Namespace)
register_schema_model(namespace, UserModel)
namespace.schema_model.assert_called_once()
model_name, schema = namespace.schema_model.call_args.args
assert model_name == "UserModel"
assert isinstance(schema, dict)
assert "properties" in schema
def test_register_schema_model_passes_schema_from_pydantic():
from controllers.common.schema import DEFAULT_REF_TEMPLATE_SWAGGER_2_0, register_schema_model
namespace = MagicMock(spec=Namespace)
register_schema_model(namespace, UserModel)
schema = namespace.schema_model.call_args.args[1]
expected_schema = UserModel.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
assert schema == expected_schema
def test_register_schema_models_registers_multiple_models():
from controllers.common.schema import register_schema_models
namespace = MagicMock(spec=Namespace)
register_schema_models(namespace, UserModel, ProductModel)
assert namespace.schema_model.call_count == 2
called_names = [call.args[0] for call in namespace.schema_model.call_args_list]
assert called_names == ["UserModel", "ProductModel"]
def test_register_schema_models_calls_register_schema_model(monkeypatch):
from controllers.common.schema import register_schema_models
namespace = MagicMock(spec=Namespace)
calls = []
def fake_register(ns, model):
calls.append((ns, model))
monkeypatch.setattr(
"controllers.common.schema.register_schema_model",
fake_register,
)
register_schema_models(namespace, UserModel, ProductModel)
assert calls == [
(namespace, UserModel),
(namespace, ProductModel),
]
class StatusEnum(StrEnum):
ACTIVE = "active"
INACTIVE = "inactive"
class PriorityEnum(StrEnum):
HIGH = "high"
LOW = "low"
def test_get_or_create_model_returns_existing_model(mock_console_ns):
from controllers.common.schema import get_or_create_model
existing_model = MagicMock()
mock_console_ns.models = {"TestModel": existing_model}
result = get_or_create_model("TestModel", {"key": "value"})
assert result == existing_model
mock_console_ns.model.assert_not_called()
def test_get_or_create_model_creates_new_model_when_not_exists(mock_console_ns):
from controllers.common.schema import get_or_create_model
mock_console_ns.models = {}
new_model = MagicMock()
mock_console_ns.model.return_value = new_model
field_def = {"name": {"type": "string"}}
result = get_or_create_model("NewModel", field_def)
assert result == new_model
mock_console_ns.model.assert_called_once_with("NewModel", field_def)
def test_get_or_create_model_does_not_call_model_if_exists(mock_console_ns):
from controllers.common.schema import get_or_create_model
existing_model = MagicMock()
mock_console_ns.models = {"ExistingModel": existing_model}
result = get_or_create_model("ExistingModel", {"key": "value"})
assert result == existing_model
mock_console_ns.model.assert_not_called()
def test_register_enum_models_registers_single_enum():
from controllers.common.schema import register_enum_models
namespace = MagicMock(spec=Namespace)
register_enum_models(namespace, StatusEnum)
namespace.schema_model.assert_called_once()
model_name, schema = namespace.schema_model.call_args.args
assert model_name == "StatusEnum"
assert isinstance(schema, dict)
def test_register_enum_models_registers_multiple_enums():
from controllers.common.schema import register_enum_models
namespace = MagicMock(spec=Namespace)
register_enum_models(namespace, StatusEnum, PriorityEnum)
assert namespace.schema_model.call_count == 2
called_names = [call.args[0] for call in namespace.schema_model.call_args_list]
assert called_names == ["StatusEnum", "PriorityEnum"]
def test_register_enum_models_uses_correct_ref_template():
from controllers.common.schema import register_enum_models
namespace = MagicMock(spec=Namespace)
register_enum_models(namespace, StatusEnum)
schema = namespace.schema_model.call_args.args[1]
# Verify the schema contains enum values
assert "enum" in schema or "anyOf" in schema

View File

@@ -124,12 +124,12 @@ def test_message_cycle_manager_uses_new_conversation_flag(monkeypatch):
def start(self):
self.started = True
def fake_thread(*args, **kwargs):
def fake_thread(**kwargs):
thread = DummyThread(**kwargs)
captured["thread"] = thread
return thread
monkeypatch.setattr(message_cycle_manager, "Timer", fake_thread)
monkeypatch.setattr(message_cycle_manager, "Thread", fake_thread)
manager = MessageCycleManager(application_generate_entity=entity, task_state=MagicMock())
thread = manager.generate_conversation_name(conversation_id="existing-conversation-id", query="hello")

View File

@@ -0,0 +1,200 @@
"""Unit tests for TraceQueueManager telemetry guard.
This test suite verifies that TraceQueueManager correctly drops trace tasks
when telemetry is disabled, proving Bug 1 from code review is a false positive.
The guard logic moved from persistence.py to TraceQueueManager.add_trace_task()
at line 1282 of ops_trace_manager.py:
if self._enterprise_telemetry_enabled or self.trace_instance:
trace_task.app_id = self.app_id
trace_manager_queue.put(trace_task)
Tasks are only enqueued if EITHER:
- Enterprise telemetry is enabled (_enterprise_telemetry_enabled=True), OR
- A third-party trace instance (Langfuse, etc.) is configured
When BOTH are false, tasks are silently dropped (correct behavior).
"""
import queue
import sys
import types
from unittest.mock import MagicMock, patch
import pytest
@pytest.fixture
def trace_queue_manager_and_task(monkeypatch):
"""Fixture to provide TraceQueueManager and TraceTask with delayed imports."""
module_name = "core.ops.ops_trace_manager"
if module_name not in sys.modules:
ops_stub = types.ModuleType(module_name)
class StubTraceTask:
def __init__(self, trace_type):
self.trace_type = trace_type
self.app_id = None
class StubTraceQueueManager:
def __init__(self, app_id=None):
self.app_id = app_id
from core.telemetry.gateway import is_enterprise_telemetry_enabled
self._enterprise_telemetry_enabled = is_enterprise_telemetry_enabled()
self.trace_instance = StubOpsTraceManager.get_ops_trace_instance(app_id)
def add_trace_task(self, trace_task):
if self._enterprise_telemetry_enabled or self.trace_instance:
trace_task.app_id = self.app_id
from core.ops.ops_trace_manager import trace_manager_queue
trace_manager_queue.put(trace_task)
class StubOpsTraceManager:
@staticmethod
def get_ops_trace_instance(app_id):
return None
ops_stub.TraceQueueManager = StubTraceQueueManager
ops_stub.TraceTask = StubTraceTask
ops_stub.OpsTraceManager = StubOpsTraceManager
ops_stub.trace_manager_queue = MagicMock(spec=queue.Queue)
monkeypatch.setitem(sys.modules, module_name, ops_stub)
from core.ops.entities.trace_entity import TraceTaskName
ops_module = __import__(module_name, fromlist=["TraceQueueManager", "TraceTask"])
TraceQueueManager = ops_module.TraceQueueManager
TraceTask = ops_module.TraceTask
return TraceQueueManager, TraceTask, TraceTaskName
class TestTraceQueueManagerTelemetryGuard:
"""Test TraceQueueManager's telemetry guard in add_trace_task()."""
def test_task_not_enqueued_when_telemetry_disabled_and_no_trace_instance(self, trace_queue_manager_and_task):
"""Verify task is NOT enqueued when telemetry disabled and no trace instance.
This is the core guard: when _enterprise_telemetry_enabled=False AND
trace_instance=None, the task should be silently dropped.
"""
TraceQueueManager, TraceTask, TraceTaskName = trace_queue_manager_and_task
mock_queue = MagicMock(spec=queue.Queue)
trace_task = TraceTask(trace_type=TraceTaskName.WORKFLOW_TRACE)
with (
patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False),
patch("core.ops.ops_trace_manager.OpsTraceManager.get_ops_trace_instance", return_value=None),
patch("core.ops.ops_trace_manager.trace_manager_queue", mock_queue),
):
manager = TraceQueueManager(app_id="test-app-id")
manager.add_trace_task(trace_task)
mock_queue.put.assert_not_called()
def test_task_enqueued_when_telemetry_enabled(self, trace_queue_manager_and_task):
"""Verify task IS enqueued when enterprise telemetry is enabled.
When _enterprise_telemetry_enabled=True, the task should be enqueued
regardless of trace_instance state.
"""
TraceQueueManager, TraceTask, TraceTaskName = trace_queue_manager_and_task
mock_queue = MagicMock(spec=queue.Queue)
trace_task = TraceTask(trace_type=TraceTaskName.WORKFLOW_TRACE)
with (
patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True),
patch("core.ops.ops_trace_manager.OpsTraceManager.get_ops_trace_instance", return_value=None),
patch("core.ops.ops_trace_manager.trace_manager_queue", mock_queue),
):
manager = TraceQueueManager(app_id="test-app-id")
manager.add_trace_task(trace_task)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.app_id == "test-app-id"
def test_task_enqueued_when_trace_instance_configured(self, trace_queue_manager_and_task):
"""Verify task IS enqueued when third-party trace instance is configured.
When trace_instance is not None (e.g., Langfuse configured), the task
should be enqueued even if enterprise telemetry is disabled.
"""
TraceQueueManager, TraceTask, TraceTaskName = trace_queue_manager_and_task
mock_queue = MagicMock(spec=queue.Queue)
mock_trace_instance = MagicMock()
trace_task = TraceTask(trace_type=TraceTaskName.WORKFLOW_TRACE)
with (
patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False),
patch(
"core.ops.ops_trace_manager.OpsTraceManager.get_ops_trace_instance", return_value=mock_trace_instance
),
patch("core.ops.ops_trace_manager.trace_manager_queue", mock_queue),
):
manager = TraceQueueManager(app_id="test-app-id")
manager.add_trace_task(trace_task)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.app_id == "test-app-id"
def test_task_enqueued_when_both_telemetry_and_trace_instance_enabled(self, trace_queue_manager_and_task):
"""Verify task IS enqueued when both telemetry and trace instance are enabled.
When both _enterprise_telemetry_enabled=True AND trace_instance is set,
the task should definitely be enqueued.
"""
TraceQueueManager, TraceTask, TraceTaskName = trace_queue_manager_and_task
mock_queue = MagicMock(spec=queue.Queue)
mock_trace_instance = MagicMock()
trace_task = TraceTask(trace_type=TraceTaskName.WORKFLOW_TRACE)
with (
patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True),
patch(
"core.ops.ops_trace_manager.OpsTraceManager.get_ops_trace_instance", return_value=mock_trace_instance
),
patch("core.ops.ops_trace_manager.trace_manager_queue", mock_queue),
):
manager = TraceQueueManager(app_id="test-app-id")
manager.add_trace_task(trace_task)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.app_id == "test-app-id"
def test_app_id_set_before_enqueue(self, trace_queue_manager_and_task):
"""Verify app_id is set on the task before enqueuing.
The guard logic sets trace_task.app_id = self.app_id before calling
trace_manager_queue.put(trace_task). This test verifies that behavior.
"""
TraceQueueManager, TraceTask, TraceTaskName = trace_queue_manager_and_task
mock_queue = MagicMock(spec=queue.Queue)
trace_task = TraceTask(trace_type=TraceTaskName.WORKFLOW_TRACE)
with (
patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True),
patch("core.ops.ops_trace_manager.OpsTraceManager.get_ops_trace_instance", return_value=None),
patch("core.ops.ops_trace_manager.trace_manager_queue", mock_queue),
):
manager = TraceQueueManager(app_id="expected-app-id")
manager.add_trace_task(trace_task)
called_task = mock_queue.put.call_args[0][0]
assert called_task.app_id == "expected-app-id"

View File

@@ -0,0 +1,181 @@
"""Unit tests for core.telemetry.emit() routing and enterprise-only filtering."""
from __future__ import annotations
import queue
import sys
import types
from unittest.mock import MagicMock, patch
import pytest
from core.ops.entities.trace_entity import TraceTaskName
from core.telemetry.events import TelemetryContext, TelemetryEvent
@pytest.fixture
def telemetry_test_setup(monkeypatch):
module_name = "core.ops.ops_trace_manager"
ops_stub = types.ModuleType(module_name)
class StubTraceTask:
def __init__(self, trace_type, **kwargs):
self.trace_type = trace_type
self.app_id = None
self.kwargs = kwargs
class StubTraceQueueManager:
def __init__(self, app_id=None, user_id=None):
self.app_id = app_id
self.user_id = user_id
self.trace_instance = StubOpsTraceManager.get_ops_trace_instance(app_id)
def add_trace_task(self, trace_task):
trace_task.app_id = self.app_id
from core.ops.ops_trace_manager import trace_manager_queue
trace_manager_queue.put(trace_task)
class StubOpsTraceManager:
@staticmethod
def get_ops_trace_instance(app_id):
return None
ops_stub.TraceQueueManager = StubTraceQueueManager
ops_stub.TraceTask = StubTraceTask
ops_stub.OpsTraceManager = StubOpsTraceManager
ops_stub.trace_manager_queue = MagicMock(spec=queue.Queue)
monkeypatch.setitem(sys.modules, module_name, ops_stub)
from core.telemetry import emit
return emit, ops_stub.trace_manager_queue
class TestTelemetryEmit:
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_emit_enterprise_trace_creates_trace_task(self, mock_ee, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
event = TelemetryEvent(
name=TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={"key": "value"},
)
emit_fn(event)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.trace_type == TraceTaskName.DRAFT_NODE_EXECUTION_TRACE
def test_emit_community_trace_enqueued(self, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
event = TelemetryEvent(
name=TraceTaskName.WORKFLOW_TRACE,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={},
)
emit_fn(event)
mock_queue.put.assert_called_once()
def test_emit_enterprise_only_trace_dropped_when_ee_disabled(self, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
event = TelemetryEvent(
name=TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={},
)
emit_fn(event)
mock_queue.put.assert_not_called()
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_emit_all_enterprise_only_traces_allowed_when_ee_enabled(self, mock_ee, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
enterprise_only_traces = [
TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
TraceTaskName.NODE_EXECUTION_TRACE,
TraceTaskName.PROMPT_GENERATION_TRACE,
]
for trace_name in enterprise_only_traces:
mock_queue.reset_mock()
event = TelemetryEvent(
name=trace_name,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={},
)
emit_fn(event)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.trace_type == trace_name
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_emit_passes_name_directly_to_trace_task(self, mock_ee, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
event = TelemetryEvent(
name=TraceTaskName.DRAFT_NODE_EXECUTION_TRACE,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={"extra": "data"},
)
emit_fn(event)
mock_queue.put.assert_called_once()
called_task = mock_queue.put.call_args[0][0]
assert called_task.trace_type == TraceTaskName.DRAFT_NODE_EXECUTION_TRACE
assert isinstance(called_task.trace_type, TraceTaskName)
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_emit_with_provided_trace_manager(self, mock_ee, telemetry_test_setup):
emit_fn, mock_queue = telemetry_test_setup
mock_trace_manager = MagicMock()
mock_trace_manager.add_trace_task = MagicMock()
event = TelemetryEvent(
name=TraceTaskName.NODE_EXECUTION_TRACE,
context=TelemetryContext(
tenant_id="test-tenant",
user_id="test-user",
app_id="test-app",
),
payload={},
)
emit_fn(event, trace_manager=mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
called_task = mock_trace_manager.add_trace_task.call_args[0][0]
assert called_task.trace_type == TraceTaskName.NODE_EXECUTION_TRACE
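Outside the test harness, the event-based emit() API exercised above would be driven roughly as follows; imports and field names are taken from these tests, while the literal IDs are placeholders.

# Hedged usage sketch of core.telemetry.emit() with a TelemetryEvent.
from core.ops.entities.trace_entity import TraceTaskName
from core.telemetry import emit
from core.telemetry.events import TelemetryContext, TelemetryEvent

event = TelemetryEvent(
    name=TraceTaskName.WORKFLOW_TRACE,
    context=TelemetryContext(tenant_id="tenant-1", user_id="user-1", app_id="app-1"),
    payload={"workflow_run_id": "run-1"},
)
emit(event)  # CE-eligible traces are enqueued even when enterprise telemetry is disabled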

View File

@@ -0,0 +1,225 @@
from __future__ import annotations
import sys
from unittest.mock import MagicMock, patch
import pytest
from core.telemetry.gateway import emit, is_enterprise_telemetry_enabled
from enterprise.telemetry.contracts import TelemetryCase
class TestTelemetryCoreExports:
def test_is_enterprise_telemetry_enabled_exported(self) -> None:
from core.telemetry.gateway import is_enterprise_telemetry_enabled as exported_func
assert callable(exported_func)
@pytest.fixture
def mock_ops_trace_manager():
mock_module = MagicMock()
mock_trace_task_class = MagicMock()
mock_trace_task_class.return_value = MagicMock()
mock_module.TraceTask = mock_trace_task_class
mock_module.TraceQueueManager = MagicMock()
mock_trace_entity = MagicMock()
mock_trace_task_name = MagicMock()
mock_trace_task_name.return_value = "workflow"
mock_trace_entity.TraceTaskName = mock_trace_task_name
with (
patch.dict(sys.modules, {"core.ops.ops_trace_manager": mock_module}),
patch.dict(sys.modules, {"core.ops.entities.trace_entity": mock_trace_entity}),
):
yield mock_module, mock_trace_entity
class TestGatewayIntegrationTraceRouting:
@pytest.fixture
def mock_trace_manager(self) -> MagicMock:
return MagicMock()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_ce_eligible_trace_routed_to_trace_manager(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True):
context = {"app_id": "app-123", "user_id": "user-456", "tenant_id": "tenant-789"}
payload = {"workflow_run_id": "run-abc"}
emit(TelemetryCase.WORKFLOW_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_ce_eligible_trace_routed_when_ee_disabled(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"workflow_run_id": "run-abc"}
emit(TelemetryCase.WORKFLOW_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_enterprise_only_trace_dropped_when_ee_disabled(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_id": "node-abc"}
emit(TelemetryCase.NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_not_called()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_enterprise_only_trace_routed_when_ee_enabled(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_id": "node-abc"}
emit(TelemetryCase.NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
class TestGatewayIntegrationMetricRouting:
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_metric_case_routes_to_celery_task(
self,
mock_ee_enabled: MagicMock,
) -> None:
from enterprise.telemetry.contracts import TelemetryEnvelope
with patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay") as mock_delay:
context = {"tenant_id": "tenant-123"}
payload = {"app_id": "app-abc", "name": "My App"}
emit(TelemetryCase.APP_CREATED, context, payload)
mock_delay.assert_called_once()
envelope_json = mock_delay.call_args[0][0]
envelope = TelemetryEnvelope.model_validate_json(envelope_json)
assert envelope.case == TelemetryCase.APP_CREATED
assert envelope.tenant_id == "tenant-123"
assert envelope.payload["app_id"] == "app-abc"
@pytest.mark.usefixtures("mock_ops_trace_manager")
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_tool_execution_trace_routed(
self,
mock_ee_enabled: MagicMock,
) -> None:
mock_trace_manager = MagicMock()
context = {"tenant_id": "tenant-123", "app_id": "app-123"}
payload = {"tool_name": "test_tool", "tool_inputs": {}, "tool_outputs": "result"}
emit(TelemetryCase.TOOL_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@pytest.mark.usefixtures("mock_ops_trace_manager")
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_moderation_check_trace_routed(
self,
mock_ee_enabled: MagicMock,
) -> None:
mock_trace_manager = MagicMock()
context = {"tenant_id": "tenant-123", "app_id": "app-123"}
payload = {"message_id": "msg-123", "moderation_result": {"flagged": False}}
emit(TelemetryCase.MODERATION_CHECK, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
class TestGatewayIntegrationCEEligibility:
@pytest.fixture
def mock_trace_manager(self) -> MagicMock:
return MagicMock()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_workflow_run_is_ce_eligible(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"workflow_run_id": "run-abc"}
emit(TelemetryCase.WORKFLOW_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_message_run_is_ce_eligible(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"message_id": "msg-abc", "conversation_id": "conv-123"}
emit(TelemetryCase.MESSAGE_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_node_execution_not_ce_eligible(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_id": "node-abc"}
emit(TelemetryCase.NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_not_called()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_draft_node_execution_not_ce_eligible(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_execution_data": {}}
emit(TelemetryCase.DRAFT_NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_not_called()
@pytest.mark.usefixtures("mock_ops_trace_manager")
def test_prompt_generation_not_ce_eligible(
self,
mock_trace_manager: MagicMock,
) -> None:
with patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False):
context = {"app_id": "app-123", "user_id": "user-456", "tenant_id": "tenant-789"}
payload = {"operation_type": "generate", "instruction": "test"}
emit(TelemetryCase.PROMPT_GENERATION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_not_called()
class TestIsEnterpriseTelemetryEnabled:
def test_returns_false_when_exporter_import_fails(self) -> None:
with patch.dict(sys.modules, {"enterprise.telemetry.exporter": None}):
result = is_enterprise_telemetry_enabled()
assert result is False
def test_function_is_callable(self) -> None:
assert callable(is_enterprise_telemetry_enabled)

View File

@@ -0,0 +1,230 @@
"""Unit tests for telemetry gateway contracts."""
from __future__ import annotations
import pytest
from pydantic import ValidationError
from core.telemetry.gateway import CASE_ROUTING
from enterprise.telemetry.contracts import CaseRoute, SignalType, TelemetryCase, TelemetryEnvelope
class TestTelemetryCase:
"""Tests for TelemetryCase enum."""
def test_all_cases_defined(self) -> None:
"""Verify all 14 telemetry cases are defined."""
expected_cases = {
"WORKFLOW_RUN",
"NODE_EXECUTION",
"DRAFT_NODE_EXECUTION",
"MESSAGE_RUN",
"TOOL_EXECUTION",
"MODERATION_CHECK",
"SUGGESTED_QUESTION",
"DATASET_RETRIEVAL",
"GENERATE_NAME",
"PROMPT_GENERATION",
"APP_CREATED",
"APP_UPDATED",
"APP_DELETED",
"FEEDBACK_CREATED",
}
actual_cases = {case.name for case in TelemetryCase}
assert actual_cases == expected_cases
def test_case_values(self) -> None:
"""Verify case enum values are correct."""
assert TelemetryCase.WORKFLOW_RUN.value == "workflow_run"
assert TelemetryCase.NODE_EXECUTION.value == "node_execution"
assert TelemetryCase.DRAFT_NODE_EXECUTION.value == "draft_node_execution"
assert TelemetryCase.MESSAGE_RUN.value == "message_run"
assert TelemetryCase.TOOL_EXECUTION.value == "tool_execution"
assert TelemetryCase.MODERATION_CHECK.value == "moderation_check"
assert TelemetryCase.SUGGESTED_QUESTION.value == "suggested_question"
assert TelemetryCase.DATASET_RETRIEVAL.value == "dataset_retrieval"
assert TelemetryCase.GENERATE_NAME.value == "generate_name"
assert TelemetryCase.PROMPT_GENERATION.value == "prompt_generation"
assert TelemetryCase.APP_CREATED.value == "app_created"
assert TelemetryCase.APP_UPDATED.value == "app_updated"
assert TelemetryCase.APP_DELETED.value == "app_deleted"
assert TelemetryCase.FEEDBACK_CREATED.value == "feedback_created"
class TestCaseRoute:
"""Tests for CaseRoute model."""
def test_valid_trace_route(self) -> None:
"""Verify valid trace route creation."""
route = CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True)
assert route.signal_type == SignalType.TRACE
assert route.ce_eligible is True
def test_valid_metric_log_route(self) -> None:
"""Verify valid metric_log route creation."""
route = CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False)
assert route.signal_type == SignalType.METRIC_LOG
assert route.ce_eligible is False
def test_invalid_signal_type(self) -> None:
"""Verify invalid signal_type is rejected."""
with pytest.raises(ValidationError):
CaseRoute(signal_type="invalid", ce_eligible=True)
class TestTelemetryEnvelope:
"""Tests for TelemetryEnvelope model."""
def test_valid_envelope_minimal(self) -> None:
"""Verify valid minimal envelope creation."""
envelope = TelemetryEnvelope(
case=TelemetryCase.WORKFLOW_RUN,
tenant_id="tenant-123",
event_id="event-456",
payload={"key": "value"},
)
assert envelope.case == TelemetryCase.WORKFLOW_RUN
assert envelope.tenant_id == "tenant-123"
assert envelope.event_id == "event-456"
assert envelope.payload == {"key": "value"}
assert envelope.metadata is None
def test_valid_envelope_full(self) -> None:
"""Verify valid envelope with all fields."""
metadata = {"payload_ref": "telemetry/tenant-789/event-012.json"}
envelope = TelemetryEnvelope(
case=TelemetryCase.MESSAGE_RUN,
tenant_id="tenant-789",
event_id="event-012",
payload={"message": "hello"},
metadata=metadata,
)
assert envelope.case == TelemetryCase.MESSAGE_RUN
assert envelope.tenant_id == "tenant-789"
assert envelope.event_id == "event-012"
assert envelope.payload == {"message": "hello"}
assert envelope.metadata == metadata
def test_missing_required_case(self) -> None:
"""Verify missing case field is rejected."""
with pytest.raises(ValidationError):
TelemetryEnvelope(
tenant_id="tenant-123",
event_id="event-456",
payload={"key": "value"},
)
def test_missing_required_tenant_id(self) -> None:
"""Verify missing tenant_id field is rejected."""
with pytest.raises(ValidationError):
TelemetryEnvelope(
case=TelemetryCase.WORKFLOW_RUN,
event_id="event-456",
payload={"key": "value"},
)
def test_missing_required_event_id(self) -> None:
"""Verify missing event_id field is rejected."""
with pytest.raises(ValidationError):
TelemetryEnvelope(
case=TelemetryCase.WORKFLOW_RUN,
tenant_id="tenant-123",
payload={"key": "value"},
)
def test_missing_required_payload(self) -> None:
"""Verify missing payload field is rejected."""
with pytest.raises(ValidationError):
TelemetryEnvelope(
case=TelemetryCase.WORKFLOW_RUN,
tenant_id="tenant-123",
event_id="event-456",
)
def test_metadata_none(self) -> None:
"""Verify metadata can be None."""
envelope = TelemetryEnvelope(
case=TelemetryCase.WORKFLOW_RUN,
tenant_id="tenant-123",
event_id="event-456",
payload={"key": "value"},
metadata=None,
)
assert envelope.metadata is None
class TestCaseRouting:
"""Tests for CASE_ROUTING table."""
def test_all_cases_routed(self) -> None:
"""Verify all 14 cases have routing entries."""
assert len(CASE_ROUTING) == 14
for case in TelemetryCase:
assert case in CASE_ROUTING
def test_trace_ce_eligible_cases(self) -> None:
"""Verify trace cases with CE eligibility."""
ce_eligible_trace_cases = {
TelemetryCase.WORKFLOW_RUN,
TelemetryCase.MESSAGE_RUN,
}
for case in ce_eligible_trace_cases:
route = CASE_ROUTING[case]
assert route.signal_type == SignalType.TRACE
assert route.ce_eligible is True
def test_trace_enterprise_only_cases(self) -> None:
"""Verify trace cases that are enterprise-only."""
enterprise_only_trace_cases = {
TelemetryCase.NODE_EXECUTION,
TelemetryCase.DRAFT_NODE_EXECUTION,
TelemetryCase.PROMPT_GENERATION,
}
for case in enterprise_only_trace_cases:
route = CASE_ROUTING[case]
assert route.signal_type == SignalType.TRACE
assert route.ce_eligible is False
def test_metric_log_cases(self) -> None:
"""Verify metric/log-only cases."""
metric_log_cases = {
TelemetryCase.APP_CREATED,
TelemetryCase.APP_UPDATED,
TelemetryCase.APP_DELETED,
TelemetryCase.FEEDBACK_CREATED,
}
for case in metric_log_cases:
route = CASE_ROUTING[case]
assert route.signal_type == SignalType.METRIC_LOG
assert route.ce_eligible is False
def test_routing_table_completeness(self) -> None:
"""Verify routing table covers all cases with correct types."""
trace_cases = {
TelemetryCase.WORKFLOW_RUN,
TelemetryCase.MESSAGE_RUN,
TelemetryCase.NODE_EXECUTION,
TelemetryCase.DRAFT_NODE_EXECUTION,
TelemetryCase.PROMPT_GENERATION,
TelemetryCase.TOOL_EXECUTION,
TelemetryCase.MODERATION_CHECK,
TelemetryCase.SUGGESTED_QUESTION,
TelemetryCase.DATASET_RETRIEVAL,
TelemetryCase.GENERATE_NAME,
}
metric_log_cases = {
TelemetryCase.APP_CREATED,
TelemetryCase.APP_UPDATED,
TelemetryCase.APP_DELETED,
TelemetryCase.FEEDBACK_CREATED,
}
all_cases = trace_cases | metric_log_cases
assert len(all_cases) == 14
assert all_cases == set(TelemetryCase)
for case in trace_cases:
assert CASE_ROUTING[case].signal_type == SignalType.TRACE
for case in metric_log_cases:
assert CASE_ROUTING[case].signal_type == SignalType.METRIC_LOG
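For readers skimming these assertions, a hedged sketch of what the routing table is expected to contain, built from the CaseRoute/SignalType contracts exercised above; this is not the real CASE_ROUTING definition, just illustrative entries.

# Illustrative routing entries consistent with the assertions in this test file.
from enterprise.telemetry.contracts import CaseRoute, SignalType, TelemetryCase

EXAMPLE_ROUTES = {
    TelemetryCase.WORKFLOW_RUN: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=True),
    TelemetryCase.NODE_EXECUTION: CaseRoute(signal_type=SignalType.TRACE, ce_eligible=False),
    TelemetryCase.APP_CREATED: CaseRoute(signal_type=SignalType.METRIC_LOG, ce_eligible=False),
}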

View File

@@ -0,0 +1,121 @@
from unittest.mock import MagicMock, patch
import pytest
from enterprise.telemetry import event_handlers
from enterprise.telemetry.contracts import TelemetryCase
@pytest.fixture
def mock_gateway_emit():
with patch("core.telemetry.gateway.emit") as mock:
yield mock
def test_handle_app_created_calls_task(mock_gateway_emit):
sender = MagicMock()
sender.id = "app-123"
sender.tenant_id = "tenant-456"
sender.mode = "chat"
event_handlers._handle_app_created(sender)
mock_gateway_emit.assert_called_once_with(
case=TelemetryCase.APP_CREATED,
context={"tenant_id": "tenant-456"},
payload={"app_id": "app-123", "mode": "chat"},
)
def test_handle_app_created_no_exporter(mock_gateway_emit):
"""Gateway handles exporter availability internally; handler always calls gateway."""
sender = MagicMock()
sender.id = "app-123"
sender.tenant_id = "tenant-456"
event_handlers._handle_app_created(sender)
mock_gateway_emit.assert_called_once()
def test_handle_app_updated_calls_task(mock_gateway_emit):
sender = MagicMock()
sender.id = "app-123"
sender.tenant_id = "tenant-456"
event_handlers._handle_app_updated(sender)
mock_gateway_emit.assert_called_once_with(
case=TelemetryCase.APP_UPDATED,
context={"tenant_id": "tenant-456"},
payload={"app_id": "app-123"},
)
def test_handle_app_deleted_calls_task(mock_gateway_emit):
sender = MagicMock()
sender.id = "app-123"
sender.tenant_id = "tenant-456"
event_handlers._handle_app_deleted(sender)
mock_gateway_emit.assert_called_once_with(
case=TelemetryCase.APP_DELETED,
context={"tenant_id": "tenant-456"},
payload={"app_id": "app-123"},
)
def test_handle_feedback_created_calls_task(mock_gateway_emit):
sender = MagicMock()
sender.message_id = "msg-123"
sender.app_id = "app-456"
sender.conversation_id = "conv-789"
sender.from_end_user_id = "user-001"
sender.from_account_id = None
sender.rating = "like"
sender.from_source = "api"
sender.content = "Great response!"
event_handlers._handle_feedback_created(sender, tenant_id="tenant-456")
mock_gateway_emit.assert_called_once_with(
case=TelemetryCase.FEEDBACK_CREATED,
context={"tenant_id": "tenant-456"},
payload={
"message_id": "msg-123",
"app_id": "app-456",
"conversation_id": "conv-789",
"from_end_user_id": "user-001",
"from_account_id": None,
"rating": "like",
"from_source": "api",
"content": "Great response!",
},
)
def test_handle_feedback_created_no_exporter(mock_gateway_emit):
"""Gateway handles exporter availability internally; handler always calls gateway."""
sender = MagicMock()
sender.message_id = "msg-123"
event_handlers._handle_feedback_created(sender, tenant_id="tenant-456")
mock_gateway_emit.assert_called_once()
def test_handlers_create_valid_envelopes(mock_gateway_emit):
"""Verify handlers pass correct TelemetryCase and payload structure."""
sender = MagicMock()
sender.id = "app-123"
sender.tenant_id = "tenant-456"
sender.mode = "chat"
event_handlers._handle_app_created(sender)
call_kwargs = mock_gateway_emit.call_args[1]
assert call_kwargs["case"] == TelemetryCase.APP_CREATED
assert call_kwargs["context"]["tenant_id"] == "tenant-456"
assert call_kwargs["payload"]["app_id"] == "app-123"
assert call_kwargs["payload"]["mode"] == "chat"

View File

@@ -0,0 +1,263 @@
"""Unit tests for EnterpriseExporter and _ExporterFactory."""
from types import SimpleNamespace
from unittest.mock import MagicMock, patch
from configs.enterprise import EnterpriseTelemetryConfig
from enterprise.telemetry.exporter import EnterpriseExporter
def test_config_api_key_default_empty():
"""Test that ENTERPRISE_OTLP_API_KEY defaults to empty string."""
config = EnterpriseTelemetryConfig()
assert config.ENTERPRISE_OTLP_API_KEY == ""
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_api_key_only_injects_bearer_header(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that API key alone injects Bearer authorization header."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="test-secret-key",
)
EnterpriseExporter(mock_config)
# Verify span exporter was called with Bearer header
assert mock_span_exporter.call_args is not None
headers = mock_span_exporter.call_args.kwargs.get("headers")
assert headers is not None
assert ("authorization", "Bearer test-secret-key") in headers
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_empty_api_key_no_auth_header(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that empty API key does not inject authorization header."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="",
)
EnterpriseExporter(mock_config)
# Verify span exporter was called without authorization header
assert mock_span_exporter.call_args is not None
headers = mock_span_exporter.call_args.kwargs.get("headers")
# Headers should be None or not contain authorization
if headers is not None:
assert not any(key == "authorization" for key, _ in headers)
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_api_key_and_custom_headers_merge(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that API key and custom headers are merged correctly."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="x-custom=foo",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="test-key",
)
EnterpriseExporter(mock_config)
# Verify both headers are present
assert mock_span_exporter.call_args is not None
headers = mock_span_exporter.call_args.kwargs.get("headers")
assert headers is not None
assert ("authorization", "Bearer test-key") in headers
assert ("x-custom", "foo") in headers
@patch("enterprise.telemetry.exporter.logger")
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_api_key_overrides_conflicting_header(
mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock, mock_logger: MagicMock
) -> None:
"""Test that API key overrides conflicting authorization header and logs warning."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="authorization=Basic old",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="test-key",
)
EnterpriseExporter(mock_config)
# Verify Bearer header takes precedence
assert mock_span_exporter.call_args is not None
headers = mock_span_exporter.call_args.kwargs.get("headers")
assert headers is not None
assert ("authorization", "Bearer test-key") in headers
# Verify old authorization header is not present
assert ("authorization", "Basic old") not in headers
# Verify warning was logged
mock_logger.warning.assert_called_once()
assert mock_logger.warning.call_args is not None
warning_message = mock_logger.warning.call_args[0][0]
assert "ENTERPRISE_OTLP_API_KEY is set" in warning_message
assert "authorization" in warning_message
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_https_endpoint_uses_secure_grpc(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that https:// endpoint enables TLS (insecure=False) for gRPC."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="test-key",
)
EnterpriseExporter(mock_config)
# Verify insecure=False for both exporters (https:// scheme)
assert mock_span_exporter.call_args is not None
assert mock_span_exporter.call_args.kwargs["insecure"] is False
assert mock_metric_exporter.call_args is not None
assert mock_metric_exporter.call_args.kwargs["insecure"] is False
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_http_endpoint_uses_insecure_grpc(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that http:// endpoint uses insecure gRPC (insecure=True)."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="http://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="",
)
EnterpriseExporter(mock_config)
# Verify insecure=True for both exporters (http:// scheme)
assert mock_span_exporter.call_args is not None
assert mock_span_exporter.call_args.kwargs["insecure"] is True
assert mock_metric_exporter.call_args is not None
assert mock_metric_exporter.call_args.kwargs["insecure"] is True
@patch("enterprise.telemetry.exporter.HTTPSpanExporter")
@patch("enterprise.telemetry.exporter.HTTPMetricExporter")
def test_insecure_not_passed_to_http_exporters(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that insecure parameter is not passed to HTTP exporters."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="http://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="http",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="test-key",
)
EnterpriseExporter(mock_config)
# Verify insecure kwarg is NOT in HTTP exporter calls
assert mock_span_exporter.call_args is not None
assert "insecure" not in mock_span_exporter.call_args.kwargs
assert mock_metric_exporter.call_args is not None
assert "insecure" not in mock_metric_exporter.call_args.kwargs
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_api_key_with_special_chars_preserved(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that API key with special characters is preserved without mangling."""
special_key = "abc+def/ghi=jkl=="
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="https://collector.example.com",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY=special_key,
)
EnterpriseExporter(mock_config)
# Verify special characters are preserved in Bearer header
assert mock_span_exporter.call_args is not None
headers = mock_span_exporter.call_args.kwargs.get("headers")
assert headers is not None
assert ("authorization", f"Bearer {special_key}") in headers
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_no_scheme_localhost_uses_insecure(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that endpoint without scheme defaults to insecure for localhost."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="localhost:4317",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="",
)
EnterpriseExporter(mock_config)
# Verify insecure=True for localhost without scheme
assert mock_span_exporter.call_args is not None
assert mock_span_exporter.call_args.kwargs["insecure"] is True
assert mock_metric_exporter.call_args is not None
assert mock_metric_exporter.call_args.kwargs["insecure"] is True
@patch("enterprise.telemetry.exporter.GRPCSpanExporter")
@patch("enterprise.telemetry.exporter.GRPCMetricExporter")
def test_no_scheme_production_uses_insecure(mock_metric_exporter: MagicMock, mock_span_exporter: MagicMock) -> None:
"""Test that endpoint without scheme defaults to insecure (not https://)."""
mock_config = SimpleNamespace(
ENTERPRISE_OTLP_ENDPOINT="collector.example.com:4317",
ENTERPRISE_OTLP_HEADERS="",
ENTERPRISE_OTLP_PROTOCOL="grpc",
ENTERPRISE_SERVICE_NAME="dify",
ENTERPRISE_OTEL_SAMPLING_RATE=1.0,
ENTERPRISE_INCLUDE_CONTENT=True,
ENTERPRISE_OTLP_API_KEY="",
)
EnterpriseExporter(mock_config)
# Verify insecure=True for any endpoint without https:// scheme
assert mock_span_exporter.call_args is not None
assert mock_span_exporter.call_args.kwargs["insecure"] is True
assert mock_metric_exporter.call_args is not None
assert mock_metric_exporter.call_args.kwargs["insecure"] is True
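# A minimal sketch of the scheme rule the tests above pin down: only an explicit
# https:// scheme turns TLS on for the gRPC exporters, while http:// and bare
# host:port endpoints stay on an insecure channel, and the HTTP exporters never
# receive the kwarg at all. The helper name is hypothetical.
def use_insecure_grpc_channel(endpoint: str) -> bool:
    return not endpoint.startswith("https://")


assert use_insecure_grpc_channel("https://collector.example.com") is False
assert use_insecure_grpc_channel("http://collector.example.com") is True
assert use_insecure_grpc_channel("localhost:4317") is True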

View File

@@ -0,0 +1,272 @@
from __future__ import annotations
import sys
from unittest.mock import MagicMock, patch
import pytest
from core.ops.entities.trace_entity import TraceTaskName
from core.telemetry.gateway import (
CASE_ROUTING,
CASE_TO_TRACE_TASK,
PAYLOAD_SIZE_THRESHOLD_BYTES,
emit,
)
from enterprise.telemetry.contracts import SignalType, TelemetryCase, TelemetryEnvelope
class TestCaseRoutingTable:
def test_all_cases_have_routing(self) -> None:
for case in TelemetryCase:
assert case in CASE_ROUTING, f"Missing routing for {case}"
def test_trace_cases(self) -> None:
trace_cases = [
TelemetryCase.WORKFLOW_RUN,
TelemetryCase.MESSAGE_RUN,
TelemetryCase.NODE_EXECUTION,
TelemetryCase.DRAFT_NODE_EXECUTION,
TelemetryCase.PROMPT_GENERATION,
]
for case in trace_cases:
assert CASE_ROUTING[case].signal_type is SignalType.TRACE, f"{case} should be trace"
def test_metric_log_cases(self) -> None:
metric_log_cases = [
TelemetryCase.APP_CREATED,
TelemetryCase.APP_UPDATED,
TelemetryCase.APP_DELETED,
TelemetryCase.FEEDBACK_CREATED,
]
for case in metric_log_cases:
assert CASE_ROUTING[case].signal_type is SignalType.METRIC_LOG, f"{case} should be metric_log"
def test_ce_eligible_cases(self) -> None:
ce_eligible_cases = [
TelemetryCase.WORKFLOW_RUN,
TelemetryCase.MESSAGE_RUN,
TelemetryCase.TOOL_EXECUTION,
TelemetryCase.MODERATION_CHECK,
TelemetryCase.SUGGESTED_QUESTION,
TelemetryCase.DATASET_RETRIEVAL,
TelemetryCase.GENERATE_NAME,
]
for case in ce_eligible_cases:
assert CASE_ROUTING[case].ce_eligible is True, f"{case} should be CE eligible"
def test_enterprise_only_cases(self) -> None:
enterprise_only_cases = [
TelemetryCase.NODE_EXECUTION,
TelemetryCase.DRAFT_NODE_EXECUTION,
TelemetryCase.PROMPT_GENERATION,
]
for case in enterprise_only_cases:
assert CASE_ROUTING[case].ce_eligible is False, f"{case} should be enterprise-only"
def test_trace_cases_have_task_name_mapping(self) -> None:
trace_cases = [c for c in TelemetryCase if CASE_ROUTING[c].signal_type is SignalType.TRACE]
for case in trace_cases:
assert case in CASE_TO_TRACE_TASK, f"Missing TraceTaskName mapping for {case}"
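# Illustrative shape of a CASE_ROUTING entry, inferred from the attribute accesses in
# the class above; only the signal_type and ce_eligible fields are grounded in the
# assertions, the dataclass name and this two-entry table are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class RouteSketch:
    signal_type: SignalType
    ce_eligible: bool


ROUTING_SKETCH = {
    TelemetryCase.WORKFLOW_RUN: RouteSketch(SignalType.TRACE, ce_eligible=True),
    TelemetryCase.NODE_EXECUTION: RouteSketch(SignalType.TRACE, ce_eligible=False),
}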
@pytest.fixture
def mock_ops_trace_manager():
mock_module = MagicMock()
mock_trace_task_class = MagicMock()
mock_trace_task_class.return_value = MagicMock()
mock_module.TraceTask = mock_trace_task_class
mock_module.TraceQueueManager = MagicMock()
mock_trace_entity = MagicMock()
mock_trace_task_name = MagicMock()
mock_trace_task_name.return_value = "workflow"
mock_trace_entity.TraceTaskName = mock_trace_task_name
with (
patch.dict(sys.modules, {"core.ops.ops_trace_manager": mock_module}),
patch.dict(sys.modules, {"core.ops.entities.trace_entity": mock_trace_entity}),
):
yield mock_module, mock_trace_entity
class TestGatewayTraceRouting:
@pytest.fixture
def mock_trace_manager(self) -> MagicMock:
return MagicMock()
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_trace_case_routes_to_trace_manager(
self,
mock_ee_enabled: MagicMock,
mock_trace_manager: MagicMock,
mock_ops_trace_manager: tuple[MagicMock, MagicMock],
) -> None:
context = {"app_id": "app-123", "user_id": "user-456", "tenant_id": "tenant-789"}
payload = {"workflow_run_id": "run-abc"}
emit(TelemetryCase.WORKFLOW_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False)
def test_ce_eligible_trace_enqueued_when_ee_disabled(
self,
mock_ee_enabled: MagicMock,
mock_trace_manager: MagicMock,
mock_ops_trace_manager: tuple[MagicMock, MagicMock],
) -> None:
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"workflow_run_id": "run-abc"}
emit(TelemetryCase.WORKFLOW_RUN, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=False)
def test_enterprise_only_trace_dropped_when_ee_disabled(
self,
mock_ee_enabled: MagicMock,
mock_trace_manager: MagicMock,
mock_ops_trace_manager: tuple[MagicMock, MagicMock],
) -> None:
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_id": "node-abc"}
emit(TelemetryCase.NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_not_called()
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
def test_enterprise_only_trace_enqueued_when_ee_enabled(
self,
mock_ee_enabled: MagicMock,
mock_trace_manager: MagicMock,
mock_ops_trace_manager: tuple[MagicMock, MagicMock],
) -> None:
context = {"app_id": "app-123", "user_id": "user-456"}
payload = {"node_id": "node-abc"}
emit(TelemetryCase.NODE_EXECUTION, context, payload, mock_trace_manager)
mock_trace_manager.add_trace_task.assert_called_once()
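# A sketch of the enqueue decision the four tests above exercise: trace cases reach the
# TraceQueueManager, but enterprise-only cases are dropped unless enterprise telemetry
# is enabled. The real emit() also builds the TraceTask payload and user context, so
# treat this helper as an assumption about its shape.
def should_enqueue_trace(case: TelemetryCase, ee_enabled: bool) -> bool:
    route = CASE_ROUTING[case]
    if route.signal_type is not SignalType.TRACE:
        return False
    return route.ce_eligible or ee_enabled


assert should_enqueue_trace(TelemetryCase.WORKFLOW_RUN, ee_enabled=False) is True
assert should_enqueue_trace(TelemetryCase.NODE_EXECUTION, ee_enabled=False) is False
assert should_enqueue_trace(TelemetryCase.NODE_EXECUTION, ee_enabled=True) is True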
class TestGatewayMetricLogRouting:
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
@patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay")
def test_metric_case_routes_to_celery_task(
self,
mock_delay: MagicMock,
mock_ee_enabled: MagicMock,
) -> None:
context = {"tenant_id": "tenant-123"}
payload = {"app_id": "app-abc", "name": "My App"}
emit(TelemetryCase.APP_CREATED, context, payload)
mock_delay.assert_called_once()
envelope_json = mock_delay.call_args[0][0]
envelope = TelemetryEnvelope.model_validate_json(envelope_json)
assert envelope.case == TelemetryCase.APP_CREATED
assert envelope.tenant_id == "tenant-123"
assert envelope.payload["app_id"] == "app-abc"
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
@patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay")
def test_envelope_has_unique_event_id(
self,
mock_delay: MagicMock,
mock_ee_enabled: MagicMock,
) -> None:
context = {"tenant_id": "tenant-123"}
payload = {"app_id": "app-abc"}
emit(TelemetryCase.APP_CREATED, context, payload)
emit(TelemetryCase.APP_CREATED, context, payload)
assert mock_delay.call_count == 2
envelope1 = TelemetryEnvelope.model_validate_json(mock_delay.call_args_list[0][0][0])
envelope2 = TelemetryEnvelope.model_validate_json(mock_delay.call_args_list[1][0][0])
assert envelope1.event_id != envelope2.event_id
class TestGatewayPayloadSizing:
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
@patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay")
def test_small_payload_inlined(
self,
mock_delay: MagicMock,
mock_ee_enabled: MagicMock,
) -> None:
context = {"tenant_id": "tenant-123"}
payload = {"key": "small_value"}
emit(TelemetryCase.APP_CREATED, context, payload)
envelope_json = mock_delay.call_args[0][0]
envelope = TelemetryEnvelope.model_validate_json(envelope_json)
assert envelope.payload == payload
assert envelope.metadata is None
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
@patch("core.telemetry.gateway.storage")
@patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay")
def test_large_payload_stored(
self,
mock_delay: MagicMock,
mock_storage: MagicMock,
mock_ee_enabled: MagicMock,
) -> None:
context = {"tenant_id": "tenant-123"}
large_value = "x" * (PAYLOAD_SIZE_THRESHOLD_BYTES + 1000)
payload = {"key": large_value}
emit(TelemetryCase.APP_CREATED, context, payload)
mock_storage.save.assert_called_once()
storage_key = mock_storage.save.call_args[0][0]
assert storage_key.startswith("telemetry/tenant-123/")
envelope_json = mock_delay.call_args[0][0]
envelope = TelemetryEnvelope.model_validate_json(envelope_json)
assert envelope.payload == {}
assert envelope.metadata is not None
assert envelope.metadata["payload_ref"] == storage_key
@patch("core.telemetry.gateway.is_enterprise_telemetry_enabled", return_value=True)
@patch("core.telemetry.gateway.storage")
@patch("tasks.enterprise_telemetry_task.process_enterprise_telemetry.delay")
def test_large_payload_fallback_on_storage_error(
self,
mock_delay: MagicMock,
mock_storage: MagicMock,
mock_ee_enabled: MagicMock,
) -> None:
mock_storage.save.side_effect = Exception("Storage failure")
context = {"tenant_id": "tenant-123"}
large_value = "x" * (PAYLOAD_SIZE_THRESHOLD_BYTES + 1000)
payload = {"key": large_value}
emit(TelemetryCase.APP_CREATED, context, payload)
envelope_json = mock_delay.call_args[0][0]
envelope = TelemetryEnvelope.model_validate_json(envelope_json)
assert envelope.payload == payload
assert envelope.metadata is None
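# A sketch of the metric/log path covered above: the envelope is serialized for the
# Celery task, and an oversized payload is offloaded to object storage under a
# telemetry/<tenant_id>/... key referenced through metadata["payload_ref"], falling
# back to the inline payload if the write fails. The key suffix, helper name and
# storage_client parameter are assumptions about the real gateway code.
def build_envelope_sketch(
    case: TelemetryCase, tenant_id: str, event_id: str, payload: dict, storage_client
) -> TelemetryEnvelope:
    import json

    raw = json.dumps(payload).encode("utf-8")
    metadata = None
    if len(raw) > PAYLOAD_SIZE_THRESHOLD_BYTES:
        key = f"telemetry/{tenant_id}/{event_id}.json"
        try:
            storage_client.save(key, raw)
            payload, metadata = {}, {"payload_ref": key}
        except Exception:
            pass  # storage unavailable: keep the payload inline
    return TelemetryEnvelope(case=case, tenant_id=tenant_id, event_id=event_id, payload=payload, metadata=metadata)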
class TestTraceTaskNameMapping:
def test_workflow_run_mapping(self) -> None:
assert CASE_TO_TRACE_TASK[TelemetryCase.WORKFLOW_RUN] is TraceTaskName.WORKFLOW_TRACE
def test_message_run_mapping(self) -> None:
assert CASE_TO_TRACE_TASK[TelemetryCase.MESSAGE_RUN] is TraceTaskName.MESSAGE_TRACE
def test_node_execution_mapping(self) -> None:
assert CASE_TO_TRACE_TASK[TelemetryCase.NODE_EXECUTION] is TraceTaskName.NODE_EXECUTION_TRACE
def test_draft_node_execution_mapping(self) -> None:
assert CASE_TO_TRACE_TASK[TelemetryCase.DRAFT_NODE_EXECUTION] is TraceTaskName.DRAFT_NODE_EXECUTION_TRACE
def test_prompt_generation_mapping(self) -> None:
assert CASE_TO_TRACE_TASK[TelemetryCase.PROMPT_GENERATION] is TraceTaskName.PROMPT_GENERATION_TRACE

View File

@@ -0,0 +1,507 @@
"""Unit tests for EnterpriseMetricHandler."""
import json
from unittest.mock import MagicMock, patch
import pytest
from enterprise.telemetry.contracts import TelemetryCase, TelemetryEnvelope
from enterprise.telemetry.metric_handler import EnterpriseMetricHandler
@pytest.fixture
def mock_redis():
with patch("enterprise.telemetry.metric_handler.redis_client") as mock:
yield mock
@pytest.fixture
def sample_envelope():
return TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="test-tenant",
event_id="test-event-123",
payload={"app_id": "app-123", "name": "Test App"},
)
def test_dispatch_app_created(sample_envelope, mock_redis):
mock_redis.set.return_value = True
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_app_created") as mock_handler:
handler.handle(sample_envelope)
mock_handler.assert_called_once_with(sample_envelope)
def test_dispatch_app_updated(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_UPDATED,
tenant_id="test-tenant",
event_id="test-event-456",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_app_updated") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_app_deleted(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_DELETED,
tenant_id="test-tenant",
event_id="test-event-789",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_app_deleted") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_feedback_created(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.FEEDBACK_CREATED,
tenant_id="test-tenant",
event_id="test-event-abc",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_feedback_created") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_message_run(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.MESSAGE_RUN,
tenant_id="test-tenant",
event_id="test-event-msg",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_message_run") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_tool_execution(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.TOOL_EXECUTION,
tenant_id="test-tenant",
event_id="test-event-tool",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_tool_execution") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_moderation_check(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.MODERATION_CHECK,
tenant_id="test-tenant",
event_id="test-event-mod",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_moderation_check") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_suggested_question(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.SUGGESTED_QUESTION,
tenant_id="test-tenant",
event_id="test-event-sq",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_suggested_question") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_dataset_retrieval(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.DATASET_RETRIEVAL,
tenant_id="test-tenant",
event_id="test-event-ds",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_dataset_retrieval") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_generate_name(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.GENERATE_NAME,
tenant_id="test-tenant",
event_id="test-event-gn",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_generate_name") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_dispatch_prompt_generation(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.PROMPT_GENERATION,
tenant_id="test-tenant",
event_id="test-event-pg",
payload={},
)
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_prompt_generation") as mock_handler:
handler.handle(envelope)
mock_handler.assert_called_once_with(envelope)
def test_all_known_cases_have_handlers(mock_redis):
mock_redis.set.return_value = True
handler = EnterpriseMetricHandler()
for case in TelemetryCase:
envelope = TelemetryEnvelope(
case=case,
tenant_id="test-tenant",
event_id=f"test-{case.value}",
payload={},
)
handler.handle(envelope)
def test_idempotency_duplicate(sample_envelope, mock_redis):
mock_redis.set.return_value = None
handler = EnterpriseMetricHandler()
with patch.object(handler, "_on_app_created") as mock_handler:
handler.handle(sample_envelope)
mock_handler.assert_not_called()
def test_idempotency_first_seen(sample_envelope, mock_redis):
mock_redis.set.return_value = True
handler = EnterpriseMetricHandler()
is_dup = handler._is_duplicate(sample_envelope)
assert is_dup is False
mock_redis.set.assert_called_once_with(
"telemetry:dedup:test-tenant:test-event-123",
b"1",
nx=True,
ex=3600,
)
def test_idempotency_redis_failure_fails_open(sample_envelope, mock_redis, caplog):
mock_redis.set.side_effect = Exception("Redis unavailable")
handler = EnterpriseMetricHandler()
is_dup = handler._is_duplicate(sample_envelope)
assert is_dup is False
assert "Redis unavailable for deduplication check" in caplog.text
def test_rehydration_uses_payload(sample_envelope):
handler = EnterpriseMetricHandler()
payload = handler._rehydrate(sample_envelope)
assert payload == {"app_id": "app-123", "name": "Test App"}
def test_rehydration_from_storage():
"""Verify _rehydrate loads payload from object storage via payload_ref."""
stored_data = {"app_id": "app-stored", "mode": "workflow"}
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="test-tenant",
event_id="test-event-fb",
payload={},
metadata={"payload_ref": "telemetry/test-tenant/test-event-fb.json"},
)
handler = EnterpriseMetricHandler()
with patch("enterprise.telemetry.metric_handler.storage") as mock_storage:
mock_storage.load.return_value = json.dumps(stored_data).encode("utf-8")
payload = handler._rehydrate(envelope)
assert payload == stored_data
mock_storage.load.assert_called_once_with("telemetry/test-tenant/test-event-fb.json")
def test_rehydration_storage_failure_emits_degraded_event():
"""Verify _rehydrate emits degraded event when storage load fails."""
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="test-tenant",
event_id="test-event-fail",
payload={},
metadata={"payload_ref": "telemetry/test-tenant/test-event-fail.json"},
)
handler = EnterpriseMetricHandler()
with (
patch("enterprise.telemetry.metric_handler.storage") as mock_storage,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_storage.load.side_effect = Exception("Storage unavailable")
payload = handler._rehydrate(envelope)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
assert payload == {}
mock_emit.assert_called_once()
call_args = mock_emit.call_args
assert call_args[1]["event_name"] == EnterpriseTelemetryEvent.REHYDRATION_FAILED
assert call_args[1]["attributes"]["rehydration_failed"] is True
def test_rehydration_emits_degraded_event_on_empty_payload():
"""Verify _rehydrate emits degraded event when payload is empty and no ref exists."""
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="test-tenant",
event_id="test-event-empty",
payload={},
)
handler = EnterpriseMetricHandler()
with patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit:
payload = handler._rehydrate(envelope)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
assert payload == {}
mock_emit.assert_called_once()
call_args = mock_emit.call_args
assert call_args[1]["event_name"] == EnterpriseTelemetryEvent.REHYDRATION_FAILED
assert call_args[1]["attributes"]["rehydration_failed"] is True
def test_on_app_created_emits_correct_event(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="tenant-123",
event_id="event-456",
payload={"app_id": "app-789", "mode": "chat"},
)
handler = EnterpriseMetricHandler()
with (
patch("extensions.ext_enterprise_telemetry.get_enterprise_exporter") as mock_get_exporter,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_exporter = MagicMock()
mock_get_exporter.return_value = mock_exporter
handler._on_app_created(envelope)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
mock_emit.assert_called_once_with(
event_name=EnterpriseTelemetryEvent.APP_CREATED,
attributes={
"dify.app.id": "app-789",
"dify.tenant_id": "tenant-123",
"dify.event.id": "event-456",
"dify.app.mode": "chat",
},
tenant_id="tenant-123",
)
from enterprise.telemetry.entities import EnterpriseTelemetryCounter
mock_exporter.increment_counter.assert_called_once()
call_args = mock_exporter.increment_counter.call_args
assert call_args[0][0] == EnterpriseTelemetryCounter.APP_CREATED
assert call_args[0][1] == 1
assert call_args[0][2]["tenant_id"] == "tenant-123"
assert call_args[0][2]["app_id"] == "app-789"
assert call_args[0][2]["mode"] == "chat"
def test_on_app_updated_emits_correct_event(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_UPDATED,
tenant_id="tenant-123",
event_id="event-456",
payload={"app_id": "app-789"},
)
handler = EnterpriseMetricHandler()
with (
patch("extensions.ext_enterprise_telemetry.get_enterprise_exporter") as mock_get_exporter,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_exporter = MagicMock()
mock_get_exporter.return_value = mock_exporter
handler._on_app_updated(envelope)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
mock_emit.assert_called_once_with(
event_name=EnterpriseTelemetryEvent.APP_UPDATED,
attributes={
"dify.app.id": "app-789",
"dify.tenant_id": "tenant-123",
"dify.event.id": "event-456",
},
tenant_id="tenant-123",
)
from enterprise.telemetry.entities import EnterpriseTelemetryCounter
mock_exporter.increment_counter.assert_called_once()
call_args = mock_exporter.increment_counter.call_args
assert call_args[0][0] == EnterpriseTelemetryCounter.APP_UPDATED
assert call_args[0][1] == 1
assert call_args[0][2]["tenant_id"] == "tenant-123"
assert call_args[0][2]["app_id"] == "app-789"
def test_on_app_deleted_emits_correct_event(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_DELETED,
tenant_id="tenant-123",
event_id="event-456",
payload={"app_id": "app-789"},
)
handler = EnterpriseMetricHandler()
with (
patch("extensions.ext_enterprise_telemetry.get_enterprise_exporter") as mock_get_exporter,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_exporter = MagicMock()
mock_get_exporter.return_value = mock_exporter
handler._on_app_deleted(envelope)
from enterprise.telemetry.entities import EnterpriseTelemetryEvent
mock_emit.assert_called_once_with(
event_name=EnterpriseTelemetryEvent.APP_DELETED,
attributes={
"dify.app.id": "app-789",
"dify.tenant_id": "tenant-123",
"dify.event.id": "event-456",
},
tenant_id="tenant-123",
)
from enterprise.telemetry.entities import EnterpriseTelemetryCounter
mock_exporter.increment_counter.assert_called_once()
call_args = mock_exporter.increment_counter.call_args
assert call_args[0][0] == EnterpriseTelemetryCounter.APP_DELETED
assert call_args[0][1] == 1
assert call_args[0][2]["tenant_id"] == "tenant-123"
assert call_args[0][2]["app_id"] == "app-789"
def test_on_feedback_created_emits_correct_event(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.FEEDBACK_CREATED,
tenant_id="tenant-123",
event_id="event-456",
payload={
"message_id": "msg-001",
"app_id": "app-789",
"conversation_id": "conv-123",
"from_end_user_id": "user-456",
"from_account_id": None,
"rating": "like",
"from_source": "api",
"content": "Great!",
},
)
handler = EnterpriseMetricHandler()
with (
patch("extensions.ext_enterprise_telemetry.get_enterprise_exporter") as mock_get_exporter,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_exporter = MagicMock()
mock_exporter.include_content = True
mock_get_exporter.return_value = mock_exporter
handler._on_feedback_created(envelope)
mock_emit.assert_called_once()
call_args = mock_emit.call_args
assert call_args[1]["event_name"] == "dify.feedback.created"
assert call_args[1]["attributes"]["dify.message.id"] == "msg-001"
assert call_args[1]["attributes"]["dify.feedback.content"] == "Great!"
assert call_args[1]["tenant_id"] == "tenant-123"
assert call_args[1]["user_id"] == "user-456"
mock_exporter.increment_counter.assert_called_once()
counter_args = mock_exporter.increment_counter.call_args
assert counter_args[0][2]["app_id"] == "app-789"
assert counter_args[0][2]["rating"] == "like"
def test_on_feedback_created_without_content(mock_redis):
mock_redis.set.return_value = True
envelope = TelemetryEnvelope(
case=TelemetryCase.FEEDBACK_CREATED,
tenant_id="tenant-123",
event_id="event-456",
payload={
"message_id": "msg-001",
"app_id": "app-789",
"conversation_id": "conv-123",
"from_end_user_id": "user-456",
"from_account_id": None,
"rating": "like",
"from_source": "api",
"content": "Great!",
},
)
handler = EnterpriseMetricHandler()
with (
patch("extensions.ext_enterprise_telemetry.get_enterprise_exporter") as mock_get_exporter,
patch("enterprise.telemetry.telemetry_log.emit_metric_only_event") as mock_emit,
):
mock_exporter = MagicMock()
mock_exporter.include_content = False
mock_get_exporter.return_value = mock_exporter
handler._on_feedback_created(envelope)
mock_emit.assert_called_once()
call_args = mock_emit.call_args
assert "dify.feedback.content" not in call_args[1]["attributes"]

View File

@@ -25,7 +25,6 @@ from services.workflow_draft_variable_service import (
DraftVariableSaver,
VariableResetError,
WorkflowDraftVariableService,
_model_to_insertion_dict,
)
@@ -476,41 +475,3 @@ class TestWorkflowDraftVariableService:
assert node_var.visible is True
assert node_var.editable is True
assert node_var.node_execution_id == "exec-id"
class TestModelToInsertionDict:
"""Reproduce two production errors in _model_to_insertion_dict / _new()."""
def test_visible_and_is_default_value_always_present(self):
"""Problem 1: _new() did not set visible/is_default_value, causing
inconsistent dict keys across rows in multi-row INSERT and missing
is_default_value in the insertion dict entirely.
"""
conv_var = WorkflowDraftVariable.new_conversation_variable(
app_id="app-1",
name="counter",
value=StringSegment(value="0"),
)
# _new() should explicitly set these fields so they are not None
assert conv_var.visible is not None
assert conv_var.is_default_value is not None
d = _model_to_insertion_dict(conv_var)
# visible must appear in every row's dict
assert "visible" in d
# is_default_value must always be present
assert "is_default_value" in d
def test_description_passthrough(self):
"""_model_to_insertion_dict passes description as-is;
length validation is enforced earlier in build_conversation_variable_from_mapping.
"""
desc = "a" * 200
conv_var = WorkflowDraftVariable.new_conversation_variable(
app_id="app-1",
name="counter",
value=StringSegment(value="0"),
description=desc,
)
d = _model_to_insertion_dict(conv_var)
assert d["description"] == desc

View File

@@ -0,0 +1,69 @@
"""Unit tests for enterprise telemetry Celery task."""
import json
from unittest.mock import MagicMock, patch
import pytest
from enterprise.telemetry.contracts import TelemetryCase, TelemetryEnvelope
from tasks.enterprise_telemetry_task import process_enterprise_telemetry
@pytest.fixture
def sample_envelope_json():
envelope = TelemetryEnvelope(
case=TelemetryCase.APP_CREATED,
tenant_id="test-tenant",
event_id="test-event-123",
payload={"app_id": "app-123"},
)
return envelope.model_dump_json()
def test_process_enterprise_telemetry_success(sample_envelope_json):
with patch("tasks.enterprise_telemetry_task.EnterpriseMetricHandler") as mock_handler_class:
mock_handler = MagicMock()
mock_handler_class.return_value = mock_handler
process_enterprise_telemetry(sample_envelope_json)
mock_handler.handle.assert_called_once()
call_args = mock_handler.handle.call_args[0][0]
assert isinstance(call_args, TelemetryEnvelope)
assert call_args.case == TelemetryCase.APP_CREATED
assert call_args.tenant_id == "test-tenant"
assert call_args.event_id == "test-event-123"
def test_process_enterprise_telemetry_invalid_json(caplog):
invalid_json = "not valid json"
process_enterprise_telemetry(invalid_json)
assert "Failed to process enterprise telemetry envelope" in caplog.text
def test_process_enterprise_telemetry_handler_exception(sample_envelope_json, caplog):
with patch("tasks.enterprise_telemetry_task.EnterpriseMetricHandler") as mock_handler_class:
mock_handler = MagicMock()
mock_handler.handle.side_effect = Exception("Handler error")
mock_handler_class.return_value = mock_handler
process_enterprise_telemetry(sample_envelope_json)
assert "Failed to process enterprise telemetry envelope" in caplog.text
def test_process_enterprise_telemetry_validation_error(caplog):
invalid_envelope = json.dumps(
{
"case": "INVALID_CASE",
"tenant_id": "test-tenant",
"event_id": "test-event",
"payload": {},
}
)
process_enterprise_telemetry(invalid_envelope)
assert "Failed to process enterprise telemetry envelope" in caplog.text

View File

@@ -503,7 +503,7 @@ describe('TimePicker', () => {
const emitted = onChange.mock.calls[0][0]
expect(isDayjsObject(emitted)).toBe(true)
// 10:30 UTC converted to America/New_York (UTC-5 in Jan) = 05:30
expect(emitted.utcOffset()).toBe(dayjs.tz('2024-01-01', 'America/New_York').utcOffset())
expect(emitted.utcOffset()).toBe(dayjs().tz('America/New_York').utcOffset())
expect(emitted.hour()).toBe(5)
expect(emitted.minute()).toBe(30)
})

View File

@@ -20,7 +20,7 @@ describe('dayjs utilities', () => {
const result = toDayjs('07:15 PM', { timezone: tz })
expect(result).toBeDefined()
expect(result?.format('HH:mm')).toBe('19:15')
expect(result?.utcOffset()).toBe(getDateWithTimezone({ timezone: tz }).startOf('day').utcOffset())
expect(result?.utcOffset()).toBe(getDateWithTimezone({ timezone: tz }).utcOffset())
})
it('isDayjsObject detects dayjs instances', () => {

View File

@@ -225,97 +225,5 @@ describe('Compliance', () => {
payload: ACCOUNT_SETTING_TAB.BILLING,
})
})
// isPending branches: spinner visible, disabled class, guard blocks second call
it('should show spinner and guard against duplicate download when isPending is true', async () => {
// Arrange
let resolveDownload: (value: { url: string }) => void
vi.mocked(getDocDownloadUrl).mockImplementation(() => new Promise((resolve) => {
resolveDownload = resolve
}))
vi.mocked(useProviderContext).mockReturnValue({
...baseProviderContextValue,
plan: {
...baseProviderContextValue.plan,
type: Plan.team,
},
})
// Act
openMenuAndRender()
const downloadButtons = screen.getAllByText('common.operation.download')
fireEvent.click(downloadButtons[0])
// Assert - disabled styling (cursor-not-allowed) and spinner should appear while the mutation is pending
await waitFor(() => {
const menuItem = screen.getByText('common.compliance.soc2Type1').closest('[role="menuitem"]')
expect(menuItem).not.toBeNull()
const disabledBtn = menuItem!.querySelector('.cursor-not-allowed')
expect(disabledBtn).not.toBeNull()
}, { timeout: 10000 })
// Cleanup: resolve the pending promise
resolveDownload!({ url: 'http://example.com/doc.pdf' })
await waitFor(() => {
expect(downloadUrl).toHaveBeenCalled()
})
})
it('should not call downloadCompliance again while pending', async () => {
let resolveDownload: (value: { url: string }) => void
vi.mocked(getDocDownloadUrl).mockImplementation(() => new Promise((resolve) => {
resolveDownload = resolve
}))
vi.mocked(useProviderContext).mockReturnValue({
...baseProviderContextValue,
plan: {
...baseProviderContextValue.plan,
type: Plan.team,
},
})
openMenuAndRender()
const downloadButtons = screen.getAllByText('common.operation.download')
// First click starts download
fireEvent.click(downloadButtons[0])
// Wait for mutation to start and React to re-render (isPending=true)
await waitFor(() => {
const menuItem = screen.getByText('common.compliance.soc2Type1').closest('[role="menuitem"]')
const el = menuItem!.querySelector('.cursor-not-allowed')
expect(el).not.toBeNull()
expect(getDocDownloadUrl).toHaveBeenCalledTimes(1)
}, { timeout: 10000 })
// Second click while pending - should be guarded by isPending check
fireEvent.click(downloadButtons[0])
resolveDownload!({ url: 'http://example.com/doc.pdf' })
await waitFor(() => {
expect(downloadUrl).toHaveBeenCalledTimes(1)
}, { timeout: 10000 })
// getDocDownloadUrl should still have only been called once
expect(getDocDownloadUrl).toHaveBeenCalledTimes(1)
}, 20000)
// canShowUpgradeTooltip=false: enterprise plan has empty tooltip text → no TooltipContent
it('should show upgrade badge with empty tooltip for enterprise plan', () => {
// Arrange
vi.mocked(useProviderContext).mockReturnValue({
...baseProviderContextValue,
plan: {
...baseProviderContextValue.plan,
type: Plan.enterprise,
},
})
// Act
openMenuAndRender()
// Assert - enterprise is not in any download list, so upgrade badges should appear
// The key branch: upgradeTooltip[Plan.enterprise] = '' → canShowUpgradeTooltip=false
expect(screen.getAllByText('billing.upgradeBtn.encourageShort').length).toBeGreaterThan(0)
})
})
})

View File

@@ -247,23 +247,6 @@ describe('AccountDropdown', () => {
// Assert
expect(screen.getByText('common.userProfile.compliance')).toBeInTheDocument()
})
// Compound AND middle-false: IS_CLOUD_EDITION=true but isCurrentWorkspaceOwner=false
it('should hide Compliance in Cloud Edition when user is not workspace owner', () => {
// Arrange
mockConfig.IS_CLOUD_EDITION = true
vi.mocked(useAppContext).mockReturnValue({
...baseAppContextValue,
isCurrentWorkspaceOwner: false,
})
// Act
renderWithRouter(<AppSelector />)
fireEvent.click(screen.getByRole('button'))
// Assert
expect(screen.queryByText('common.userProfile.compliance')).not.toBeInTheDocument()
})
})
describe('Actions', () => {

View File

@@ -36,8 +36,8 @@ vi.mock('@/config', async (importOriginal) => {
return {
...actual,
IS_CE_EDITION: false,
get ZENDESK_WIDGET_KEY() { return mockZendeskKey.value || '' },
get SUPPORT_EMAIL_ADDRESS() { return mockSupportEmailKey.value || '' },
get ZENDESK_WIDGET_KEY() { return mockZendeskKey.value },
get SUPPORT_EMAIL_ADDRESS() { return mockSupportEmailKey.value },
}
})
@@ -173,18 +173,25 @@ describe('Support', () => {
expect(screen.queryByText('common.userProfile.contactUs')).not.toBeInTheDocument()
})
// Optional chain null guard: ZENDESK_WIDGET_KEY is null
it('should show Email Support when ZENDESK_WIDGET_KEY is null', () => {
it('should show email support if specified in the config', () => {
// Arrange
mockZendeskKey.value = null as unknown as string
mockZendeskKey.value = ''
mockSupportEmailKey.value = 'support@example.com'
vi.mocked(useProviderContext).mockReturnValue({
...baseProviderContextValue,
plan: {
...baseProviderContextValue.plan,
type: Plan.sandbox,
},
})
// Act
renderSupport()
fireEvent.click(screen.getByText('common.userProfile.support'))
// Assert
expect(screen.getByText('common.userProfile.emailSupport')).toBeInTheDocument()
expect(screen.queryByText('common.userProfile.contactUs')).not.toBeInTheDocument()
expect(screen.queryByText('common.userProfile.emailSupport')).toBeInTheDocument()
expect(screen.getByText('common.userProfile.emailSupport')?.closest('a')?.getAttribute('href')?.startsWith(`mailto:${mockSupportEmailKey.value}`)).toBe(true)
})
})

View File

@@ -136,32 +136,4 @@ describe('WorkplaceSelector', () => {
})
})
})
describe('Edge Cases', () => {
// find() returns undefined: no workspace with current: true
it('should not crash when no workspace has current: true', () => {
// Arrange
vi.mocked(useWorkspacesContext).mockReturnValue({
workspaces: [
{ id: '1', name: 'Workspace 1', current: false, plan: 'professional', status: 'normal', created_at: Date.now() },
],
})
// Act & Assert - should not throw
expect(() => renderComponent()).not.toThrow()
})
// name[0]?.toLocaleUpperCase() undefined: workspace with empty name
it('should not crash when workspace name is empty string', () => {
// Arrange
vi.mocked(useWorkspacesContext).mockReturnValue({
workspaces: [
{ id: '1', name: '', current: true, plan: 'sandbox', status: 'normal', created_at: Date.now() },
],
})
// Act & Assert - should not throw
expect(() => renderComponent()).not.toThrow()
})
})
})

View File

@@ -388,33 +388,37 @@ describe('DataSourceNotion Component', () => {
})
describe('Additional Action Edge Cases', () => {
it.each([
undefined,
null,
{},
{ data: undefined },
{ data: null },
{ data: '' },
{ data: 0 },
{ data: false },
{ data: 'http' },
{ data: 'internal' },
{ data: 'unknown' },
])('should cover connection data branch: %s', async (val) => {
it('should cover all possible falsy/nullish branches for connection data in handleAuthAgain and useEffect', async () => {
vi.mocked(useDataSourceIntegrates).mockReturnValue(mockQuerySuccess({ data: mockWorkspaces }))
/* eslint-disable-next-line ts/no-explicit-any */
vi.mocked(useNotionConnection).mockReturnValue({ data: val, isSuccess: true } as any)
render(<DataSourceNotion />)
// Trigger handleAuthAgain with these values
const workspaceItem = getWorkspaceItem('Workspace 1')
const actionBtn = within(workspaceItem).getByRole('button')
fireEvent.click(actionBtn)
const authAgainBtn = await screen.findByText('common.dataSource.notion.changeAuthorizedPages')
fireEvent.click(authAgainBtn)
const connectionCases = [
undefined,
null,
{},
{ data: undefined },
{ data: null },
{ data: '' },
{ data: 0 },
{ data: false },
{ data: 'http' },
{ data: 'internal' },
{ data: 'unknown' },
]
expect(useNotionConnection).toHaveBeenCalled()
for (const val of connectionCases) {
/* eslint-disable-next-line ts/no-explicit-any */
vi.mocked(useNotionConnection).mockReturnValue({ data: val, isSuccess: true } as any)
// Trigger handleAuthAgain with these values
const workspaceItem = getWorkspaceItem('Workspace 1')
const actionBtn = within(workspaceItem).getByRole('button')
fireEvent.click(actionBtn)
const authAgainBtn = await screen.findByText('common.dataSource.notion.changeAuthorizedPages')
fireEvent.click(authAgainBtn)
}
await waitFor(() => expect(useNotionConnection).toHaveBeenCalled())
})
})

View File

@@ -134,46 +134,5 @@ describe('ConfigJinaReaderModal Component', () => {
resolveSave!({ result: 'success' })
await waitFor(() => expect(mockOnSaved).toHaveBeenCalledTimes(1))
})
it('should show encryption info and external link in the modal', async () => {
render(<ConfigJinaReaderModal onCancel={mockOnCancel} onSaved={mockOnSaved} />)
// Verify PKCS1_OAEP link exists
const pkcsLink = screen.getByText('PKCS1_OAEP')
expect(pkcsLink.closest('a')).toHaveAttribute('href', 'https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html')
// Verify the Jina Reader external link
const jinaLink = screen.getByRole('link', { name: /datasetCreation\.jinaReader\.getApiKeyLinkText/i })
expect(jinaLink).toHaveAttribute('target', '_blank')
})
it('should return early when save is clicked while already saving (isSaving guard)', async () => {
const user = userEvent.setup()
// Arrange - a save that never resolves so isSaving stays true
let resolveFirst: (value: { result: 'success' }) => void
const neverResolves = new Promise<{ result: 'success' }>((resolve) => {
resolveFirst = resolve
})
vi.mocked(createDataSourceApiKeyBinding).mockReturnValue(neverResolves)
render(<ConfigJinaReaderModal onCancel={mockOnCancel} onSaved={mockOnSaved} />)
const apiKeyInput = screen.getByPlaceholderText('datasetCreation.jinaReader.apiKeyPlaceholder')
await user.type(apiKeyInput, 'valid-key')
const saveBtn = screen.getByRole('button', { name: /common\.operation\.save/i })
// First click - starts saving, isSaving becomes true
await user.click(saveBtn)
expect(createDataSourceApiKeyBinding).toHaveBeenCalledTimes(1)
// Second click using fireEvent bypasses disabled check - hits isSaving guard
const { fireEvent: fe } = await import('@testing-library/react')
fe.click(saveBtn)
// Still only called once because isSaving=true returns early
expect(createDataSourceApiKeyBinding).toHaveBeenCalledTimes(1)
// Cleanup
resolveFirst!({ result: 'success' })
await waitFor(() => expect(mockOnSaved).toHaveBeenCalled())
})
})
})

View File

@@ -195,57 +195,4 @@ describe('DataSourceWebsite Component', () => {
expect(removeDataSourceApiKeyBinding).not.toHaveBeenCalled()
})
})
describe('Firecrawl Save Flow', () => {
it('should re-fetch sources after saving Firecrawl configuration', async () => {
// Arrange
await renderAndWait(DataSourceProvider.fireCrawl)
fireEvent.click(screen.getByText('common.dataSource.configure'))
expect(screen.getByText('datasetCreation.firecrawl.configFirecrawl')).toBeInTheDocument()
vi.mocked(fetchDataSources).mockClear()
// Act - fill in required API key field and save
const apiKeyInput = screen.getByPlaceholderText('datasetCreation.firecrawl.apiKeyPlaceholder')
fireEvent.change(apiKeyInput, { target: { value: 'test-key' } })
fireEvent.click(screen.getByRole('button', { name: /common\.operation\.save/i }))
// Assert
await waitFor(() => {
expect(fetchDataSources).toHaveBeenCalled()
expect(screen.queryByText('datasetCreation.firecrawl.configFirecrawl')).not.toBeInTheDocument()
})
})
})
describe('Cancel Flow', () => {
it('should close watercrawl modal when cancel is clicked', async () => {
// Arrange
await renderAndWait(DataSourceProvider.waterCrawl)
fireEvent.click(screen.getByText('common.dataSource.configure'))
expect(screen.getByText('datasetCreation.watercrawl.configWatercrawl')).toBeInTheDocument()
// Act
fireEvent.click(screen.getByRole('button', { name: /common\.operation\.cancel/i }))
// Assert - modal closed
await waitFor(() => {
expect(screen.queryByText('datasetCreation.watercrawl.configWatercrawl')).not.toBeInTheDocument()
})
})
it('should close jina reader modal when cancel is clicked', async () => {
// Arrange
await renderAndWait(DataSourceProvider.jinaReader)
fireEvent.click(screen.getByText('common.dataSource.configure'))
expect(screen.getByText('datasetCreation.jinaReader.configJinaReader')).toBeInTheDocument()
// Act
fireEvent.click(screen.getByRole('button', { name: /common\.operation\.cancel/i }))
// Assert - modal closed
await waitFor(() => {
expect(screen.queryByText('datasetCreation.jinaReader.configJinaReader')).not.toBeInTheDocument()
})
})
})
})

View File

@@ -1,9 +1,8 @@
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import Operate from './Operate'
describe('Operate', () => {
it('should render cancel and save when editing is open', () => {
it('renders cancel and save when editing', () => {
render(
<Operate
isOpen
@@ -19,7 +18,7 @@ describe('Operate', () => {
expect(screen.getByText('common.operation.save')).toBeInTheDocument()
})
it('should show add-key prompt when closed', () => {
it('shows add key prompt when closed', () => {
render(
<Operate
isOpen={false}
@@ -34,7 +33,7 @@ describe('Operate', () => {
expect(screen.getByText('common.provider.addKey')).toBeInTheDocument()
})
it('should show invalid state and edit prompt when status is fail', () => {
it('shows invalid state indicator and edit prompt when status is fail', () => {
render(
<Operate
isOpen={false}
@@ -50,7 +49,7 @@ describe('Operate', () => {
expect(screen.getByText('common.provider.editKey')).toBeInTheDocument()
})
it('should show edit prompt without error text when status is success', () => {
it('shows edit prompt without error text when status is success', () => {
render(
<Operate
isOpen={false}
@@ -66,30 +65,11 @@ describe('Operate', () => {
expect(screen.queryByText('common.provider.invalidApiKey')).toBeNull()
})
it('should not call onAdd when disabled', async () => {
const user = userEvent.setup()
const onAdd = vi.fn()
it('shows no actions for unsupported status', () => {
render(
<Operate
isOpen={false}
status="add"
disabled
onAdd={onAdd}
onCancel={vi.fn()}
onEdit={vi.fn()}
onSave={vi.fn()}
/>,
)
await user.click(screen.getByText('common.provider.addKey'))
expect(onAdd).not.toHaveBeenCalled()
})
it('should show no actions when status is unsupported', () => {
render(
<Operate
isOpen={false}
// @ts-expect-error intentional invalid status for runtime fallback coverage
status="unknown"
status={'unknown' as never}
onAdd={vi.fn()}
onCancel={vi.fn()}
onEdit={vi.fn()}

View File

@@ -267,99 +267,6 @@ describe('MembersPage', () => {
expect(screen.getByText(/plansCommon\.unlimited/i)).toBeInTheDocument()
})
it('should show non-billing member format for team plan even when billing is enabled', () => {
vi.mocked(useProviderContext).mockReturnValue(createMockProviderContextValue({
enableBilling: true,
plan: {
type: Plan.team,
total: { teamMembers: 50 } as unknown as ReturnType<typeof useProviderContext>['plan']['total'],
} as unknown as ReturnType<typeof useProviderContext>['plan'],
}))
render(<MembersPage />)
// Plan.team is an unlimited member plan → isNotUnlimitedMemberPlan=false → non-billing layout
expect(screen.getByText(/plansCommon\.memberAfter/i)).toBeInTheDocument()
})
it('should show invite button when user is manager but not owner', () => {
vi.mocked(useAppContext).mockReturnValue({
userProfile: { email: 'admin@example.com' },
currentWorkspace: { name: 'Test Workspace', role: 'admin' } as ICurrentWorkspace,
isCurrentWorkspaceOwner: false,
isCurrentWorkspaceManager: true,
} as unknown as AppContextValue)
render(<MembersPage />)
expect(screen.getByRole('button', { name: /invite/i })).toBeInTheDocument()
expect(screen.queryByRole('button', { name: /transfer ownership/i })).not.toBeInTheDocument()
})
it('should use created_at as fallback when last_active_at is empty', () => {
const memberNoLastActive: Member = {
...mockAccounts[1],
last_active_at: '',
created_at: '1700000000',
}
vi.mocked(useMembers).mockReturnValue({
data: { accounts: [memberNoLastActive] },
refetch: mockRefetch,
} as unknown as ReturnType<typeof useMembers>)
render(<MembersPage />)
expect(mockFormatTimeFromNow).toHaveBeenCalledWith(1700000000000)
})
it('should not show plural s when only one account in billing layout', () => {
vi.mocked(useMembers).mockReturnValue({
data: { accounts: [mockAccounts[0]] },
refetch: mockRefetch,
} as unknown as ReturnType<typeof useMembers>)
vi.mocked(useProviderContext).mockReturnValue(createMockProviderContextValue({
enableBilling: true,
plan: {
type: Plan.sandbox,
total: { teamMembers: 5 } as unknown as ReturnType<typeof useProviderContext>['plan']['total'],
} as unknown as ReturnType<typeof useProviderContext>['plan'],
}))
render(<MembersPage />)
expect(screen.getByText(/plansCommon\.member/i)).toBeInTheDocument()
expect(screen.getByText('1')).toBeInTheDocument()
})
it('should not show plural s when only one account in non-billing layout', () => {
vi.mocked(useMembers).mockReturnValue({
data: { accounts: [mockAccounts[0]] },
refetch: mockRefetch,
} as unknown as ReturnType<typeof useMembers>)
render(<MembersPage />)
expect(screen.getByText(/plansCommon\.memberAfter/i)).toBeInTheDocument()
expect(screen.getByText('1')).toBeInTheDocument()
})
it('should show normal role as fallback for unknown role', () => {
vi.mocked(useAppContext).mockReturnValue({
userProfile: { email: 'admin@example.com' },
currentWorkspace: { name: 'Test Workspace', role: 'admin' } as ICurrentWorkspace,
isCurrentWorkspaceOwner: false,
isCurrentWorkspaceManager: false,
} as unknown as AppContextValue)
vi.mocked(useMembers).mockReturnValue({
data: { accounts: [{ ...mockAccounts[1], role: 'unknown_role' as Member['role'] }] },
refetch: mockRefetch,
} as unknown as ReturnType<typeof useMembers>)
render(<MembersPage />)
expect(screen.getByText('common.members.normal')).toBeInTheDocument()
})
it('should show upgrade button when member limit is full', () => {
vi.mocked(useProviderContext).mockReturnValue(createMockProviderContextValue({
enableBilling: true,

View File

@@ -1,5 +1,5 @@
import type { InvitationResponse } from '@/models/common'
import { fireEvent, render, screen, waitFor } from '@testing-library/react'
import { render, screen, waitFor } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { vi } from 'vitest'
import { ToastContext } from '@/app/components/base/toast/context'
@@ -171,66 +171,6 @@ describe('InviteModal', () => {
expect(screen.queryByText('user@example.com')).not.toBeInTheDocument()
})
it('should show unlimited label when workspace member limit is zero', async () => {
vi.mocked(useProviderContextSelector).mockImplementation(selector => selector({
licenseLimit: { workspace_members: { size: 5, limit: 0 } },
refreshLicenseLimit: mockRefreshLicenseLimit,
} as unknown as Parameters<typeof selector>[0]))
renderModal()
expect(await screen.findByText(/license\.unlimited/i)).toBeInTheDocument()
})
it('should initialize usedSize to zero when workspace_members.size is null', async () => {
vi.mocked(useProviderContextSelector).mockImplementation(selector => selector({
licenseLimit: { workspace_members: { size: null, limit: 10 } },
refreshLicenseLimit: mockRefreshLicenseLimit,
} as unknown as Parameters<typeof selector>[0]))
renderModal()
// usedSize starts at 0 (via ?? 0 fallback), no emails added → counter shows 0
expect(await screen.findByText('0')).toBeInTheDocument()
})
it('should not call onSend when invite result is not success', async () => {
const user = userEvent.setup()
vi.mocked(inviteMember).mockResolvedValue({
result: 'error',
invitation_results: [],
} as unknown as InvitationResponse)
renderModal()
await user.type(screen.getByTestId('mock-email-input'), 'user@example.com')
await user.click(screen.getByRole('button', { name: /members\.sendInvite/i }))
await waitFor(() => {
expect(inviteMember).toHaveBeenCalled()
expect(mockOnSend).not.toHaveBeenCalled()
expect(mockOnCancel).not.toHaveBeenCalled()
})
})
it('should show destructive text color when used size exceeds limit', async () => {
const user = userEvent.setup()
vi.mocked(useProviderContextSelector).mockImplementation(selector => selector({
licenseLimit: { workspace_members: { size: 10, limit: 10 } },
refreshLicenseLimit: mockRefreshLicenseLimit,
} as unknown as Parameters<typeof selector>[0]))
renderModal()
const input = screen.getByTestId('mock-email-input')
await user.type(input, 'user@example.com')
// usedSize = 10 + 1 = 11 > limit 10 → destructive color
const counter = screen.getByText('11')
expect(counter.closest('div')).toHaveClass('text-text-destructive')
})
it('should not submit if already submitting', async () => {
const user = userEvent.setup()
let resolveInvite: (value: InvitationResponse) => void
@@ -262,72 +202,4 @@ describe('InviteModal', () => {
expect(mockOnCancel).toHaveBeenCalled()
})
})
it('should show destructive color and disable send button when limit is exactly met with one email', async () => {
const user = userEvent.setup()
// size=10, limit=10 - adding 1 email makes usedSize=11 > limit=10
vi.mocked(useProviderContextSelector).mockImplementation(selector => selector({
licenseLimit: { workspace_members: { size: 10, limit: 10 } },
refreshLicenseLimit: mockRefreshLicenseLimit,
} as unknown as Parameters<typeof selector>[0]))
renderModal()
const input = screen.getByTestId('mock-email-input')
await user.type(input, 'user@example.com')
// isLimitExceeded=true → button is disabled, cannot submit
const sendBtn = screen.getByRole('button', { name: /members\.sendInvite/i })
expect(sendBtn).toBeDisabled()
expect(inviteMember).not.toHaveBeenCalled()
})
it('should hit isSubmitting guard inside handleSend when button is force-clicked during submission', async () => {
const user = userEvent.setup()
let resolveInvite: (value: InvitationResponse) => void
const invitePromise = new Promise<InvitationResponse>((resolve) => {
resolveInvite = resolve
})
vi.mocked(inviteMember).mockReturnValue(invitePromise)
renderModal()
const input = screen.getByTestId('mock-email-input')
await user.type(input, 'user@example.com')
const sendBtn = screen.getByRole('button', { name: /members\.sendInvite/i })
// First click starts submission
await user.click(sendBtn)
expect(inviteMember).toHaveBeenCalledTimes(1)
// Force-click bypasses disabled attribute → hits isSubmitting guard in handleSend
fireEvent.click(sendBtn)
expect(inviteMember).toHaveBeenCalledTimes(1)
// Cleanup
resolveInvite!({ result: 'success', invitation_results: [] })
await waitFor(() => {
expect(mockOnCancel).toHaveBeenCalled()
})
})
it('should not show error text color when isLimited is false even with many emails', async () => {
// size=0, limit=0 → isLimited=false, usedSize=emails.length
vi.mocked(useProviderContextSelector).mockImplementation(selector => selector({
licenseLimit: { workspace_members: { size: 0, limit: 0 } },
refreshLicenseLimit: mockRefreshLicenseLimit,
} as unknown as Parameters<typeof selector>[0]))
const user = userEvent.setup()
renderModal()
const input = screen.getByTestId('mock-email-input')
await user.type(input, 'user@example.com')
// isLimited=false → no destructive color
const counter = screen.getByText('1')
expect(counter.closest('div')).not.toHaveClass('text-text-destructive')
})
})

View File

@@ -2,12 +2,8 @@ import type { InvitationResult } from '@/models/common'
import { render, screen } from '@testing-library/react'
import InvitedModal from './index'
const mockConfigState = vi.hoisted(() => ({ isCeEdition: true }))
vi.mock('@/config', () => ({
get IS_CE_EDITION() {
return mockConfigState.isCeEdition
},
IS_CE_EDITION: true,
}))
describe('InvitedModal', () => {
@@ -17,11 +13,6 @@ describe('InvitedModal', () => {
{ email: 'failed@example.com', status: 'failed', message: 'Error msg' },
]
beforeEach(() => {
vi.clearAllMocks()
mockConfigState.isCeEdition = true
})
it('should show success and failed invitation sections', async () => {
render(<InvitedModal invitationResults={results} onCancel={mockOnCancel} />)
@@ -30,59 +21,4 @@ describe('InvitedModal', () => {
expect(screen.getByText('http://invite.com/1')).toBeInTheDocument()
expect(screen.getByText('failed@example.com')).toBeInTheDocument()
})
it('should hide invitation link section when there are no successes', () => {
const failedOnly: InvitationResult[] = [
{ email: 'fail@example.com', status: 'failed', message: 'Quota exceeded' },
]
render(<InvitedModal invitationResults={failedOnly} onCancel={mockOnCancel} />)
expect(screen.queryByText(/members\.invitationLink/i)).not.toBeInTheDocument()
expect(screen.getByText(/members\.failedInvitationEmails/i)).toBeInTheDocument()
})
it('should hide failed section when there are only successes', () => {
const successOnly: InvitationResult[] = [
{ email: 'ok@example.com', status: 'success', url: 'http://invite.com/2' },
]
render(<InvitedModal invitationResults={successOnly} onCancel={mockOnCancel} />)
expect(screen.getByText(/members\.invitationLink/i)).toBeInTheDocument()
expect(screen.queryByText(/members\.failedInvitationEmails/i)).not.toBeInTheDocument()
})
it('should hide both sections when results are empty', () => {
render(<InvitedModal invitationResults={[]} onCancel={mockOnCancel} />)
expect(screen.queryByText(/members\.invitationLink/i)).not.toBeInTheDocument()
expect(screen.queryByText(/members\.failedInvitationEmails/i)).not.toBeInTheDocument()
})
})
describe('InvitedModal (non-CE edition)', () => {
const mockOnCancel = vi.fn()
beforeEach(() => {
vi.clearAllMocks()
mockConfigState.isCeEdition = false
})
afterEach(() => {
mockConfigState.isCeEdition = true
})
it('should render invitationSentTip without CE edition content when IS_CE_EDITION is false', async () => {
const results: InvitationResult[] = [
{ email: 'success@example.com', status: 'success', url: 'http://invite.com/1' },
]
render(<InvitedModal invitationResults={results} onCancel={mockOnCancel} />)
// The !IS_CE_EDITION branch - should show the tip text
expect(await screen.findByText(/members\.invitationSentTip/i)).toBeInTheDocument()
// CE-only content should not be shown
expect(screen.queryByText(/members\.invitationLink/i)).not.toBeInTheDocument()
})
})

View File

@@ -49,13 +49,13 @@ describe('Operation', () => {
mockUseProviderContext.mockReturnValue({ datasetOperatorEnabled: false })
})
it('should render the current role label when member has editor role', () => {
it('renders the current role label', () => {
renderOperation()
expect(screen.getByText('common.members.editor')).toBeInTheDocument()
})
it('should show dataset operator option when feature flag is enabled', async () => {
it('shows dataset operator option when the feature flag is enabled', async () => {
const user = userEvent.setup()
mockUseProviderContext.mockReturnValue({ datasetOperatorEnabled: true })
@@ -66,7 +66,7 @@ describe('Operation', () => {
expect(await screen.findByText('common.members.datasetOperator')).toBeInTheDocument()
})
it('should show owner-allowed role options when operator role is admin', async () => {
it('shows owner-allowed role options for admin operators', async () => {
const user = userEvent.setup()
renderOperation({}, 'admin')
@@ -77,7 +77,7 @@ describe('Operation', () => {
expect(screen.getByText('common.members.normal')).toBeInTheDocument()
})
it('should not show role options when operator role is unsupported', async () => {
it('does not show role options for unsupported operators', async () => {
const user = userEvent.setup()
renderOperation({}, 'normal')
@@ -88,7 +88,7 @@ describe('Operation', () => {
expect(screen.getByText('common.members.removeFromTeam')).toBeInTheDocument()
})
it('should call updateMemberRole and onOperate when selecting another role', async () => {
it('calls updateMemberRole and onOperate when selecting another role', async () => {
const user = userEvent.setup()
const onOperate = vi.fn()
renderOperation({}, 'owner', onOperate)
@@ -102,24 +102,7 @@ describe('Operation', () => {
})
})
it('should show dataset operator option when operator is admin and feature flag is enabled', async () => {
const user = userEvent.setup()
mockUseProviderContext.mockReturnValue({ datasetOperatorEnabled: true })
renderOperation({}, 'admin')
await user.click(screen.getByText('common.members.editor'))
expect(await screen.findByText('common.members.datasetOperator')).toBeInTheDocument()
expect(screen.queryByText('common.members.admin')).not.toBeInTheDocument()
})
it('should fall back to normal role label when member role is unknown', () => {
renderOperation({ role: 'unknown_role' as Member['role'] })
expect(screen.getByText('common.members.normal')).toBeInTheDocument()
})
it('should call deleteMemberOrCancelInvitation when removing the member', async () => {
it('calls deleteMemberOrCancelInvitation when removing the member', async () => {
const user = userEvent.setup()
const onOperate = vi.fn()
renderOperation({}, 'owner', onOperate)

View File

@@ -13,6 +13,11 @@ vi.mock('@/context/app-context')
vi.mock('@/service/common')
vi.mock('@/service/use-common')
// Mock Modal directly to avoid transition/portal issues in tests
vi.mock('@/app/components/base/modal', () => ({
default: ({ children, isShow }: { children: React.ReactNode, isShow: boolean }) => isShow ? <div data-testid="mock-modal">{children}</div> : null,
}))
vi.mock('./member-selector', () => ({
default: ({ onSelect }: { onSelect: (id: string) => void }) => (
<button onClick={() => onSelect('new-owner-id')}>Select member</button>
@@ -35,13 +40,11 @@ describe('TransferOwnershipModal', () => {
data: { accounts: [] },
} as unknown as ReturnType<typeof useMembers>)
// Stub globalThis.location.reload (component calls globalThis.location.reload())
// Fix Location stubbing for reload
const mockReload = vi.fn()
vi.stubGlobal('location', {
...window.location,
reload: mockReload,
href: '',
assign: vi.fn(),
replace: vi.fn(),
} as unknown as Location)
})
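Note: jsdom does not implement real navigation, so `location.reload` must be replaced before the component calls it. A minimal sketch of the stubbing approach used here, assuming Vitest's `vi.stubGlobal`/`vi.unstubAllGlobals`:
const mockReload = vi.fn()
beforeEach(() => {
  // Replace the global location with a spy-bearing copy; jsdom treats navigation
  // as not implemented, so a spy is the only way to observe reload().
  vi.stubGlobal('location', { ...window.location, reload: mockReload } as unknown as Location)
})
afterEach(() => {
  // Restore the original global so other suites see the real jsdom location.
  vi.unstubAllGlobals()
})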
@@ -102,8 +105,8 @@ describe('TransferOwnershipModal', () => {
await waitFor(() => {
expect(ownershipTransfer).toHaveBeenCalledWith('new-owner-id', { token: 'final-token' })
expect(window.location.reload).toHaveBeenCalled()
}, { timeout: 10000 })
}, 15000)
})
})
it('should handle timer countdown and resend', async () => {
vi.useFakeTimers()
@@ -199,70 +202,6 @@ describe('TransferOwnershipModal', () => {
})
})
it('should handle sendOwnerEmail returning null data', async () => {
const user = userEvent.setup()
vi.mocked(sendOwnerEmail).mockResolvedValue({
data: null,
result: 'success',
} as unknown as Awaited<ReturnType<typeof sendOwnerEmail>>)
renderModal()
await user.click(screen.getByTestId('transfer-modal-send-code'))
// Should advance to verify step even with null data
await waitFor(() => {
expect(screen.getByText(/members\.transferModal\.verifyEmail/i)).toBeInTheDocument()
})
})
it('should show fallback error prefix when sendOwnerEmail throws null', async () => {
const user = userEvent.setup()
vi.mocked(sendOwnerEmail).mockRejectedValue(null)
renderModal()
await user.click(screen.getByTestId('transfer-modal-send-code'))
await waitFor(() => {
expect(mockNotify).toHaveBeenCalledWith(expect.objectContaining({
type: 'error',
message: expect.stringContaining('Error sending verification code:'),
}))
})
})
it('should show fallback error prefix when verifyOwnerEmail throws null', async () => {
const user = userEvent.setup()
mockEmailVerification()
vi.mocked(verifyOwnerEmail).mockRejectedValue(null)
renderModal()
await goToTransferStep(user)
await waitFor(() => {
expect(mockNotify).toHaveBeenCalledWith(expect.objectContaining({
type: 'error',
message: expect.stringContaining('Error verifying email:'),
}))
})
})
it('should show fallback error prefix when ownershipTransfer throws null', async () => {
const user = userEvent.setup()
mockEmailVerification()
vi.mocked(ownershipTransfer).mockRejectedValue(null)
renderModal()
await goToTransferStep(user)
await selectNewOwnerAndSubmit(user)
await waitFor(() => {
expect(mockNotify).toHaveBeenCalledWith(expect.objectContaining({
type: 'error',
message: expect.stringContaining('Error ownership transfer:'),
}))
})
})
it('should close when close button is clicked', async () => {
const user = userEvent.setup()
renderModal()

View File

@@ -71,80 +71,9 @@ describe('MemberSelector', () => {
})
})
it('should filter list by email when name does not match', async () => {
const user = userEvent.setup()
render(<MemberSelector onSelect={mockOnSelect} />)
await user.click(screen.getByTestId('member-selector-trigger'))
await user.type(screen.getByTestId('member-selector-search'), 'john@')
const items = screen.getAllByTestId('member-selector-item')
expect(items).toHaveLength(1)
expect(screen.getByText('John Doe')).toBeInTheDocument()
expect(screen.queryByText('Jane Smith')).not.toBeInTheDocument()
})
it('should show placeholder when value does not match any account', () => {
render(<MemberSelector value="nonexistent-id" onSelect={mockOnSelect} />)
expect(screen.getByText(/members\.transferModal\.transferPlaceholder/i)).toBeInTheDocument()
})
it('should handle missing data gracefully', () => {
vi.mocked(useMembers).mockReturnValue({ data: undefined } as unknown as ReturnType<typeof useMembers>)
render(<MemberSelector onSelect={mockOnSelect} />)
expect(screen.getByText(/members\.transferModal\.transferPlaceholder/i)).toBeInTheDocument()
})
it('should filter by email when account name is empty', async () => {
const user = userEvent.setup()
vi.mocked(useMembers).mockReturnValue({
data: { accounts: [...mockAccounts, { id: '4', name: '', email: 'noname@example.com', avatar_url: '' }] },
} as unknown as ReturnType<typeof useMembers>)
render(<MemberSelector onSelect={mockOnSelect} />)
await user.click(screen.getByTestId('member-selector-trigger'))
await user.type(screen.getByTestId('member-selector-search'), 'noname@')
const items = screen.getAllByTestId('member-selector-item')
expect(items).toHaveLength(1)
})
it('should apply hover background class when dropdown is open', async () => {
const user = userEvent.setup()
render(<MemberSelector onSelect={mockOnSelect} />)
const trigger = screen.getByTestId('member-selector-trigger')
await user.click(trigger)
expect(trigger).toHaveClass('bg-state-base-hover-alt')
})
it('should not match account when neither name nor email contains search value', async () => {
const user = userEvent.setup()
render(<MemberSelector onSelect={mockOnSelect} />)
await user.click(screen.getByTestId('member-selector-trigger'))
await user.type(screen.getByTestId('member-selector-search'), 'xyz-no-match-xyz')
expect(screen.queryByTestId('member-selector-item')).not.toBeInTheDocument()
})
it('should fall back to empty string for account with undefined email when searching', async () => {
const user = userEvent.setup()
vi.mocked(useMembers).mockReturnValue({
data: {
accounts: [
{ id: '1', name: 'John', email: undefined as unknown as string, avatar_url: '' },
],
},
} as unknown as ReturnType<typeof useMembers>)
render(<MemberSelector onSelect={mockOnSelect} />)
await user.click(screen.getByTestId('member-selector-trigger'))
await user.type(screen.getByTestId('member-selector-search'), 'john')
const items = screen.getAllByTestId('member-selector-item')
expect(items).toHaveLength(1)
})
})

View File

@@ -433,55 +433,6 @@ describe('hooks', () => {
expect(result.current.credentials).toBeUndefined()
})
it('should not call invalidateQueries when neither predefined nor custom is enabled', () => {
const invalidateQueries = vi.fn()
; (useQueryClient as Mock).mockReturnValue({ invalidateQueries })
; (useQuery as Mock).mockReturnValue({
data: undefined,
isPending: false,
})
// Both predefinedEnabled and customEnabled are false (no credentialId)
const { result } = renderHook(() => useProviderCredentialsAndLoadBalancing(
'openai',
ConfigurationMethodEnum.predefinedModel,
false,
undefined,
undefined,
))
act(() => {
result.current.mutate()
})
expect(invalidateQueries).not.toHaveBeenCalled()
})
it('should build URL without credentialId when not provided in predefined queryFn', async () => {
// Trigger the queryFn when credentialId is undefined but predefinedEnabled is true
; (useQuery as Mock).mockReturnValue({
data: { credentials: { api_key: 'k' } },
isPending: false,
})
const { result: _result } = renderHook(() => useProviderCredentialsAndLoadBalancing(
'openai',
ConfigurationMethodEnum.predefinedModel,
true,
undefined,
undefined,
))
// Find and invoke the predefined queryFn
const queryCall = (useQuery as Mock).mock.calls.find(
call => call[0].queryKey?.[1] === 'credentials',
)
if (queryCall) {
await queryCall[0].queryFn()
expect(fetchModelProviderCredentials).toHaveBeenCalled()
}
})
})
describe('useModelList', () => {
@@ -1160,26 +1111,6 @@ describe('hooks', () => {
expect(result.current.plugins![0].plugin_id).toBe('plugin1')
})
it('should deduplicate plugins that exist in both collections and regular plugins', () => {
const duplicatePlugin = { plugin_id: 'shared-plugin', type: 'plugin' }
; (useMarketplacePluginsByCollectionId as Mock).mockReturnValue({
plugins: [duplicatePlugin],
isLoading: false,
})
; (useMarketplacePlugins as Mock).mockReturnValue({
plugins: [{ ...duplicatePlugin }, { plugin_id: 'unique-plugin', type: 'plugin' }],
queryPlugins: vi.fn(),
queryPluginsWithDebounced: vi.fn(),
isLoading: false,
})
const { result } = renderHook(() => useMarketplaceAllPlugins([], ''))
expect(result.current.plugins).toHaveLength(2)
expect(result.current.plugins!.filter(p => p.plugin_id === 'shared-plugin')).toHaveLength(1)
})
it('should handle loading states', () => {
; (useMarketplacePluginsByCollectionId as Mock).mockReturnValue({
plugins: [],
@@ -1196,45 +1127,6 @@ describe('hooks', () => {
expect(result.current.isLoading).toBe(true)
})
it('should not crash when plugins is undefined', () => {
; (useMarketplacePluginsByCollectionId as Mock).mockReturnValue({
plugins: [],
isLoading: false,
})
; (useMarketplacePlugins as Mock).mockReturnValue({
plugins: undefined,
queryPlugins: vi.fn(),
queryPluginsWithDebounced: vi.fn(),
isLoading: false,
})
const { result } = renderHook(() => useMarketplaceAllPlugins([], ''))
expect(result.current.plugins).toBeDefined()
expect(result.current.isLoading).toBe(false)
})
it('should return search plugins (not allPlugins) when searchText is truthy', () => {
const searchPlugins = [{ plugin_id: 'search-result', type: 'plugin' }]
const collectionPlugins = [{ plugin_id: 'collection-only', type: 'plugin' }]
; (useMarketplacePluginsByCollectionId as Mock).mockReturnValue({
plugins: collectionPlugins,
isLoading: false,
})
; (useMarketplacePlugins as Mock).mockReturnValue({
plugins: searchPlugins,
queryPlugins: vi.fn(),
queryPluginsWithDebounced: vi.fn(),
isLoading: false,
})
const { result } = renderHook(() => useMarketplaceAllPlugins([], 'openai'))
expect(result.current.plugins).toEqual(searchPlugins)
expect(result.current.plugins?.some(p => p.plugin_id === 'collection-only')).toBe(false)
})
})
describe('useRefreshModel', () => {
@@ -1342,35 +1234,6 @@ describe('hooks', () => {
expect(emit).not.toHaveBeenCalled()
})
it('should emit event and invalidate all supported model types when __model_type is undefined', () => {
const invalidateQueries = vi.fn()
const emit = vi.fn()
; (useQueryClient as Mock).mockReturnValue({ invalidateQueries })
; (useEventEmitterContextContext as Mock).mockReturnValue({
eventEmitter: { emit },
})
const provider = createMockProvider()
const customFields = { __model_name: 'my-model', __model_type: undefined } as unknown as CustomConfigurationModelFixedFields
const { result } = renderHook(() => useRefreshModel())
act(() => {
result.current.handleRefreshModel(provider, customFields, true)
})
expect(emit).toHaveBeenCalledWith({
type: UPDATE_MODEL_PROVIDER_CUSTOM_MODEL_LIST,
payload: 'openai',
})
// When __model_type is undefined, all supported model types are invalidated
const modelListCalls = invalidateQueries.mock.calls.filter(
call => call[0]?.queryKey?.[0] === 'model-list',
)
expect(modelListCalls).toHaveLength(provider.supported_model_types.length)
})
it('should handle provider with single model type', () => {
const invalidateQueries = vi.fn()

View File

@@ -60,15 +60,7 @@ vi.mock('@/context/provider-context', () => ({
}),
}))
type MockDefaultModelData = {
model: string
provider?: { provider: string }
} | null
const mockDefaultModelState: {
data: MockDefaultModelData
isLoading: boolean
} = {
const mockDefaultModelState = {
data: null,
isLoading: false,
}
@@ -204,129 +196,4 @@ describe('ModelProviderPage', () => {
])
expect(screen.queryByText('common.modelProvider.toBeConfigured')).not.toBeInTheDocument()
})
it('should show not configured alert when all default models are absent', () => {
mockDefaultModelState.data = null
mockDefaultModelState.isLoading = false
render(<ModelProviderPage searchText="" />)
expect(screen.getByText('common.modelProvider.notConfigured')).toBeInTheDocument()
})
it('should not show not configured alert when default model is loading', () => {
mockDefaultModelState.data = null
mockDefaultModelState.isLoading = true
render(<ModelProviderPage searchText="" />)
expect(screen.queryByText('common.modelProvider.notConfigured')).not.toBeInTheDocument()
})
it('should filter providers by label text', () => {
render(<ModelProviderPage searchText="OpenAI" />)
act(() => {
vi.advanceTimersByTime(600)
})
expect(screen.getByText('openai')).toBeInTheDocument()
expect(screen.queryByText('anthropic')).not.toBeInTheDocument()
})
it('should classify system-enabled providers with matching quota as configured', () => {
mockProviders.splice(0, mockProviders.length, {
provider: 'sys-provider',
label: { en_US: 'System Provider' },
custom_configuration: { status: CustomConfigurationStatusEnum.noConfigure },
system_configuration: {
enabled: true,
current_quota_type: CurrentSystemQuotaTypeEnum.free,
quota_configurations: [mockQuotaConfig],
},
})
render(<ModelProviderPage searchText="" />)
expect(screen.getByText('sys-provider')).toBeInTheDocument()
expect(screen.queryByText('common.modelProvider.toBeConfigured')).not.toBeInTheDocument()
})
it('should classify system-enabled provider with no matching quota as not configured', () => {
mockProviders.splice(0, mockProviders.length, {
provider: 'sys-no-quota',
label: { en_US: 'System No Quota' },
custom_configuration: { status: CustomConfigurationStatusEnum.noConfigure },
system_configuration: {
enabled: true,
current_quota_type: CurrentSystemQuotaTypeEnum.free,
quota_configurations: [],
},
})
render(<ModelProviderPage searchText="" />)
expect(screen.getByText('sys-no-quota')).toBeInTheDocument()
expect(screen.getByText('common.modelProvider.toBeConfigured')).toBeInTheDocument()
})
it('should preserve order of two non-fixed providers (sort returns 0)', () => {
mockProviders.splice(0, mockProviders.length, {
provider: 'alpha-provider',
label: { en_US: 'Alpha Provider' },
custom_configuration: { status: CustomConfigurationStatusEnum.active },
system_configuration: {
enabled: false,
current_quota_type: CurrentSystemQuotaTypeEnum.free,
quota_configurations: [mockQuotaConfig],
},
}, {
provider: 'beta-provider',
label: { en_US: 'Beta Provider' },
custom_configuration: { status: CustomConfigurationStatusEnum.active },
system_configuration: {
enabled: false,
current_quota_type: CurrentSystemQuotaTypeEnum.free,
quota_configurations: [mockQuotaConfig],
},
})
render(<ModelProviderPage searchText="" />)
const renderedProviders = screen.getAllByTestId('provider-card').map(item => item.textContent)
expect(renderedProviders).toEqual(['alpha-provider', 'beta-provider'])
})
it('should not show not configured alert when shared default model mock has data', () => {
mockDefaultModelState.data = { model: 'embed-model' }
render(<ModelProviderPage searchText="" />)
expect(screen.queryByText('common.modelProvider.notConfigured')).not.toBeInTheDocument()
})
it('should not show not configured alert when rerankDefaultModel has data', () => {
mockDefaultModelState.data = { model: 'rerank-model', provider: { provider: 'cohere' } }
mockDefaultModelState.isLoading = false
render(<ModelProviderPage searchText="" />)
expect(screen.queryByText('common.modelProvider.notConfigured')).not.toBeInTheDocument()
})
it('should not show not configured alert when ttsDefaultModel has data', () => {
mockDefaultModelState.data = { model: 'tts-model', provider: { provider: 'openai' } }
mockDefaultModelState.isLoading = false
render(<ModelProviderPage searchText="" />)
expect(screen.queryByText('common.modelProvider.notConfigured')).not.toBeInTheDocument()
})
it('should not show not configured alert when speech2textDefaultModel has data', () => {
mockDefaultModelState.data = { model: 'whisper', provider: { provider: 'openai' } }
mockDefaultModelState.isLoading = false
render(<ModelProviderPage searchText="" />)
expect(screen.queryByText('common.modelProvider.notConfigured')).not.toBeInTheDocument()
})
})

View File

@@ -96,97 +96,4 @@ describe('AddCredentialInLoadBalancing', () => {
expect(onSelectCredential).toHaveBeenCalledWith(modelCredential.available_credentials[0])
})
// renderTrigger with open=true: bg-state-base-hover style applied
it('should apply hover background when trigger is rendered with open=true', async () => {
vi.doMock('@/app/components/header/account-setting/model-provider-page/model-auth', () => ({
Authorized: ({
renderTrigger,
}: {
renderTrigger: (open?: boolean) => React.ReactNode
}) => (
<div data-testid="open-trigger">{renderTrigger(true)}</div>
),
}))
// Must invalidate module cache so the component picks up the new mock
vi.resetModules()
try {
const { default: AddCredentialLB } = await import('./add-credential-in-load-balancing')
const { container } = render(
<AddCredentialLB
provider={provider}
model={model}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
modelCredential={modelCredential}
onSelectCredential={vi.fn()}
/>,
)
// The trigger div rendered by renderTrigger(true) should have bg-state-base-hover
// (the static class applied when open=true via cn())
const triggerDiv = container.querySelector('[data-testid="open-trigger"] > div')
expect(triggerDiv).toBeInTheDocument()
expect(triggerDiv!.className).toContain('bg-state-base-hover')
}
finally {
vi.doUnmock('@/app/components/header/account-setting/model-provider-page/model-auth')
vi.resetModules()
}
})
// customizableModel configuration method: component renders the add credential label
it('should render correctly with customizableModel configuration method', () => {
render(
<AddCredentialInLoadBalancing
provider={provider}
model={model}
configurationMethod={ConfigurationMethodEnum.customizableModel}
modelCredential={modelCredential}
onSelectCredential={vi.fn()}
/>,
)
expect(screen.getByText(/modelProvider.auth.addCredential/i)).toBeInTheDocument()
})
it('should handle undefined available_credentials gracefully using nullish coalescing', () => {
const credentialWithNoAvailable = {
available_credentials: undefined,
credentials: {},
load_balancing: { enabled: false, configs: [] },
} as unknown as typeof modelCredential
render(
<AddCredentialInLoadBalancing
provider={provider}
model={model}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
modelCredential={credentialWithNoAvailable}
onSelectCredential={vi.fn()}
/>,
)
// Component should render without error - the ?? [] fallback is used
expect(screen.getByText(/modelProvider.auth.addCredential/i)).toBeInTheDocument()
})
it('should not throw when update action fires without onUpdate prop', () => {
// Arrange - no onUpdate prop
render(
<AddCredentialInLoadBalancing
provider={provider}
model={model}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
modelCredential={modelCredential}
onSelectCredential={vi.fn()}
/>,
)
// Act - trigger the update without onUpdate being set (should not throw)
expect(() => {
fireEvent.click(screen.getByRole('button', { name: 'Run update' }))
}).not.toThrow()
})
})
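Note: the open-trigger test above swaps in a mock for a single test via `vi.doMock`. A minimal sketch of that per-test mocking pattern, assuming Vitest semantics (module names are placeholders, not the repository's paths):
it('uses a different mock for one test only', async () => {
  // doMock is not hoisted, so it only affects modules imported after this point.
  vi.doMock('@/some/dependency', () => ({ default: () => 'stubbed' }))
  // Drop the module cache so the dynamic import below re-evaluates the
  // component against the new mock instead of the cached one.
  vi.resetModules()
  try {
    const { default: Component } = await import('./component-under-test')
    // ...render and assert against Component...
  }
  finally {
    vi.doUnmock('@/some/dependency')
    vi.resetModules()
  }
})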

View File

@@ -85,69 +85,4 @@ describe('CredentialItem', () => {
expect(onDelete).not.toHaveBeenCalled()
})
// All disable flags true → no action buttons rendered
it('should hide all action buttons when disableRename, disableEdit, and disableDelete are all true', () => {
// Act
render(
<CredentialItem
credential={credential}
onEdit={vi.fn()}
onDelete={vi.fn()}
disableRename
disableEdit
disableDelete
/>,
)
// Assert
expect(screen.queryByTestId('edit-icon')).not.toBeInTheDocument()
expect(screen.queryByTestId('delete-icon')).not.toBeInTheDocument()
})
// disabled=true guards: clicks on the item row and on delete should both be no-ops
it('should not call onItemClick when disabled=true and item is clicked', () => {
const onItemClick = vi.fn()
render(<CredentialItem credential={credential} disabled onItemClick={onItemClick} />)
fireEvent.click(screen.getByText('Test API Key'))
expect(onItemClick).not.toHaveBeenCalled()
})
it('should not call onDelete when disabled=true and delete button is clicked', () => {
const onDelete = vi.fn()
render(<CredentialItem credential={credential} disabled onDelete={onDelete} />)
fireEvent.click(screen.getByTestId('delete-icon').closest('button') as HTMLButtonElement)
expect(onDelete).not.toHaveBeenCalled()
})
// showSelectedIcon=true: check icon area is always rendered; check icon only appears when IDs match
it('should render check icon area when showSelectedIcon=true and selectedCredentialId matches', () => {
render(
<CredentialItem
credential={credential}
showSelectedIcon
selectedCredentialId="cred-1"
/>,
)
expect(screen.getByTestId('check-icon')).toBeInTheDocument()
})
it('should not render check icon when showSelectedIcon=true but selectedCredentialId does not match', () => {
render(
<CredentialItem
credential={credential}
showSelectedIcon
selectedCredentialId="other-cred"
/>,
)
expect(screen.queryByTestId('check-icon')).not.toBeInTheDocument()
})
})

View File

@@ -24,6 +24,36 @@ vi.mock('../hooks', () => ({
}),
}))
let mockPortalOpen = false
vi.mock('@/app/components/base/portal-to-follow-elem', () => ({
PortalToFollowElem: ({ children, open }: { children: React.ReactNode, open: boolean }) => {
mockPortalOpen = open
return <div data-testid="portal" data-open={open}>{children}</div>
},
PortalToFollowElemTrigger: ({ children, onClick }: { children: React.ReactNode, onClick: () => void }) => (
<div data-testid="portal-trigger" onClick={onClick}>{children}</div>
),
PortalToFollowElemContent: ({ children }: { children: React.ReactNode }) => {
if (!mockPortalOpen)
return null
return <div data-testid="portal-content">{children}</div>
},
}))
vi.mock('@/app/components/base/confirm', () => ({
default: ({ isShow, onCancel, onConfirm }: { isShow: boolean, onCancel: () => void, onConfirm: () => void }) => {
if (!isShow)
return null
return (
<div data-testid="confirm-dialog">
<button onClick={onCancel}>Cancel</button>
<button onClick={onConfirm}>Confirm</button>
</div>
)
},
}))
vi.mock('./authorized-item', () => ({
default: ({ credentials, model, onEdit, onDelete, onItemClick }: {
credentials: Credential[]
@@ -75,127 +105,382 @@ describe('Authorized', () => {
beforeEach(() => {
vi.clearAllMocks()
mockPortalOpen = false
mockDeleteCredentialId = null
mockDoingAction = false
})
it('should render trigger and open popup when trigger is clicked', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
describe('Rendering', () => {
it('should render trigger button', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
expect(screen.getByTestId('authorized-item')).toBeInTheDocument()
expect(screen.getByRole('button', { name: /addApiKey/i })).toBeInTheDocument()
})
expect(screen.getByText(/Trigger/)).toBeInTheDocument()
})
it('should call handleOpenModal when triggerOnlyOpenModal is true', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
triggerOnlyOpenModal
/>,
)
it('should render portal content when open', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
expect(mockHandleOpenModal).toHaveBeenCalled()
expect(screen.queryByTestId('authorized-item')).not.toBeInTheDocument()
})
expect(screen.getByTestId('portal-content')).toBeInTheDocument()
expect(screen.getByTestId('authorized-item')).toBeInTheDocument()
})
it('should call onItemClick when credential is selected', () => {
const onItemClick = vi.fn()
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
onItemClick={onItemClick}
/>,
)
it('should not render portal content when closed', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
fireEvent.click(screen.getAllByRole('button', { name: 'Select' })[0])
expect(screen.queryByTestId('portal-content')).not.toBeInTheDocument()
})
expect(onItemClick).toHaveBeenCalledWith(mockCredentials[0], mockItems[0].model)
})
it('should render Add API Key button when not model credential', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
it('should call handleActiveCredential when onItemClick is not provided', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
expect(screen.getByText(/addApiKey/)).toBeInTheDocument()
})
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
fireEvent.click(screen.getAllByRole('button', { name: 'Select' })[0])
it('should render Add Model Credential button when is model credential', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
authParams={{ isModelCredential: true }}
isOpen
/>,
)
expect(mockHandleActiveCredential).toHaveBeenCalledWith(mockCredentials[0], mockItems[0].model)
})
expect(screen.getByText(/addModelCredential/)).toBeInTheDocument()
})
it('should call handleOpenModal with fixed model fields when adding model credential', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.customizableModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
authParams={{ isModelCredential: true }}
currentCustomConfigurationModelFixedFields={{
__model_name: 'gpt-4',
__model_type: ModelTypeEnum.textGeneration,
}}
/>,
)
it('should not render add action when hideAddAction is true', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
hideAddAction
isOpen
/>,
)
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
fireEvent.click(screen.getByText(/addModelCredential/))
expect(screen.queryByText(/addApiKey/)).not.toBeInTheDocument()
})
expect(mockHandleOpenModal).toHaveBeenCalledWith(undefined, {
model: 'gpt-4',
model_type: ModelTypeEnum.textGeneration,
it('should render popup title when provided', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
popupTitle="Select Credential"
isOpen
/>,
)
expect(screen.getByText('Select Credential')).toBeInTheDocument()
})
})
it('should not render add action when hideAddAction is true', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
hideAddAction
/>,
)
describe('User Interactions', () => {
it('should call onOpenChange when trigger is clicked in controlled mode', () => {
const onOpenChange = vi.fn()
fireEvent.click(screen.getByRole('button', { name: /trigger\s*closed/i }))
expect(screen.queryByRole('button', { name: /addApiKey/i })).not.toBeInTheDocument()
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen={false}
onOpenChange={onOpenChange}
/>,
)
fireEvent.click(screen.getByTestId('portal-trigger'))
expect(onOpenChange).toHaveBeenCalledWith(true)
})
it('should toggle portal on trigger click', () => {
const { rerender } = render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByTestId('portal-trigger'))
rerender(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
expect(screen.getByTestId('portal-content')).toBeInTheDocument()
})
it('should open modal when triggerOnlyOpenModal is true', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
triggerOnlyOpenModal
/>,
)
fireEvent.click(screen.getByTestId('portal-trigger'))
expect(mockHandleOpenModal).toHaveBeenCalled()
})
it('should call handleOpenModal when Add API Key is clicked', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
fireEvent.click(screen.getByText(/addApiKey/))
expect(mockHandleOpenModal).toHaveBeenCalled()
})
it('should call handleOpenModal with credential and model when edit is clicked', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
fireEvent.click(screen.getAllByText('Edit')[0])
expect(mockHandleOpenModal).toHaveBeenCalledWith(
mockCredentials[0],
mockItems[0].model,
)
})
it('should pass current model fields when adding model credential', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.customizableModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
authParams={{ isModelCredential: true }}
currentCustomConfigurationModelFixedFields={{
__model_name: 'gpt-4',
__model_type: ModelTypeEnum.textGeneration,
}}
isOpen
/>,
)
fireEvent.click(screen.getByText(/addModelCredential/))
expect(mockHandleOpenModal).toHaveBeenCalledWith(undefined, {
model: 'gpt-4',
model_type: ModelTypeEnum.textGeneration,
})
})
it('should call onItemClick when credential is selected', () => {
const onItemClick = vi.fn()
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
onItemClick={onItemClick}
isOpen
/>,
)
fireEvent.click(screen.getAllByText('Select')[0])
expect(onItemClick).toHaveBeenCalledWith(mockCredentials[0], mockItems[0].model)
})
it('should call handleActiveCredential when onItemClick is not provided', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
fireEvent.click(screen.getAllByText('Select')[0])
expect(mockHandleActiveCredential).toHaveBeenCalledWith(mockCredentials[0], mockItems[0].model)
})
it('should not call onItemClick when disableItemClick is true', () => {
const onItemClick = vi.fn()
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
onItemClick={onItemClick}
disableItemClick
isOpen
/>,
)
fireEvent.click(screen.getAllByText('Select')[0])
expect(onItemClick).not.toHaveBeenCalled()
})
})
it('should show confirm dialog and call confirm handler when delete is confirmed', () => {
mockDeleteCredentialId = 'cred-1'
describe('Delete Confirmation', () => {
it('should show confirm dialog when deleteCredentialId is set', () => {
mockDeleteCredentialId = 'cred-1'
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByRole('button', { name: /common.operation.confirm/i }))
expect(mockHandleConfirmDelete).toHaveBeenCalled()
expect(screen.getByTestId('confirm-dialog')).toBeInTheDocument()
})
it('should not show confirm dialog when deleteCredentialId is null', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
expect(screen.queryByTestId('confirm-dialog')).not.toBeInTheDocument()
})
it('should call closeConfirmDelete when cancel is clicked', () => {
mockDeleteCredentialId = 'cred-1'
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByText('Cancel'))
expect(mockCloseConfirmDelete).toHaveBeenCalled()
})
it('should call handleConfirmDelete when confirm is clicked', () => {
mockDeleteCredentialId = 'cred-1'
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
/>,
)
fireEvent.click(screen.getByText('Confirm'))
expect(mockHandleConfirmDelete).toHaveBeenCalled()
})
})
describe('Edge Cases', () => {
it('should handle empty items array', () => {
render(
<Authorized
provider={mockProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={[]}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
expect(screen.queryByTestId('authorized-item')).not.toBeInTheDocument()
})
it('should not render add action when provider does not allow custom token', () => {
const restrictedProvider = { ...mockProvider, allow_custom_token: false }
render(
<Authorized
provider={restrictedProvider}
configurationMethod={ConfigurationMethodEnum.predefinedModel}
items={mockItems}
renderTrigger={mockRenderTrigger}
isOpen
/>,
)
expect(screen.queryByText(/addApiKey/)).not.toBeInTheDocument()
})
})
})

View File

@@ -1,6 +1,5 @@
import type { ModelProvider } from '@/app/components/header/account-setting/model-provider-page/declarations'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import ConfigProvider from './config-provider'
const mockUseCredentialStatus = vi.fn()
@@ -55,8 +54,7 @@ describe('ConfigProvider', () => {
expect(screen.getByText(/operation.config/i)).toBeInTheDocument()
})
it('should show setup label and unavailable tooltip when custom credentials are not allowed and no credential exists', async () => {
const user = userEvent.setup()
it('should still render setup label when custom credentials are not allowed', () => {
mockUseCredentialStatus.mockReturnValue({
hasCredential: false,
authorized: false,
@@ -67,50 +65,6 @@ describe('ConfigProvider', () => {
render(<ConfigProvider provider={{ ...baseProvider, allow_custom_token: false }} />)
expect(screen.getByText(/operation.setup/i)).toBeInTheDocument()
await user.hover(screen.getByText(/operation.setup/i))
expect(await screen.findByText(/auth\.credentialUnavailable/i)).toBeInTheDocument()
})
it('should show config label when hasCredential but not authorized', () => {
mockUseCredentialStatus.mockReturnValue({
hasCredential: true,
authorized: false,
current_credential_id: 'cred-1',
current_credential_name: 'Key 1',
available_credentials: [],
})
render(<ConfigProvider provider={baseProvider} />)
expect(screen.getByText(/operation.config/i)).toBeInTheDocument()
})
it('should show config label when custom credentials are not allowed but credential exists', () => {
mockUseCredentialStatus.mockReturnValue({
hasCredential: true,
authorized: true,
current_credential_id: 'cred-1',
current_credential_name: 'Key 1',
available_credentials: [],
})
render(<ConfigProvider provider={{ ...baseProvider, allow_custom_token: false }} />)
expect(screen.getByText(/operation.config/i)).toBeInTheDocument()
})
it('should handle nullish credential values with fallbacks', () => {
mockUseCredentialStatus.mockReturnValue({
hasCredential: false,
authorized: false,
current_credential_id: null,
current_credential_name: null,
available_credentials: null,
})
render(<ConfigProvider provider={baseProvider} />)
expect(screen.getByText(/operation.setup/i)).toBeInTheDocument()
})
})

View File

@@ -1,12 +1,12 @@
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { fireEvent, render, screen } from '@testing-library/react'
import CredentialSelector from './credential-selector'
// Mock components
vi.mock('./authorized/credential-item', () => ({
default: ({ credential, onItemClick }: { credential: { credential_name: string }, onItemClick?: (c: unknown) => void }) => (
<button type="button" onClick={() => onItemClick?.(credential)}>
default: ({ credential, onItemClick }: { credential: { credential_name: string }, onItemClick: (c: unknown) => void }) => (
<div data-testid="credential-item" onClick={() => onItemClick(credential)}>
{credential.credential_name}
</button>
</div>
),
}))
@@ -19,6 +19,22 @@ vi.mock('@remixicon/react', () => ({
RiArrowDownSLine: () => <div data-testid="arrow-icon" />,
}))
// Mock portal components
vi.mock('@/app/components/base/portal-to-follow-elem', () => ({
PortalToFollowElem: ({ children, open }: { children: React.ReactNode, open: boolean }) => (
<div data-testid="portal" data-open={open}>{children}</div>
),
PortalToFollowElemTrigger: ({ children, onClick }: { children: React.ReactNode, onClick: () => void }) => (
<div data-testid="portal-trigger" onClick={onClick}>{children}</div>
),
PortalToFollowElemContent: ({ children }: { children: React.ReactNode, open?: boolean }) => {
// The real component only shows content when open; this mock renders the
// children unconditionally and exposes the open state via the data-open
// attribute on the portal wrapper, so tests assert on data-open (and use
// getAllByText where the same text appears in both trigger and content).
return <div data-testid="portal-content">{children}</div>
},
}))
describe('CredentialSelector', () => {
const mockCredentials = [
{ credential_id: 'cred-1', credential_name: 'Key 1' },
@@ -30,7 +46,7 @@ describe('CredentialSelector', () => {
vi.clearAllMocks()
})
it('should render selected credential name when selectedCredential is provided', () => {
it('should render selected credential name', () => {
render(
<CredentialSelector
selectedCredential={mockCredentials[0]}
@@ -39,11 +55,12 @@ describe('CredentialSelector', () => {
/>,
)
expect(screen.getByText('Key 1')).toBeInTheDocument()
// Use getAllByText and take the first one (the one in the trigger)
expect(screen.getAllByText('Key 1')[0]).toBeInTheDocument()
expect(screen.getByTestId('indicator')).toBeInTheDocument()
})
it('should render placeholder when selectedCredential is missing', () => {
it('should render placeholder when no credential selected', () => {
render(
<CredentialSelector
credentials={mockCredentials}
@@ -54,8 +71,7 @@ describe('CredentialSelector', () => {
expect(screen.getByText(/modelProvider.auth.selectModelCredential/)).toBeInTheDocument()
})
it('should call onSelect when a credential item is clicked', async () => {
const user = userEvent.setup()
it('should open portal on click', () => {
render(
<CredentialSelector
credentials={mockCredentials}
@@ -63,14 +79,26 @@ describe('CredentialSelector', () => {
/>,
)
await user.click(screen.getByText(/modelProvider.auth.selectModelCredential/))
await user.click(screen.getByRole('button', { name: 'Key 2' }))
fireEvent.click(screen.getByTestId('portal-trigger'))
expect(screen.getByTestId('portal')).toHaveAttribute('data-open', 'true')
expect(screen.getAllByTestId('credential-item')).toHaveLength(2)
})
it('should call onSelect when a credential is clicked', () => {
render(
<CredentialSelector
credentials={mockCredentials}
onSelect={mockOnSelect}
/>,
)
fireEvent.click(screen.getByTestId('portal-trigger'))
fireEvent.click(screen.getByText('Key 2'))
expect(mockOnSelect).toHaveBeenCalledWith(mockCredentials[1])
})
it('should call onSelect with add-new payload when add action is clicked', async () => {
const user = userEvent.setup()
it('should call onSelect with add new credential data when clicking add button', () => {
render(
<CredentialSelector
credentials={mockCredentials}
@@ -78,8 +106,8 @@ describe('CredentialSelector', () => {
/>,
)
await user.click(screen.getByText(/modelProvider.auth.selectModelCredential/))
await user.click(screen.getByText(/modelProvider.auth.addNewModelCredential/))
fireEvent.click(screen.getByTestId('portal-trigger'))
fireEvent.click(screen.getByText(/modelProvider.auth.addNewModelCredential/))
expect(mockOnSelect).toHaveBeenCalledWith(expect.objectContaining({
credential_id: '__add_new_credential',
@@ -87,8 +115,7 @@ describe('CredentialSelector', () => {
}))
})
it('should not open options when disabled is true', async () => {
const user = userEvent.setup()
it('should not open portal when disabled', () => {
render(
<CredentialSelector
disabled
@@ -97,7 +124,7 @@ describe('CredentialSelector', () => {
/>,
)
await user.click(screen.getByText(/modelProvider.auth.selectModelCredential/))
expect(screen.queryByRole('button', { name: 'Key 1' })).not.toBeInTheDocument()
fireEvent.click(screen.getByTestId('portal-trigger'))
expect(screen.getByTestId('portal')).toHaveAttribute('data-open', 'false')
})
})

View File

@@ -1,11 +1,9 @@
import type { ReactNode } from 'react'
import type {
Credential,
CustomModel,
ModelProvider,
} from '../../declarations'
import { act, renderHook } from '@testing-library/react'
import { ToastContext } from '@/app/components/base/toast/context'
import { ConfigurationMethodEnum, ModelModalModeEnum, ModelTypeEnum } from '../../declarations'
import { useAuth } from './use-auth'
@@ -22,13 +20,9 @@ const mockAddModelCredential = vi.fn()
const mockEditProviderCredential = vi.fn()
const mockEditModelCredential = vi.fn()
vi.mock('@/app/components/base/toast/context', async (importOriginal) => {
const actual = await importOriginal<typeof import('@/app/components/base/toast/context')>()
return {
...actual,
useToastContext: () => ({ notify: mockNotify }),
}
})
vi.mock('@/app/components/base/toast/context', () => ({
useToastContext: () => ({ notify: mockNotify }),
}))
vi.mock('@/app/components/header/account-setting/model-provider-page/hooks', () => ({
useModelModalHandler: () => mockOpenModelModal,
@@ -72,12 +66,6 @@ describe('useAuth', () => {
model_type: ModelTypeEnum.textGeneration,
}
const createWrapper = ({ children }: { children: ReactNode }) => (
<ToastContext.Provider value={{ notify: mockNotify, close: vi.fn() }}>
{children}
</ToastContext.Provider>
)
beforeEach(() => {
vi.clearAllMocks()
mockDeleteModelService.mockResolvedValue({ result: 'success' })
@@ -92,7 +80,7 @@ describe('useAuth', () => {
})
it('should open and close delete confirmation state', () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel))
act(() => {
result.current.openConfirmDelete(credential, model)
@@ -112,7 +100,7 @@ describe('useAuth', () => {
})
it('should activate credential, notify success, and refresh models', async () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.customizableModel), { wrapper: createWrapper })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.customizableModel))
await act(async () => {
await result.current.handleActiveCredential(credential, model)
@@ -132,7 +120,7 @@ describe('useAuth', () => {
})
it('should close delete dialog without calling services when nothing is pending', async () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel))
await act(async () => {
await result.current.handleConfirmDelete()
@@ -149,7 +137,7 @@ describe('useAuth', () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel, undefined, {
isModelCredential: false,
onRemove,
}), { wrapper: createWrapper })
}))
act(() => {
result.current.openConfirmDelete(credential, model)
@@ -173,7 +161,7 @@ describe('useAuth', () => {
const onRemove = vi.fn()
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.customizableModel, undefined, {
onRemove,
}), { wrapper: createWrapper })
}))
act(() => {
result.current.openConfirmDelete(undefined, model)
@@ -191,7 +179,7 @@ describe('useAuth', () => {
})
it('should add or edit credentials and refresh on successful save', async () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel))
await act(async () => {
await result.current.handleSaveCredential({ api_key: 'new-key' })
@@ -212,7 +200,7 @@ describe('useAuth', () => {
const deferred = createDeferred<{ result: string }>()
mockAddProviderCredential.mockReturnValueOnce(deferred.promise)
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel))
let first!: Promise<void>
let second!: Promise<void>
@@ -238,7 +226,7 @@ describe('useAuth', () => {
isModelCredential: true,
onUpdate,
mode: ModelModalModeEnum.configModelCredential,
}), { wrapper: createWrapper })
}))
act(() => {
result.current.handleOpenModal(credential, model)
@@ -256,90 +244,4 @@ describe('useAuth', () => {
}),
)
})
it('should not notify or refresh when handleSaveCredential returns non-success result', async () => {
mockAddProviderCredential.mockResolvedValue({ result: 'error' })
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
await act(async () => {
await result.current.handleSaveCredential({ api_key: 'some-key' })
})
expect(mockAddProviderCredential).toHaveBeenCalledWith({ api_key: 'some-key' })
expect(mockNotify).not.toHaveBeenCalled()
expect(mockHandleRefreshModel).not.toHaveBeenCalled()
})
it('should pass undefined model and model_type when handleActiveCredential is called without a model parameter', async () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
await act(async () => {
await result.current.handleActiveCredential(credential)
})
expect(mockActiveProviderCredential).toHaveBeenCalledWith({
credential_id: 'cred-1',
model: undefined,
model_type: undefined,
})
})
// openConfirmDelete with credential only (no model): deleteCredentialId set, deleteModel stays null
it('should only set deleteCredentialId when openConfirmDelete is called without a model', () => {
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
act(() => {
result.current.openConfirmDelete(credential, undefined)
})
expect(result.current.deleteCredentialId).toBe('cred-1')
expect(result.current.deleteModel).toBeNull()
expect(result.current.pendingOperationCredentialId.current).toBe('cred-1')
expect(result.current.pendingOperationModel.current).toBeNull()
})
// doingActionRef guard: second handleConfirmDelete call while first is in progress is a no-op
it('should ignore a second handleConfirmDelete call while the first is still in progress', async () => {
const deferred = createDeferred<{ result: string }>()
mockDeleteProviderCredential.mockReturnValueOnce(deferred.promise)
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
act(() => {
result.current.openConfirmDelete(credential, model)
})
let first!: Promise<void>
let second!: Promise<void>
await act(async () => {
first = result.current.handleConfirmDelete()
second = result.current.handleConfirmDelete()
deferred.resolve({ result: 'success' })
await Promise.all([first, second])
})
expect(mockDeleteProviderCredential).toHaveBeenCalledTimes(1)
})
// doingActionRef guard: second handleActiveCredential call while first is in progress is a no-op
it('should ignore a second handleActiveCredential call while the first is still in progress', async () => {
const deferred = createDeferred<{ result: string }>()
mockActiveProviderCredential.mockReturnValueOnce(deferred.promise)
const { result } = renderHook(() => useAuth(provider, ConfigurationMethodEnum.predefinedModel), { wrapper: createWrapper })
let first!: Promise<void>
let second!: Promise<void>
await act(async () => {
first = result.current.handleActiveCredential(credential)
second = result.current.handleActiveCredential(credential)
deferred.resolve({ result: 'success' })
await Promise.all([first, second])
})
expect(mockActiveProviderCredential).toHaveBeenCalledTimes(1)
})
})
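Note: the in-flight guard tests above use a `createDeferred` helper that is not shown in this diff. A minimal sketch of what such a helper typically looks like (an assumption, not the repository's implementation):
function createDeferred<T>() {
  let resolve!: (value: T) => void
  let reject!: (reason?: unknown) => void
  const promise = new Promise<T>((res, rej) => {
    resolve = res
    reject = rej
  })
  // Returning the promise plus its settle functions lets a test start an async call,
  // fire a second call while the first is still pending, then resolve both at once.
  return { promise, resolve, reject }
}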

View File

@@ -13,13 +13,11 @@ vi.mock('./hooks', () => ({
// Mock Authorized
vi.mock('./authorized', () => ({
default: ({ renderTrigger, items, popupTitle }: { renderTrigger: (o?: boolean) => React.ReactNode, items: Array<{ selectedCredential?: unknown }>, popupTitle: string }) => (
default: ({ renderTrigger, items, popupTitle }: { renderTrigger: (o?: boolean) => React.ReactNode, items: { length: number }, popupTitle: string }) => (
<div data-testid="authorized-mock">
<div data-testid="trigger-closed">{renderTrigger()}</div>
<div data-testid="trigger-open">{renderTrigger(true)}</div>
<div data-testid="trigger-container">{renderTrigger()}</div>
<div data-testid="popup-title">{popupTitle}</div>
<div data-testid="items-count">{items.length}</div>
<div data-testid="items-selected">{items.map((it, i) => <span key={i} data-testid={`selected-${i}`}>{it.selectedCredential ? 'has-cred' : 'no-cred'}</span>)}</div>
</div>
),
}))
@@ -57,41 +55,8 @@ describe('ManageCustomModelCredentials', () => {
render(<ManageCustomModelCredentials provider={mockProvider} />)
expect(screen.getByTestId('authorized-mock')).toBeInTheDocument()
expect(screen.getAllByText(/modelProvider.auth.manageCredentials/).length).toBeGreaterThan(0)
expect(screen.getByText(/modelProvider.auth.manageCredentials/)).toBeInTheDocument()
expect(screen.getByTestId('items-count')).toHaveTextContent('2')
expect(screen.getByTestId('popup-title')).toHaveTextContent('modelProvider.auth.customModelCredentials')
})
it('should render trigger in both open and closed states', () => {
const mockModels = [
{
model: 'gpt-4',
available_model_credentials: [{ credential_id: 'c1', credential_name: 'Key 1' }],
current_credential_id: 'c1',
current_credential_name: 'Key 1',
},
]
mockUseCustomModels.mockReturnValue(mockModels)
render(<ManageCustomModelCredentials provider={mockProvider} />)
expect(screen.getByTestId('trigger-closed')).toBeInTheDocument()
expect(screen.getByTestId('trigger-open')).toBeInTheDocument()
})
it('should pass undefined selectedCredential when model has no current_credential_id', () => {
const mockModels = [
{
model: 'gpt-3.5',
available_model_credentials: [{ credential_id: 'c1', credential_name: 'Key 1' }],
current_credential_id: '',
current_credential_name: '',
},
]
mockUseCustomModels.mockReturnValue(mockModels)
render(<ManageCustomModelCredentials provider={mockProvider} />)
expect(screen.getByTestId('selected-0')).toHaveTextContent('no-cred')
})
})

View File

@@ -18,6 +18,15 @@ vi.mock('@/app/components/header/indicator', () => ({
default: ({ color }: { color: string }) => <div data-testid={`indicator-${color}`} />,
}))
vi.mock('@/app/components/base/tooltip', () => ({
default: ({ children, popupContent }: { children: React.ReactNode, popupContent: string }) => (
<div data-testid="tooltip-mock">
{children}
<div>{popupContent}</div>
</div>
),
}))
vi.mock('@remixicon/react', () => ({
RiArrowDownSLine: () => <div data-testid="arrow-icon" />,
}))
@@ -116,131 +125,6 @@ describe('SwitchCredentialInLoadBalancing', () => {
/>,
)
fireEvent.mouseEnter(screen.getByText(/auth.credentialUnavailableInButton/))
expect(screen.getByText('plugin.auth.credentialUnavailable')).toBeInTheDocument()
})
// Empty credentials with allowed custom: no tooltip but still shows unavailable text
it('should show unavailable status without tooltip when custom credentials are allowed', () => {
// Act
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
model={mockModel}
credentials={[]}
customModelCredential={undefined}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
// Assert
expect(screen.getByText(/auth.credentialUnavailableInButton/)).toBeInTheDocument()
expect(screen.queryByText('plugin.auth.credentialUnavailable')).not.toBeInTheDocument()
})
// not_allowed_to_use=true: indicator is red and destructive button text is shown
it('should show red indicator and unavailable button text when credential has not_allowed_to_use=true', () => {
const unavailableCredential = { credential_id: 'cred-1', credential_name: 'Key 1', not_allowed_to_use: true }
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
model={mockModel}
credentials={[unavailableCredential]}
customModelCredential={unavailableCredential}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
expect(screen.getByTestId('indicator-red')).toBeInTheDocument()
expect(screen.getByText(/auth.credentialUnavailableInButton/)).toBeInTheDocument()
})
// from_enterprise=true on the selected credential: Enterprise badge appears in the trigger
it('should show Enterprise badge when selected credential has from_enterprise=true', () => {
const enterpriseCredential = { credential_id: 'cred-1', credential_name: 'Enterprise Key', from_enterprise: true }
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
model={mockModel}
credentials={[enterpriseCredential]}
customModelCredential={enterpriseCredential}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
expect(screen.getByText('Enterprise')).toBeInTheDocument()
})
// non-empty credentials with allow_custom_token=false: no tooltip (tooltip only for empty+notAllowCustom)
it('should not show unavailable tooltip when credentials are non-empty and allow_custom_token=false', () => {
const restrictedProvider = { ...mockProvider, allow_custom_token: false }
render(
<SwitchCredentialInLoadBalancing
provider={restrictedProvider}
model={mockModel}
credentials={mockCredentials}
customModelCredential={mockCredentials[0]}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
fireEvent.mouseEnter(screen.getByText('Key 1'))
expect(screen.queryByText('plugin.auth.credentialUnavailable')).not.toBeInTheDocument()
expect(screen.getByText('Key 1')).toBeInTheDocument()
})
it('should pass undefined currentCustomConfigurationModelFixedFields when model is undefined', () => {
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
// @ts-expect-error testing runtime handling when model is omitted
model={undefined}
credentials={mockCredentials}
customModelCredential={mockCredentials[0]}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
// Component still renders (Authorized receives undefined currentCustomConfigurationModelFixedFields)
expect(screen.getByTestId('authorized-mock')).toBeInTheDocument()
expect(screen.getByText('Key 1')).toBeInTheDocument()
})
it('should treat undefined credentials as empty list', () => {
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
model={mockModel}
credentials={undefined}
customModelCredential={undefined}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
// credentials is undefined → empty=true → unavailable text shown
expect(screen.getByText(/auth.credentialUnavailableInButton/)).toBeInTheDocument()
expect(screen.queryByTestId(/indicator-/)).not.toBeInTheDocument()
})
it('should render nothing for credential_name when it is empty string', () => {
const credWithEmptyName = { credential_id: 'cred-1', credential_name: '' }
render(
<SwitchCredentialInLoadBalancing
provider={mockProvider}
model={mockModel}
credentials={[credWithEmptyName]}
customModelCredential={credWithEmptyName}
setCustomModelCredential={mockSetCustomModelCredential}
/>,
)
// indicator-green shown (not authRemoved, not unavailable, not empty)
expect(screen.getByTestId('indicator-green')).toBeInTheDocument()
// credential_name is empty so nothing printed for name
expect(screen.queryByText('Key 1')).not.toBeInTheDocument()
})
})

View File

@@ -24,6 +24,10 @@ vi.mock('../hooks', () => ({
useLanguage: () => mockLanguage,
}))
vi.mock('@/app/components/base/icons/src/public/llm', () => ({
OpenaiYellow: () => <svg data-testid="openai-yellow-icon" />,
}))
const createI18nText = (value: string): I18nText => ({
en_US: value,
zh_Hans: value,
@@ -88,10 +92,10 @@ describe('ModelIcon', () => {
icon_small: createI18nText('openai.png'),
})
const { container } = render(<ModelIcon provider={provider} modelName="o1" />)
render(<ModelIcon provider={provider} modelName="o1" />)
expect(screen.queryByRole('img', { name: /model-icon/i })).not.toBeInTheDocument()
expect(container.querySelector('svg')).toBeInTheDocument()
expect(screen.getByTestId('openai-yellow-icon')).toBeInTheDocument()
})
// Edge case
@@ -101,25 +105,4 @@ describe('ModelIcon', () => {
expect(screen.queryByRole('img', { name: /model-icon/i })).not.toBeInTheDocument()
expect(container.firstChild).not.toBeNull()
})
it('should render OpenAI Yellow icon for langgenius/openai/openai provider with model starting with o', () => {
const provider = createModel({
provider: 'langgenius/openai/openai',
icon_small: createI18nText('openai.png'),
})
const { container } = render(<ModelIcon provider={provider} modelName="o3" />)
expect(screen.queryByRole('img', { name: /model-icon/i })).not.toBeInTheDocument()
expect(container.querySelector('svg')).toBeInTheDocument()
})
it('should apply opacity-50 when isDeprecated is true', () => {
const provider = createModel()
const { container } = render(<ModelIcon provider={provider} isDeprecated={true} />)
const wrapper = container.querySelector('.opacity-50')
expect(wrapper).toBeInTheDocument()
})
})

View File

@@ -161,7 +161,7 @@ function Form<
const disabled = readonly || (isEditMode && (variable === '__model_type' || variable === '__model_name'))
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>
@@ -204,14 +204,13 @@ function Form<
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>
)}
{tooltipContent}
</div>
{/* eslint-disable-next-line tailwindcss/no-unknown-classes */}
<div className={cn('grid gap-3', `grid-cols-${options?.length}`)}>
{options.filter((option) => {
if (option.show_on.length)
@@ -230,7 +229,7 @@ function Form<
>
<RadioE isChecked={value[variable] === option.value} />
<div className="text-text-secondary system-sm-regular">{option.label[language] || option.label.en_US}</div>
<div className="system-sm-regular text-text-secondary">{option.label[language] || option.label.en_US}</div>
</div>
))}
</div>
@@ -255,7 +254,7 @@ function Form<
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
@@ -296,9 +295,9 @@ function Form<
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className="flex items-center justify-between py-2 text-text-secondary system-sm-semibold">
<div className="system-sm-semibold flex items-center justify-between py-2 text-text-secondary">
<div className="flex items-center space-x-2">
<span className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>{label[language] || label.en_US}</span>
<span className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>{label[language] || label.en_US}</span>
{required && (
<span className="ml-1 text-red-500">*</span>
)}
@@ -327,7 +326,7 @@ function Form<
} = formSchema as (CredentialFormSchemaTextInput | CredentialFormSchemaSecretInput)
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>
@@ -359,7 +358,7 @@ function Form<
} = formSchema as (CredentialFormSchemaTextInput | CredentialFormSchemaSecretInput)
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>
@@ -423,7 +422,7 @@ function Form<
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>
@@ -452,7 +451,7 @@ function Form<
return (
<div key={variable} className={cn(itemClassName, 'py-3')}>
<div className={cn(fieldLabelClassName, 'flex items-center py-2 text-text-secondary system-sm-semibold')}>
<div className={cn(fieldLabelClassName, 'system-sm-semibold flex items-center py-2 text-text-secondary')}>
{label[language] || label.en_US}
{required && (
<span className="ml-1 text-red-500">*</span>

View File

@@ -93,88 +93,4 @@ describe('Input', () => {
expect(onChange).not.toHaveBeenCalledWith('2')
expect(onChange).not.toHaveBeenCalledWith('6')
})
it('should not clamp when min and max are not provided', () => {
const onChange = vi.fn()
render(
<Input
placeholder="Free"
onChange={onChange}
/>,
)
const input = screen.getByPlaceholderText('Free')
fireEvent.change(input, { target: { value: '999' } })
fireEvent.blur(input)
// onChange only called from change event, not from blur clamping
expect(onChange).toHaveBeenCalledTimes(1)
expect(onChange).toHaveBeenCalledWith('999')
})
it('should show check circle icon when validated is true', () => {
const { container } = render(
<Input
placeholder="Key"
onChange={vi.fn()}
validated
/>,
)
expect(screen.getByPlaceholderText('Key')).toBeInTheDocument()
expect(container.querySelector('.absolute.right-2\\.5.top-2\\.5')).toBeInTheDocument()
})
it('should not show check circle icon when validated is false', () => {
const { container } = render(
<Input
placeholder="Key"
onChange={vi.fn()}
validated={false}
/>,
)
expect(screen.getByPlaceholderText('Key')).toBeInTheDocument()
expect(container.querySelector('.absolute.right-2\\.5.top-2\\.5')).not.toBeInTheDocument()
})
it('should apply disabled attribute when disabled prop is true', () => {
render(
<Input
placeholder="Disabled"
onChange={vi.fn()}
disabled
/>,
)
expect(screen.getByPlaceholderText('Disabled')).toBeDisabled()
})
it('should call onFocus when input receives focus', () => {
const onFocus = vi.fn()
render(
<Input
placeholder="Focus"
onChange={vi.fn()}
onFocus={onFocus}
/>,
)
fireEvent.focus(screen.getByPlaceholderText('Focus'))
expect(onFocus).toHaveBeenCalledTimes(1)
})
it('should render with custom className', () => {
render(
<Input
placeholder="Styled"
onChange={vi.fn()}
className="custom-class"
/>,
)
expect(screen.getByPlaceholderText('Styled')).toHaveClass('custom-class')
})
})
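As an aside, the no-clamping case above can be reproduced with a small self-contained harness; NumberField below is a hypothetical stand-in for the real Input, shown only to illustrate the clamp-on-blur pattern:

import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'

// Clamp on blur only when bounds are provided, mirroring the behaviour under test.
const NumberField = ({ min, max, onChange }: { min?: number, max?: number, onChange: (v: string) => void }) => (
  <input
    placeholder="amount"
    onChange={e => onChange(e.target.value)}
    onBlur={(e) => {
      const n = Number(e.target.value)
      if (min !== undefined && n < min)
        onChange(String(min))
      else if (max !== undefined && n > max)
        onChange(String(max))
    }}
  />
)

it('clamps on blur only when a bound exists', () => {
  const onChange = vi.fn()
  render(<NumberField max={100} onChange={onChange} />)
  const input = screen.getByPlaceholderText('amount')
  fireEvent.change(input, { target: { value: '999' } })
  fireEvent.blur(input)
  expect(onChange).toHaveBeenLastCalledWith('100')
})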

View File

@@ -1,7 +1,5 @@
import type { ComponentProps } from 'react'
import type { Credential, CredentialFormSchema, CustomModel, ModelProvider } from '../declarations'
import type { Credential, CredentialFormSchema, ModelProvider } from '../declarations'
import { fireEvent, render, screen, waitFor } from '@testing-library/react'
import * as React from 'react'
import {
ConfigurationMethodEnum,
CurrentSystemQuotaTypeEnum,
@@ -45,6 +43,15 @@ const mockHandlers = vi.hoisted(() => ({
handleActiveCredential: vi.fn(),
}))
type FormResponse = {
isCheckValidated: boolean
values: Record<string, unknown>
}
const mockFormState = vi.hoisted(() => ({
responses: [] as FormResponse[],
setFieldValue: vi.fn(),
}))
vi.mock('../model-auth/hooks', () => ({
useCredentialData: () => ({
isLoading: mockState.isLoading,
@@ -79,6 +86,36 @@ vi.mock('../hooks', () => ({
useLanguage: () => 'en_US',
}))
vi.mock('@/app/components/base/form/form-scenarios/auth', async () => {
const React = await import('react')
const AuthForm = React.forwardRef(({
onChange,
}: {
onChange?: (field: string, value: string) => void
}, ref: React.ForwardedRef<{ getFormValues: () => FormResponse, getForm: () => { setFieldValue: (field: string, value: string) => void } }>) => {
React.useImperativeHandle(ref, () => ({
getFormValues: () => mockFormState.responses.shift() || { isCheckValidated: false, values: {} },
getForm: () => ({ setFieldValue: mockFormState.setFieldValue }),
}))
return (
<div>
<button type="button" onClick={() => onChange?.('__model_name', 'updated-model')}>Model Name Change</button>
</div>
)
})
return { default: AuthForm }
})
vi.mock('../model-auth', () => ({
CredentialSelector: ({ onSelect }: { onSelect: (credential: Credential & { addNewCredential?: boolean }) => void }) => (
<div>
<button type="button" onClick={() => onSelect({ credential_id: 'existing' })}>Choose Existing</button>
<button type="button" onClick={() => onSelect({ credential_id: 'new', addNewCredential: true })}>Add New</button>
</div>
),
}))
const createI18n = (text: string) => ({ en_US: text, zh_Hans: text })
const createProvider = (overrides?: Partial<ModelProvider>): ModelProvider => ({
@@ -121,7 +158,7 @@ const createProvider = (overrides?: Partial<ModelProvider>): ModelProvider => ({
...overrides,
})
const renderModal = (overrides?: Partial<ComponentProps<typeof ModelModal>>) => {
const renderModal = (overrides?: Partial<React.ComponentProps<typeof ModelModal>>) => {
const provider = createProvider()
const props = {
provider,
@@ -131,50 +168,13 @@ const renderModal = (overrides?: Partial<ComponentProps<typeof ModelModal>>) =>
onRemove: vi.fn(),
...overrides,
}
render(<ModelModal {...props} />)
return props
const view = render(<ModelModal {...props} />)
return {
...props,
unmount: view.unmount,
}
}
const mockFormRef1 = {
getFormValues: vi.fn(),
getForm: vi.fn(() => ({ setFieldValue: vi.fn() })),
}
const mockFormRef2 = {
getFormValues: vi.fn(),
getForm: vi.fn(() => ({ setFieldValue: vi.fn() })),
}
vi.mock('@/app/components/base/form/form-scenarios/auth', () => ({
default: React.forwardRef((props: { formSchemas: Record<string, unknown>[], onChange?: (f: string, v: string) => void }, ref: React.ForwardedRef<unknown>) => {
React.useImperativeHandle(ref, () => {
// Return the mock depending on schemas passed (hacky but works for refs)
if (props.formSchemas.length > 0 && props.formSchemas[0].name === '__model_name')
return mockFormRef1
return mockFormRef2
})
return (
<div data-testid="auth-form" onClick={() => props.onChange?.('test-field', 'val')}>
AuthForm Mock (
{props.formSchemas.length}
{' '}
fields)
</div>
)
}),
}))
vi.mock('../model-auth', () => ({
CredentialSelector: ({ onSelect }: { onSelect: (val: unknown) => void }) => (
<button onClick={() => onSelect({ addNewCredential: true })} data-testid="credential-selector">
Select Credential
</button>
),
useAuth: vi.fn(),
useCredentialData: vi.fn(),
useModelFormSchemas: vi.fn(),
}))
describe('ModelModal', () => {
beforeEach(() => {
vi.clearAllMocks()
@@ -187,131 +187,167 @@ describe('ModelModal', () => {
mockState.formValues = {}
mockState.modelNameAndTypeFormSchemas = []
mockState.modelNameAndTypeFormValues = {}
// reset form refs
mockFormRef1.getFormValues.mockReturnValue({ isCheckValidated: true, values: { __model_name: 'test', __model_type: ModelTypeEnum.textGeneration } })
mockFormRef2.getFormValues.mockReturnValue({ isCheckValidated: true, values: { __authorization_name__: 'test_auth', api_key: 'sk-test' } })
mockFormState.responses = []
})
it('should render title and loading state for predefined credential modal', () => {
it('should show title, description, and loading state for predefined models', () => {
mockState.isLoading = true
renderModal()
const predefined = renderModal()
expect(screen.getByText('common.modelProvider.auth.apiKeyModal.title')).toBeInTheDocument()
expect(screen.getByText('common.modelProvider.auth.apiKeyModal.desc')).toBeInTheDocument()
})
expect(screen.getByRole('status')).toBeInTheDocument()
expect(screen.getByRole('button', { name: 'common.operation.save' })).toBeDisabled()
it('should render model credential title when mode is configModelCredential', () => {
renderModal({
mode: ModelModalModeEnum.configModelCredential,
model: { model: 'gpt-4', model_type: ModelTypeEnum.textGeneration },
})
predefined.unmount()
const customizable = renderModal({ configurateMethod: ConfigurationMethodEnum.customizableModel })
expect(screen.queryByText('common.modelProvider.auth.apiKeyModal.desc')).not.toBeInTheDocument()
customizable.unmount()
mockState.credentialData = { credentials: {}, available_credentials: [] }
renderModal({ mode: ModelModalModeEnum.configModelCredential, model: { model: 'gpt-4', model_type: ModelTypeEnum.textGeneration } })
expect(screen.getByText('common.modelProvider.auth.addModelCredential')).toBeInTheDocument()
})
it('should render edit credential title when credential exists', () => {
renderModal({
mode: ModelModalModeEnum.configModelCredential,
credential: { credential_id: '1' } as unknown as Credential,
})
expect(screen.getByText('common.modelProvider.auth.editModelCredential')).toBeInTheDocument()
it('should reveal the credential label when adding a new credential', () => {
renderModal({ mode: ModelModalModeEnum.addCustomModelToModelList })
expect(screen.queryByText('common.modelProvider.auth.modelCredential')).not.toBeInTheDocument()
fireEvent.click(screen.getByText('Add New'))
expect(screen.getByText('common.modelProvider.auth.modelCredential')).toBeInTheDocument()
})
it('should change title to Add Model when mode is configCustomModel', () => {
mockState.modelNameAndTypeFormSchemas = [{ variable: '__model_name', type: 'text' } as unknown as CredentialFormSchema]
renderModal({ mode: ModelModalModeEnum.configCustomModel })
expect(screen.getByText('common.modelProvider.auth.addModel')).toBeInTheDocument()
it('should call onCancel when the cancel button is clicked', () => {
const { onCancel } = renderModal()
fireEvent.click(screen.getByRole('button', { name: 'common.operation.cancel' }))
expect(onCancel).toHaveBeenCalledTimes(1)
})
it('should validate and fail save if form is invalid in configCustomModel mode', async () => {
mockState.modelNameAndTypeFormSchemas = [{ variable: '__model_name', type: 'text' } as unknown as CredentialFormSchema]
mockFormRef1.getFormValues.mockReturnValue({ isCheckValidated: false, values: {} })
renderModal({ mode: ModelModalModeEnum.configCustomModel })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
expect(mockHandlers.handleSaveCredential).not.toHaveBeenCalled()
})
it('should call onCancel when the escape key is pressed', () => {
const { onCancel } = renderModal()
it('should validate and save new credential and model in configCustomModel mode', async () => {
mockState.modelNameAndTypeFormSchemas = [{ variable: '__model_name', type: 'text' } as unknown as CredentialFormSchema]
const props = renderModal({ mode: ModelModalModeEnum.configCustomModel })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api_key: 'sk-test' },
name: 'test_auth',
model: 'test',
model_type: ModelTypeEnum.textGeneration,
})
expect(props.onSave).toHaveBeenCalled()
})
})
it('should save credential only in standard configProviderCredential mode', async () => {
const { onSave } = renderModal({ mode: ModelModalModeEnum.configProviderCredential })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.save' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api_key: 'sk-test' },
name: 'test_auth',
})
expect(onSave).toHaveBeenCalled()
})
})
it('should save active credential and cancel when picking existing credential in addCustomModelToModelList mode', async () => {
renderModal({ mode: ModelModalModeEnum.addCustomModelToModelList, model: { model: 'm1', model_type: ModelTypeEnum.textGeneration } as unknown as CustomModel })
// No credential is selected by default, so clicking Add falls back to validating the form
// An invalid form result blocks the save
mockFormRef2.getFormValues.mockReturnValue({ isCheckValidated: false, values: {} })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
expect(mockHandlers.handleSaveCredential).not.toHaveBeenCalled()
})
it('should save active credential when picking existing credential in addCustomModelToModelList mode', async () => {
renderModal({ mode: ModelModalModeEnum.addCustomModelToModelList, model: { model: 'm2', model_type: ModelTypeEnum.textGeneration } as unknown as CustomModel })
// The mocked credential selector sets selectedCredential with addNewCredential: true,
// so the save path still goes through the form rather than activating an existing credential
fireEvent.click(screen.getByTestId('credential-selector')) // Sets addNewCredential = true internally, so it proceeds to form save
mockFormRef2.getFormValues.mockReturnValue({ isCheckValidated: true, values: { __authorization_name__: 'auth', api: 'key' } })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api: 'key' },
name: 'auth',
model: 'm2',
model_type: ModelTypeEnum.textGeneration,
})
})
})
it('should open and confirm deletion of credential', () => {
mockState.credentialData = { credentials: { api_key: '123' }, available_credentials: [] }
mockState.formValues = { api_key: '123' } // To trigger isEditMode = true
const credential = { credential_id: 'c1' } as unknown as Credential
renderModal({ credential })
// Open Delete Confirm
fireEvent.click(screen.getByRole('button', { name: 'common.operation.remove' }))
expect(mockHandlers.openConfirmDelete).toHaveBeenCalledWith(credential, undefined)
// Simulate the dialog appearing and confirming
mockState.deleteCredentialId = 'c1'
renderModal({ credential }) // render again with deleteCredentialId set so the confirm dialog shows
fireEvent.click(screen.getAllByRole('button', { name: 'common.operation.confirm' })[0])
expect(mockHandlers.handleConfirmDelete).toHaveBeenCalled()
})
it('should bind escape key to cancel', () => {
const props = renderModal()
fireEvent.keyDown(document, { key: 'Escape' })
expect(props.onCancel).toHaveBeenCalled()
expect(onCancel).toHaveBeenCalledTimes(1)
})
it('should confirm deletion when a delete dialog is shown', () => {
mockState.credentialData = { credentials: { api_key: 'secret' }, available_credentials: [] }
mockState.deleteCredentialId = 'delete-id'
const credential: Credential = { credential_id: 'cred-1' }
const { onCancel } = renderModal({ credential })
expect(screen.getByText('common.modelProvider.confirmDelete')).toBeInTheDocument()
fireEvent.click(screen.getByRole('button', { name: 'common.operation.confirm' }))
expect(mockHandlers.handleConfirmDelete).toHaveBeenCalledTimes(1)
expect(onCancel).toHaveBeenCalledTimes(1)
})
it('should handle save flows for different modal modes', async () => {
mockState.modelNameAndTypeFormSchemas = [{ variable: '__model_name', type: 'text-input' } as unknown as CredentialFormSchema]
mockState.formSchemas = [{ variable: 'api_key', type: 'secret-input' } as unknown as CredentialFormSchema]
mockFormState.responses = [
{ isCheckValidated: true, values: { __model_name: 'custom-model', __model_type: ModelTypeEnum.textGeneration } },
{ isCheckValidated: true, values: { __authorization_name__: 'Auth Name', api_key: 'secret' } },
]
const configCustomModel = renderModal({ mode: ModelModalModeEnum.configCustomModel })
fireEvent.click(screen.getAllByText('Model Name Change')[0])
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
expect(mockFormState.setFieldValue).toHaveBeenCalledWith('__model_name', 'updated-model')
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api_key: 'secret' },
name: 'Auth Name',
model: 'custom-model',
model_type: ModelTypeEnum.textGeneration,
})
})
expect(configCustomModel.onSave).toHaveBeenCalledWith({ __authorization_name__: 'Auth Name', api_key: 'secret' })
configCustomModel.unmount()
mockFormState.responses = [{ isCheckValidated: true, values: { __authorization_name__: 'Model Auth', api_key: 'abc' } }]
const model = { model: 'gpt-4', model_type: ModelTypeEnum.textGeneration }
const configModelCredential = renderModal({
mode: ModelModalModeEnum.configModelCredential,
model,
credential: { credential_id: 'cred-123' },
})
fireEvent.click(screen.getByRole('button', { name: 'common.operation.save' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: 'cred-123',
credentials: { api_key: 'abc' },
name: 'Model Auth',
model: 'gpt-4',
model_type: ModelTypeEnum.textGeneration,
})
})
expect(configModelCredential.onSave).toHaveBeenCalledWith({ __authorization_name__: 'Model Auth', api_key: 'abc' })
configModelCredential.unmount()
mockFormState.responses = [{ isCheckValidated: true, values: { __authorization_name__: 'Provider Auth', api_key: 'provider-key' } }]
const configProviderCredential = renderModal({ mode: ModelModalModeEnum.configProviderCredential })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.save' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api_key: 'provider-key' },
name: 'Provider Auth',
})
})
configProviderCredential.unmount()
const addToModelList = renderModal({
mode: ModelModalModeEnum.addCustomModelToModelList,
model,
})
fireEvent.click(screen.getByText('Choose Existing'))
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
expect(mockHandlers.handleActiveCredential).toHaveBeenCalledWith({ credential_id: 'existing' }, model)
expect(addToModelList.onCancel).toHaveBeenCalled()
addToModelList.unmount()
mockFormState.responses = [{ isCheckValidated: true, values: { __authorization_name__: 'New Auth', api_key: 'new-key' } }]
const addToModelListWithNew = renderModal({
mode: ModelModalModeEnum.addCustomModelToModelList,
model,
})
fireEvent.click(screen.getByText('Add New'))
fireEvent.click(screen.getByRole('button', { name: 'common.operation.add' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledWith({
credential_id: undefined,
credentials: { api_key: 'new-key' },
name: 'New Auth',
model: 'gpt-4',
model_type: ModelTypeEnum.textGeneration,
})
})
addToModelListWithNew.unmount()
mockFormState.responses = [{ isCheckValidated: false, values: {} }]
const invalidSave = renderModal({ mode: ModelModalModeEnum.configProviderCredential })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.save' }))
await waitFor(() => {
expect(mockHandlers.handleSaveCredential).toHaveBeenCalledTimes(4)
})
invalidSave.unmount()
mockState.credentialData = { credentials: { api_key: 'value' }, available_credentials: [] }
mockState.formValues = { api_key: 'value' }
const removable = renderModal({ credential: { credential_id: 'remove-1' } })
fireEvent.click(screen.getByRole('button', { name: 'common.operation.remove' }))
expect(mockHandlers.openConfirmDelete).toHaveBeenCalledWith({ credential_id: 'remove-1' }, undefined)
removable.unmount()
})
})
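A note on the AuthForm mock above: getFormValues is exposed through useImperativeHandle and pops entries off mockFormState.responses, so each save click consumes the next queued form result. A minimal self-contained sketch of that queue-backed ref pattern (FakeForm and Panel are illustrative stand-ins, not code from this PR):

import * as React from 'react'
import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'

type FormHandle = { getFormValues: () => { valid: boolean, values: Record<string, unknown> } }

// FIFO of fake form results; each getFormValues() call consumes one entry.
const responses: Array<{ valid: boolean, values: Record<string, unknown> }> = []

const FakeForm = React.forwardRef<FormHandle>((_props, ref) => {
  React.useImperativeHandle(ref, () => ({
    getFormValues: () => responses.shift() ?? { valid: false, values: {} },
  }))
  return <div data-testid="fake-form" />
})

// Minimal consumer standing in for the modal: reads the form via ref on save.
const Panel = ({ onSave }: { onSave: (values: Record<string, unknown>) => void }) => {
  const formRef = React.useRef<FormHandle>(null)
  const handleSave = () => {
    const result = formRef.current?.getFormValues()
    if (result?.valid)
      onSave(result.values)
  }
  return (
    <div>
      <FakeForm ref={formRef} />
      <button type="button" onClick={handleSave}>save</button>
    </div>
  )
}

it('only saves while queued results are valid', () => {
  responses.push({ valid: true, values: { name: 'first' } }, { valid: false, values: {} })
  const onSave = vi.fn()
  render(<Panel onSave={onSave} />)
  fireEvent.click(screen.getByText('save'))
  fireEvent.click(screen.getByText('save'))
  expect(onSave).toHaveBeenCalledTimes(1)
  expect(onSave).toHaveBeenCalledWith({ name: 'first' })
})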

View File

@@ -1,8 +1,9 @@
import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'
import ModelParameterModal from './index'
let isAPIKeySet = true
let parameterRules: Array<Record<string, unknown>> | undefined = [
let parameterRules = [
{
name: 'temperature',
label: { en_US: 'Temperature' },
@@ -61,17 +62,42 @@ vi.mock('../hooks', () => ({
}),
}))
// Mock PortalToFollowElem components to control visibility and simplify testing
vi.mock('@/app/components/base/portal-to-follow-elem', () => {
return {
PortalToFollowElem: ({ children }: { children: React.ReactNode }) => {
return (
<div>
<div data-testid="portal-wrapper">
{children}
</div>
</div>
)
},
PortalToFollowElemTrigger: ({ children, onClick }: { children: React.ReactNode, onClick: () => void }) => (
<div data-testid="portal-trigger" onClick={onClick}>
{children}
</div>
),
PortalToFollowElemContent: ({ children, className }: { children: React.ReactNode, className: string }) => (
<div data-testid="portal-content" className={className}>
{children}
</div>
),
}
})
vi.mock('./parameter-item', () => ({
default: ({ parameterRule, onChange, onSwitch }: {
parameterRule: { name: string, label: { en_US: string } }
onChange: (v: number) => void
onSwitch: (checked: boolean, val: unknown) => void
}) => (
default: ({ parameterRule, value, onChange, onSwitch }: { parameterRule: { name: string, label: { en_US: string } }, value: string | number, onChange: (v: number) => void, onSwitch: (checked: boolean, val: unknown) => void }) => (
<div data-testid={`param-${parameterRule.name}`}>
{parameterRule.label.en_US}
<button onClick={() => onChange(0.9)}>Change</button>
<button onClick={() => onSwitch(false, undefined)}>Remove</button>
<button onClick={() => onSwitch(true, 'assigned')}>Add</button>
<input
aria-label={parameterRule.name}
value={value || ''}
onChange={e => onChange(Number(e.target.value))}
/>
<button onClick={() => onSwitch?.(false, undefined)}>Remove</button>
<button onClick={() => onSwitch?.(true, 'assigned')}>Add</button>
</div>
),
}))
@@ -79,6 +105,7 @@ vi.mock('./parameter-item', () => ({
vi.mock('../model-selector', () => ({
default: ({ onSelect }: { onSelect: (value: { provider: string, model: string }) => void }) => (
<div data-testid="model-selector">
Model Selector
<button onClick={() => onSelect({ provider: 'openai', model: 'gpt-4.1' })}>Select GPT-4.1</button>
</div>
),
@@ -94,11 +121,16 @@ vi.mock('./trigger', () => ({
default: () => <button>Open Settings</button>,
}))
vi.mock('@/utils/classnames', () => ({
cn: (...args: (string | undefined | null | false)[]) => args.filter(Boolean).join(' '),
}))
// Mock config
vi.mock('@/config', async (importOriginal) => {
const actual = await importOriginal<typeof import('@/config')>()
return {
...actual,
PROVIDER_WITH_PRESET_TONE: ['openai'],
PROVIDER_WITH_PRESET_TONE: ['openai'], // ensure presets mock renders
}
})
@@ -156,19 +188,21 @@ describe('ModelParameterModal', () => {
]
})
it('should render trigger and open modal content when trigger is clicked', () => {
it('should render trigger and content', () => {
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
expect(screen.getByText('Open Settings')).toBeInTheDocument()
expect(screen.getByText('Temperature')).toBeInTheDocument()
expect(screen.getByTestId('model-selector')).toBeInTheDocument()
expect(screen.getByTestId('param-temperature')).toBeInTheDocument()
fireEvent.click(screen.getByTestId('portal-trigger'))
})
it('should call onCompletionParamsChange when parameter changes and switch actions happen', () => {
it('should update params when changed and handle switch add/remove', () => {
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
fireEvent.click(screen.getByText('Change'))
const input = screen.getByLabelText('temperature')
fireEvent.change(input, { target: { value: '0.9' } })
expect(defaultProps.onCompletionParamsChange).toHaveBeenCalledWith({
...defaultProps.completionParams,
temperature: 0.9,
@@ -184,18 +218,51 @@ describe('ModelParameterModal', () => {
})
})
it('should call onCompletionParamsChange when preset is selected', () => {
it('should handle preset selection', () => {
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
fireEvent.click(screen.getByText('Preset 1'))
expect(defaultProps.onCompletionParamsChange).toHaveBeenCalled()
})
it('should call setModel when model selector picks another model', () => {
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
fireEvent.click(screen.getByText('Select GPT-4.1'))
it('should handle debug mode toggle', () => {
const { rerender } = render(<ModelParameterModal {...defaultProps} />)
const toggle = screen.getByText(/debugAsMultipleModel/i)
fireEvent.click(toggle)
expect(defaultProps.onDebugWithMultipleModelChange).toHaveBeenCalled()
rerender(<ModelParameterModal {...defaultProps} debugWithMultipleModel />)
expect(screen.getByText(/debugAsSingleModel/i)).toBeInTheDocument()
})
it('should handle custom renderTrigger', () => {
const renderTrigger = vi.fn().mockReturnValue(<div>Custom Trigger</div>)
render(<ModelParameterModal {...defaultProps} renderTrigger={renderTrigger} readonly />)
expect(screen.getByText('Custom Trigger')).toBeInTheDocument()
expect(renderTrigger).toHaveBeenCalled()
fireEvent.click(screen.getByTestId('portal-trigger'))
expect(renderTrigger).toHaveBeenCalledTimes(1)
})
it('should handle model selection and advanced mode parameters', () => {
parameterRules = [
{
name: 'temperature',
label: { en_US: 'Temperature' },
type: 'float',
default: 0.7,
min: 0,
max: 1,
help: { en_US: 'Control randomness' },
},
]
const { rerender } = render(<ModelParameterModal {...defaultProps} />)
expect(screen.getByTestId('param-temperature')).toBeInTheDocument()
rerender(<ModelParameterModal {...defaultProps} isAdvancedMode />)
expect(screen.getByTestId('param-stop')).toBeInTheDocument()
fireEvent.click(screen.getByText('Select GPT-4.1'))
expect(defaultProps.setModel).toHaveBeenCalledWith({
modelId: 'gpt-4.1',
provider: 'openai',
@@ -203,32 +270,4 @@ describe('ModelParameterModal', () => {
features: ['vision', 'tool-call'],
})
})
it('should toggle debug mode when debug footer is clicked', () => {
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
fireEvent.click(screen.getByText(/debugAsMultipleModel/i))
expect(defaultProps.onDebugWithMultipleModelChange).toHaveBeenCalled()
})
it('should render loading state when parameter rules are loading', () => {
isRulesLoading = true
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
expect(screen.getByRole('status')).toBeInTheDocument()
})
it('should not open content when readonly is true', () => {
render(<ModelParameterModal {...defaultProps} readonly />)
fireEvent.click(screen.getByText('Open Settings'))
expect(screen.queryByTestId('model-selector')).not.toBeInTheDocument()
})
it('should render no parameter items when rules are undefined', () => {
parameterRules = undefined
render(<ModelParameterModal {...defaultProps} />)
fireEvent.click(screen.getByText('Open Settings'))
expect(screen.queryByTestId('param-temperature')).not.toBeInTheDocument()
expect(screen.getByTestId('model-selector')).toBeInTheDocument()
})
})
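The debug-toggle and advanced-mode cases above lean on rerender from render()'s return value to flip a prop on the same mounted tree. A tiny self-contained sketch of that pattern (Banner is a hypothetical component, not part of this PR):

import { render, screen } from '@testing-library/react'

const Banner = ({ advanced }: { advanced?: boolean }) => (
  <div>{advanced ? 'debugAsSingleModel' : 'debugAsMultipleModel'}</div>
)

it('switches copy when the prop flips', () => {
  // rerender reuses the same container instead of mounting a second tree
  const { rerender } = render(<Banner />)
  expect(screen.getByText('debugAsMultipleModel')).toBeInTheDocument()
  rerender(<Banner advanced />)
  expect(screen.getByText('debugAsSingleModel')).toBeInTheDocument()
})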

View File

@@ -1,182 +1,238 @@
import type { ModelParameterRule } from '../declarations'
import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'
import ParameterItem from './parameter-item'
vi.mock('../hooks', () => ({
useLanguage: () => 'en_US',
}))
vi.mock('@/app/components/base/radio', () => {
const Radio = ({ children, value }: { children: React.ReactNode, value: boolean }) => <button data-testid={`radio-${value}`}>{children}</button>
Radio.Group = ({ children, onChange }: { children: React.ReactNode, onChange: (value: boolean) => void }) => (
<div>
{children}
<button onClick={() => onChange(true)}>Select True</button>
<button onClick={() => onChange(false)}>Select False</button>
</div>
)
return { default: Radio }
})
vi.mock('@/app/components/base/select', () => ({
SimpleSelect: ({ onSelect, items }: { onSelect: (item: { value: string }) => void, items: { value: string, name: string }[] }) => (
<select onChange={e => onSelect({ value: e.target.value })}>
{items.map(item => (
<option key={item.value} value={item.value}>{item.name}</option>
))}
</select>
),
}))
vi.mock('@/app/components/base/slider', () => ({
default: ({ onChange }: { onChange: (v: number) => void }) => (
<button onClick={() => onChange(2)} data-testid="slider-btn">Slide 2</button>
default: ({ value, onChange }: { value: number, onChange: (val: number) => void }) => (
<input type="range" value={value} onChange={e => onChange(Number(e.target.value))} />
),
}))
vi.mock('@/app/components/base/switch', () => ({
default: ({ onChange, value }: { onChange: (val: boolean) => void, value: boolean }) => (
<button onClick={() => onChange(!value)}>Switch</button>
),
}))
vi.mock('@/app/components/base/tag-input', () => ({
default: ({ onChange }: { onChange: (v: string[]) => void }) => (
<button onClick={() => onChange(['tag1', 'tag2'])} data-testid="tag-input">Tag</button>
default: ({ onChange }: { onChange: (val: string[]) => void }) => (
<input onChange={e => onChange(e.target.value.split(','))} />
),
}))
vi.mock('@/app/components/base/tooltip', () => ({
default: ({ popupContent }: { popupContent: React.ReactNode }) => <div>{popupContent}</div>,
}))
describe('ParameterItem', () => {
const createRule = (overrides: Partial<ModelParameterRule> = {}): ModelParameterRule => ({
name: 'temp',
label: { en_US: 'Temperature', zh_Hans: 'Temperature' },
type: 'float',
min: 0,
max: 1,
help: { en_US: 'Help text', zh_Hans: 'Help text' },
required: false,
...overrides,
})
const createProps = (overrides: {
parameterRule?: ModelParameterRule
value?: number | string | boolean | string[]
} = {}) => {
const onChange = vi.fn()
const onSwitch = vi.fn()
return {
parameterRule: createRule(),
value: 0.7,
onChange,
onSwitch,
...overrides,
}
}
beforeEach(() => {
vi.clearAllMocks()
})
// Float tests
it('should render float controls and clamp numeric input to max', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'float', min: 0, max: 1 })} value={0.7} onChange={onChange} />)
it('should render float input with slider', () => {
const props = createProps()
const { rerender } = render(<ParameterItem {...props} />)
expect(screen.getByText('Temperature')).toBeInTheDocument()
const input = screen.getByRole('spinbutton')
fireEvent.change(input, { target: { value: '0.8' } })
expect(props.onChange).toHaveBeenCalledWith(0.8)
fireEvent.change(input, { target: { value: '1.4' } })
expect(onChange).toHaveBeenCalledWith(1)
expect(screen.getByTestId('slider-btn')).toBeInTheDocument()
})
expect(props.onChange).toHaveBeenCalledWith(1)
it('should clamp float numeric input to min', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'float', min: 0.1, max: 1 })} value={0.7} onChange={onChange} />)
const input = screen.getByRole('spinbutton')
fireEvent.change(input, { target: { value: '0.05' } })
expect(onChange).toHaveBeenCalledWith(0.1)
})
fireEvent.change(input, { target: { value: '-0.2' } })
expect(props.onChange).toHaveBeenCalledWith(0)
// Int tests
it('should render int controls and clamp numeric input', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'int', min: 0, max: 10 })} value={5} onChange={onChange} />)
const input = screen.getByRole('spinbutton')
fireEvent.change(input, { target: { value: '15' } })
expect(onChange).toHaveBeenCalledWith(10)
fireEvent.change(input, { target: { value: '-5' } })
expect(onChange).toHaveBeenCalledWith(0)
})
const slider = screen.getByRole('slider')
fireEvent.change(slider, { target: { value: '2' } })
expect(props.onChange).toHaveBeenCalledWith(1)
it('should adjust step based on max for int type', () => {
const { rerender } = render(<ParameterItem parameterRule={createRule({ type: 'int', min: 0, max: 50 })} value={5} />)
expect(screen.getByRole('spinbutton')).toHaveAttribute('step', '1')
fireEvent.change(slider, { target: { value: '-1' } })
expect(props.onChange).toHaveBeenCalledWith(0)
rerender(<ParameterItem parameterRule={createRule({ type: 'int', min: 0, max: 500 })} value={50} />)
expect(screen.getByRole('spinbutton')).toHaveAttribute('step', '10')
fireEvent.change(slider, { target: { value: '0.4' } })
expect(props.onChange).toHaveBeenCalledWith(0.4)
rerender(<ParameterItem parameterRule={createRule({ type: 'int', min: 0, max: 2000 })} value={50} />)
expect(screen.getByRole('spinbutton')).toHaveAttribute('step', '100')
})
it('should render int input without slider if min or max is missing', () => {
render(<ParameterItem parameterRule={createRule({ type: 'int', min: 0 })} value={5} />)
expect(screen.queryByRole('slider')).not.toBeInTheDocument()
// No max -> precision step
expect(screen.getByRole('spinbutton')).toHaveAttribute('step', '0')
})
// Slider events (uses generic value mock for slider)
it('should handle slide change and clamp values', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'float', min: 0, max: 10 })} value={0.7} onChange={onChange} />)
// Test that the actual slider triggers the onChange logic correctly
// The implementation of Slider uses onChange(val) directly via the mock
fireEvent.click(screen.getByTestId('slider-btn'))
expect(onChange).toHaveBeenCalledWith(2)
})
// Text & String tests
it('should render exact string input and propagate text changes', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'string', name: 'prompt' })} value="initial" onChange={onChange} />)
fireEvent.change(screen.getByRole('textbox'), { target: { value: 'updated' } })
expect(onChange).toHaveBeenCalledWith('updated')
})
it('should render textarea for text type', () => {
const onChange = vi.fn()
const { container } = render(<ParameterItem parameterRule={createRule({ type: 'text' })} value="long text" onChange={onChange} />)
const textarea = container.querySelector('textarea')!
expect(textarea).toBeInTheDocument()
fireEvent.change(textarea, { target: { value: 'new long text' } })
expect(onChange).toHaveBeenCalledWith('new long text')
})
it('should render select for string with options', () => {
render(<ParameterItem parameterRule={createRule({ type: 'string', options: ['a', 'b'] })} value="a" />)
// SimpleSelect renders an element with text 'a'
expect(screen.getByText('a')).toBeInTheDocument()
})
// Tag Tests
it('should render tag input for tag type', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'tag', tagPlaceholder: { en_US: 'placeholder', zh_Hans: 'placeholder' } })} value={['a']} onChange={onChange} />)
expect(screen.getByText('placeholder')).toBeInTheDocument()
// Trigger mock tag input
fireEvent.click(screen.getByTestId('tag-input'))
expect(onChange).toHaveBeenCalledWith(['tag1', 'tag2'])
})
// Boolean tests
it('should render boolean radios and update value on click', () => {
const onChange = vi.fn()
render(<ParameterItem parameterRule={createRule({ type: 'boolean', default: false })} value={true} onChange={onChange} />)
fireEvent.click(screen.getByText('False'))
expect(onChange).toHaveBeenCalledWith(false)
})
// Switch tests
it('should call onSwitch with current value when optional switch is toggled off', () => {
const onSwitch = vi.fn()
render(<ParameterItem parameterRule={createRule()} value={0.7} onSwitch={onSwitch} />)
fireEvent.click(screen.getByRole('switch'))
expect(onSwitch).toHaveBeenCalledWith(false, 0.7)
})
it('should not render switch if required or name is stop', () => {
const { rerender } = render(<ParameterItem parameterRule={createRule({ required: true as unknown as false })} value={1} />)
expect(screen.queryByRole('switch')).not.toBeInTheDocument()
rerender(<ParameterItem parameterRule={createRule({ name: 'stop', required: false })} value={1} />)
expect(screen.queryByRole('switch')).not.toBeInTheDocument()
})
// Default Value Fallbacks (rendering without value)
it('should use default values if value is undefined', () => {
const { rerender } = render(<ParameterItem parameterRule={createRule({ type: 'float', default: 0.5 })} />)
expect(screen.getByRole('spinbutton')).toHaveValue(0.5)
rerender(<ParameterItem parameterRule={createRule({ type: 'string', default: 'hello' })} />)
expect(screen.getByRole('textbox')).toHaveValue('hello')
rerender(<ParameterItem parameterRule={createRule({ type: 'boolean', default: true })} />)
expect(screen.getByText('True')).toBeInTheDocument()
expect(screen.getByText('False')).toBeInTheDocument()
// Without default
rerender(<ParameterItem parameterRule={createRule({ type: 'float' })} />) // min is 0 by default in createRule
expect(screen.getByRole('spinbutton')).toHaveValue(0)
})
// Input Blur
it('should reset input to actual bound value on blur', () => {
render(<ParameterItem parameterRule={createRule({ type: 'float', min: 0, max: 1 })} />)
const input = screen.getByRole('spinbutton')
// Changing past the max clamps the local value to 1; blur then syncs the input back to the bound value
fireEvent.change(input, { target: { value: '5' } })
fireEvent.blur(input)
expect(input).toHaveValue(1)
expect(input).toHaveValue(0.7)
const minBoundedProps = createProps({
parameterRule: createRule({ type: 'float', min: 1, max: 2 }),
value: 1.5,
})
rerender(<ParameterItem {...minBoundedProps} />)
fireEvent.change(screen.getByRole('slider'), { target: { value: '0' } })
expect(minBoundedProps.onChange).toHaveBeenCalledWith(1)
})
// Unsupported
it('should render no input for unsupported parameter type', () => {
render(<ParameterItem parameterRule={createRule({ type: 'unsupported' as unknown as string })} value={0.7} />)
it('should render boolean radio', () => {
const props = createProps({ parameterRule: createRule({ type: 'boolean', default: false }), value: true })
render(<ParameterItem {...props} />)
expect(screen.getByText('True')).toBeInTheDocument()
fireEvent.click(screen.getByText('Select False'))
expect(props.onChange).toHaveBeenCalledWith(false)
})
it('should render string input and select options', () => {
const props = createProps({ parameterRule: createRule({ type: 'string' }), value: 'test' })
const { rerender } = render(<ParameterItem {...props} />)
const input = screen.getByRole('textbox')
fireEvent.change(input, { target: { value: 'new' } })
expect(props.onChange).toHaveBeenCalledWith('new')
const selectProps = createProps({
parameterRule: createRule({ type: 'string', options: ['opt1', 'opt2'] }),
value: 'opt1',
})
rerender(<ParameterItem {...selectProps} />)
const select = screen.getByRole('combobox')
fireEvent.change(select, { target: { value: 'opt2' } })
expect(selectProps.onChange).toHaveBeenCalledWith('opt2')
})
it('should handle switch toggle', () => {
const props = createProps()
let view = render(<ParameterItem {...props} />)
fireEvent.click(screen.getByText('Switch'))
expect(props.onSwitch).toHaveBeenCalledWith(false, 0.7)
const intDefaultProps = createProps({
parameterRule: createRule({ type: 'int', min: 0, default: undefined }),
value: undefined,
})
view.unmount()
view = render(<ParameterItem {...intDefaultProps} />)
fireEvent.click(screen.getByText('Switch'))
expect(intDefaultProps.onSwitch).toHaveBeenCalledWith(true, 0)
const stringDefaultProps = createProps({
parameterRule: createRule({ type: 'string', default: 'preset-value' }),
value: undefined,
})
view.unmount()
view = render(<ParameterItem {...stringDefaultProps} />)
fireEvent.click(screen.getByText('Switch'))
expect(stringDefaultProps.onSwitch).toHaveBeenCalledWith(true, 'preset-value')
const booleanDefaultProps = createProps({
parameterRule: createRule({ type: 'boolean', default: true }),
value: undefined,
})
view.unmount()
view = render(<ParameterItem {...booleanDefaultProps} />)
fireEvent.click(screen.getByText('Switch'))
expect(booleanDefaultProps.onSwitch).toHaveBeenCalledWith(true, true)
const tagDefaultProps = createProps({
parameterRule: createRule({ type: 'tag', default: ['one'] }),
value: undefined,
})
view.unmount()
const tagView = render(<ParameterItem {...tagDefaultProps} />)
fireEvent.click(screen.getByText('Switch'))
expect(tagDefaultProps.onSwitch).toHaveBeenCalledWith(true, ['one'])
const zeroValueProps = createProps({
parameterRule: createRule({ type: 'float', default: 0.5 }),
value: 0,
})
tagView.unmount()
render(<ParameterItem {...zeroValueProps} />)
fireEvent.click(screen.getByText('Switch'))
expect(zeroValueProps.onSwitch).toHaveBeenCalledWith(false, 0)
})
it('should support text and tag parameter interactions', () => {
const textProps = createProps({
parameterRule: createRule({ type: 'text', name: 'prompt' }),
value: 'initial prompt',
})
const { rerender } = render(<ParameterItem {...textProps} />)
const textarea = screen.getByRole('textbox')
fireEvent.change(textarea, { target: { value: 'rewritten prompt' } })
expect(textProps.onChange).toHaveBeenCalledWith('rewritten prompt')
const tagProps = createProps({
parameterRule: createRule({
type: 'tag',
name: 'tags',
tagPlaceholder: { en_US: 'Tag hint', zh_Hans: 'Tag hint' },
}),
value: ['alpha'],
})
rerender(<ParameterItem {...tagProps} />)
fireEvent.change(screen.getByRole('textbox'), { target: { value: 'one,two' } })
expect(tagProps.onChange).toHaveBeenCalledWith(['one', 'two'])
})
it('should support int parameters and unknown type fallback', () => {
const intProps = createProps({
parameterRule: createRule({ type: 'int', min: 0, max: 500, default: 100 }),
value: 100,
})
const { rerender } = render(<ParameterItem {...intProps} />)
fireEvent.change(screen.getByRole('spinbutton'), { target: { value: '350' } })
expect(intProps.onChange).toHaveBeenCalledWith(350)
const unknownTypeProps = createProps({
parameterRule: createRule({ type: 'unsupported' }),
value: 0.7,
})
rerender(<ParameterItem {...unknownTypeProps} />)
expect(screen.queryByRole('textbox')).not.toBeInTheDocument()
expect(screen.queryByRole('spinbutton')).not.toBeInTheDocument()
})
})
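Also worth noting: the slider, select, and tag-input mocks above are replaced with native elements, which is why getByRole('slider') and getByRole('combobox') queries work without extra test ids. A compact self-contained sketch of that substitution (Volume is an illustrative component, not the real Slider):

import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'

// A native <input type="range"> exposes the "slider" role, so the test can
// locate and drive it through accessible queries alone.
const Volume = ({ value, onChange }: { value: number, onChange: (v: number) => void }) => (
  <input type="range" min={0} max={10} value={value} onChange={e => onChange(Number(e.target.value))} />
)

it('reports slider changes through onChange', () => {
  const onChange = vi.fn()
  render(<Volume value={3} onChange={onChange} />)
  fireEvent.change(screen.getByRole('slider'), { target: { value: '7' } })
  expect(onChange).toHaveBeenCalledWith(7)
})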

View File

@@ -2,6 +2,19 @@ import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'
import PresetsParameter from './presets-parameter'
vi.mock('@/app/components/base/dropdown', () => ({
default: ({ renderTrigger, items, onSelect }: { renderTrigger: (open: boolean) => React.ReactNode, items: { value: number, text: string }[], onSelect: (item: { value: number }) => void }) => (
<div>
{renderTrigger(false)}
{items.map(item => (
<button key={item.value} onClick={() => onSelect(item)}>
{item.text}
</button>
))}
</div>
),
}))
describe('PresetsParameter', () => {
beforeEach(() => {
vi.clearAllMocks()
@@ -13,39 +26,7 @@ describe('PresetsParameter', () => {
expect(screen.getByText('common.modelProvider.loadPresets')).toBeInTheDocument()
fireEvent.click(screen.getByRole('button', { name: /common\.modelProvider\.loadPresets/i }))
fireEvent.click(screen.getByText('common.model.tone.Creative'))
expect(onSelect).toHaveBeenCalledWith(1)
})
// open=true: trigger has bg-state-base-hover class
it('should apply hover background class when open is true', () => {
render(<PresetsParameter onSelect={vi.fn()} />)
fireEvent.click(screen.getByRole('button', { name: /common\.modelProvider\.loadPresets/i }))
const button = screen.getByRole('button', { name: /common\.modelProvider\.loadPresets/i })
expect(button).toHaveClass('bg-state-base-hover')
})
// Tone map branch 2: Balanced → Scales02 icon
it('should call onSelect with tone id 2 when Balanced is clicked', () => {
const onSelect = vi.fn()
render(<PresetsParameter onSelect={onSelect} />)
fireEvent.click(screen.getByRole('button', { name: /common\.modelProvider\.loadPresets/i }))
fireEvent.click(screen.getByText('common.model.tone.Balanced'))
expect(onSelect).toHaveBeenCalledWith(2)
})
// Tone map branch 3: Precise → Target04 icon
it('should call onSelect with tone id 3 when Precise is clicked', () => {
const onSelect = vi.fn()
render(<PresetsParameter onSelect={onSelect} />)
fireEvent.click(screen.getByRole('button', { name: /common\.modelProvider\.loadPresets/i }))
fireEvent.click(screen.getByText('common.model.tone.Precise'))
expect(onSelect).toHaveBeenCalledWith(3)
})
})

View File

@@ -1,5 +1,4 @@
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'
import StatusIndicators from './status-indicators'
@@ -9,6 +8,10 @@ vi.mock('@/service/use-plugins', () => ({
useInstalledPluginList: () => ({ data: { plugins: installedPlugins } }),
}))
vi.mock('@/app/components/base/tooltip', () => ({
default: ({ popupContent }: { popupContent: React.ReactNode }) => <div>{popupContent}</div>,
}))
vi.mock('@/app/components/workflow/nodes/_base/components/switch-plugin-version', () => ({
SwitchPluginVersion: ({ uniqueIdentifier }: { uniqueIdentifier: string }) => <div>{`SwitchVersion:${uniqueIdentifier}`}</div>,
}))
@@ -35,95 +38,57 @@ describe('StatusIndicators', () => {
expect(container).toBeEmptyDOMElement()
})
it('should render deprecated tooltip when provider model is disabled and in model list', async () => {
const user = userEvent.setup()
const { container } = render(
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={true}
disabled={true}
pluginInfo={null}
t={t}
/>,
it('should render warning states when provider model is disabled', () => {
const parentClick = vi.fn()
const { rerender } = render(
<div onClick={parentClick}>
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={true}
disabled={true}
pluginInfo={null}
t={t}
/>
</div>,
)
expect(screen.getByText('nodes.agent.modelSelectorTooltips.deprecated')).toBeInTheDocument()
const trigger = container.querySelector('[data-state]')
expect(trigger).toBeInTheDocument()
await user.hover(trigger as HTMLElement)
expect(await screen.findByText('nodes.agent.modelSelectorTooltips.deprecated')).toBeInTheDocument()
})
it('should render model-not-support tooltip when disabled model is not in model list and has no pluginInfo', async () => {
const user = userEvent.setup()
const { container } = render(
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={false}
disabled={true}
pluginInfo={null}
t={t}
/>,
rerender(
<div onClick={parentClick}>
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={false}
disabled={true}
pluginInfo={null}
t={t}
/>
</div>,
)
expect(screen.getByText('nodes.agent.modelNotSupport.title')).toBeInTheDocument()
expect(screen.getByText('nodes.agent.linkToPlugin').closest('a')).toHaveAttribute('href', '/plugins')
fireEvent.click(screen.getByText('nodes.agent.modelNotSupport.title'))
fireEvent.click(screen.getByText('nodes.agent.linkToPlugin'))
expect(parentClick).not.toHaveBeenCalled()
const trigger = container.querySelector('[data-state]')
expect(trigger).toBeInTheDocument()
await user.hover(trigger as HTMLElement)
expect(await screen.findByText('nodes.agent.modelNotSupport.title')).toBeInTheDocument()
})
it('should render switch plugin version when pluginInfo exists for disabled unsupported model', () => {
render(
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={false}
disabled={true}
pluginInfo={{ name: 'demo-plugin' }}
t={t}
/>,
rerender(
<div onClick={parentClick}>
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={false}
disabled={true}
pluginInfo={{ name: 'demo-plugin' }}
t={t}
/>
</div>,
)
expect(screen.getByText('SwitchVersion:demo@1.0.0')).toBeInTheDocument()
})
it('should render nothing when needsConfiguration is true even with disabled and modelProvider', () => {
const { container } = render(
<StatusIndicators
needsConfiguration={true}
modelProvider={true}
inModelList={true}
disabled={true}
pluginInfo={null}
t={t}
/>,
)
expect(container).toBeEmptyDOMElement()
})
it('should render SwitchVersion with empty identifier when plugin is not in installed list', () => {
installedPlugins = []
it('should render marketplace warning when provider is unavailable', () => {
render(
<StatusIndicators
needsConfiguration={false}
modelProvider={true}
inModelList={false}
disabled={true}
pluginInfo={{ name: 'missing-plugin' }}
t={t}
/>,
)
expect(screen.getByText('SwitchVersion:')).toBeInTheDocument()
})
it('should render marketplace warning tooltip when provider is unavailable', async () => {
const user = userEvent.setup()
const { container } = render(
<StatusIndicators
needsConfiguration={false}
modelProvider={false}
@@ -133,11 +98,6 @@ describe('StatusIndicators', () => {
t={t}
/>,
)
const trigger = container.querySelector('[data-state]')
expect(trigger).toBeInTheDocument()
await user.hover(trigger as HTMLElement)
expect(await screen.findByText('nodes.agent.modelNotInMarketplace.title')).toBeInTheDocument()
expect(screen.getByText('nodes.agent.modelNotInMarketplace.title')).toBeInTheDocument()
})
})
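One pattern from the rewritten suite above: wrapping the component in a parent div with its own onClick and asserting that handler never fires is a cheap way to verify the inner click handlers stop propagation. A minimal self-contained sketch (InnerLink is a hypothetical component, not the real StatusIndicators markup):

import { fireEvent, render, screen } from '@testing-library/react'
import { vi } from 'vitest'

const InnerLink = () => (
  <a
    href="/plugins"
    onClick={(e) => {
      e.preventDefault() // avoid jsdom navigation warnings in the sketch
      e.stopPropagation() // this is what the parent-click assertion verifies
    }}
  >
    open plugins
  </a>
)

it('does not let link clicks bubble to the parent', () => {
  const parentClick = vi.fn()
  render(
    <div onClick={parentClick}>
      <InnerLink />
    </div>,
  )
  fireEvent.click(screen.getByText('open plugins'))
  expect(parentClick).not.toHaveBeenCalled()
})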

View File

@@ -1,6 +1,5 @@
import type { ComponentProps } from 'react'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import Trigger from './trigger'
vi.mock('../hooks', () => ({
@@ -25,10 +24,6 @@ describe('Trigger', () => {
const currentProvider = { provider: 'openai', label: { en_US: 'OpenAI' } } as unknown as ComponentProps<typeof Trigger>['currentProvider']
const currentModel = { model: 'gpt-4' } as unknown as ComponentProps<typeof Trigger>['currentModel']
beforeEach(() => {
vi.clearAllMocks()
})
it('should render initialized state', () => {
render(
<Trigger
@@ -49,92 +44,4 @@ describe('Trigger', () => {
)
expect(screen.getByText('gpt-4')).toBeInTheDocument()
})
// isInWorkflow=true: workflow border class + RiArrowDownSLine arrow
it('should render workflow styles when isInWorkflow is true', () => {
// Act
const { container } = render(
<Trigger
currentProvider={currentProvider}
currentModel={currentModel}
isInWorkflow
/>,
)
// Assert
expect(container.firstChild).toHaveClass('border-workflow-block-parma-bg')
expect(container.firstChild).toHaveClass('bg-workflow-block-parma-bg')
expect(container.querySelectorAll('svg').length).toBe(2)
})
// disabled=true + hasDeprecated=true: AlertTriangle + deprecated tooltip
it('should show deprecated warning when disabled with hasDeprecated', () => {
// Act
render(
<Trigger
currentProvider={currentProvider}
currentModel={currentModel}
disabled
hasDeprecated
/>,
)
// Assert - AlertTriangle renders with warning color
const warningIcon = document.querySelector('.text-\\[\\#F79009\\]')
expect(warningIcon).toBeInTheDocument()
})
// disabled=true + modelDisabled=true: status text tooltip
it('should show model status tooltip when disabled with modelDisabled', () => {
// Act
render(
<Trigger
currentProvider={currentProvider}
currentModel={{ ...currentModel, status: 'no-configure' } as unknown as typeof currentModel}
disabled
modelDisabled
/>,
)
// Assert - AlertTriangle warning icon should be present
const warningIcon = document.querySelector('.text-\\[\\#F79009\\]')
expect(warningIcon).toBeInTheDocument()
})
it('should render empty tooltip content when disabled without deprecated or modelDisabled', async () => {
const user = userEvent.setup()
const { container } = render(
<Trigger
currentProvider={currentProvider}
currentModel={currentModel}
disabled
hasDeprecated={false}
modelDisabled={false}
/>,
)
const warningIcon = document.querySelector('.text-\\[\\#F79009\\]')
expect(warningIcon).toBeInTheDocument()
const trigger = container.querySelector('[data-state]')
expect(trigger).toBeInTheDocument()
await user.hover(trigger as HTMLElement)
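// The tooltip may not mount at all when it has no content, so only assert emptiness if it is present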
const tooltip = screen.queryByRole('tooltip')
if (tooltip)
expect(tooltip).toBeEmptyDOMElement()
expect(screen.queryByText('modelProvider.deprecated')).not.toBeInTheDocument()
expect(screen.queryByText('No Configure')).not.toBeInTheDocument()
})
// providerName not matching any provider: find() returns undefined
it('should render without crashing when providerName does not match any provider', () => {
// Act
render(
<Trigger
modelId="gpt-4"
providerName="unknown-provider"
/>,
)
// Assert
expect(screen.getByText('gpt-4')).toBeInTheDocument()
})
})

View File

@@ -10,22 +10,4 @@ describe('EmptyTrigger', () => {
render(<EmptyTrigger open={false} />)
expect(screen.getByText('plugin.detailPanel.configureModel')).toBeInTheDocument()
})
// open=true: hover bg class present
it('should apply hover background class when open is true', () => {
// Act
const { container } = render(<EmptyTrigger open={true} />)
// Assert
expect(container.firstChild).toHaveClass('bg-components-input-bg-hover')
})
// className prop truthy: custom className appears on root
it('should apply custom className when provided', () => {
// Act
const { container } = render(<EmptyTrigger open={false} className="custom-class" />)
// Assert
expect(container.firstChild).toHaveClass('custom-class')
})
})

View File

@@ -10,13 +10,12 @@ import PopupItem from './popup-item'
const mockUpdateModelList = vi.hoisted(() => vi.fn())
const mockUpdateModelProviders = vi.hoisted(() => vi.fn())
const mockLanguageRef = vi.hoisted(() => ({ value: 'en_US' }))
vi.mock('../hooks', async () => {
const actual = await vi.importActual<typeof import('../hooks')>('../hooks')
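// Keep the real hook exports and override only the hooks this suite controls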
return {
...actual,
useLanguage: () => mockLanguageRef.value,
useLanguage: () => 'en_US',
useUpdateModelList: () => mockUpdateModelList,
useUpdateModelProviders: () => mockUpdateModelProviders,
}
@@ -70,7 +69,6 @@ const makeModel = (overrides: Partial<Model> = {}): Model => ({
describe('PopupItem', () => {
beforeEach(() => {
vi.clearAllMocks()
mockLanguageRef.value = 'en_US'
mockUseProviderContext.mockReturnValue({
modelProviders: [{ provider: 'openai' }],
})
@@ -146,87 +144,4 @@ describe('PopupItem', () => {
expect(screen.getByText('GPT-4')).toBeInTheDocument()
})
it('should not show check icon when model matches but provider does not', () => {
const defaultModel: DefaultModel = { provider: 'anthropic', model: 'gpt-4' }
render(
<PopupItem
defaultModel={defaultModel}
model={makeModel()}
onSelect={vi.fn()}
/>,
)
const checkIcons = document.querySelectorAll('.h-4.w-4.shrink-0.text-text-accent')
expect(checkIcons.length).toBe(0)
})
it('should not show mode badge when model_properties.mode is absent', () => {
const modelItem = makeModelItem({ model_properties: {} })
render(
<PopupItem
model={makeModel({ models: [modelItem] })}
onSelect={vi.fn()}
/>,
)
expect(screen.queryByText('CHAT')).not.toBeInTheDocument()
})
it('should fall back to en_US label when current locale translation is empty', () => {
mockLanguageRef.value = 'zh_Hans'
const model = makeModel({
label: { en_US: 'English Label', zh_Hans: '' },
})
render(<PopupItem model={model} onSelect={vi.fn()} />)
expect(screen.getByText('English Label')).toBeInTheDocument()
})
it('should not show context_size badge when absent', () => {
const modelItem = makeModelItem({ model_properties: { mode: 'chat' } })
render(
<PopupItem
model={makeModel({ models: [modelItem] })}
onSelect={vi.fn()}
/>,
)
expect(screen.queryByText(/K$/)).not.toBeInTheDocument()
})
it('should not show capabilities section when features are empty', () => {
const modelItem = makeModelItem({ features: [] })
render(
<PopupItem
model={makeModel({ models: [modelItem] })}
onSelect={vi.fn()}
/>,
)
expect(screen.queryByText('common.model.capabilities')).not.toBeInTheDocument()
})
it('should not show capabilities for non-qualifying model types', () => {
const modelItem = makeModelItem({
model_type: ModelTypeEnum.tts,
features: [ModelFeatureEnum.vision],
})
render(
<PopupItem
model={makeModel({ models: [modelItem] })}
onSelect={vi.fn()}
/>,
)
expect(screen.queryByText('common.model.capabilities')).not.toBeInTheDocument()
})
it('should show en_US label when language is fr_FR and fr_FR key is absent', () => {
mockLanguageRef.value = 'fr_FR'
const model = makeModel({ label: { en_US: 'FallbackLabel', zh_Hans: 'FallbackLabel' } })
render(<PopupItem model={model} onSelect={vi.fn()} />)
expect(screen.getByText('FallbackLabel')).toBeInTheDocument()
})
})

View File

@@ -1,6 +1,5 @@
import type { Model, ModelItem } from '../declarations'
import { fireEvent, render, screen } from '@testing-library/react'
import { tooltipManager } from '@/app/components/base/tooltip/TooltipManager'
import {
ConfigurationMethodEnum,
ModelFeatureEnum,
@@ -23,6 +22,21 @@ vi.mock('@/utils/tool-call', () => ({
supportFunctionCall: mockSupportFunctionCall,
}))
const mockCloseActiveTooltip = vi.hoisted(() => vi.fn())
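// vi.hoisted ensures the mock fn exists before the hoisted vi.mock factory below is evaluated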
vi.mock('@/app/components/base/tooltip/TooltipManager', () => ({
tooltipManager: {
closeActiveTooltip: mockCloseActiveTooltip,
register: vi.fn(),
clear: vi.fn(),
},
}))
vi.mock('@/app/components/base/icons/src/vender/solid/general', () => ({
XCircle: ({ onClick }: { onClick?: () => void }) => (
<button type="button" aria-label="clear-search" onClick={onClick} />
),
}))
vi.mock('../hooks', async () => {
const actual = await vi.importActual<typeof import('../hooks')>('../hooks')
return {
@@ -56,13 +70,10 @@ const makeModel = (overrides: Partial<Model> = {}): Model => ({
})
describe('Popup', () => {
let closeActiveTooltipSpy: ReturnType<typeof vi.spyOn>
beforeEach(() => {
vi.clearAllMocks()
mockLanguage = 'en_US'
mockSupportFunctionCall.mockReturnValue(true)
closeActiveTooltipSpy = vi.spyOn(tooltipManager, 'closeActiveTooltip')
})
it('should filter models by search and allow clearing search', () => {
@@ -80,9 +91,8 @@ describe('Popup', () => {
fireEvent.change(input, { target: { value: 'not-found' } })
expect(screen.getByText('No model found for “not-found”')).toBeInTheDocument()
fireEvent.change(input, { target: { value: '' } })
fireEvent.click(screen.getByRole('button', { name: 'clear-search' }))
expect((input as HTMLInputElement).value).toBe('')
expect(screen.getByText('openai')).toBeInTheDocument()
})
it('should filter by scope features including toolCall and non-toolCall checks', () => {
@@ -158,24 +168,6 @@ describe('Popup', () => {
expect(screen.getByText('openai')).toBeInTheDocument()
})
it('should filter out model when features array exists but does not include required scopeFeature', () => {
const modelWithToolCallOnly = makeModel({
models: [makeModelItem({ features: [ModelFeatureEnum.toolCall] })],
})
render(
<Popup
modelList={[modelWithToolCallOnly]}
onSelect={vi.fn()}
onHide={vi.fn()}
scopeFeatures={[ModelFeatureEnum.vision]}
/>,
)
// The model item should be filtered out because it has toolCall but not vision
expect(screen.queryByText('openai')).not.toBeInTheDocument()
})
it('should close tooltip on scroll', () => {
const { container } = render(
<Popup
@@ -186,7 +178,7 @@ describe('Popup', () => {
)
fireEvent.scroll(container.firstElementChild as HTMLElement)
expect(closeActiveTooltipSpy).toHaveBeenCalled()
expect(mockCloseActiveTooltip).toHaveBeenCalled()
})
it('should open provider settings when clicking footer link', () => {
@@ -204,35 +196,4 @@ describe('Popup', () => {
payload: 'provider',
})
})
it('should call onHide when footer settings link is clicked', () => {
const mockOnHide = vi.fn()
render(
<Popup
modelList={[makeModel()]}
onSelect={vi.fn()}
onHide={mockOnHide}
/>,
)
fireEvent.click(screen.getByText('common.model.settingsLink'))
expect(mockOnHide).toHaveBeenCalled()
})
it('should match model label when searchText is non-empty and label key exists for current language', () => {
render(
<Popup
modelList={[makeModel()]}
onSelect={vi.fn()}
onHide={vi.fn()}
/>,
)
// GPT-4 label has en_US key, so modelItem.label[language] is defined
const input = screen.getByPlaceholderText('datasetSettings.form.searchModel')
fireEvent.change(input, { target: { value: 'gpt' } })
expect(screen.getByText('openai')).toBeInTheDocument()
})
})

View File

@@ -1,6 +1,5 @@
import type { ModelProvider } from '../declarations'
import { fireEvent, render, screen, waitFor } from '@testing-library/react'
import { ToastContext } from '@/app/components/base/toast/context'
import { changeModelProviderPriority } from '@/service/common'
import { ConfigurationMethodEnum } from '../declarations'
import CredentialPanel from './credential-panel'
@@ -25,15 +24,11 @@ vi.mock('@/config', async (importOriginal) => {
}
})
vi.mock('@/app/components/base/toast/context', async (importOriginal) => {
const actual = await importOriginal<typeof import('@/app/components/base/toast/context')>()
return {
...actual,
useToastContext: () => ({
notify: mockNotify,
}),
}
})
vi.mock('@/app/components/base/toast/context', () => ({
useToastContext: () => ({
notify: mockNotify,
}),
}))
vi.mock('@/context/event-emitter', () => ({
useEventEmitterContextContext: () => ({
@@ -98,14 +93,8 @@ describe('CredentialPanel', () => {
})
})
const renderCredentialPanel = (provider: ModelProvider) => render(
<ToastContext.Provider value={{ notify: mockNotify, close: vi.fn() }}>
<CredentialPanel provider={provider} />
</ToastContext.Provider>,
)
it('should show credential name and configuration actions', () => {
renderCredentialPanel(mockProvider)
render(<CredentialPanel provider={mockProvider} />)
expect(screen.getByText('test-credential')).toBeInTheDocument()
expect(screen.getByTestId('config-provider')).toBeInTheDocument()
@@ -114,7 +103,7 @@ describe('CredentialPanel', () => {
it('should show unauthorized status label when credential is missing', () => {
mockCredentialStatus.hasCredential = false
renderCredentialPanel(mockProvider)
render(<CredentialPanel provider={mockProvider} />)
expect(screen.getByText(/modelProvider\.auth\.unAuthorized/)).toBeInTheDocument()
})
@@ -122,7 +111,7 @@ describe('CredentialPanel', () => {
it('should show removed credential label and priority tip for custom preference', () => {
mockCredentialStatus.authorized = false
mockCredentialStatus.authRemoved = true
renderCredentialPanel({ ...mockProvider, preferred_provider_type: 'custom' } as ModelProvider)
render(<CredentialPanel provider={{ ...mockProvider, preferred_provider_type: 'custom' } as ModelProvider} />)
expect(screen.getByText(/modelProvider\.auth\.authRemoved/)).toBeInTheDocument()
expect(screen.getByTestId('priority-use-tip')).toBeInTheDocument()
@@ -131,7 +120,7 @@ describe('CredentialPanel', () => {
it('should change priority and refresh related data after success', async () => {
const mockChangePriority = changeModelProviderPriority as ReturnType<typeof vi.fn>
mockChangePriority.mockResolvedValue({ result: 'success' })
renderCredentialPanel(mockProvider)
render(<CredentialPanel provider={mockProvider} />)
fireEvent.click(screen.getByTestId('priority-selector'))
@@ -149,70 +138,8 @@ describe('CredentialPanel', () => {
...mockProvider,
provider_credential_schema: null,
} as unknown as ModelProvider
renderCredentialPanel(providerNoSchema)
render(<CredentialPanel provider={providerNoSchema} />)
expect(screen.getByTestId('priority-selector')).toBeInTheDocument()
expect(screen.queryByTestId('config-provider')).not.toBeInTheDocument()
})
it('should show gray indicator when notAllowedToUse is true', () => {
mockCredentialStatus.notAllowedToUse = true
renderCredentialPanel(mockProvider)
expect(screen.getByTestId('indicator')).toHaveTextContent('gray')
})
it('should not notify or update when priority change returns non-success', async () => {
const mockChangePriority = changeModelProviderPriority as ReturnType<typeof vi.fn>
mockChangePriority.mockResolvedValue({ result: 'error' })
renderCredentialPanel(mockProvider)
fireEvent.click(screen.getByTestId('priority-selector'))
await waitFor(() => {
expect(mockChangePriority).toHaveBeenCalled()
})
expect(mockNotify).not.toHaveBeenCalled()
expect(mockUpdateModelProviders).not.toHaveBeenCalled()
expect(mockEventEmitter.emit).not.toHaveBeenCalled()
})
it('should show empty label when authorized is false and authRemoved is false', () => {
mockCredentialStatus.authorized = false
mockCredentialStatus.authRemoved = false
renderCredentialPanel(mockProvider)
expect(screen.queryByText(/modelProvider\.auth\.unAuthorized/)).not.toBeInTheDocument()
expect(screen.queryByText(/modelProvider\.auth\.authRemoved/)).not.toBeInTheDocument()
})
it('should not show PriorityUseTip when priorityUseType is system', () => {
renderCredentialPanel(mockProvider)
expect(screen.queryByTestId('priority-use-tip')).not.toBeInTheDocument()
})
it('should not iterate configurateMethods for non-predefinedModel methods', async () => {
const mockChangePriority = changeModelProviderPriority as ReturnType<typeof vi.fn>
mockChangePriority.mockResolvedValue({ result: 'success' })
const providerWithCustomMethod = {
...mockProvider,
configurate_methods: [ConfigurationMethodEnum.customizableModel],
} as unknown as ModelProvider
renderCredentialPanel(providerWithCustomMethod)
fireEvent.click(screen.getByTestId('priority-selector'))
await waitFor(() => {
expect(mockChangePriority).toHaveBeenCalled()
expect(mockNotify).toHaveBeenCalled()
})
expect(mockUpdateModelList).not.toHaveBeenCalled()
})
it('should show red indicator when hasCredential is false', () => {
mockCredentialStatus.hasCredential = false
renderCredentialPanel(mockProvider)
expect(screen.getByTestId('indicator')).toHaveTextContent('red')
})
})

View File

@@ -125,48 +125,6 @@ describe('ProviderAddedCard', () => {
expect(await screen.findByTestId('model-list')).toBeInTheDocument()
})
it('should show loading spinner while model list is being fetched', async () => {
let resolvePromise: (value: unknown) => void = () => {}
const pendingPromise = new Promise((resolve) => {
resolvePromise = resolve
})
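// Leave the mocked fetch pending so the spinner is visible for the assertion, then resolve inside act() below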
vi.mocked(fetchModelProviderModelList).mockReturnValue(pendingPromise as ReturnType<typeof fetchModelProviderModelList>)
render(<ProviderAddedCard provider={mockProvider} />)
fireEvent.click(screen.getByTestId('show-models-button'))
expect(document.querySelector('.i-ri-loader-2-line.animate-spin')).toBeInTheDocument()
await act(async () => {
resolvePromise({ data: [] })
})
})
it('should show modelsNum text after models have loaded', async () => {
const models = [
{ model: 'gpt-4' },
{ model: 'gpt-3.5' },
]
vi.mocked(fetchModelProviderModelList).mockResolvedValue({ data: models } as unknown as { data: ModelItem[] })
render(<ProviderAddedCard provider={mockProvider} />)
fireEvent.click(screen.getByTestId('show-models-button'))
await screen.findByTestId('model-list')
const collapseBtn = screen.getByRole('button', { name: 'collapse list' })
fireEvent.click(collapseBtn)
await waitFor(() => expect(screen.queryByTestId('model-list')).not.toBeInTheDocument())
const numTexts = screen.getAllByText(/modelProvider\.modelsNum/)
expect(numTexts.length).toBeGreaterThan(0)
expect(screen.getByText(/modelProvider\.showModelsNum/)).toBeInTheDocument()
})
it('should render configure tip when provider is not in quota list and not configured', () => {
const providerWithoutQuota = {
...mockProvider,
@@ -205,16 +163,6 @@ describe('ProviderAddedCard', () => {
expect(fetchModelProviderModelList).toHaveBeenCalledTimes(1)
})
it('should apply anthropic background class for anthropic provider', () => {
const anthropicProvider = {
...mockProvider,
provider: 'langgenius/anthropic/anthropic',
} as unknown as ModelProvider
const { container } = render(<ProviderAddedCard provider={anthropicProvider} />)
expect(container.querySelector('.bg-third-party-model-bg-anthropic')).toBeInTheDocument()
})
it('should render custom model actions for workspace managers', () => {
const customConfigProvider = {
...mockProvider,
@@ -229,36 +177,4 @@ describe('ProviderAddedCard', () => {
rerender(<ProviderAddedCard provider={customConfigProvider} />)
expect(screen.queryByTestId('manage-custom-model')).not.toBeInTheDocument()
})
it('should render credential panel when showCredential is true', () => {
// Arrange: use ConfigurationMethodEnum.predefinedModel ('predefined-model') so showCredential=true
const predefinedProvider = {
...mockProvider,
configurate_methods: [ConfigurationMethodEnum.predefinedModel],
} as unknown as ModelProvider
mockIsCurrentWorkspaceManager = true
// Act
render(<ProviderAddedCard provider={predefinedProvider} />)
// Assert: credential-panel is rendered (showCredential = true branch)
expect(screen.getByTestId('credential-panel')).toBeInTheDocument()
})
it('should not render credential panel when user is not workspace manager', () => {
// Arrange: predefined-model but manager=false so showCredential=false
const predefinedProvider = {
...mockProvider,
configurate_methods: [ConfigurationMethodEnum.predefinedModel],
} as unknown as ModelProvider
mockIsCurrentWorkspaceManager = false
// Act
render(<ProviderAddedCard provider={predefinedProvider} />)
// Assert: credential-panel is not rendered (showCredential = false)
expect(screen.queryByTestId('credential-panel')).not.toBeInTheDocument()
})
})

View File

@@ -5,7 +5,6 @@ import { ModelStatusEnum } from '../declarations'
import ModelListItem from './model-list-item'
let mockModelLoadBalancingEnabled = false
let mockPlanType: string = 'pro'
vi.mock('@/context/app-context', () => ({
useAppContext: () => ({
@@ -15,7 +14,7 @@ vi.mock('@/context/app-context', () => ({
vi.mock('@/context/provider-context', () => ({
useProviderContext: () => ({
plan: { type: mockPlanType },
plan: { type: 'pro' },
}),
useProviderContextSelector: () => mockModelLoadBalancingEnabled,
}))
@@ -61,7 +60,6 @@ describe('ModelListItem', () => {
beforeEach(() => {
vi.clearAllMocks()
mockModelLoadBalancingEnabled = false
mockPlanType = 'pro'
})
it('should render model item with icon and name', () => {
@@ -129,127 +127,4 @@ describe('ModelListItem', () => {
fireEvent.click(screen.getByRole('button', { name: 'modify load balancing' }))
expect(onModifyLoadBalancing).toHaveBeenCalledWith(mockModel)
})
// Deprecated branches: opacity-60, disabled switch, no ConfigModel
it('should show deprecated model with opacity and disabled switch', () => {
// Arrange
const deprecatedModel = { ...mockModel, deprecated: true } as unknown as ModelItem
mockModelLoadBalancingEnabled = true
// Act
const { container } = render(
<ModelListItem
model={deprecatedModel}
provider={mockProvider}
isConfigurable={false}
/>,
)
// Assert
expect(container.querySelector('.opacity-60')).toBeInTheDocument()
expect(screen.queryByRole('button', { name: 'modify load balancing' })).not.toBeInTheDocument()
})
// Load balancing badge: visible when all 4 conditions met
it('should show load balancing badge when all conditions are met', () => {
// Arrange
mockModelLoadBalancingEnabled = true
const lbModel = {
...mockModel,
load_balancing_enabled: true,
has_invalid_load_balancing_configs: false,
deprecated: false,
} as unknown as ModelItem
// Act
render(
<ModelListItem
model={lbModel}
provider={mockProvider}
isConfigurable={false}
/>,
)
// Assert - Badge component should render
const badge = document.querySelector('.border-text-accent-secondary')
expect(badge).toBeInTheDocument()
})
// Plan.sandbox: ConfigModel shown without load balancing enabled
it('should show ConfigModel for sandbox plan even without load balancing enabled', () => {
// Arrange - set plan type to sandbox and keep load balancing disabled
mockModelLoadBalancingEnabled = false
mockPlanType = 'sandbox'
// Act
render(
<ModelListItem
model={mockModel}
provider={mockProvider}
isConfigurable={false}
/>,
)
// Assert - ConfigModel should show because plan.type === 'sandbox'
expect(screen.getByRole('button', { name: 'modify load balancing' })).toBeInTheDocument()
})
// Negative proof: non-sandbox plan without load balancing should NOT show ConfigModel
it('should hide ConfigModel for non-sandbox plan without load balancing enabled', () => {
// Arrange - set plan type to non-sandbox and keep load balancing disabled
mockModelLoadBalancingEnabled = false
mockPlanType = 'pro'
// Act
render(
<ModelListItem
model={mockModel}
provider={mockProvider}
isConfigurable={false}
/>,
)
// Assert - ConfigModel should NOT show because plan.type !== 'sandbox' and load balancing is disabled
expect(screen.queryByRole('button', { name: 'modify load balancing' })).not.toBeInTheDocument()
})
// model.status=credentialRemoved: switch disabled, no ConfigModel
it('should disable switch and hide ConfigModel when status is credentialRemoved', () => {
// Arrange
const removedModel = { ...mockModel, status: ModelStatusEnum.credentialRemoved } as unknown as ModelItem
mockModelLoadBalancingEnabled = true
// Act
render(
<ModelListItem
model={removedModel}
provider={mockProvider}
isConfigurable={false}
/>,
)
// Assert - ConfigModel should not render because status is not active/disabled
expect(screen.queryByRole('button', { name: 'modify load balancing' })).not.toBeInTheDocument()
const statusSwitch = screen.getByRole('switch')
expect(statusSwitch).toHaveClass('!cursor-not-allowed')
fireEvent.click(statusSwitch)
expect(statusSwitch).toHaveAttribute('aria-checked', 'false')
expect(enableModel).not.toHaveBeenCalled()
expect(disableModel).not.toHaveBeenCalled()
})
// isConfigurable=true: hover class on row
it('should apply hover class when isConfigurable is true', () => {
// Act
const { container } = render(
<ModelListItem
model={mockModel}
provider={mockProvider}
isConfigurable={true}
/>,
)
// Assert
expect(container.querySelector('.hover\\:bg-components-panel-on-panel-item-bg-hover')).toBeInTheDocument()
})
})

Some files were not shown because too many files have changed in this diff.