Compare commits

..

10 Commits

Author SHA1 Message Date
Frederick2313072
594906c1ff fix: MD5 and 8‑hex Suffix Collision Risk 2025-09-24 17:01:23 +08:00
Frederick2313072
80f8245f2e fix(api): sync api/uv.lock with main to resolve binary diff 2025-09-24 12:00:50 +08:00
Frederick2313072
a12b437c16 fix(api): sync api/uv.lock with main to resolve binary diff 2025-09-24 11:58:07 +08:00
Frederick2313072
12de554313 fix: add index initialization checks, improve batch vector operations and search, ensure robust exception handling. 2025-09-23 16:41:46 +08:00
Frederick2313072
1f36c0c1c5 sync docker compose files with main branch 2025-09-23 00:12:54 +08:00
Frederick2313072
8b9297563c fix 2025-09-23 00:03:31 +08:00
Frederick2313072
1cbe9eedb6 fix(pinecone): normalize index names and sanitize metadata to meet API constraints 2025-09-20 02:56:53 +08:00
Frederick2313072
90fc5a1f12 pipecone 2025-09-16 08:57:46 +08:00
Frederick2313072
41dfdf1ac0 fix:score threshold 2025-09-01 16:34:17 +08:00
Frederick2313072
dd7de74aa6 fix: top-k hard-coded fallback issue 2025-09-01 14:27:43 +08:00
10811 changed files with 334818 additions and 1389978 deletions

View File

@@ -1,168 +0,0 @@
---
name: backend-code-review
description: Review backend code for quality, security, maintainability, and best practices based on established checklist rules. Use when the user requests a review, analysis, or improvement of backend files (e.g., `.py`) under the `api/` directory. Do NOT use for frontend files (e.g., `.tsx`, `.ts`, `.js`). Supports pending-change review, code snippets review, and file-focused review.
---
# Backend Code Review
## When to use this skill
Use this skill whenever the user asks to **review, analyze, or improve** backend code (e.g., `.py`) under the `api/` directory. Supports the following review modes:
- **Pending-change review**: when the user asks to review current changes (inspect staged/working-tree files slated for commit to get the changes).
- **Code snippets review**: when the user pastes code snippets (e.g., a function/class/module excerpt) into the chat and asks for a review.
- **File-focused review**: when the user points to specific files and asks for a review of those files (one file or a small, explicit set of files, e.g., `api/...`, `api/app.py`).
Do NOT use this skill when:
- The request is about frontend code or UI (e.g., `.tsx`, `.ts`, `.js`, `web/`).
- The user is not asking for a review/analysis/improvement of backend code.
- The scope is not under `api/` (unless the user explicitly asks to review backend-related changes outside `api/`).
## How to use this skill
Follow these steps when using this skill:
1. **Identify the review mode** (pending-change vs snippet vs file-focused) based on the user’s input. Keep the scope tight: review only what the user provided or explicitly referenced.
2. Follow the rules defined in **Checklist** to perform the review. If no Checklist rule matches, apply **General Review Rules** as a fallback to perform a best-effort review.
3. Compose the final output strictly following the **Required Output Format**.
Notes when using this skill:
- Always include actionable fixes or suggestions (including possible code snippets).
- Use best-effort `File:Line` references when a file path and line numbers are available; otherwise, use the most specific identifier you can.
## Checklist
- db schema design: if the review scope includes code/files under `api/models/` or `api/migrations/`, follow [references/db-schema-rule.md](references/db-schema-rule.md) to perform the review
- architecture: if the review scope involves controller/service/core-domain/libs/model layering, dependency direction, or moving responsibilities across modules, follow [references/architecture-rule.md](references/architecture-rule.md) to perform the review
- repositories abstraction: if the review scope contains table/model operations (e.g., `select(...)`, `session.execute(...)`, joins, CRUD) and is not under `api/repositories`, `api/core/repositories`, or `api/extensions/*/repositories/`, follow [references/repositories-rule.md](references/repositories-rule.md) to perform the review
- sqlalchemy patterns: if the review scope involves SQLAlchemy session/query usage, db transaction/crud usage, or raw SQL usage, follow [references/sqlalchemy-rule.md](references/sqlalchemy-rule.md) to perform the review
## General Review Rules
### 1. Security Review
Check for:
- SQL injection vulnerabilities
- Server-Side Request Forgery (SSRF)
- Command injection
- Insecure deserialization
- Hardcoded secrets/credentials
- Improper authentication/authorization
- Insecure direct object references
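The SQL-injection item is usually the first thing to check. A minimal sketch of the vulnerable and safe patterns, using the stdlib `sqlite3` module for brevity (the same bound-parameter principle applies to SQLAlchemy `text()` calls):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so the injected predicate matches every row.
vulnerable = conn.execute(
    f"SELECT id FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL,
# so the malicious string matches no row.
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
```

In a review, any query built with f-strings, `%` formatting, or `+` concatenation of user input is a critical finding regardless of the driver used.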
### 2. Performance Review
Check for:
- N+1 queries
- Missing database indexes
- Memory leaks
- Blocking operations in async code
- Missing caching opportunities
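N+1 queries are the most common performance finding on this list. A self-contained sketch of the anti-pattern and the batched alternative (plain `sqlite3` for illustration; with SQLAlchemy the fix is typically a join or `selectinload`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE apps (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO apps VALUES (?, ?)", [(i, f"app-{i}") for i in range(100)])
app_ids = list(range(100))

# N+1: one round trip per id — 100 queries for 100 rows.
names_n_plus_1 = [
    conn.execute("SELECT name FROM apps WHERE id = ?", (i,)).fetchone()[0]
    for i in app_ids
]

# Batched: a single IN (...) query fetches all rows in one round trip.
placeholders = ",".join("?" * len(app_ids))
rows = conn.execute(
    f"SELECT id, name FROM apps WHERE id IN ({placeholders}) ORDER BY id",
    app_ids,
).fetchall()
names_batched = [name for _, name in rows]
```

The tell-tale shape in review is a query executed inside a loop over a collection that was itself just fetched.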
### 3. Code Quality Review
Check for:
- Code forward compatibility
- Code duplication (DRY violations)
- Functions doing too much (SRP violations)
- Deep nesting / complex conditionals
- Magic numbers/strings
- Poor naming
- Missing error handling
- Incomplete type coverage
### 4. Testing Review
Check for:
- Missing test coverage for new code
- Tests that don't test behavior
- Flaky test patterns
- Missing edge cases
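For the "tests that don't test behavior" and missing-edge-case items, behavioral tests pin observable input/output pairs rather than implementation details. A sketch against a hypothetical `slugify` helper:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical helper under test: lowercase, collapse non-alphanumerics to '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Behavioral assertions: what callers observe, including edge cases
# (input that sanitizes to nothing, non-ASCII letters), not how it works.
assert slugify("Hello World") == "hello-world"
assert slugify("  --  ") == ""         # edge case: nothing survives sanitization
assert slugify("Déjà vu") == "d-j-vu"  # edge case: non-ASCII letters are stripped
```

A test that instead asserted "the function calls `re.sub`" would pass or fail with refactoring noise while saying nothing about correctness.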
## Required Output Format
When this skill is invoked, the response must exactly follow one of the two templates:
### Template A (any findings)
```markdown
# Code Review Summary
Found <X> critical issues that need to be fixed:
## 🔴 Critical (Must Fix)
### 1. <brief description of the issue>
FilePath: <path> line <line>
<relevant code snippet or pointer>
#### Explanation
<detailed explanation and references of the issue>
#### Suggested Fix
1. <brief description of suggested fix>
2. <code example> (optional, omit if not applicable)
---
... (repeat for each critical issue) ...
Found <Y> suggestions for improvement:
## 🟡 Suggestions (Should Consider)
### 1. <brief description of the suggestion>
FilePath: <path> line <line>
<relevant code snippet or pointer>
#### Explanation
<detailed explanation and references of the suggestion>
#### Suggested Fix
1. <brief description of suggested fix>
2. <code example> (optional, omit if not applicable)
---
... (repeat for each suggestion) ...
Found <Z> optional nits:
## 🟢 Nits (Optional)
### 1. <brief description of the nit>
FilePath: <path> line <line>
<relevant code snippet or pointer>
#### Explanation
<explanation and references of the optional nit>
#### Suggested Fix
- <minor suggestions>
---
... (repeat for each nit) ...
## ✅ What's Good
- <Positive feedback on good patterns>
```
- If there are no critical issues, suggestions, optional nits, or good points, just omit that section.
- If there are more than 10 issues, summarize as "Found 10+ critical issues/suggestions/optional nits" and only output the first 10 items.
- Don't compress the blank lines between sections; keep them as-is for readability.
- If any issue requires code changes, append a brief follow-up question after the structured output asking whether the user wants to apply the fix(es). For example: "Would you like me to use the Suggested fix(es) to address these issues?"
### Template B (no issues)
```markdown
## Code Review Summary
✅ No issues found.
```

View File

@@ -1,91 +0,0 @@
# Rule Catalog — Architecture
## Scope
- Covers: controller/service/core-domain/libs/model layering, dependency direction, responsibility placement, observability-friendly flow.
## Rules
### Keep business logic out of controllers
- Category: maintainability
- Severity: critical
- Description: Controllers should parse input, call services, and return serialized responses. Business decisions inside controllers make behavior hard to reuse and test.
- Suggested fix: Move domain/business logic into the service or core/domain layer. Keep controller handlers thin and orchestration-focused.
- Example:
- Bad:
```python
@bp.post("/apps/<app_id>/publish")
def publish_app(app_id: str):
payload = request.get_json() or {}
if payload.get("force") and current_user.role != "admin":
raise ValueError("only admin can force publish")
app = App.query.get(app_id)
app.status = "published"
db.session.commit()
return {"result": "ok"}
```
- Good:
```python
@bp.post("/apps/<app_id>/publish")
def publish_app(app_id: str):
payload = PublishRequest.model_validate(request.get_json() or {})
app_service.publish_app(app_id=app_id, force=payload.force, actor_id=current_user.id)
return {"result": "ok"}
```
### Preserve layer dependency direction
- Category: best practices
- Severity: critical
- Description: Controllers may depend on services, and services may depend on core/domain abstractions. Reversing this direction (for example, core importing controller/web modules) creates cycles and leaks transport concerns into domain code.
- Suggested fix: Extract shared contracts into core/domain or service-level modules and make upper layers depend on lower, not the reverse.
- Example:
- Bad:
```python
# core/policy/publish_policy.py
from controllers.console.app import request_context
def can_publish() -> bool:
return request_context.current_user.is_admin
```
- Good:
```python
# core/policy/publish_policy.py
def can_publish(role: str) -> bool:
return role == "admin"
# service layer adapts web/user context to domain input
allowed = can_publish(role=current_user.role)
```
### Keep libs business-agnostic
- Category: maintainability
- Severity: critical
- Description: Modules under `api/libs/` should remain reusable, business-agnostic building blocks. They must not encode product/domain-specific rules, workflow orchestration, or business decisions.
- Suggested fix:
- If business logic appears in `api/libs/`, extract it into the appropriate `services/` or `core/` module and keep `libs` focused on generic, cross-cutting helpers.
- Keep `libs` dependencies clean: avoid importing service/controller/domain-specific modules into `api/libs/`.
- Example:
- Bad:
```python
# api/libs/conversation_filter.py
from services.conversation_service import ConversationService
def should_archive_conversation(conversation, tenant_id: str) -> bool:
# Domain policy and service dependency are leaking into libs.
service = ConversationService()
if service.has_paid_plan(tenant_id):
return conversation.idle_days > 90
return conversation.idle_days > 30
```
- Good:
```python
# api/libs/datetime_utils.py (business-agnostic helper)
def older_than_days(idle_days: int, threshold_days: int) -> bool:
return idle_days > threshold_days
# services/conversation_service.py (business logic stays in service/core)
from libs.datetime_utils import older_than_days
def should_archive_conversation(conversation, tenant_id: str) -> bool:
threshold_days = 90 if has_paid_plan(tenant_id) else 30
return older_than_days(conversation.idle_days, threshold_days)
```

View File

@@ -1,157 +0,0 @@
# Rule Catalog — DB Schema Design
## Scope
- Covers: model/base inheritance, schema boundaries in model properties, tenant-aware schema design, index redundancy checks, dialect portability in models, and cross-database compatibility in migrations.
- Does NOT cover: session lifecycle, transaction boundaries, and query execution patterns (handled by `sqlalchemy-rule.md`).
## Rules
### Do not query other tables inside `@property`
- Category: [maintainability, performance]
- Severity: critical
- Description: A model `@property` must not open sessions or query other tables. This hides dependencies across models, tightly couples schema objects to data access, and can cause N+1 query explosions when iterating collections.
- Suggested fix:
- Keep model properties pure and local to already-loaded fields.
- Move cross-table data fetching to service/repository methods.
- For list/batch reads, fetch required related data explicitly (join/preload/bulk query) before rendering derived values.
- Example:
- Bad:
```python
class Conversation(TypeBase):
__tablename__ = "conversations"
@property
def app_name(self) -> str:
with Session(db.engine, expire_on_commit=False) as session:
app = session.execute(select(App).where(App.id == self.app_id)).scalar_one()
return app.name
```
- Good:
```python
class Conversation(TypeBase):
__tablename__ = "conversations"
@property
def display_title(self) -> str:
return self.name or "Untitled"
# Service/repository layer performs explicit batch fetch for related App rows.
```
### Prefer including `tenant_id` in model definitions
- Category: maintainability
- Severity: suggestion
- Description: In multi-tenant domains, include `tenant_id` in schema definitions whenever the entity belongs to tenant-owned data. This improves data isolation safety and keeps future partitioning/sharding strategies practical as data volume grows.
- Suggested fix:
- Add a `tenant_id` column and ensure related unique/index constraints include tenant dimension when applicable.
- Propagate `tenant_id` through service/repository contracts to keep access paths tenant-aware.
- Exception: if a table is explicitly designed as non-tenant-scoped global metadata, document that design decision clearly.
- Example:
- Bad:
```python
from sqlalchemy.orm import Mapped
class Dataset(TypeBase):
__tablename__ = "datasets"
id: Mapped[str] = mapped_column(StringUUID, primary_key=True)
name: Mapped[str] = mapped_column(sa.String(255), nullable=False)
```
- Good:
```python
from sqlalchemy.orm import Mapped
class Dataset(TypeBase):
__tablename__ = "datasets"
id: Mapped[str] = mapped_column(StringUUID, primary_key=True)
tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False, index=True)
name: Mapped[str] = mapped_column(sa.String(255), nullable=False)
```
### Detect and avoid duplicate/redundant indexes
- Category: performance
- Severity: suggestion
- Description: Review index definitions for leftmost-prefix redundancy. For example, index `(a, b, c)` can safely cover most lookups for `(a, b)`. Keeping both may increase write overhead and can mislead the optimizer into suboptimal execution plans.
- Suggested fix:
- Before adding an index, compare against existing composite indexes by leftmost-prefix rules.
- Drop or avoid creating redundant prefixes unless there is a proven query-pattern need.
- Apply the same review standard in both model `__table_args__` and migration index DDL.
- Example:
- Bad:
```python
__table_args__ = (
sa.Index("idx_msg_tenant_app", "tenant_id", "app_id"),
sa.Index("idx_msg_tenant_app_created", "tenant_id", "app_id", "created_at"),
)
```
- Good:
```python
__table_args__ = (
# Keep the wider index unless profiling proves a dedicated short index is needed.
sa.Index("idx_msg_tenant_app_created", "tenant_id", "app_id", "created_at"),
)
```
### Avoid PostgreSQL-only dialect usage in models; wrap in `models.types`
- Category: maintainability
- Severity: critical
- Description: Model/schema definitions should avoid PostgreSQL-only constructs directly in business models. When database-specific behavior is required, encapsulate it in `api/models/types.py` using both PostgreSQL and MySQL dialect implementations, then consume that abstraction from model code.
- Suggested fix:
- Do not directly place dialect-only types/operators in model columns when a portable wrapper can be used.
- Add or extend wrappers in `models.types` (for example, `AdjustedJSON`, `LongText`, `BinaryData`) to normalize behavior across PostgreSQL and MySQL.
- Example:
- Bad:
```python
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import Mapped
class ToolConfig(TypeBase):
__tablename__ = "tool_configs"
config: Mapped[dict] = mapped_column(JSONB, nullable=False)
```
- Good:
```python
from sqlalchemy.orm import Mapped
from models.types import AdjustedJSON
class ToolConfig(TypeBase):
__tablename__ = "tool_configs"
config: Mapped[dict] = mapped_column(AdjustedJSON(), nullable=False)
```
### Guard migration incompatibilities with dialect checks and shared types
- Category: maintainability
- Severity: critical
- Description: Migration scripts under `api/migrations/versions/` must account for PostgreSQL/MySQL incompatibilities explicitly. For dialect-sensitive DDL or defaults, branch on the active dialect (for example, `conn.dialect.name == "postgresql"`), and prefer reusable compatibility abstractions from `models.types` where applicable.
- Suggested fix:
- In migration upgrades/downgrades, bind connection and branch by dialect for incompatible SQL fragments.
- Reuse `models.types` wrappers in column definitions when that keeps behavior aligned with runtime models.
- Avoid one-dialect-only migration logic unless there is a documented, deliberate compatibility exception.
- Example:
- Bad:
```python
with op.batch_alter_table("dataset_keyword_tables") as batch_op:
batch_op.add_column(
sa.Column(
"data_source_type",
sa.String(255),
server_default=sa.text("'database'::character varying"),
nullable=False,
)
)
```
- Good:
```python
def _is_pg(conn) -> bool:
return conn.dialect.name == "postgresql"
conn = op.get_bind()
default_expr = sa.text("'database'::character varying") if _is_pg(conn) else sa.text("'database'")
with op.batch_alter_table("dataset_keyword_tables") as batch_op:
batch_op.add_column(
sa.Column("data_source_type", sa.String(255), server_default=default_expr, nullable=False)
)
```

View File

@@ -1,61 +0,0 @@
# Rule Catalog — Repositories Abstraction
## Scope
- Covers: when to reuse existing repository abstractions, when to introduce new repositories, and how to preserve dependency direction between service/core and infrastructure implementations.
- Does NOT cover: SQLAlchemy session lifecycle and query-shape specifics (handled by `sqlalchemy-rule.md`), and table schema/migration design (handled by `db-schema-rule.md`).
## Rules
### Introduce repositories abstraction
- Category: maintainability
- Severity: suggestion
- Description: If a table/model already has a repository abstraction, all reads/writes/queries for that table should use the existing repository. If no repository exists, introduce one only when complexity justifies it, such as large/high-volume tables, repeated complex query logic, or likely storage-strategy variation.
- Suggested fix:
- First check `api/repositories`, `api/core/repositories`, and `api/extensions/*/repositories/` to verify whether the table/model already has a repository abstraction. If it exists, route all operations through it and add missing repository methods instead of bypassing it with ad-hoc SQLAlchemy access.
- If no repository exists, add one only when complexity warrants it (for example, repeated complex queries, large data domains, or multiple storage strategies), while preserving dependency direction (service/core depends on abstraction; infra provides implementation).
- Example:
- Bad:
```python
# Existing repository is ignored and service uses ad-hoc table queries.
class AppService:
def archive_app(self, app_id: str, tenant_id: str) -> None:
app = self.session.execute(
select(App).where(App.id == app_id, App.tenant_id == tenant_id)
).scalar_one()
app.archived = True
self.session.commit()
```
- Good:
```python
# Case A: Existing repository must be reused for all table operations.
class AppService:
def archive_app(self, app_id: str, tenant_id: str) -> None:
app = self.app_repo.get_by_id(app_id=app_id, tenant_id=tenant_id)
app.archived = True
self.app_repo.save(app)
# If the query is missing, extend the existing abstraction.
active_apps = self.app_repo.list_active_for_tenant(tenant_id=tenant_id)
```
- Bad:
```python
# No repository exists, but large-domain query logic is scattered in service code.
class ConversationService:
def list_recent_for_app(self, app_id: str, tenant_id: str, limit: int) -> list[Conversation]:
...
# many filters/joins/pagination variants duplicated across services
```
- Good:
```python
# Case B: Introduce repository for large/complex domains or storage variation.
class ConversationRepository(Protocol):
def list_recent_for_app(self, app_id: str, tenant_id: str, limit: int) -> list[Conversation]: ...
class SqlAlchemyConversationRepository:
def list_recent_for_app(self, app_id: str, tenant_id: str, limit: int) -> list[Conversation]:
...
class ConversationService:
def __init__(self, conversation_repo: ConversationRepository):
self.conversation_repo = conversation_repo
```

View File

@@ -1,139 +0,0 @@
# Rule Catalog — SQLAlchemy Patterns
## Scope
- Covers: SQLAlchemy session and transaction lifecycle, query construction, tenant scoping, raw SQL boundaries, and write-path concurrency safeguards.
- Does NOT cover: table/model schema and migration design details (handled by `db-schema-rule.md`).
## Rules
### Use Session context manager with explicit transaction control behavior
- Category: best practices
- Severity: critical
- Description: Session and transaction lifecycle must be explicit and bounded on write paths. Missing commits can silently drop intended updates, while ad-hoc or long-lived transactions increase contention, lock duration, and deadlock risk.
- Suggested fix:
- Use **explicit `session.commit()`** after completing a related write unit.
- Or use **`session.begin()` context manager** for automatic commit/rollback on a scoped block.
- Keep transaction windows short: avoid network I/O, heavy computation, or unrelated work inside the transaction.
- Example:
- Bad:
```python
# Missing commit: write may never be persisted.
with Session(db.engine, expire_on_commit=False) as session:
run = session.get(WorkflowRun, run_id)
run.status = "cancelled"
# Long transaction: external I/O inside a DB transaction.
with Session(db.engine, expire_on_commit=False) as session, session.begin():
run = session.get(WorkflowRun, run_id)
run.status = "cancelled"
call_external_api()
```
- Good:
```python
# Option 1: explicit commit.
with Session(db.engine, expire_on_commit=False) as session:
run = session.get(WorkflowRun, run_id)
run.status = "cancelled"
session.commit()
# Option 2: scoped transaction with automatic commit/rollback.
with Session(db.engine, expire_on_commit=False) as session, session.begin():
run = session.get(WorkflowRun, run_id)
run.status = "cancelled"
# Keep non-DB work outside transaction scope.
call_external_api()
```
### Enforce tenant_id scoping on shared-resource queries
- Category: security
- Severity: critical
- Description: Reads and writes against shared tables must be scoped by `tenant_id` to prevent cross-tenant data leakage or corruption.
- Suggested fix: Add `tenant_id` predicate to all tenant-owned entity queries and propagate tenant context through service/repository interfaces.
- Example:
- Bad:
```python
stmt = select(Workflow).where(Workflow.id == workflow_id)
workflow = session.execute(stmt).scalar_one_or_none()
```
- Good:
```python
stmt = select(Workflow).where(
Workflow.id == workflow_id,
Workflow.tenant_id == tenant_id,
)
workflow = session.execute(stmt).scalar_one_or_none()
```
### Prefer SQLAlchemy expressions over raw SQL by default
- Category: maintainability
- Severity: suggestion
- Description: Raw SQL should be exceptional. ORM/Core expressions are easier to evolve, safer to compose, and more consistent with the codebase.
- Suggested fix: Rewrite straightforward raw SQL into SQLAlchemy `select/update/delete` expressions; keep raw SQL only when required by clear technical constraints.
- Example:
- Bad:
```python
row = session.execute(
text("SELECT * FROM workflows WHERE id = :id AND tenant_id = :tenant_id"),
{"id": workflow_id, "tenant_id": tenant_id},
).first()
```
- Good:
```python
stmt = select(Workflow).where(
Workflow.id == workflow_id,
Workflow.tenant_id == tenant_id,
)
row = session.execute(stmt).scalar_one_or_none()
```
### Protect write paths with concurrency safeguards
- Category: quality
- Severity: critical
- Description: Multi-writer paths without explicit concurrency control can silently overwrite data. Choose the safeguard based on contention level, lock scope, and throughput cost instead of defaulting to one strategy.
- Suggested fix:
- **Optimistic locking**: Use when contention is usually low and retries are acceptable. Add a version (or updated_at) guard in `WHERE` and treat `rowcount == 0` as a conflict.
- **Redis distributed lock**: Use when the critical section spans multiple steps/processes (or includes non-DB side effects) and you need cross-worker mutual exclusion.
- **SELECT ... FOR UPDATE**: Use when contention is high on the same rows and strict in-transaction serialization is required. Keep transactions short to reduce lock wait/deadlock risk.
- In all cases, scope by `tenant_id` and verify affected row counts for conditional writes.
- Example:
- Bad:
```python
# No tenant scope, no conflict detection, and no lock on a contested write path.
session.execute(update(WorkflowRun).where(WorkflowRun.id == run_id).values(status="cancelled"))
session.commit() # silently overwrites concurrent updates
```
- Good:
```python
# 1) Optimistic lock (low contention, retry on conflict)
result = session.execute(
update(WorkflowRun)
.where(
WorkflowRun.id == run_id,
WorkflowRun.tenant_id == tenant_id,
WorkflowRun.version == expected_version,
)
.values(status="cancelled", version=WorkflowRun.version + 1)
)
if result.rowcount == 0:
raise WorkflowStateConflictError("stale version, retry")
# 2) Redis distributed lock (cross-worker critical section)
lock_name = f"workflow_run_lock:{tenant_id}:{run_id}"
with redis_client.lock(lock_name, timeout=20):
session.execute(
update(WorkflowRun)
.where(WorkflowRun.id == run_id, WorkflowRun.tenant_id == tenant_id)
.values(status="cancelled")
)
session.commit()
# 3) Pessimistic lock with SELECT ... FOR UPDATE (high contention)
run = session.execute(
select(WorkflowRun)
.where(WorkflowRun.id == run_id, WorkflowRun.tenant_id == tenant_id)
.with_for_update()
).scalar_one()
run.status = "cancelled"
session.commit()
```

View File

@@ -1,442 +0,0 @@
---
name: component-refactoring
description: Refactor high-complexity React components in Dify frontend. Use when `pnpm analyze-component --json` shows complexity > 50 or lineCount > 300, when the user asks for code splitting, hook extraction, or complexity reduction, or when `pnpm analyze-component` warns to refactor before testing; avoid for simple/well-structured components, third-party wrappers, or when the user explicitly wants testing without refactoring.
---
# Dify Component Refactoring Skill
Refactor high-complexity React components in the Dify frontend codebase with the patterns and workflow below.
> **Complexity Threshold**: Components with complexity > 50 (measured by `pnpm analyze-component`) should be refactored before testing.
## Quick Reference
### Commands (run from `web/`)
Use paths relative to `web/` (e.g., `app/components/...`).
Use `refactor-component` for refactoring prompts and `analyze-component` for testing prompts and metrics.
```bash
cd web
# Generate refactoring prompt
pnpm refactor-component <path>
# Output refactoring analysis as JSON
pnpm refactor-component <path> --json
# Generate testing prompt (after refactoring)
pnpm analyze-component <path>
# Output testing analysis as JSON
pnpm analyze-component <path> --json
```
### Complexity Analysis
```bash
# Analyze component complexity
pnpm analyze-component <path> --json
# Key metrics to check:
# - complexity: normalized score 0-100 (target < 50)
# - maxComplexity: highest single function complexity
# - lineCount: total lines (target < 300)
```
### Complexity Score Interpretation
| Score | Level | Action |
|-------|-------|--------|
| 0-25 | 🟢 Simple | Ready for testing |
| 26-50 | 🟡 Medium | Consider minor refactoring |
| 51-75 | 🟠 Complex | **Refactor before testing** |
| 76-100 | 🔴 Very Complex | **Must refactor** |
## Core Refactoring Patterns
### Pattern 1: Extract Custom Hooks
**When**: Component has complex state management, multiple `useState`/`useEffect`, or business logic mixed with UI.
**Dify Convention**: Place hooks in a `hooks/` subdirectory or alongside the component as `use-<feature>.ts`.
```typescript
// ❌ Before: Complex state logic in component
const Configuration: FC = () => {
const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
const [datasetConfigs, setDatasetConfigs] = useState<DatasetConfigs>(...)
const [completionParams, setCompletionParams] = useState<FormValue>({})
// 50+ lines of state management logic...
return <div>...</div>
}
// ✅ After: Extract to custom hook
// hooks/use-model-config.ts
export const useModelConfig = (appId: string) => {
const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
const [completionParams, setCompletionParams] = useState<FormValue>({})
// Related state management logic here
return { modelConfig, setModelConfig, completionParams, setCompletionParams }
}
// Component becomes cleaner
const Configuration: FC = () => {
const { modelConfig, setModelConfig } = useModelConfig(appId)
return <div>...</div>
}
```
**Dify Examples**:
- `web/app/components/app/configuration/hooks/use-advanced-prompt-config.ts`
- `web/app/components/app/configuration/debug/hooks.tsx`
- `web/app/components/workflow/hooks/use-workflow.ts`
### Pattern 2: Extract Sub-Components
**When**: Single component has multiple UI sections, conditional rendering blocks, or repeated patterns.
**Dify Convention**: Place sub-components in subdirectories or as separate files in the same directory.
```typescript
// ❌ Before: Monolithic JSX with multiple sections
const AppInfo = () => {
return (
<div>
{/* 100 lines of header UI */}
{/* 100 lines of operations UI */}
{/* 100 lines of modals */}
</div>
)
}
// ✅ After: Split into focused components
// app-info/
// ├── index.tsx (orchestration only)
// ├── app-header.tsx (header UI)
// ├── app-operations.tsx (operations UI)
// └── app-modals.tsx (modal management)
const AppInfo = () => {
const { showModal, setShowModal } = useAppInfoModals()
return (
<div>
<AppHeader appDetail={appDetail} />
<AppOperations onAction={handleAction} />
<AppModals show={showModal} onClose={() => setShowModal(null)} />
</div>
)
}
```
**Dify Examples**:
- `web/app/components/app/configuration/` directory structure
- `web/app/components/workflow/nodes/` per-node organization
### Pattern 3: Simplify Conditional Logic
**When**: Deep nesting (> 3 levels), complex ternaries, or multiple `if/else` chains.
```typescript
// ❌ Before: Deeply nested conditionals
const Template = useMemo(() => {
if (appDetail?.mode === AppModeEnum.CHAT) {
switch (locale) {
case LanguagesSupported[1]:
return <TemplateChatZh />
case LanguagesSupported[7]:
return <TemplateChatJa />
default:
return <TemplateChatEn />
}
}
if (appDetail?.mode === AppModeEnum.ADVANCED_CHAT) {
// Another 15 lines...
}
// More conditions...
}, [appDetail, locale])
// ✅ After: Use lookup tables + early returns
const TEMPLATE_MAP = {
[AppModeEnum.CHAT]: {
[LanguagesSupported[1]]: TemplateChatZh,
[LanguagesSupported[7]]: TemplateChatJa,
default: TemplateChatEn,
},
[AppModeEnum.ADVANCED_CHAT]: {
[LanguagesSupported[1]]: TemplateAdvancedChatZh,
// ...
},
}
const Template = useMemo(() => {
const modeTemplates = TEMPLATE_MAP[appDetail?.mode]
if (!modeTemplates) return null
const TemplateComponent = modeTemplates[locale] || modeTemplates.default
return <TemplateComponent appDetail={appDetail} />
}, [appDetail, locale])
```
### Pattern 4: Extract API/Data Logic
**When**: Component directly handles API calls, data transformation, or complex async operations.
**Dify Convention**:
- This skill is for component decomposition, not query/mutation design.
- When refactoring data fetching, follow `web/AGENTS.md`.
- Use `frontend-query-mutation` for contracts, query shape, data-fetching wrappers, query/mutation call-site patterns, conditional queries, invalidation, and mutation error handling.
- Do not introduce deprecated `useInvalid` / `useReset`.
- Do not add thin passthrough `useQuery` wrappers during refactoring; only extract a custom hook when it truly orchestrates multiple queries/mutations or shared derived state.
**Dify Examples**:
- `web/service/use-workflow.ts`
- `web/service/use-common.ts`
- `web/service/knowledge/use-dataset.ts`
- `web/service/knowledge/use-document.ts`
### Pattern 5: Extract Modal/Dialog Management
**When**: Component manages multiple modals with complex open/close states.
**Dify Convention**: Modals should be extracted with their state management.
```typescript
// ❌ Before: Multiple modal states in component
const AppInfo = () => {
  const [showEditModal, setShowEditModal] = useState(false)
  const [showDuplicateModal, setShowDuplicateModal] = useState(false)
  const [showConfirmDelete, setShowConfirmDelete] = useState(false)
  const [showSwitchModal, setShowSwitchModal] = useState(false)
  const [showImportDSLModal, setShowImportDSLModal] = useState(false)
  // 5+ more modal states...
}

// ✅ After: Extract to modal management hook
type ModalType = 'edit' | 'duplicate' | 'delete' | 'switch' | 'import' | null

const useAppInfoModals = () => {
  const [activeModal, setActiveModal] = useState<ModalType>(null)
  const openModal = useCallback((type: ModalType) => setActiveModal(type), [])
  const closeModal = useCallback(() => setActiveModal(null), [])
  return {
    activeModal,
    openModal,
    closeModal,
    isOpen: (type: ModalType) => activeModal === type,
  }
}
```
### Pattern 6: Extract Form Logic
**When**: Complex form validation, submission handling, or field transformation.
**Dify Convention**: Use `@tanstack/react-form` patterns from `web/app/components/base/form/`.
```typescript
// ✅ Use existing form infrastructure
import { useAppForm } from '@/app/components/base/form'

const ConfigForm = () => {
  const form = useAppForm({
    defaultValues: { name: '', description: '' },
    onSubmit: handleSubmit,
  })
  return <form.Provider>...</form.Provider>
}
```
## Dify-Specific Refactoring Guidelines
### 1. Context Provider Extraction
**When**: Component provides complex context values with multiple states.
```typescript
// ❌ Before: Large context value object
const value = {
  appId, isAPIKeySet, isTrailFinished, mode, modelModeType,
  promptMode, isAdvancedMode, isAgent, isOpenAI, isFunctionCall,
  // 50+ more properties...
}
return <ConfigContext.Provider value={value}>...</ConfigContext.Provider>

// ✅ After: Split into domain-specific contexts
<ModelConfigProvider value={modelConfigValue}>
  <DatasetConfigProvider value={datasetConfigValue}>
    <UIConfigProvider value={uiConfigValue}>
      {children}
    </UIConfigProvider>
  </DatasetConfigProvider>
</ModelConfigProvider>
```
**Dify Reference**: `web/context/` directory structure
### 2. Workflow Node Components
**When**: Refactoring workflow node components (`web/app/components/workflow/nodes/`).
**Conventions**:
- Keep node logic in `use-interactions.ts`
- Extract panel UI to separate files
- Use `_base` components for common patterns
```
nodes/<node-type>/
├── index.tsx # Node registration
├── node.tsx # Node visual component
├── panel.tsx # Configuration panel
├── use-interactions.ts # Node-specific hooks
└── types.ts # Type definitions
```
### 3. Configuration Components
**When**: Refactoring app configuration components.
**Conventions**:
- Separate config sections into subdirectories
- Use existing patterns from `web/app/components/app/configuration/`
- Keep feature toggles in dedicated components
### 4. Tool/Plugin Components
**When**: Refactoring tool-related components (`web/app/components/tools/`).
**Conventions**:
- Follow existing modal patterns
- Use service hooks from `web/service/use-tools.ts`
- Keep provider-specific logic isolated
## Refactoring Workflow
### Step 1: Generate Refactoring Prompt
```bash
pnpm refactor-component <path>
```
This command will:
- Analyze component complexity and features
- Identify specific refactoring actions needed
- Generate a prompt for AI assistant (auto-copied to clipboard on macOS)
- Provide detailed requirements based on detected patterns
### Step 2: Analyze Details
```bash
pnpm analyze-component <path> --json
```
Identify:
- Total complexity score
- Max function complexity
- Line count
- Features detected (state, effects, API, etc.)
### Step 3: Plan
Create a refactoring plan based on detected features:
| Detected Feature | Refactoring Action |
|------------------|-------------------|
| `hasState: true` + `hasEffects: true` | Extract custom hook |
| `hasAPI: true` | Extract data/service hook |
| `hasEvents: true` (many) | Extract event handlers |
| `lineCount > 300` | Split into sub-components |
| `maxComplexity > 50` | Simplify conditional logic |
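The mapping above is mechanical enough to sketch in code. A hypothetical planner over the analyzer's report might look like this (the `ComponentReport` field names are illustrative assumptions, not the exact `--json` schema of `pnpm analyze-component`):

```typescript
// Hypothetical shape of the analyzer output; field names mirror the
// table above but are assumptions, not the tool's exact schema.
type ComponentReport = {
  lineCount: number
  maxComplexity: number
  hasState: boolean
  hasEffects: boolean
  hasAPI: boolean
  eventHandlerCount: number
}

// Turn a report into an ordered list of refactoring actions,
// applying each rule from the table above in order.
const planRefactoring = (report: ComponentReport): string[] => {
  const actions: string[] = []
  if (report.hasState && report.hasEffects)
    actions.push('Extract custom hook')
  if (report.hasAPI)
    actions.push('Extract data/service hook')
  if (report.eventHandlerCount > 5)
    actions.push('Extract event handlers')
  if (report.lineCount > 300)
    actions.push('Split into sub-components')
  if (report.maxComplexity > 50)
    actions.push('Simplify conditional logic')
  return actions
}
```

The point is that the plan is derived, not invented: each action traces back to one detected feature.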
### Step 4: Execute Incrementally
1. **Extract one piece at a time**
2. **Run lint, type-check, and tests after each extraction**
3. **Verify functionality before next step**
```
For each extraction:
┌────────────────────────────────────────┐
│ 1. Extract code │
│ 2. Run: pnpm lint:fix │
│ 3. Run: pnpm type-check:tsgo │
│ 4. Run: pnpm test │
│ 5. Test functionality manually │
│ 6. PASS? → Next extraction │
│ FAIL? → Fix before continuing │
└────────────────────────────────────────┘
```
### Step 5: Verify
After refactoring:
```bash
# Re-run refactor command to verify improvements
pnpm refactor-component <path>
# If complexity < 25 and lines < 200, you'll see:
# ✅ COMPONENT IS WELL-STRUCTURED
# For detailed metrics:
pnpm analyze-component <path> --json
# Target metrics:
# - complexity < 50
# - lineCount < 300
# - maxComplexity < 30
```
## Common Mistakes to Avoid
### ❌ Over-Engineering
```typescript
// ❌ Too many tiny hooks
const useButtonText = () => useState('Click')
const useButtonDisabled = () => useState(false)
const useButtonLoading = () => useState(false)

// ✅ Cohesive hook with related state
const useButtonState = () => {
  const [text, setText] = useState('Click')
  const [disabled, setDisabled] = useState(false)
  const [loading, setLoading] = useState(false)
  return { text, setText, disabled, setDisabled, loading, setLoading }
}
```
### ❌ Breaking Existing Patterns
- Follow existing directory structures
- Maintain naming conventions
- Preserve export patterns for compatibility
### ❌ Premature Abstraction
- Only extract when there's clear complexity benefit
- Don't create abstractions for single-use code
- Keep refactored code in the same domain area
## References
### Dify Codebase Examples
- **Hook extraction**: `web/app/components/app/configuration/hooks/`
- **Component splitting**: `web/app/components/app/configuration/`
- **Service hooks**: `web/service/use-*.ts`
- **Workflow patterns**: `web/app/components/workflow/hooks/`
- **Form patterns**: `web/app/components/base/form/`
### Related Skills
- `frontend-testing` - For testing refactored components
- `web/docs/test.md` - Testing specification


@@ -1,493 +0,0 @@
# Complexity Reduction Patterns
This document provides patterns for reducing cognitive complexity in Dify React components.
## Understanding Complexity
### SonarJS Cognitive Complexity
The `pnpm analyze-component` tool uses SonarJS cognitive complexity metrics:
- **Total Complexity**: Sum of all functions' complexity in the file
- **Max Complexity**: Highest single function complexity
### What Increases Complexity
| Pattern | Complexity Impact |
|---------|-------------------|
| `if/else` | +1 per branch |
| Nested conditions | +1 per nesting level |
| `switch/case` | +1 per case |
| `for/while/do` | +1 per loop |
| `&&`/`\|\|` chains | +1 per operator |
| Nested callbacks | +1 per nesting level |
| `try/catch` | +1 per catch |
| Ternary expressions | +1 per nesting |
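To make these increments concrete, here is a small standalone function with each contribution annotated (counts are approximate; SonarJS's exact score may differ slightly, since it also weights nesting):

```typescript
// Approximate cognitive complexity, annotated per the table above.
const classifyOrder = (total: number, isMember: boolean, items: number[]): string => {
  if (total <= 0)                            // +1 (if)
    return 'invalid'
  let bulky = false
  for (const qty of items) {                 // +1 (loop)
    if (qty > 10)                            // +1 (if) +1 (nesting)
      bulky = true
  }
  if (isMember && total > 100)               // +1 (if) +1 (&&)
    return bulky ? 'member-bulk' : 'member'  // +1 (ternary)
  return bulky ? 'bulk' : 'regular'          // +1 (ternary)
}                                            // total ≈ 8
```

Each branch you remove, flatten, or replace with a lookup subtracts directly from this score.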
## Pattern 1: Replace Conditionals with Lookup Tables
**Before** (complexity: ~15):
```typescript
const Template = useMemo(() => {
  if (appDetail?.mode === AppModeEnum.CHAT) {
    switch (locale) {
      case LanguagesSupported[1]:
        return <TemplateChatZh appDetail={appDetail} />
      case LanguagesSupported[7]:
        return <TemplateChatJa appDetail={appDetail} />
      default:
        return <TemplateChatEn appDetail={appDetail} />
    }
  }
  if (appDetail?.mode === AppModeEnum.ADVANCED_CHAT) {
    switch (locale) {
      case LanguagesSupported[1]:
        return <TemplateAdvancedChatZh appDetail={appDetail} />
      case LanguagesSupported[7]:
        return <TemplateAdvancedChatJa appDetail={appDetail} />
      default:
        return <TemplateAdvancedChatEn appDetail={appDetail} />
    }
  }
  if (appDetail?.mode === AppModeEnum.WORKFLOW) {
    // Similar pattern...
  }
  return null
}, [appDetail, locale])
```
**After** (complexity: ~3):
```typescript
// Define lookup table outside component
const TEMPLATE_MAP: Record<AppModeEnum, Record<string, FC<TemplateProps>>> = {
  [AppModeEnum.CHAT]: {
    [LanguagesSupported[1]]: TemplateChatZh,
    [LanguagesSupported[7]]: TemplateChatJa,
    default: TemplateChatEn,
  },
  [AppModeEnum.ADVANCED_CHAT]: {
    [LanguagesSupported[1]]: TemplateAdvancedChatZh,
    [LanguagesSupported[7]]: TemplateAdvancedChatJa,
    default: TemplateAdvancedChatEn,
  },
  [AppModeEnum.WORKFLOW]: {
    [LanguagesSupported[1]]: TemplateWorkflowZh,
    [LanguagesSupported[7]]: TemplateWorkflowJa,
    default: TemplateWorkflowEn,
  },
  // ...
}

// Clean component logic
const Template = useMemo(() => {
  if (!appDetail?.mode) return null
  const templates = TEMPLATE_MAP[appDetail.mode]
  if (!templates) return null
  const TemplateComponent = templates[locale] ?? templates.default
  return <TemplateComponent appDetail={appDetail} />
}, [appDetail, locale])
```
## Pattern 2: Use Early Returns
**Before** (complexity: ~10):
```typescript
const handleSubmit = () => {
  if (isValid) {
    if (hasChanges) {
      if (isConnected) {
        submitData()
      } else {
        showConnectionError()
      }
    } else {
      showNoChangesMessage()
    }
  } else {
    showValidationError()
  }
}
```
**After** (complexity: ~4):
```typescript
const handleSubmit = () => {
  if (!isValid) {
    showValidationError()
    return
  }
  if (!hasChanges) {
    showNoChangesMessage()
    return
  }
  if (!isConnected) {
    showConnectionError()
    return
  }
  submitData()
}
```
## Pattern 3: Extract Complex Conditions
**Before** (complexity: high):
```typescript
const canPublish = (() => {
  if (mode !== AppModeEnum.COMPLETION) {
    if (!isAdvancedMode)
      return true
    if (modelModeType === ModelModeType.completion) {
      if (!hasSetBlockStatus.history || !hasSetBlockStatus.query)
        return false
      return true
    }
    return true
  }
  return !promptEmpty
})()
```
**After** (complexity: lower):
```typescript
// Extract to named functions
const canPublishInCompletionMode = () => !promptEmpty

const canPublishInChatMode = () => {
  if (!isAdvancedMode) return true
  if (modelModeType !== ModelModeType.completion) return true
  return hasSetBlockStatus.history && hasSetBlockStatus.query
}

// Clean main logic
const canPublish = mode === AppModeEnum.COMPLETION
  ? canPublishInCompletionMode()
  : canPublishInChatMode()
```
## Pattern 4: Replace Chained Ternaries
**Before** (complexity: ~5):
```typescript
const statusText = serverActivated
  ? t('status.running')
  : serverPublished
    ? t('status.inactive')
    : appUnpublished
      ? t('status.unpublished')
      : t('status.notConfigured')
```
**After** (complexity: ~2):
```typescript
const getStatusText = () => {
  if (serverActivated) return t('status.running')
  if (serverPublished) return t('status.inactive')
  if (appUnpublished) return t('status.unpublished')
  return t('status.notConfigured')
}

const statusText = getStatusText()
```
Or use lookup:
```typescript
const STATUS_TEXT_MAP = {
  running: 'status.running',
  inactive: 'status.inactive',
  unpublished: 'status.unpublished',
  notConfigured: 'status.notConfigured',
} as const

const getStatusKey = (): keyof typeof STATUS_TEXT_MAP => {
  if (serverActivated) return 'running'
  if (serverPublished) return 'inactive'
  if (appUnpublished) return 'unpublished'
  return 'notConfigured'
}

const statusText = t(STATUS_TEXT_MAP[getStatusKey()])
```
## Pattern 5: Flatten Nested Loops
**Before** (complexity: high):
```typescript
const processData = (items: Item[]) => {
  const results: ProcessedItem[] = []
  for (const item of items) {
    if (item.isValid) {
      for (const child of item.children) {
        if (child.isActive) {
          for (const prop of child.properties) {
            if (prop.value !== null) {
              results.push({
                itemId: item.id,
                childId: child.id,
                propValue: prop.value,
              })
            }
          }
        }
      }
    }
  }
  return results
}
```
**After** (complexity: lower):
```typescript
// Use functional approach
const processData = (items: Item[]) => {
  return items
    .filter(item => item.isValid)
    .flatMap(item =>
      item.children
        .filter(child => child.isActive)
        .flatMap(child =>
          child.properties
            .filter(prop => prop.value !== null)
            .map(prop => ({
              itemId: item.id,
              childId: child.id,
              propValue: prop.value,
            })),
        ),
    )
}
```
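When doing this kind of behavior-preserving rewrite, it helps to run the new version against a small fixture and check the output by hand. A minimal self-contained sketch (types reduced to just what the example needs):

```typescript
// Minimal types sketched from the example above (illustrative only).
type Prop = { value: number | null }
type Child = { id: string; isActive: boolean; properties: Prop[] }
type Item = { id: string; isValid: boolean; children: Child[] }

const processData = (items: Item[]) =>
  items
    .filter(item => item.isValid)
    .flatMap(item =>
      item.children
        .filter(child => child.isActive)
        .flatMap(child =>
          child.properties
            .filter(prop => prop.value !== null)
            .map(prop => ({ itemId: item.id, childId: child.id, propValue: prop.value })),
        ),
    )

// Fixture: one valid item with one active child holding one non-null prop.
const fixture: Item[] = [
  {
    id: 'a',
    isValid: true,
    children: [
      { id: 'c1', isActive: true, properties: [{ value: 1 }, { value: null }] },
      { id: 'c2', isActive: false, properties: [{ value: 2 }] },
    ],
  },
  { id: 'b', isValid: false, children: [] },
]

const result = processData(fixture)
// → [{ itemId: 'a', childId: 'c1', propValue: 1 }]
```

The same fixture can be fed to the original loop version to confirm both produce identical rows before deleting the old code.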
## Pattern 6: Extract Event Handler Logic
**Before** (complexity: high in component):
```typescript
const Component = () => {
  const handleSelect = (data: DataSet[]) => {
    if (isEqual(data.map(item => item.id), dataSets.map(item => item.id))) {
      hideSelectDataSet()
      return
    }
    formattingChangedDispatcher()
    let newDatasets = data
    if (data.find(item => !item.name)) {
      const newSelected = produce(data, (draft) => {
        data.forEach((item, index) => {
          if (!item.name) {
            const newItem = dataSets.find(i => i.id === item.id)
            if (newItem)
              draft[index] = newItem
          }
        })
      })
      setDataSets(newSelected)
      newDatasets = newSelected
    }
    else {
      setDataSets(data)
    }
    hideSelectDataSet()
    // 40 more lines of logic...
  }
  return <div>...</div>
}
```
**After** (complexity: lower):
```typescript
// Extract to hook or utility
const useDatasetSelection = (dataSets: DataSet[], setDataSets: SetState<DataSet[]>) => {
  const normalizeSelection = (data: DataSet[]) => {
    const hasUnloadedItem = data.some(item => !item.name)
    if (!hasUnloadedItem) return data
    return produce(data, (draft) => {
      data.forEach((item, index) => {
        if (!item.name) {
          const existing = dataSets.find(i => i.id === item.id)
          if (existing) draft[index] = existing
        }
      })
    })
  }
  const hasSelectionChanged = (newData: DataSet[]) => {
    return !isEqual(
      newData.map(item => item.id),
      dataSets.map(item => item.id),
    )
  }
  return { normalizeSelection, hasSelectionChanged }
}

// Component becomes cleaner
const Component = () => {
  const { normalizeSelection, hasSelectionChanged } = useDatasetSelection(dataSets, setDataSets)
  const handleSelect = (data: DataSet[]) => {
    if (!hasSelectionChanged(data)) {
      hideSelectDataSet()
      return
    }
    formattingChangedDispatcher()
    const normalized = normalizeSelection(data)
    setDataSets(normalized)
    hideSelectDataSet()
  }
  return <div>...</div>
}
```
## Pattern 7: Reduce Boolean Logic Complexity
**Before** (complexity: ~8):
```typescript
const toggleDisabled = hasInsufficientPermissions
  || appUnpublished
  || missingStartNode
  || triggerModeDisabled
  || (isAdvancedApp && !currentWorkflow?.graph)
  || (isBasicApp && !basicAppConfig.updated_at)
```
**After** (complexity: ~3):
```typescript
// Extract meaningful boolean functions
const isAppReady = () => {
  if (isAdvancedApp) return !!currentWorkflow?.graph
  return !!basicAppConfig.updated_at
}

const hasRequiredPermissions = () => !hasInsufficientPermissions

const canToggle = () => {
  if (!hasRequiredPermissions()) return false
  if (!isAppReady()) return false
  if (missingStartNode) return false
  if (triggerModeDisabled) return false
  return true
}

const toggleDisabled = !canToggle()
```
## Pattern 8: Simplify useMemo/useCallback Dependencies
**Before** (complexity: multiple recalculations):
```typescript
const payload = useMemo(() => {
  let parameters: Parameter[] = []
  let outputParameters: OutputParameter[] = []
  if (!published) {
    parameters = (inputs || []).map((item) => ({
      name: item.variable,
      description: '',
      form: 'llm',
      required: item.required,
      type: item.type,
    }))
    outputParameters = (outputs || []).map((item) => ({
      name: item.variable,
      description: '',
      type: item.value_type,
    }))
  }
  else if (detail && detail.tool) {
    parameters = (inputs || []).map((item) => ({
      // Complex transformation...
    }))
    outputParameters = (outputs || []).map((item) => ({
      // Complex transformation...
    }))
  }
  return {
    icon: detail?.icon || icon,
    label: detail?.label || name,
    // ...more fields
  }
}, [detail, published, workflowAppId, icon, name, description, inputs, outputs])
```
**After** (complexity: separated concerns):
```typescript
// Separate transformations
const useParameterTransform = (inputs: InputVar[], detail?: ToolDetail, published?: boolean) => {
  return useMemo(() => {
    if (!published) {
      return inputs.map(item => ({
        name: item.variable,
        description: '',
        form: 'llm',
        required: item.required,
        type: item.type,
      }))
    }
    if (!detail?.tool) return []
    return inputs.map(item => ({
      name: item.variable,
      required: item.required,
      type: item.type === 'paragraph' ? 'string' : item.type,
      description: detail.tool.parameters.find(p => p.name === item.variable)?.llm_description || '',
      form: detail.tool.parameters.find(p => p.name === item.variable)?.form || 'llm',
    }))
  }, [inputs, detail, published])
}

// Component uses hook
const parameters = useParameterTransform(inputs, detail, published)
const outputParameters = useOutputTransform(outputs, detail, published)

const payload = useMemo(() => ({
  icon: detail?.icon || icon,
  label: detail?.label || name,
  parameters,
  outputParameters,
  // ...
}), [detail, icon, name, parameters, outputParameters])
```
## Target Metrics After Refactoring
| Metric | Target |
|--------|--------|
| Total Complexity | < 50 |
| Max Function Complexity | < 30 |
| Function Length | < 30 lines |
| Nesting Depth | ≤ 3 levels |
| Conditional Chains | ≤ 3 conditions |
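These thresholds can be checked mechanically. A simple gate function, assuming illustrative metric names rather than the analyzer's exact output schema:

```typescript
// Illustrative gate over the refactoring targets above; the metric
// field names are assumptions, not the exact analyzer output schema.
type RefactorMetrics = {
  totalComplexity: number
  maxFunctionComplexity: number
  maxFunctionLength: number
  maxNestingDepth: number
  maxConditionalChain: number
}

const meetsTargets = (m: RefactorMetrics): boolean =>
  m.totalComplexity < 50
  && m.maxFunctionComplexity < 30
  && m.maxFunctionLength < 30
  && m.maxNestingDepth <= 3
  && m.maxConditionalChain <= 3
```

A failing gate does not mean the code is wrong, only that another pass with one of the patterns above is worth considering.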


@@ -1,477 +0,0 @@
# Component Splitting Patterns
This document provides detailed guidance on splitting large components into smaller, focused components in Dify.
## When to Split Components
Split a component when you identify:
1. **Multiple UI sections** - Distinct visual areas with minimal coupling that can be composed independently
1. **Conditional rendering blocks** - Large `{condition && <JSX />}` blocks
1. **Repeated patterns** - Similar UI structures used multiple times
1. **300+ lines** - Component exceeds manageable size
1. **Modal clusters** - Multiple modals rendered in one component
## Splitting Strategies
### Strategy 1: Section-Based Splitting
Identify visual sections and extract each as a component.
```typescript
// ❌ Before: Monolithic component (500+ lines)
const ConfigurationPage = () => {
  return (
    <div>
      {/* Header Section - 50 lines */}
      <div className="header">
        <h1>{t('configuration.title')}</h1>
        <div className="actions">
          {isAdvancedMode && <Badge>Advanced</Badge>}
          <ModelParameterModal ... />
          <AppPublisher ... />
        </div>
      </div>
      {/* Config Section - 200 lines */}
      <div className="config">
        <Config />
      </div>
      {/* Debug Section - 150 lines */}
      <div className="debug">
        <Debug ... />
      </div>
      {/* Modals Section - 100 lines */}
      {showSelectDataSet && <SelectDataSet ... />}
      {showHistoryModal && <EditHistoryModal ... />}
      {showUseGPT4Confirm && <Confirm ... />}
    </div>
  )
}

// ✅ After: Split into focused components
// configuration/
// ├── index.tsx (orchestration)
// ├── configuration-header.tsx
// ├── configuration-content.tsx
// ├── configuration-debug.tsx
// └── configuration-modals.tsx

// configuration-header.tsx
interface ConfigurationHeaderProps {
  isAdvancedMode: boolean
  onPublish: () => void
}

const ConfigurationHeader: FC<ConfigurationHeaderProps> = ({
  isAdvancedMode,
  onPublish,
}) => {
  const { t } = useTranslation()
  return (
    <div className="header">
      <h1>{t('configuration.title')}</h1>
      <div className="actions">
        {isAdvancedMode && <Badge>Advanced</Badge>}
        <ModelParameterModal ... />
        <AppPublisher onPublish={onPublish} />
      </div>
    </div>
  )
}

// index.tsx (orchestration only)
const ConfigurationPage = () => {
  const { modelConfig, setModelConfig } = useModelConfig()
  const { activeModal, openModal, closeModal } = useModalState()
  return (
    <div>
      <ConfigurationHeader
        isAdvancedMode={isAdvancedMode}
        onPublish={handlePublish}
      />
      <ConfigurationContent
        modelConfig={modelConfig}
        onConfigChange={setModelConfig}
      />
      {!isMobile && (
        <ConfigurationDebug
          inputs={inputs}
          onSetting={handleSetting}
        />
      )}
      <ConfigurationModals
        activeModal={activeModal}
        onClose={closeModal}
      />
    </div>
  )
}
```
### Strategy 2: Conditional Block Extraction
Extract large conditional rendering blocks.
```typescript
// ❌ Before: Large conditional blocks
const AppInfo = () => {
  return (
    <div>
      {expand ? (
        <div className="expanded">
          {/* 100 lines of expanded view */}
        </div>
      ) : (
        <div className="collapsed">
          {/* 50 lines of collapsed view */}
        </div>
      )}
    </div>
  )
}

// ✅ After: Separate view components
const AppInfoExpanded: FC<AppInfoViewProps> = ({ appDetail, onAction }) => {
  return (
    <div className="expanded">
      {/* Clean, focused expanded view */}
    </div>
  )
}

const AppInfoCollapsed: FC<AppInfoViewProps> = ({ appDetail, onAction }) => {
  return (
    <div className="collapsed">
      {/* Clean, focused collapsed view */}
    </div>
  )
}

const AppInfo = () => {
  return (
    <div>
      {expand
        ? <AppInfoExpanded appDetail={appDetail} onAction={handleAction} />
        : <AppInfoCollapsed appDetail={appDetail} onAction={handleAction} />}
    </div>
  )
}
```
### Strategy 3: Modal Extraction
Extract modals with their trigger logic.
```typescript
// ❌ Before: Multiple modals in one component
const AppInfo = () => {
  const [showEdit, setShowEdit] = useState(false)
  const [showDuplicate, setShowDuplicate] = useState(false)
  const [showDelete, setShowDelete] = useState(false)
  const [showSwitch, setShowSwitch] = useState(false)
  const onEdit = async (data) => { /* 20 lines */ }
  const onDuplicate = async (data) => { /* 20 lines */ }
  const onDelete = async () => { /* 15 lines */ }
  return (
    <div>
      {/* Main content */}
      {showEdit && <EditModal onConfirm={onEdit} onClose={() => setShowEdit(false)} />}
      {showDuplicate && <DuplicateModal onConfirm={onDuplicate} onClose={() => setShowDuplicate(false)} />}
      {showDelete && <DeleteConfirm onConfirm={onDelete} onClose={() => setShowDelete(false)} />}
      {showSwitch && <SwitchModal ... />}
    </div>
  )
}

// ✅ After: Modal manager component
// app-info-modals.tsx
type ModalType = 'edit' | 'duplicate' | 'delete' | 'switch' | null

interface AppInfoModalsProps {
  appDetail: AppDetail
  activeModal: ModalType
  onClose: () => void
  onSuccess: () => void
}

const AppInfoModals: FC<AppInfoModalsProps> = ({
  appDetail,
  activeModal,
  onClose,
  onSuccess,
}) => {
  const handleEdit = async (data) => { /* logic */ }
  const handleDuplicate = async (data) => { /* logic */ }
  const handleDelete = async () => { /* logic */ }
  return (
    <>
      {activeModal === 'edit' && (
        <EditModal
          appDetail={appDetail}
          onConfirm={handleEdit}
          onClose={onClose}
        />
      )}
      {activeModal === 'duplicate' && (
        <DuplicateModal
          appDetail={appDetail}
          onConfirm={handleDuplicate}
          onClose={onClose}
        />
      )}
      {activeModal === 'delete' && (
        <DeleteConfirm
          onConfirm={handleDelete}
          onClose={onClose}
        />
      )}
      {activeModal === 'switch' && (
        <SwitchModal
          appDetail={appDetail}
          onClose={onClose}
        />
      )}
    </>
  )
}

// Parent component
const AppInfo = () => {
  const { activeModal, openModal, closeModal } = useModalState()
  return (
    <div>
      {/* Main content with openModal triggers */}
      <Button onClick={() => openModal('edit')}>Edit</Button>
      <AppInfoModals
        appDetail={appDetail}
        activeModal={activeModal}
        onClose={closeModal}
        onSuccess={handleSuccess}
      />
    </div>
  )
}
```
### Strategy 4: List Item Extraction
Extract repeated item rendering.
```typescript
// ❌ Before: Inline item rendering
const OperationsList = () => {
  return (
    <div>
      {operations.map(op => (
        <div key={op.id} className="operation-item">
          <span className="icon">{op.icon}</span>
          <span className="title">{op.title}</span>
          <span className="description">{op.description}</span>
          <button onClick={() => op.onClick()}>
            {op.actionLabel}
          </button>
          {op.badge && <Badge>{op.badge}</Badge>}
          {/* More complex rendering... */}
        </div>
      ))}
    </div>
  )
}

// ✅ After: Extracted item component
interface OperationItemProps {
  operation: Operation
  onAction: (id: string) => void
}

const OperationItem: FC<OperationItemProps> = ({ operation, onAction }) => {
  return (
    <div className="operation-item">
      <span className="icon">{operation.icon}</span>
      <span className="title">{operation.title}</span>
      <span className="description">{operation.description}</span>
      <button onClick={() => onAction(operation.id)}>
        {operation.actionLabel}
      </button>
      {operation.badge && <Badge>{operation.badge}</Badge>}
    </div>
  )
}

const OperationsList = () => {
  const handleAction = useCallback((id: string) => {
    const op = operations.find(o => o.id === id)
    op?.onClick()
  }, [operations])
  return (
    <div>
      {operations.map(op => (
        <OperationItem
          key={op.id}
          operation={op}
          onAction={handleAction}
        />
      ))}
    </div>
  )
}
```
## Directory Structure Patterns
### Pattern A: Flat Structure (Simple Components)
For components with 2-3 sub-components:
```
component-name/
├── index.tsx # Main component
├── sub-component-a.tsx
├── sub-component-b.tsx
└── types.ts # Shared types
```
### Pattern B: Nested Structure (Complex Components)
For components with many sub-components:
```
component-name/
├── index.tsx              # Main orchestration
├── types.ts               # Shared types
├── hooks/
│   ├── use-feature-a.ts
│   └── use-feature-b.ts
├── components/
│   ├── header/
│   │   └── index.tsx
│   ├── content/
│   │   └── index.tsx
│   └── modals/
│       └── index.tsx
└── utils/
    └── helpers.ts
```
### Pattern C: Feature-Based Structure (Dify Standard)
Following Dify's existing patterns:
```
configuration/
├── index.tsx               # Main page component
├── base/                   # Base/shared components
│   ├── feature-panel/
│   ├── group-name/
│   └── operation-btn/
├── config/                 # Config section
│   ├── index.tsx
│   ├── agent/
│   └── automatic/
├── dataset-config/         # Dataset section
│   ├── index.tsx
│   ├── card-item/
│   └── params-config/
├── debug/                  # Debug section
│   ├── index.tsx
│   └── hooks.tsx
└── hooks/                  # Shared hooks
    └── use-advanced-prompt-config.ts
```
## Props Design
### Minimal Props Principle
Pass only what's needed:
```typescript
// ❌ Bad: Passing entire objects when only some fields needed
<ConfigHeader appDetail={appDetail} modelConfig={modelConfig} />

// ✅ Good: Destructure to minimum required
<ConfigHeader
  appName={appDetail.name}
  isAdvancedMode={modelConfig.isAdvanced}
  onPublish={handlePublish}
/>
```
### Callback Props Pattern
Use callbacks for child-to-parent communication:
```typescript
// Parent
const Parent = () => {
  const [value, setValue] = useState('')
  return (
    <Child
      value={value}
      onChange={setValue}
      onSubmit={handleSubmit}
    />
  )
}

// Child
interface ChildProps {
  value: string
  onChange: (value: string) => void
  onSubmit: () => void
}

const Child: FC<ChildProps> = ({ value, onChange, onSubmit }) => {
  return (
    <div>
      <input value={value} onChange={e => onChange(e.target.value)} />
      <button onClick={onSubmit}>Submit</button>
    </div>
  )
}
```
### Render Props for Flexibility
When sub-components need parent context:
```typescript
interface ListProps<T> {
  items: T[]
  renderItem: (item: T, index: number) => React.ReactNode
  renderEmpty?: () => React.ReactNode
}

function List<T>({ items, renderItem, renderEmpty }: ListProps<T>) {
  if (items.length === 0 && renderEmpty) {
    return <>{renderEmpty()}</>
  }
  return (
    <div>
      {items.map((item, index) => renderItem(item, index))}
    </div>
  )
}

// Usage
<List
  items={operations}
  renderItem={(op, i) => <OperationItem key={i} operation={op} />}
  renderEmpty={() => <EmptyState message="No operations" />}
/>
```


@@ -1,283 +0,0 @@
# Hook Extraction Patterns
This document provides detailed guidance on extracting custom hooks from complex components in Dify.
## When to Extract Hooks
Extract a custom hook when you identify:
1. **Coupled state groups** - Multiple `useState` hooks that are always used together
1. **Complex effects** - `useEffect` with multiple dependencies or cleanup logic
1. **Business logic** - Data transformations, validations, or calculations
1. **Reusable patterns** - Logic that appears in multiple components
## Extraction Process
### Step 1: Identify State Groups
Look for state variables that are logically related:
```typescript
// ❌ These belong together - extract to hook
const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
const [completionParams, setCompletionParams] = useState<FormValue>({})
const [modelModeType, setModelModeType] = useState<ModelModeType>(...)
// These are model-related state that should be in useModelConfig()
```
### Step 2: Identify Related Effects
Find effects that modify the grouped state:
```typescript
// ❌ These effects belong with the state above
useEffect(() => {
  if (hasFetchedDetail && !modelModeType) {
    const mode = currModel?.model_properties.mode
    if (mode) {
      const newModelConfig = produce(modelConfig, (draft) => {
        draft.mode = mode
      })
      setModelConfig(newModelConfig)
    }
  }
}, [textGenerationModelList, hasFetchedDetail, modelModeType, currModel])
```
### Step 3: Create the Hook
```typescript
// hooks/use-model-config.ts
import type { FormValue } from '@/app/components/header/account-setting/model-provider-page/declarations'
import type { ModelConfig } from '@/models/debug'
import { produce } from 'immer'
import { useEffect, useState } from 'react'
import { ModelModeType } from '@/types/app'

interface UseModelConfigParams {
  initialConfig?: Partial<ModelConfig>
  currModel?: { model_properties?: { mode?: ModelModeType } }
  hasFetchedDetail: boolean
}

interface UseModelConfigReturn {
  modelConfig: ModelConfig
  setModelConfig: (config: ModelConfig) => void
  completionParams: FormValue
  setCompletionParams: (params: FormValue) => void
  modelModeType: ModelModeType
}

export const useModelConfig = ({
  initialConfig,
  currModel,
  hasFetchedDetail,
}: UseModelConfigParams): UseModelConfigReturn => {
  const [modelConfig, setModelConfig] = useState<ModelConfig>({
    provider: 'langgenius/openai/openai',
    model_id: 'gpt-3.5-turbo',
    mode: ModelModeType.unset,
    // ... default values
    ...initialConfig,
  })
  const [completionParams, setCompletionParams] = useState<FormValue>({})
  const modelModeType = modelConfig.mode

  // Fill old app data missing model mode; the functional update avoids
  // reading a stale modelConfig from the effect's closure
  useEffect(() => {
    if (hasFetchedDetail && !modelModeType) {
      const mode = currModel?.model_properties?.mode
      if (mode) {
        setModelConfig(prev => produce(prev, (draft) => {
          draft.mode = mode
        }))
      }
    }
  }, [hasFetchedDetail, modelModeType, currModel])

  return {
    modelConfig,
    setModelConfig,
    completionParams,
    setCompletionParams,
    modelModeType,
  }
}
```
### Step 4: Update Component
```typescript
// Before: 50+ lines of state management
const Configuration: FC = () => {
  const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
  // ... lots of related state and effects
}

// After: Clean component
const Configuration: FC = () => {
  const {
    modelConfig,
    setModelConfig,
    completionParams,
    setCompletionParams,
    modelModeType,
  } = useModelConfig({
    currModel,
    hasFetchedDetail,
  })
  // Component now focuses on UI
}
```
## Naming Conventions
### Hook Names
- Use `use` prefix: `useModelConfig`, `useDatasetConfig`
- Be specific: `useAdvancedPromptConfig` not `usePrompt`
- Include domain: `useWorkflowVariables`, `useMCPServer`
### File Names
- Kebab-case: `use-model-config.ts`
- Place in `hooks/` subdirectory when multiple hooks exist
- Place alongside component for single-use hooks
### Return Type Names
- Suffix with `Return`: `UseModelConfigReturn`
- Suffix params with `Params`: `UseModelConfigParams`
## Common Hook Patterns in Dify
### 1. Data Fetching / Mutation Hooks
When hook extraction touches query or mutation code, do not use this reference as the source of truth for data-layer patterns.
- Follow `web/AGENTS.md` first.
- Use `frontend-query-mutation` for contracts, query shape, data-fetching wrappers, query/mutation call-site patterns, conditional queries, invalidation, and mutation error handling.
- Do not introduce deprecated `useInvalid` / `useReset`.
- Do not extract thin passthrough `useQuery` hooks; only extract orchestration hooks.
### 2. Form State Hook
```typescript
// Pattern: Form state + validation + submission
export const useConfigForm = (initialValues: ConfigFormValues) => {
  const [values, setValues] = useState(initialValues)
  const [errors, setErrors] = useState<Record<string, string>>({})
  const [isSubmitting, setIsSubmitting] = useState(false)

  const validate = useCallback(() => {
    const newErrors: Record<string, string> = {}
    if (!values.name) newErrors.name = 'Name is required'
    setErrors(newErrors)
    return Object.keys(newErrors).length === 0
  }, [values])

  const handleChange = useCallback((field: string, value: any) => {
    setValues(prev => ({ ...prev, [field]: value }))
  }, [])

  const handleSubmit = useCallback(async (onSubmit: (values: ConfigFormValues) => Promise<void>) => {
    if (!validate()) return
    setIsSubmitting(true)
    try {
      await onSubmit(values)
    } finally {
      setIsSubmitting(false)
    }
  }, [values, validate])

  return { values, errors, isSubmitting, handleChange, handleSubmit }
}
```
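A design note on the pattern above: keeping the validation rules in a pure function, rather than inline in the `validate` callback, lets them be unit-tested without rendering the hook. A minimal sketch, assuming the same single-field rule (the type and function names are illustrative):

```typescript
// Hypothetical pure validator extracted from useConfigForm above.
// Pure functions like this can be tested without renderHook.
type ConfigFormValues = { name: string }

export function validateConfigForm(values: ConfigFormValues): Record<string, string> {
  const errors: Record<string, string> = {}
  if (!values.name)
    errors.name = 'Name is required'
  return errors
}
```

The hook's `validate` callback can then delegate to this function and keep only the `setErrors` side effect.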
### 3. Modal State Hook
```typescript
// Pattern: Multiple modal management
type ModalType = 'edit' | 'delete' | 'duplicate' | null
export const useModalState = () => {
const [activeModal, setActiveModal] = useState<ModalType>(null)
const [modalData, setModalData] = useState<any>(null)
const openModal = useCallback((type: ModalType, data?: any) => {
setActiveModal(type)
setModalData(data)
}, [])
const closeModal = useCallback(() => {
setActiveModal(null)
setModalData(null)
}, [])
return {
activeModal,
modalData,
openModal,
closeModal,
isOpen: useCallback((type: ModalType) => activeModal === type, [activeModal]),
}
}
```
### 4. Toggle/Boolean Hook
```typescript
// Pattern: Boolean state with convenience methods
export const useToggle = (initialValue = false) => {
const [value, setValue] = useState(initialValue)
const toggle = useCallback(() => setValue(v => !v), [])
const setTrue = useCallback(() => setValue(true), [])
const setFalse = useCallback(() => setValue(false), [])
return [value, { toggle, setTrue, setFalse, set: setValue }] as const
}
// Usage
const [isExpanded, { toggle, setTrue: expand, setFalse: collapse }] = useToggle()
```
## Testing Extracted Hooks
After extraction, test hooks in isolation:
```typescript
// use-model-config.spec.ts
import { renderHook, act } from '@testing-library/react'
import { useModelConfig } from './use-model-config'
describe('useModelConfig', () => {
it('should initialize with default values', () => {
const { result } = renderHook(() => useModelConfig({
hasFetchedDetail: false,
}))
expect(result.current.modelConfig.provider).toBe('langgenius/openai/openai')
expect(result.current.modelModeType).toBe(ModelModeType.unset)
})
it('should update model config', () => {
const { result } = renderHook(() => useModelConfig({
hasFetchedDetail: true,
}))
act(() => {
result.current.setModelConfig({
...result.current.modelConfig,
model_id: 'gpt-4',
})
})
expect(result.current.modelConfig.model_id).toBe('gpt-4')
})
})
```


@@ -1,73 +0,0 @@
---
name: frontend-code-review
description: "Trigger when the user requests a review of frontend files (e.g., `.tsx`, `.ts`, `.js`). Support both pending-change reviews and focused file reviews while applying the checklist rules."
---
# Frontend Code Review
## Intent
Use this skill whenever the user asks to review frontend code (especially `.tsx`, `.ts`, or `.js` files). Support two review modes:
1. **Pending-change review** – inspect staged/working-tree files slated for commit and flag checklist violations before submission.
2. **File-targeted review** – review the specific file(s) the user names and report the relevant checklist findings.
Stick to the checklist below for every applicable file and mode.
## Checklist
See [references/code-quality.md](references/code-quality.md), [references/performance.md](references/performance.md), [references/business-logic.md](references/business-logic.md) for the living checklist split by category—treat it as the canonical set of rules to follow.
Flag each rule violation with urgency metadata so future reviewers can prioritize fixes.
## Review Process
1. Open the relevant component/module. Gather lines that relate to class names, React Flow hooks, prop memoization, and styling.
2. For each rule in the review point, note where the code deviates and capture a representative snippet.
3. Compose the review section per the template below. Group violations first by **Urgent** flag, then by category order (Code Quality, Performance, Business Logic).
## Required output
When invoked, the response must exactly follow one of the two templates:
### Template A (any findings)
```
# Code review
Found <N> urgent issues that need to be fixed:
## 1 <brief description of bug>
FilePath: <path> line <line>
<relevant code snippet or pointer>
### Suggested fix
<brief description of suggested fix>
---
... (repeat for each urgent issue) ...
Found <M> suggestions for improvement:
## 1 <brief description of suggestion>
FilePath: <path> line <line>
<relevant code snippet or pointer>
### Suggested fix
<brief description of suggested fix>
---
... (repeat for each suggestion) ...
```
If there are no urgent issues, omit that section. If there are no suggestions, omit that section.
If there are more than 10 issues, summarize the count as "10+ urgent issues" or "10+ suggestions" and output only the first 10.
Don't compress the blank lines between sections; keep them as-is for readability.
If you use Template A (i.e., there are issues to fix) and at least one issue requires code changes, append a brief follow-up question after the structured output asking whether the user wants you to apply the suggested fix(es). For example: "Would you like me to use the Suggested fix section to address these issues?"
### Template B (no issues)
```
# Code review
No issues found.
```


@@ -1,15 +0,0 @@
# Rule Catalog — Business Logic
## Can't use workflowStore in Node components
IsUrgent: True
### Description
File path pattern of node components: `web/app/components/workflow/nodes/[nodeName]/node.tsx`
Node components are also used when creating a RAG Pipeline from a template, but in that context there is no workflowStore Provider, which results in a blank screen. [This issue](https://github.com/langgenius/dify/issues/29168) was caused by exactly this.
### Suggested Fix
Use `import { useNodes } from 'reactflow'` instead of `import useNodes from '@/app/components/workflow/store/workflow/use-nodes'`.


@@ -1,44 +0,0 @@
# Rule Catalog — Code Quality
## Conditional class names use utility function
IsUrgent: True
Category: Code Quality
### Description
Ensure conditional CSS is handled via the shared `cn` utility from `@/utils/classnames` instead of custom ternaries, string concatenation, or template strings. Centralizing class logic keeps components consistent and easier to maintain.
### Suggested Fix
```ts
import { cn } from '@/utils/classnames'
const classNames = cn(isActive ? 'text-primary-600' : 'text-gray-500')
```
## Tailwind-first styling
IsUrgent: True
Category: Code Quality
### Description
Favor Tailwind CSS utility classes instead of adding new `.module.css` files unless a Tailwind combination cannot achieve the required styling. Keeping styles in Tailwind improves consistency and reduces maintenance overhead.
Update this file when adding, editing, or removing Code Quality rules so the catalog remains accurate.
## Classname ordering for easy overrides
### Description
When writing components, always place the incoming `className` prop after the component’s own class values so that downstream consumers can override or extend the styling. This keeps your component’s defaults but still lets external callers change or remove specific styles.
Example:
```tsx
import { cn } from '@/utils/classnames'
const Button = ({ className }) => {
return <div className={cn('bg-primary-600', className)}></div>
}
```


@@ -1,45 +0,0 @@
# Rule Catalog — Performance
## React Flow data usage
IsUrgent: True
Category: Performance
### Description
When rendering React Flow, prefer `useNodes`/`useEdges` for UI consumption and rely on `useStoreApi` inside callbacks that mutate or read node/edge state. Avoid manually pulling Flow data outside of these hooks.
## Complex prop memoization
IsUrgent: True
Category: Performance
### Description
Wrap complex prop values (objects, arrays, maps) in `useMemo` prior to passing them into child components to guarantee stable references and prevent unnecessary renders.
Wrong:
```tsx
<HeavyComp
  config={{
    provider: ...,
    detail: ...
  }}
/>
```
Right:
```tsx
const config = useMemo(() => ({
  provider: ...,
  detail: ...
}), [provider, detail]);

<HeavyComp
  config={config}
/>
```
Update this file when adding, editing, or removing Performance rules so the catalog remains accurate.


@@ -1,44 +0,0 @@
---
name: frontend-query-mutation
description: Guide for implementing Dify frontend query and mutation patterns with TanStack Query and oRPC. Trigger when creating or updating contracts in web/contract, wiring router composition, consuming consoleQuery or marketplaceQuery in components or services, deciding whether to call queryOptions() directly or extract a helper or use-* hook, handling conditional queries, cache invalidation, mutation error handling, or migrating legacy service calls to contract-first query and mutation helpers.
---
# Frontend Query & Mutation
## Intent
- Keep contract as the single source of truth in `web/contract/*`.
- Prefer contract-shaped `queryOptions()` and `mutationOptions()`.
- Keep invalidation and mutation flow knowledge in the service layer.
- Keep abstractions minimal to preserve TypeScript inference.
## Workflow
1. Identify the change surface.
- Read `references/contract-patterns.md` for contract files, router composition, client helpers, and query or mutation call-site shape.
- Read `references/runtime-rules.md` for conditional queries, invalidation, error handling, and legacy migrations.
- Read both references when a task spans contract shape and runtime behavior.
2. Implement the smallest abstraction that fits the task.
- Default to direct `useQuery(...)` or `useMutation(...)` calls with oRPC helpers at the call site.
- Extract a small shared query helper only when multiple call sites share the same extra options.
- Create `web/service/use-{domain}.ts` only for orchestration or shared domain behavior.
3. Preserve Dify conventions.
- Keep contract inputs in `{ params, query?, body? }` shape.
- Bind invalidation in the service-layer mutation definition.
- Prefer `mutate(...)`; use `mutateAsync(...)` only when Promise semantics are required.
## Files Commonly Touched
- `web/contract/console/*.ts`
- `web/contract/marketplace.ts`
- `web/contract/router.ts`
- `web/service/client.ts`
- `web/service/use-*.ts`
- component and hook call sites using `consoleQuery` or `marketplaceQuery`
## References
- Use `references/contract-patterns.md` for contract shape, router registration, query and mutation helpers, and anti-patterns that degrade inference.
- Use `references/runtime-rules.md` for conditional queries, invalidation, `mutate` versus `mutateAsync`, and legacy migration rules.
Treat this skill as the single query and mutation entry point for Dify frontend work. Keep detailed rules in the reference files instead of duplicating them in project docs.


@@ -1,4 +0,0 @@
interface:
display_name: "Frontend Query & Mutation"
short_description: "Dify TanStack Query and oRPC patterns"
default_prompt: "Use this skill when implementing or reviewing Dify frontend contracts, query and mutation call sites, conditional queries, invalidation, or legacy query/mutation migrations."


@@ -1,98 +0,0 @@
# Contract Patterns
## Table of Contents
- Intent
- Minimal structure
- Core workflow
- Query usage decision rule
- Mutation usage decision rule
- Anti-patterns
- Contract rules
- Type export
## Intent
- Keep contract as the single source of truth in `web/contract/*`.
- Default query usage to call-site `useQuery(consoleQuery|marketplaceQuery.xxx.queryOptions(...))` when endpoint behavior maps 1:1 to the contract.
- Keep abstractions minimal and preserve TypeScript inference.
## Minimal Structure
```text
web/contract/
├── base.ts
├── router.ts
├── marketplace.ts
└── console/
├── billing.ts
└── ...other domains
web/service/client.ts
```
## Core Workflow
1. Define contract in `web/contract/console/{domain}.ts` or `web/contract/marketplace.ts`.
- Use `base.route({...}).output(type<...>())` as the baseline.
- Add `.input(type<...>())` only when the request has `params`, `query`, or `body`.
- For `GET` without input, omit `.input(...)`; do not use `.input(type<unknown>())`.
2. Register contract in `web/contract/router.ts`.
- Import directly from domain files and nest by API prefix.
3. Consume from UI call sites via oRPC query utilities.
```typescript
import { useQuery } from '@tanstack/react-query'
import { consoleQuery } from '@/service/client'
const invoiceQuery = useQuery(consoleQuery.billing.invoices.queryOptions({
staleTime: 5 * 60 * 1000,
throwOnError: true,
select: invoice => invoice.url,
}))
```
## Query Usage Decision Rule
1. Default to direct `*.queryOptions(...)` usage at the call site.
2. If 3 or more call sites share the same extra options, extract a small query helper, not a `use-*` passthrough hook.
3. Create `web/service/use-{domain}.ts` only for orchestration.
- Combine multiple queries or mutations.
- Share domain-level derived state or invalidation helpers.
```typescript
const invoicesBaseQueryOptions = () =>
consoleQuery.billing.invoices.queryOptions({ retry: false })
const invoiceQuery = useQuery({
...invoicesBaseQueryOptions(),
throwOnError: true,
})
```
## Mutation Usage Decision Rule
1. Default to mutation helpers from `consoleQuery` or `marketplaceQuery`, for example `useMutation(consoleQuery.billing.bindPartnerStack.mutationOptions(...))`.
2. If the mutation flow is heavily custom, use oRPC clients as `mutationFn`, for example `consoleClient.xxx` or `marketplaceClient.xxx`, instead of handwritten non-oRPC mutation logic.
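A hedged sketch of rule 2, assuming a `bindPartnerStack` endpoint on `consoleClient.billing` (the endpoint name and the `ensurePartnerConsent` helper are illustrative, not actual Dify APIs):

```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query'
import { consoleClient, consoleQuery } from '@/service/client'

// Custom flow: an extra step surrounds the request, so the oRPC client
// is used directly as mutationFn instead of mutationOptions().
export const useBindPartner = () => {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: async (body: { partnerId: string }) => {
      await ensurePartnerConsent(body.partnerId) // illustrative extra step
      return consoleClient.billing.bindPartnerStack({ body })
    },
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: consoleQuery.billing.key() })
    },
  })
}
```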
## Anti-Patterns
- Do not wrap `useQuery` with `options?: Partial<UseQueryOptions>`.
- Do not split local `queryKey` and `queryFn` when oRPC `queryOptions` already exists and fits the use case.
- Do not create thin `use-*` passthrough hooks for a single endpoint.
- These patterns can degrade inference, especially around `throwOnError` and `select`, and add unnecessary indirection.
## Contract Rules
- Input structure: always use `{ params, query?, body? }`.
- No-input `GET`: omit `.input(...)`; do not use `.input(type<unknown>())`.
- Path params: use `{paramName}` in the path and match it in the `params` object.
- Router nesting: group by API prefix, for example `/billing/*` becomes `billing: {}`.
- No barrel files: import directly from specific files.
- Types: import from `@/types/` and use the `type<T>()` helper.
- Mutations: prefer `mutationOptions`; use explicit `mutationKey` mainly for defaults, filtering, and devtools.
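The rules above, applied to a hypothetical invoice-detail endpoint (the domain, paths, and `Invoice` type are illustrative; `base` and `type` come from the project's existing contract setup):

```typescript
// web/contract/console/billing.ts — sketch only.
import { base } from '../base'

// Path param matches {invoiceId} in the route and in params.
export const invoiceDetail = base
  .route({ method: 'GET', path: '/billing/invoices/{invoiceId}' })
  .input(type<{ params: { invoiceId: string } }>())
  .output(type<Invoice>())

// No-input GET: omit .input(...) entirely.
export const invoices = base
  .route({ method: 'GET', path: '/billing/invoices' })
  .output(type<Invoice[]>())
```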
## Type Export
```typescript
export type ConsoleInputs = InferContractRouterInputs<typeof consoleRouterContract>
```


@@ -1,130 +0,0 @@
# Runtime Rules
## Table of Contents
- Conditional queries
- Cache invalidation
- Key API guide
- `mutate` vs `mutateAsync`
- Legacy migration
## Conditional Queries
Prefer contract-shaped `queryOptions(...)`.
When required input is missing, prefer `input: skipToken` instead of placeholder params or non-null assertions.
Use `enabled` only for extra business gating after the input itself is already valid.
```typescript
import { skipToken, useQuery } from '@tanstack/react-query'
// Disable the query by skipping input construction.
function useAccessMode(appId: string | undefined) {
return useQuery(consoleQuery.accessControl.appAccessMode.queryOptions({
input: appId
? { params: { appId } }
: skipToken,
}))
}
// Avoid runtime-only guards that bypass type checking.
function useBadAccessMode(appId: string | undefined) {
return useQuery(consoleQuery.accessControl.appAccessMode.queryOptions({
input: { params: { appId: appId! } },
enabled: !!appId,
}))
}
```
## Cache Invalidation
Bind invalidation in the service-layer mutation definition.
Components may add UI feedback in call-site callbacks, but they should not decide which queries to invalidate.
Use:
- `.key()` for namespace or prefix invalidation
- `.queryKey(...)` only for exact cache reads or writes such as `getQueryData` and `setQueryData`
- `queryClient.invalidateQueries(...)` in mutation `onSuccess`
Do not use deprecated `useInvalid` from `use-base.ts`.
```typescript
// Service layer owns cache invalidation.
export const useUpdateAccessMode = () => {
const queryClient = useQueryClient()
return useMutation(consoleQuery.accessControl.updateAccessMode.mutationOptions({
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: consoleQuery.accessControl.appWhitelistSubjects.key(),
})
},
}))
}
// Component only adds UI behavior.
updateAccessMode({ appId, mode }, {
onSuccess: () => toast.success('...'),
})
// Avoid putting invalidation knowledge in the component.
mutate({ appId, mode }, {
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: consoleQuery.accessControl.appWhitelistSubjects.key(),
})
},
})
```
## Key API Guide
- `.key(...)`
- Use for partial matching operations.
- Prefer it for invalidation, refetch, and cancel patterns.
- Example: `queryClient.invalidateQueries({ queryKey: consoleQuery.billing.key() })`
- `.queryKey(...)`
- Use for a specific query's full key.
- Prefer it for exact cache addressing and direct reads or writes.
- `.mutationKey(...)`
- Use for a specific mutation's full key.
- Prefer it for mutation defaults registration, mutation-status filtering, and devtools grouping.
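A sketch contrasting the three, assuming a billing contract as in earlier sections (the endpoint names and input shape are illustrative):

```typescript
// Partial match: invalidate everything under the billing namespace.
queryClient.invalidateQueries({ queryKey: consoleQuery.billing.key() })

// Exact match: read or write one query's cache entry.
const exactKey = consoleQuery.billing.invoices.queryKey({ input: { query: { page: 1 } } })
const cached = queryClient.getQueryData(exactKey)
queryClient.setQueryData(exactKey, cached)

// Mutation key: filter mutation status or register defaults.
const pending = useIsMutating({
  mutationKey: consoleQuery.billing.bindPartnerStack.mutationKey(),
})
```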
## `mutate` vs `mutateAsync`
Prefer `mutate` by default.
Use `mutateAsync` only when Promise semantics are truly required, such as parallel mutations or sequential steps with result dependencies.
Rules:
- Event handlers should usually call `mutate(...)` with `onSuccess` or `onError`.
- Every `await mutateAsync(...)` must be wrapped in `try/catch`.
- Do not use `mutateAsync` when callbacks already express the flow clearly.
```typescript
// Default case.
mutation.mutate(data, {
onSuccess: result => router.push(result.url),
})
// Promise semantics are required.
try {
const order = await createOrder.mutateAsync(orderData)
await confirmPayment.mutateAsync({ orderId: order.id, token })
router.push(`/orders/${order.id}`)
}
catch (error) {
toast.error(error instanceof Error ? error.message : 'Unknown error')
}
```
## Legacy Migration
When touching old code, migrate it toward these rules:
| Old pattern | New pattern |
|---|---|
| `useInvalid(key)` in service layer | `queryClient.invalidateQueries(...)` inside mutation `onSuccess` |
| component-triggered invalidation after mutation | move invalidation into the service-layer mutation definition |
| imperative fetch plus manual invalidation | wrap it in `useMutation(...mutationOptions(...))` |
| `await mutateAsync()` without `try/catch` | switch to `mutate(...)` or add `try/catch` |
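A sketch of the first migration row, with illustrative endpoint names:

```typescript
// Before: deprecated helper owned by the component/service boundary.
const invalidAppList = useInvalid(appListKey)
// ...later, after a successful mutation:
invalidAppList()

// After: invalidation bound inside the service-layer mutation definition.
export const useDeleteApp = () => {
  const queryClient = useQueryClient()
  return useMutation(consoleQuery.apps.deleteApp.mutationOptions({
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: consoleQuery.apps.key() })
    },
  }))
}
```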


@@ -1,336 +0,0 @@
---
name: frontend-testing
description: Generate Vitest + React Testing Library tests for Dify frontend components, hooks, and utilities. Triggers on testing, spec files, coverage, Vitest, RTL, unit tests, integration tests, or write/review test requests.
---
# Dify Frontend Testing Skill
This skill enables Claude to generate high-quality, comprehensive frontend tests for the Dify project following established conventions and best practices.
> **âš ī¸ Authoritative Source**: This skill is derived from `web/docs/test.md`. Use Vitest mock/timer APIs (`vi.*`).
## When to Apply This Skill
Apply this skill when the user:
- Asks to **write tests** for a component, hook, or utility
- Asks to **review existing tests** for completeness
- Mentions **Vitest**, **React Testing Library**, **RTL**, or **spec files**
- Requests **test coverage** improvement
- Uses `pnpm analyze-component` output as context
- Mentions **testing**, **unit tests**, or **integration tests** for frontend code
- Wants to understand **testing patterns** in the Dify codebase
**Do NOT apply** when:
- User is asking about backend/API tests (Python/pytest)
- User is asking about E2E tests (Playwright/Cypress)
- User is only asking conceptual questions without code context
## Quick Reference
### Tech Stack
| Tool | Version | Purpose |
|------|---------|---------|
| Vitest | 4.0.16 | Test runner |
| React Testing Library | 16.0 | Component testing |
| jsdom | - | Test environment |
| nock | 14.0 | HTTP mocking |
| TypeScript | 5.x | Type safety |
### Key Commands
```bash
# Run all tests
pnpm test
# Watch mode
pnpm test:watch
# Run specific file
pnpm test path/to/file.spec.tsx
# Generate coverage report
pnpm test:coverage
# Analyze component complexity
pnpm analyze-component <path>
# Review existing test
pnpm analyze-component <path> --review
```
### File Naming
- Test files: `ComponentName.spec.tsx` inside a same-level `__tests__/` directory
- Placement rule: Component, hook, and utility tests must live in a sibling `__tests__/` folder at the same level as the source under test. For example, `foo/index.tsx` maps to `foo/__tests__/index.spec.tsx`, and `foo/bar.ts` maps to `foo/__tests__/bar.spec.ts`.
- Integration tests: `web/__tests__/` directory
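The placement rule above, as a directory sketch (file names are illustrative):

```text
foo/
├── index.tsx
├── bar.ts
└── __tests__/
    ├── index.spec.tsx
    └── bar.spec.ts
```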
## Test Structure Template
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import Component from './index'
// ✅ Import real project components (DO NOT mock these)
// import Loading from '@/app/components/base/loading'
// import { ChildComponent } from './child-component'
// ✅ Mock external dependencies only
vi.mock('@/service/api')
vi.mock('next/navigation', () => ({
useRouter: () => ({ push: vi.fn() }),
usePathname: () => '/test',
}))
// ✅ Zustand stores: Use real stores (auto-mocked globally)
// Set test state with: useAppStore.setState({ ... })
// Shared state for mocks (if needed)
let mockSharedState = false
describe('ComponentName', () => {
beforeEach(() => {
vi.clearAllMocks() // ✅ Reset mocks BEFORE each test
mockSharedState = false // ✅ Reset shared state
})
// Rendering tests (REQUIRED)
describe('Rendering', () => {
it('should render without crashing', () => {
// Arrange
const props = { title: 'Test' }
// Act
render(<Component {...props} />)
// Assert
expect(screen.getByText('Test')).toBeInTheDocument()
})
})
// Props tests (REQUIRED)
describe('Props', () => {
it('should apply custom className', () => {
render(<Component className="custom" />)
expect(screen.getByRole('button')).toHaveClass('custom')
})
})
// User Interactions
describe('User Interactions', () => {
it('should handle click events', () => {
const handleClick = vi.fn()
render(<Component onClick={handleClick} />)
fireEvent.click(screen.getByRole('button'))
expect(handleClick).toHaveBeenCalledTimes(1)
})
})
// Edge Cases (REQUIRED)
describe('Edge Cases', () => {
it('should handle null data', () => {
render(<Component data={null} />)
expect(screen.getByText(/no data/i)).toBeInTheDocument()
})
it('should handle empty array', () => {
render(<Component items={[]} />)
expect(screen.getByText(/empty/i)).toBeInTheDocument()
})
})
})
```
## Testing Workflow (CRITICAL)
### âš ī¸ Incremental Approach Required
**NEVER generate all test files at once.** For complex components or multi-file directories:
1. **Analyze & Plan**: List all files, order by complexity (simple → complex)
1. **Process ONE at a time**: Write test → Run test → Fix if needed → Next
1. **Verify before proceeding**: Do NOT continue to next file until current passes
```
For each file:
┌────────────────────────────────────────┐
│ 1. Write test │
│ 2. Run: pnpm test <file>.spec.tsx │
│ 3. PASS? → Mark complete, next file │
│ FAIL? → Fix first, then continue │
└────────────────────────────────────────┘
```
### Complexity-Based Order
Process in this order for multi-file testing:
1. 🟢 Utility functions (simplest)
1. 🟢 Custom hooks
1. 🟡 Simple components (presentational)
1. 🟡 Medium components (state, effects)
1. 🔴 Complex components (API, routing)
1. 🔴 Integration tests (index files - last)
### When to Refactor First
- **Complexity > 50**: Break into smaller pieces before testing
- **500+ lines**: Consider splitting before testing
- **Many dependencies**: Extract logic into hooks first
> 📖 See `references/workflow.md` for complete workflow details and todo list format.
## Testing Strategy
### Path-Level Testing (Directory Testing)
When assigned to test a directory/path, test **ALL content** within that path:
- Test all components, hooks, utilities in the directory (not just `index` file)
- Use incremental approach: one file at a time, verify each before proceeding
- Goal: 100% coverage of ALL files in the directory
### Integration Testing First
**Prefer integration testing** when writing tests for a directory:
- ✅ **Import real project components** directly (including base components and siblings)
- ✅ **Only mock**: API services (`@/service/*`), `next/navigation`, complex context providers
- ❌ **DO NOT mock** base components (`@/app/components/base/*`)
- ❌ **DO NOT mock** sibling/child components in the same directory
> See [Test Structure Template](#test-structure-template) for correct import/mock patterns.
### `nuqs` Query State Testing (Required for URL State Hooks)
When a component or hook uses `useQueryState` / `useQueryStates`:
- ✅ Use `NuqsTestingAdapter` (prefer shared helpers in `web/test/nuqs-testing.tsx`)
- ✅ Assert URL synchronization via `onUrlUpdate` (`searchParams`, `options.history`)
- ✅ For custom parsers (`createParser`), keep `parse` and `serialize` bijective and add round-trip edge cases (`%2F`, `%25`, spaces, legacy encoded values)
- ✅ Verify default-clearing behavior (default values should be removed from URL when applicable)
- âš ī¸ Only mock `nuqs` directly when URL behavior is explicitly out of scope for the test
## Core Principles
### 1. AAA Pattern (Arrange-Act-Assert)
Every test should clearly separate:
- **Arrange**: Setup test data and render component
- **Act**: Perform user actions
- **Assert**: Verify expected outcomes
### 2. Black-Box Testing
- Test observable behavior, not implementation details
- Use semantic queries (getByRole, getByLabelText)
- Avoid testing internal state directly
- **Prefer pattern matching over hardcoded strings** in assertions:
```typescript
// ❌ Avoid: hardcoded text assertions
expect(screen.getByText('Loading...')).toBeInTheDocument()
// ✅ Better: role-based queries
expect(screen.getByRole('status')).toBeInTheDocument()
// ✅ Better: pattern matching
expect(screen.getByText(/loading/i)).toBeInTheDocument()
```
### 3. Single Behavior Per Test
Each test verifies ONE user-observable behavior:
```typescript
// ✅ Good: One behavior
it('should disable button when loading', () => {
render(<Button loading />)
expect(screen.getByRole('button')).toBeDisabled()
})
// ❌ Bad: Multiple behaviors
it('should handle loading state', () => {
render(<Button loading />)
expect(screen.getByRole('button')).toBeDisabled()
expect(screen.getByText('Loading...')).toBeInTheDocument()
expect(screen.getByRole('button')).toHaveClass('loading')
})
```
### 4. Semantic Naming
Use `should <behavior> when <condition>`:
```typescript
it('should show error message when validation fails')
it('should call onSubmit when form is valid')
it('should disable input when isReadOnly is true')
```
## Required Test Scenarios
### Always Required (All Components)
1. **Rendering**: Component renders without crashing
1. **Props**: Required props, optional props, default values
1. **Edge Cases**: null, undefined, empty values, boundary conditions
### Conditional (When Present)
| Feature | Test Focus |
|---------|-----------|
| `useState` | Initial state, transitions, cleanup |
| `useEffect` | Execution, dependencies, cleanup |
| Event handlers | All onClick, onChange, onSubmit, keyboard |
| API calls | Loading, success, error states |
| Routing | Navigation, params, query strings |
| `useCallback`/`useMemo` | Referential equality |
| Context | Provider values, consumer behavior |
| Forms | Validation, submission, error display |
## Coverage Goals (Per File)
For each test file generated, aim for:
- ✅ **100%** function coverage
- ✅ **100%** statement coverage
- ✅ **>95%** branch coverage
- ✅ **>95%** line coverage
> **Note**: For multi-file directories, process one file at a time with full coverage each. See `references/workflow.md`.
## Detailed Guides
For more detailed information, refer to:
- `references/workflow.md` - **Incremental testing workflow** (MUST READ for multi-file testing)
- `references/mocking.md` - Mock patterns, Zustand store testing, and best practices
- `references/async-testing.md` - Async operations and API calls
- `references/domain-components.md` - Workflow, Dataset, Configuration testing
- `references/common-patterns.md` - Frequently used testing patterns
- `references/checklist.md` - Test generation checklist and validation steps
## Authoritative References
### Primary Specification (MUST follow)
- **`web/docs/test.md`** - The canonical testing specification. This skill is derived from this document.
### Reference Examples in Codebase
- `web/utils/classnames.spec.ts` - Utility function tests
- `web/app/components/base/button/index.spec.tsx` - Component tests
- `web/__mocks__/provider-context.ts` - Mock factory example
### Project Configuration
- `web/vitest.config.ts` - Vitest configuration
- `web/vitest.setup.ts` - Test environment setup
- `web/scripts/analyze-component.js` - Component analysis tool
- Modules are not mocked automatically. Global mocks live in `web/vitest.setup.ts` (for example `react-i18next`, `next/image`); mock other modules like `ky` or `mime` locally in test files.
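A sketch of a local mock for a module that is not globally mocked (the return shape is illustrative, not `mime`'s full API):

```typescript
// In a single spec file — not in web/vitest.setup.ts.
vi.mock('mime', () => ({
  default: {
    getType: vi.fn(() => 'image/png'),
  },
}))
```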


@@ -1,293 +0,0 @@
/**
* Test Template for React Components
*
* WHY THIS STRUCTURE?
* - Organized sections make tests easy to navigate and maintain
* - Mocks at top ensure consistent test isolation
* - Factory functions reduce duplication and improve readability
* - describe blocks group related scenarios for better debugging
*
* INSTRUCTIONS:
* 1. Replace `ComponentName` with your component name
* 2. Update import path
* 3. Add/remove test sections based on component features (use analyze-component)
* 4. Follow AAA pattern: Arrange → Act → Assert
*
* RUN FIRST: pnpm analyze-component <path> to identify required test scenarios
*/
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
// import ComponentName from './index'
// ============================================================================
// Mocks
// ============================================================================
// WHY: Mocks must be hoisted to top of file (Vitest requirement).
// They run BEFORE imports, so keep them before component imports.
// i18n (automatically mocked)
// WHY: Global mock in web/vitest.setup.ts is auto-loaded by Vitest setup
// The global mock provides: useTranslation, Trans, useMixedTranslation, useGetLanguage
// No explicit mock needed for most tests
//
// Override only if custom translations are required:
// import { createReactI18nextMock } from '@/test/i18n-mock'
// vi.mock('react-i18next', () => createReactI18nextMock({
// 'my.custom.key': 'Custom Translation',
// 'button.save': 'Save',
// }))
// Router (if component uses useRouter, usePathname, useSearchParams)
// WHY: Isolates tests from Next.js routing, enables testing navigation behavior
// const mockPush = vi.fn()
// vi.mock('next/navigation', () => ({
// useRouter: () => ({ push: mockPush }),
// usePathname: () => '/test-path',
// }))
// API services (if component fetches data)
// WHY: Prevents real network calls, enables testing all states (loading/success/error)
// vi.mock('@/service/api')
// import * as api from '@/service/api'
// const mockedApi = vi.mocked(api)
// Shared mock state (for portal/dropdown components)
// WHY: Portal components like PortalToFollowElem need shared state between
// parent and child mocks to correctly simulate open/close behavior
// let mockOpenState = false
// ============================================================================
// Test Data Factories
// ============================================================================
// WHY FACTORIES?
// - Avoid hard-coded test data scattered across tests
// - Easy to create variations with overrides
// - Type-safe when using actual types from source
// - Single source of truth for default test values
// const createMockProps = (overrides = {}) => ({
// // Default props that make component render successfully
// ...overrides,
// })
// const createMockItem = (overrides = {}) => ({
// id: 'item-1',
// name: 'Test Item',
// ...overrides,
// })
// ============================================================================
// Test Helpers
// ============================================================================
// const renderComponent = (props = {}) => {
// return render(<ComponentName {...createMockProps(props)} />)
// }
// ============================================================================
// Tests
// ============================================================================
describe('ComponentName', () => {
// WHY beforeEach with clearAllMocks?
// - Ensures each test starts with clean slate
// - Prevents mock call history from leaking between tests
// - MUST be beforeEach (not afterEach) to reset BEFORE assertions like toHaveBeenCalledTimes
beforeEach(() => {
vi.clearAllMocks()
// Reset shared mock state if used (CRITICAL for portal/dropdown tests)
// mockOpenState = false
})
// --------------------------------------------------------------------------
// Rendering Tests (REQUIRED - Every component MUST have these)
// --------------------------------------------------------------------------
// WHY: Catches import errors, missing providers, and basic render issues
describe('Rendering', () => {
it('should render without crashing', () => {
// Arrange - Setup data and mocks
// const props = createMockProps()
// Act - Render the component
// render(<ComponentName {...props} />)
// Assert - Verify expected output
// Prefer getByRole for accessibility; it's what users "see"
// expect(screen.getByRole('...')).toBeInTheDocument()
})
it('should render with default props', () => {
// WHY: Verifies component works without optional props
// render(<ComponentName />)
// expect(screen.getByText('...')).toBeInTheDocument()
})
})
// --------------------------------------------------------------------------
// Props Tests (REQUIRED - Every component MUST test prop behavior)
// --------------------------------------------------------------------------
// WHY: Props are the component's API contract. Test them thoroughly.
describe('Props', () => {
it('should apply custom className', () => {
// WHY: Common pattern in Dify - components should merge custom classes
// render(<ComponentName className="custom-class" />)
// expect(screen.getByTestId('component')).toHaveClass('custom-class')
})
it('should use default values for optional props', () => {
// WHY: Verifies TypeScript defaults work at runtime
// render(<ComponentName />)
// expect(screen.getByRole('...')).toHaveAttribute('...', 'default-value')
})
})
// --------------------------------------------------------------------------
// User Interactions (if component has event handlers - on*, handle*)
// --------------------------------------------------------------------------
// WHY: Event handlers are core functionality. Test from user's perspective.
describe('User Interactions', () => {
it('should call onClick when clicked', async () => {
// WHY userEvent over fireEvent?
// - userEvent simulates real user behavior (focus, hover, then click)
// - fireEvent is lower-level, doesn't trigger all browser events
// const user = userEvent.setup()
// const handleClick = vi.fn()
// render(<ComponentName onClick={handleClick} />)
//
// await user.click(screen.getByRole('button'))
//
// expect(handleClick).toHaveBeenCalledTimes(1)
})
it('should call onChange when value changes', async () => {
// const user = userEvent.setup()
// const handleChange = vi.fn()
// render(<ComponentName onChange={handleChange} />)
//
// await user.type(screen.getByRole('textbox'), 'new value')
//
// expect(handleChange).toHaveBeenCalled()
})
})
// --------------------------------------------------------------------------
// State Management (if component uses useState/useReducer)
// --------------------------------------------------------------------------
// WHY: Test state through observable UI changes, not internal state values
describe('State Management', () => {
it('should update state on interaction', async () => {
// WHY test via UI, not state?
// - State is implementation detail; UI is what users see
// - If UI works correctly, state must be correct
// const user = userEvent.setup()
// render(<ComponentName />)
//
// // Initial state - verify what user sees
// expect(screen.getByText('Initial')).toBeInTheDocument()
//
// // Trigger state change via user action
// await user.click(screen.getByRole('button'))
//
// // New state - verify UI updated
// expect(screen.getByText('Updated')).toBeInTheDocument()
})
})
// --------------------------------------------------------------------------
// Async Operations (if component fetches data - useQuery, fetch)
// --------------------------------------------------------------------------
// WHY: Async operations have 3 states users experience: loading, success, error
describe('Async Operations', () => {
it('should show loading state', () => {
// WHY never-resolving promise?
// - Keeps component in loading state for assertion
// - Alternative: use fake timers
// mockedApi.fetchData.mockImplementation(() => new Promise(() => {}))
// render(<ComponentName />)
//
// expect(screen.getByText(/loading/i)).toBeInTheDocument()
})
it('should show data on success', async () => {
// WHY waitFor?
// - Component updates asynchronously after fetch resolves
// - waitFor retries assertion until it passes or times out
// mockedApi.fetchData.mockResolvedValue({ items: ['Item 1'] })
// render(<ComponentName />)
//
// await waitFor(() => {
// expect(screen.getByText('Item 1')).toBeInTheDocument()
// })
})
it('should show error on failure', async () => {
// mockedApi.fetchData.mockRejectedValue(new Error('Network error'))
// render(<ComponentName />)
//
// await waitFor(() => {
// expect(screen.getByText(/error/i)).toBeInTheDocument()
// })
})
})
// --------------------------------------------------------------------------
// Edge Cases (REQUIRED - Every component MUST handle edge cases)
// --------------------------------------------------------------------------
// WHY: Real-world data is messy. Components must handle:
// - Null/undefined from API failures or optional fields
// - Empty arrays/strings from user clearing data
// - Boundary values (0, MAX_INT, special characters)
describe('Edge Cases', () => {
it('should handle null value', () => {
// WHY test null specifically?
// - API might return null for missing data
// - Prevents "Cannot read property of null" in production
// render(<ComponentName value={null} />)
// expect(screen.getByText(/no data/i)).toBeInTheDocument()
})
it('should handle undefined value', () => {
// WHY test undefined separately from null?
// - TypeScript treats them differently
// - Optional props are undefined, not null
// render(<ComponentName value={undefined} />)
// expect(screen.getByText(/no data/i)).toBeInTheDocument()
})
it('should handle empty array', () => {
// WHY: Empty state often needs special UI (e.g., "No items yet")
// render(<ComponentName items={[]} />)
// expect(screen.getByText(/empty/i)).toBeInTheDocument()
})
it('should handle empty string', () => {
// WHY: Empty strings are truthy in JS but visually empty
// render(<ComponentName text="" />)
// expect(screen.getByText(/placeholder/i)).toBeInTheDocument()
})
})
// --------------------------------------------------------------------------
// Accessibility (optional but recommended for Dify's enterprise users)
// --------------------------------------------------------------------------
// WHY: Dify has enterprise customers who may require accessibility compliance
describe('Accessibility', () => {
it('should have accessible name', () => {
// WHY getByRole with name?
// - Tests that screen readers can identify the element
// - Enforces proper labeling practices
// render(<ComponentName label="Test Label" />)
// expect(screen.getByRole('button', { name: /test label/i })).toBeInTheDocument()
})
it('should support keyboard navigation', async () => {
// WHY: Some users can't use a mouse
// const user = userEvent.setup()
// render(<ComponentName />)
//
// await user.tab()
// expect(screen.getByRole('button')).toHaveFocus()
})
})
})


@@ -1,207 +0,0 @@
/**
* Test Template for Custom Hooks
*
* Instructions:
* 1. Replace `useHookName` with your hook name
* 2. Update import path
* 3. Add/remove test sections based on hook features
*/
import { renderHook, act, waitFor } from '@testing-library/react'
// import { useHookName } from './use-hook-name'
// ============================================================================
// Mocks
// ============================================================================
// API services (if hook fetches data)
// vi.mock('@/service/api')
// import * as api from '@/service/api'
// const mockedApi = vi.mocked(api)
// ============================================================================
// Test Helpers
// ============================================================================
// Wrapper for hooks that need context
// const createWrapper = (contextValue = {}) => {
// return ({ children }: { children: React.ReactNode }) => (
// <SomeContext.Provider value={contextValue}>
// {children}
// </SomeContext.Provider>
// )
// }
// ============================================================================
// Tests
// ============================================================================
describe('useHookName', () => {
beforeEach(() => {
vi.clearAllMocks()
})
// --------------------------------------------------------------------------
// Initial State
// --------------------------------------------------------------------------
describe('Initial State', () => {
it('should return initial state', () => {
// const { result } = renderHook(() => useHookName())
//
// expect(result.current.value).toBe(initialValue)
// expect(result.current.isLoading).toBe(false)
})
it('should accept initial value from props', () => {
// const { result } = renderHook(() => useHookName({ initialValue: 'custom' }))
//
// expect(result.current.value).toBe('custom')
})
})
// --------------------------------------------------------------------------
// State Updates
// --------------------------------------------------------------------------
describe('State Updates', () => {
it('should update value when setValue is called', () => {
// const { result } = renderHook(() => useHookName())
//
// act(() => {
// result.current.setValue('new value')
// })
//
// expect(result.current.value).toBe('new value')
})
it('should reset to initial value', () => {
// const { result } = renderHook(() => useHookName({ initialValue: 'initial' }))
//
// act(() => {
// result.current.setValue('changed')
// })
// expect(result.current.value).toBe('changed')
//
// act(() => {
// result.current.reset()
// })
// expect(result.current.value).toBe('initial')
})
})
// --------------------------------------------------------------------------
// Async Operations
// --------------------------------------------------------------------------
describe('Async Operations', () => {
it('should fetch data on mount', async () => {
// mockedApi.fetchData.mockResolvedValue({ data: 'test' })
//
// const { result } = renderHook(() => useHookName())
//
// // Initially loading
// expect(result.current.isLoading).toBe(true)
//
// // Wait for data
// await waitFor(() => {
// expect(result.current.isLoading).toBe(false)
// })
//
// expect(result.current.data).toEqual({ data: 'test' })
})
it('should handle fetch error', async () => {
// mockedApi.fetchData.mockRejectedValue(new Error('Network error'))
//
// const { result } = renderHook(() => useHookName())
//
// await waitFor(() => {
// expect(result.current.error).toBeTruthy()
// })
//
// expect(result.current.error?.message).toBe('Network error')
})
it('should refetch when dependency changes', async () => {
// mockedApi.fetchData.mockResolvedValue({ data: 'test' })
//
// const { result, rerender } = renderHook(
// ({ id }) => useHookName(id),
// { initialProps: { id: '1' } }
// )
//
// await waitFor(() => {
// expect(mockedApi.fetchData).toHaveBeenCalledWith('1')
// })
//
// rerender({ id: '2' })
//
// await waitFor(() => {
// expect(mockedApi.fetchData).toHaveBeenCalledWith('2')
// })
})
})
// --------------------------------------------------------------------------
// Side Effects
// --------------------------------------------------------------------------
describe('Side Effects', () => {
it('should call callback when value changes', () => {
// const callback = vi.fn()
// const { result } = renderHook(() => useHookName({ onChange: callback }))
//
// act(() => {
// result.current.setValue('new value')
// })
//
// expect(callback).toHaveBeenCalledWith('new value')
})
it('should cleanup on unmount', () => {
// vi.spyOn(window, 'addEventListener')
// vi.spyOn(window, 'removeEventListener')
//
// const { unmount } = renderHook(() => useHookName())
//
// expect(window.addEventListener).toHaveBeenCalled()
//
// unmount()
//
// expect(window.removeEventListener).toHaveBeenCalled()
})
})
// --------------------------------------------------------------------------
// Edge Cases
// --------------------------------------------------------------------------
describe('Edge Cases', () => {
it('should handle null input', () => {
// const { result } = renderHook(() => useHookName(null))
//
// expect(result.current.value).toBeNull()
})
it('should handle rapid updates', () => {
// const { result } = renderHook(() => useHookName())
//
// act(() => {
// result.current.setValue('1')
// result.current.setValue('2')
// result.current.setValue('3')
// })
//
// expect(result.current.value).toBe('3')
})
})
// --------------------------------------------------------------------------
// With Context (if hook uses context)
// --------------------------------------------------------------------------
describe('With Context', () => {
it('should use context value', () => {
// const wrapper = createWrapper({ someValue: 'context-value' })
// const { result } = renderHook(() => useHookName(), { wrapper })
//
// expect(result.current.contextValue).toBe('context-value')
})
})
})


@@ -1,154 +0,0 @@
/**
* Test Template for Utility Functions
*
* Instructions:
* 1. Replace `utilityFunction` with your function name
* 2. Update import path
* 3. Use test.each for data-driven tests
*/
// import { utilityFunction } from './utility'
// ============================================================================
// Tests
// ============================================================================
describe('utilityFunction', () => {
// --------------------------------------------------------------------------
// Basic Functionality
// --------------------------------------------------------------------------
describe('Basic Functionality', () => {
it('should return expected result for valid input', () => {
// expect(utilityFunction('input')).toBe('expected-output')
})
it('should handle multiple arguments', () => {
// expect(utilityFunction('a', 'b', 'c')).toBe('abc')
})
})
// --------------------------------------------------------------------------
// Data-Driven Tests
// --------------------------------------------------------------------------
describe('Input/Output Mapping', () => {
test.each([
// [input, expected]
['input1', 'output1'],
['input2', 'output2'],
['input3', 'output3'],
    ])('should map input %s to output %s', (input, expected) => {
// expect(utilityFunction(input)).toBe(expected)
})
})
// --------------------------------------------------------------------------
// Edge Cases
// --------------------------------------------------------------------------
describe('Edge Cases', () => {
it('should handle empty string', () => {
// expect(utilityFunction('')).toBe('')
})
it('should handle null', () => {
// expect(utilityFunction(null)).toBe(null)
// or
// expect(() => utilityFunction(null)).toThrow()
})
it('should handle undefined', () => {
// expect(utilityFunction(undefined)).toBe(undefined)
// or
// expect(() => utilityFunction(undefined)).toThrow()
})
it('should handle empty array', () => {
// expect(utilityFunction([])).toEqual([])
})
it('should handle empty object', () => {
// expect(utilityFunction({})).toEqual({})
})
})
// --------------------------------------------------------------------------
// Boundary Conditions
// --------------------------------------------------------------------------
describe('Boundary Conditions', () => {
it('should handle minimum value', () => {
// expect(utilityFunction(0)).toBe(0)
})
it('should handle maximum value', () => {
// expect(utilityFunction(Number.MAX_SAFE_INTEGER)).toBe(...)
})
it('should handle negative numbers', () => {
// expect(utilityFunction(-1)).toBe(...)
})
})
// --------------------------------------------------------------------------
// Type Coercion (if applicable)
// --------------------------------------------------------------------------
describe('Type Handling', () => {
it('should handle numeric string', () => {
// expect(utilityFunction('123')).toBe(123)
})
it('should handle boolean', () => {
// expect(utilityFunction(true)).toBe(...)
})
})
// --------------------------------------------------------------------------
// Error Cases
// --------------------------------------------------------------------------
describe('Error Handling', () => {
it('should throw for invalid input', () => {
// expect(() => utilityFunction('invalid')).toThrow('Error message')
})
it('should throw with specific error type', () => {
// expect(() => utilityFunction('invalid')).toThrow(ValidationError)
})
})
// --------------------------------------------------------------------------
// Complex Objects (if applicable)
// --------------------------------------------------------------------------
describe('Object Handling', () => {
it('should preserve object structure', () => {
// const input = { a: 1, b: 2 }
// expect(utilityFunction(input)).toEqual({ a: 1, b: 2 })
})
it('should handle nested objects', () => {
// const input = { nested: { deep: 'value' } }
// expect(utilityFunction(input)).toEqual({ nested: { deep: 'transformed' } })
})
it('should not mutate input', () => {
// const input = { a: 1 }
// const inputCopy = { ...input }
// utilityFunction(input)
// expect(input).toEqual(inputCopy)
})
})
// --------------------------------------------------------------------------
// Array Handling (if applicable)
// --------------------------------------------------------------------------
describe('Array Handling', () => {
it('should process all elements', () => {
// expect(utilityFunction([1, 2, 3])).toEqual([2, 4, 6])
})
it('should handle single element array', () => {
// expect(utilityFunction([1])).toEqual([2])
})
it('should preserve order', () => {
// expect(utilityFunction(['c', 'a', 'b'])).toEqual(['c', 'a', 'b'])
})
})
})


@@ -1,345 +0,0 @@
# Async Testing Guide
## Core Async Patterns
### 1. waitFor - Wait for Condition
```typescript
import { render, screen, waitFor } from '@testing-library/react'
it('should load and display data', async () => {
render(<DataComponent />)
// Wait for element to appear
await waitFor(() => {
expect(screen.getByText('Loaded Data')).toBeInTheDocument()
})
})
it('should hide loading spinner after load', async () => {
render(<DataComponent />)
// Wait for element to disappear
await waitFor(() => {
expect(screen.queryByText('Loading...')).not.toBeInTheDocument()
})
})
```
### 2. findBy\* - Async Queries
```typescript
it('should show user name after fetch', async () => {
render(<UserProfile />)
// findBy returns a promise, auto-waits up to 1000ms
const userName = await screen.findByText('John Doe')
expect(userName).toBeInTheDocument()
// findByRole with options
const button = await screen.findByRole('button', { name: /submit/i })
expect(button).toBeEnabled()
})
```
### 3. userEvent for Async Interactions
```typescript
import userEvent from '@testing-library/user-event'
it('should submit form', async () => {
const user = userEvent.setup()
const onSubmit = vi.fn()
render(<Form onSubmit={onSubmit} />)
// userEvent methods are async
await user.type(screen.getByLabelText('Email'), 'test@example.com')
await user.click(screen.getByRole('button', { name: /submit/i }))
await waitFor(() => {
expect(onSubmit).toHaveBeenCalledWith({ email: 'test@example.com' })
})
})
```
## Fake Timers
### When to Use Fake Timers
- Testing components with `setTimeout`/`setInterval`
- Testing debounce/throttle behavior
- Testing animations or delayed transitions
- Testing polling or retry logic
### Basic Fake Timer Setup
```typescript
describe('Debounced Search', () => {
beforeEach(() => {
vi.useFakeTimers()
})
afterEach(() => {
vi.useRealTimers()
})
it('should debounce search input', async () => {
const onSearch = vi.fn()
render(<SearchInput onSearch={onSearch} debounceMs={300} />)
// Type in the input
fireEvent.change(screen.getByRole('textbox'), { target: { value: 'query' } })
// Search not called immediately
expect(onSearch).not.toHaveBeenCalled()
// Advance timers
vi.advanceTimersByTime(300)
// Now search is called
expect(onSearch).toHaveBeenCalledWith('query')
})
})
```
### Fake Timers with Async Code
```typescript
it('should retry on failure', async () => {
vi.useFakeTimers()
const fetchData = vi.fn()
.mockRejectedValueOnce(new Error('Network error'))
.mockResolvedValueOnce({ data: 'success' })
render(<RetryComponent fetchData={fetchData} retryDelayMs={1000} />)
// First call fails
await waitFor(() => {
expect(fetchData).toHaveBeenCalledTimes(1)
})
// Advance timer for retry
vi.advanceTimersByTime(1000)
// Second call succeeds
await waitFor(() => {
expect(fetchData).toHaveBeenCalledTimes(2)
expect(screen.getByText('success')).toBeInTheDocument()
})
vi.useRealTimers()
})
```
### Common Fake Timer Utilities
```typescript
// Run all pending timers
vi.runAllTimers()
// Run only pending timers (not new ones created during execution)
vi.runOnlyPendingTimers()
// Advance by specific time
vi.advanceTimersByTime(1000)
// Get current fake time
Date.now()
// Clear all timers
vi.clearAllTimers()
```
## API Testing Patterns
### Loading → Success → Error States
```typescript
describe('DataFetcher', () => {
beforeEach(() => {
vi.clearAllMocks()
})
it('should show loading state', () => {
mockedApi.fetchData.mockImplementation(() => new Promise(() => {})) // Never resolves
render(<DataFetcher />)
expect(screen.getByTestId('loading-spinner')).toBeInTheDocument()
})
it('should show data on success', async () => {
mockedApi.fetchData.mockResolvedValue({ items: ['Item 1', 'Item 2'] })
render(<DataFetcher />)
// Use findBy* for multiple async elements (better error messages than waitFor with multiple assertions)
const item1 = await screen.findByText('Item 1')
const item2 = await screen.findByText('Item 2')
expect(item1).toBeInTheDocument()
expect(item2).toBeInTheDocument()
expect(screen.queryByTestId('loading-spinner')).not.toBeInTheDocument()
})
it('should show error on failure', async () => {
mockedApi.fetchData.mockRejectedValue(new Error('Failed to fetch'))
render(<DataFetcher />)
await waitFor(() => {
expect(screen.getByText(/failed to fetch/i)).toBeInTheDocument()
})
})
it('should retry on error', async () => {
mockedApi.fetchData.mockRejectedValue(new Error('Network error'))
render(<DataFetcher />)
await waitFor(() => {
expect(screen.getByRole('button', { name: /retry/i })).toBeInTheDocument()
})
mockedApi.fetchData.mockResolvedValue({ items: ['Item 1'] })
fireEvent.click(screen.getByRole('button', { name: /retry/i }))
await waitFor(() => {
expect(screen.getByText('Item 1')).toBeInTheDocument()
})
})
})
```
### Testing Mutations
```typescript
it('should submit form and show success', async () => {
const user = userEvent.setup()
mockedApi.createItem.mockResolvedValue({ id: '1', name: 'New Item' })
render(<CreateItemForm />)
await user.type(screen.getByLabelText('Name'), 'New Item')
await user.click(screen.getByRole('button', { name: /create/i }))
// Button should be disabled during submission
expect(screen.getByRole('button', { name: /creating/i })).toBeDisabled()
await waitFor(() => {
expect(screen.getByText(/created successfully/i)).toBeInTheDocument()
})
expect(mockedApi.createItem).toHaveBeenCalledWith({ name: 'New Item' })
})
```
## useEffect Testing
### Testing Effect Execution
```typescript
it('should fetch data on mount', async () => {
const fetchData = vi.fn().mockResolvedValue({ data: 'test' })
render(<ComponentWithEffect fetchData={fetchData} />)
await waitFor(() => {
expect(fetchData).toHaveBeenCalledTimes(1)
})
})
```
### Testing Effect Dependencies
```typescript
it('should refetch when id changes', async () => {
const fetchData = vi.fn().mockResolvedValue({ data: 'test' })
const { rerender } = render(<ComponentWithEffect id="1" fetchData={fetchData} />)
await waitFor(() => {
expect(fetchData).toHaveBeenCalledWith('1')
})
rerender(<ComponentWithEffect id="2" fetchData={fetchData} />)
await waitFor(() => {
expect(fetchData).toHaveBeenCalledWith('2')
expect(fetchData).toHaveBeenCalledTimes(2)
})
})
```
### Testing Effect Cleanup
```typescript
it('should cleanup subscription on unmount', () => {
const subscribe = vi.fn()
const unsubscribe = vi.fn()
subscribe.mockReturnValue(unsubscribe)
const { unmount } = render(<SubscriptionComponent subscribe={subscribe} />)
expect(subscribe).toHaveBeenCalledTimes(1)
unmount()
expect(unsubscribe).toHaveBeenCalledTimes(1)
})
```
## Common Async Pitfalls
### ❌ Don't: Forget to await
```typescript
// Bad - test may pass even if assertion fails
it('should load data', () => {
render(<Component />)
waitFor(() => {
expect(screen.getByText('Data')).toBeInTheDocument()
})
})
// Good - properly awaited
it('should load data', async () => {
render(<Component />)
await waitFor(() => {
expect(screen.getByText('Data')).toBeInTheDocument()
})
})
```
### ❌ Don't: Use multiple assertions in single waitFor
```typescript
// Bad - if first assertion fails, won't know about second
await waitFor(() => {
expect(screen.getByText('Title')).toBeInTheDocument()
expect(screen.getByText('Description')).toBeInTheDocument()
})
// Good - separate waitFor or use findBy
const title = await screen.findByText('Title')
const description = await screen.findByText('Description')
expect(title).toBeInTheDocument()
expect(description).toBeInTheDocument()
```
### ❌ Don't: Mix fake timers with real async
```typescript
// Bad - fake timers don't work well with real Promises
vi.useFakeTimers()
render(<Component />)
await waitFor(() => {
  expect(screen.getByText('Data')).toBeInTheDocument()
}) // May timeout!
// Good - use runAllTimers or advanceTimersByTime
vi.useFakeTimers()
render(<Component />)
vi.runAllTimers()
expect(screen.getByText('Data')).toBeInTheDocument()
```


@@ -1,208 +0,0 @@
# Test Generation Checklist
Use this checklist when generating or reviewing tests for Dify frontend components.
## Pre-Generation
- [ ] Read the component source code completely
- [ ] Identify component type (component, hook, utility, page)
- [ ] Run `pnpm analyze-component <path>` if available
- [ ] Note complexity score and features detected
- [ ] Check for existing tests in the same directory
- [ ] **Identify ALL files in the directory** that need testing (not just index)
## Testing Strategy
### ⚠️ Incremental Workflow (CRITICAL for Multi-File)
- [ ] **NEVER generate all tests at once** - process one file at a time
- [ ] Order files by complexity: utilities → hooks → simple → complex → integration
- [ ] Create a todo list to track progress before starting
- [ ] For EACH file: write → run test → verify pass → then next
- [ ] **DO NOT proceed** to next file until current one passes
### Path-Level Coverage
- [ ] **Test ALL files** in the assigned directory/path
- [ ] List all components, hooks, utilities that need coverage
- [ ] Decide: single spec file (integration) or multiple spec files (unit)
### Complexity Assessment
- [ ] Run `pnpm analyze-component <path>` for complexity score
- [ ] **Complexity > 50**: Consider refactoring before testing
- [ ] **500+ lines**: Consider splitting before testing
- [ ] **30-50 complexity**: Use multiple describe blocks, organized structure
### Integration vs Mocking
- [ ] **DO NOT mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
- [ ] Import real project components instead of mocking
- [ ] Only mock: API calls, complex context providers, third-party libs with side effects
- [ ] Prefer integration testing when using single spec file
## Required Test Sections
### All Components MUST Have
- [ ] **Rendering tests** - Component renders without crashing
- [ ] **Props tests** - Required props, optional props, default values
- [ ] **Edge cases** - null, undefined, empty values, boundaries
### Conditional Sections (Add When Feature Present)
| Feature | Add Tests For |
|---------|---------------|
| `useState` | Initial state, transitions, cleanup |
| `useEffect` | Execution, dependencies, cleanup |
| Event handlers | onClick, onChange, onSubmit, keyboard |
| API calls | Loading, success, error states |
| Routing | Navigation, params, query strings |
| `useCallback`/`useMemo` | Referential equality |
| Context | Provider values, consumer behavior |
| Forms | Validation, submission, error display |
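For the `useCallback`/`useMemo` row, "referential equality" can be illustrated in plain TypeScript before reaching for `renderHook`. The cache below is only an analogy for memoization, not React's implementation:

```typescript
// Memoized: equal inputs return the SAME object reference across calls —
// this is what a useMemo/useCallback test asserts across rerenders.
const cache = new Map<string, { label: string }>()
const memoized = (key: string) => {
  if (!cache.has(key))
    cache.set(key, { label: key })
  return cache.get(key)!
}

// Non-memoized: a fresh object is created on every call,
// the way an inline callback is recreated on every render.
const fresh = (key: string) => ({ label: key })

const sameRef = memoized('a') === memoized('a') // true — reference is stable
const newRef = fresh('a') === fresh('a') // false — recreated each call
```

In a hook test, the equivalent assertion compares `result.current.callback` before and after a `rerender()` with unchanged props.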
## Code Quality Checklist
### Structure
- [ ] Uses `describe` blocks to group related tests
- [ ] Test names follow `should <behavior> when <condition>` pattern
- [ ] AAA pattern (Arrange-Act-Assert) is clear
- [ ] Comments explain complex test scenarios
### Mocks
- [ ] **DO NOT mock base components** (`@/app/components/base/*`)
- [ ] `vi.clearAllMocks()` in `beforeEach` (not `afterEach`)
- [ ] Shared mock state reset in `beforeEach`
- [ ] i18n uses global mock (auto-loaded in `web/vitest.setup.ts`); only override locally for custom translations
- [ ] Router mocks match actual Next.js API
- [ ] Mocks reflect actual component conditional behavior
- [ ] Only mock: API services, complex context providers, third-party libs
- [ ] For `nuqs` URL-state tests, wrap with `NuqsTestingAdapter` (prefer `web/test/nuqs-testing.tsx`)
- [ ] For `nuqs` URL-state tests, assert `onUrlUpdate` payload (`searchParams`, `options.history`)
- [ ] If custom `nuqs` parser exists, add round-trip tests for encoded edge cases (`%2F`, `%25`, spaces, legacy encoded values)
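A round-trip check for a custom parser can be sketched as follows. The `serialize`/`parse` pair here is a hypothetical stand-in built on `encodeURIComponent` — a real nuqs parser would supply its own pair, but the assertion shape is the same:

```typescript
// Hypothetical parser pair — stands in for a custom nuqs parser's
// serialize/parse functions.
const serialize = (value: string): string => encodeURIComponent(value)
const parse = (raw: string): string => decodeURIComponent(raw)

// Encoded edge cases from the checklist: slashes, percent signs, spaces,
// and values that already contain an encoded sequence.
const edgeCases = ['a/b', '100%', 'hello world', 'a%2Fb']
for (const value of edgeCases) {
  const roundTripped = parse(serialize(value))
  if (roundTripped !== value)
    throw new Error(`round-trip failed for "${value}" -> "${roundTripped}"`)
}
```

The pre-encoded case (`'a%2Fb'`) is the one that catches double-encoding bugs in legacy values.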
### Queries
- [ ] Prefer semantic queries (`getByRole`, `getByLabelText`)
- [ ] Use `queryBy*` for absence assertions
- [ ] Use `findBy*` for async elements
- [ ] `getByTestId` only as last resort
### Async
- [ ] All async tests use `async/await`
- [ ] `waitFor` wraps async assertions
- [ ] Fake timers properly setup/teardown
- [ ] No floating promises
### TypeScript
- [ ] No `any` types without justification
- [ ] Mock data uses actual types from source
- [ ] Factory functions have proper return types
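A typed factory satisfying the last two points might look like this sketch (the `Item` shape is hypothetical — in a real test, import the actual type from the source file):

```typescript
// Hypothetical Item type — replace with the real type imported from source.
type Item = {
  id: string
  name: string
  enabled: boolean
}

// Explicit return type catches missing or extra fields at compile time;
// Partial<Item> keeps overrides type-safe without requiring every field.
const createMockItem = (overrides: Partial<Item> = {}): Item => ({
  id: 'item-1',
  name: 'Test Item',
  enabled: true,
  ...overrides,
})

const custom = createMockItem({ name: 'Renamed' })
```

Overrides replace only the named fields, so defaults remain the single source of truth.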
## Coverage Goals (Per File)
For the current file being tested:
- [ ] 100% function coverage
- [ ] 100% statement coverage
- [ ] >95% branch coverage
- [ ] >95% line coverage
## Post-Generation (Per File)
**Run these checks after EACH test file, not just at the end:**
- [ ] Run `pnpm test path/to/file.spec.tsx` - **MUST PASS before next file**
- [ ] Fix any failures immediately
- [ ] Mark file as complete in todo list
- [ ] Only then proceed to next file
### After All Files Complete
- [ ] Run full directory test: `pnpm test path/to/directory/`
- [ ] Check coverage report: `pnpm test:coverage`
- [ ] Run `pnpm lint:fix` on all test files
- [ ] Run `pnpm type-check:tsgo`
## Common Issues to Watch
### False Positives
```typescript
// ❌ Mock doesn't match actual behavior
vi.mock('./Component', () => ({ default: () => <div>Mocked</div> }))
// ✅ Mock matches actual conditional logic
vi.mock('./Component', () => ({
  default: ({ isOpen }: { isOpen: boolean }) =>
    isOpen ? <div>Content</div> : null,
}))
```
### State Leakage
```typescript
// ❌ Shared state not reset
let mockState = false
vi.mock('./useHook', () => ({ useHook: () => mockState }))
// ✅ Reset in beforeEach
beforeEach(() => {
mockState = false
})
```
### Async Race Conditions
```typescript
// ❌ Asserts before async data arrives
it('loads data', () => {
render(<Component />)
expect(screen.getByText('Data')).toBeInTheDocument()
})
// ✅ Properly awaited
it('loads data', async () => {
render(<Component />)
await waitFor(() => {
expect(screen.getByText('Data')).toBeInTheDocument()
})
})
```
### Missing Edge Cases
Always test these scenarios:
- `null` / `undefined` inputs
- Empty strings / arrays / objects
- Boundary values (0, -1, MAX_INT)
- Error states
- Loading states
- Disabled states
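The scenarios above fit naturally into one data-driven table (with vitest, `test.each` over the same tuples). A minimal sketch using a hypothetical `labelFor` helper standing in for the component logic under test:

```typescript
// Hypothetical helper standing in for component logic — maps a
// possibly-missing input to the label a component would render.
function labelFor(input: string | null | undefined): string {
  if (input === null || input === undefined || input === '')
    return 'Unknown'
  return input
}

// Edge cases from the checklist above, as one table of [input, expected]
const edgeCases: Array<[string | null | undefined, string]> = [
  [null, 'Unknown'],      // null input
  [undefined, 'Unknown'], // undefined input
  ['', 'Unknown'],        // empty string
  ['0', '0'],             // boundary-looking value is still valid text
  ['hello', 'hello'],     // happy path
]

for (const [input, expected] of edgeCases) {
  if (labelFor(input) !== expected)
    throw new Error(`labelFor(${String(input)}) !== ${expected}`)
}
```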
## Quick Commands
```bash
# Run specific test
pnpm test path/to/file.spec.tsx
# Run with coverage
pnpm test:coverage path/to/file.spec.tsx
# Watch mode
pnpm test:watch path/to/file.spec.tsx
# Update snapshots (use sparingly)
pnpm test -u path/to/file.spec.tsx
# Analyze component
pnpm analyze-component path/to/component.tsx
# Review existing test
pnpm analyze-component path/to/component.tsx --review
```

# Common Testing Patterns
## Query Priority
Use queries in this order (most to least preferred):
```typescript
// 1. getByRole - Most recommended (accessibility)
screen.getByRole('button', { name: /submit/i })
screen.getByRole('textbox', { name: /email/i })
screen.getByRole('heading', { level: 1 })
// 2. getByLabelText - Form fields
screen.getByLabelText('Email address')
screen.getByLabelText(/password/i)
// 3. getByPlaceholderText - When no label
screen.getByPlaceholderText('Search...')
// 4. getByText - Non-interactive elements
screen.getByText('Welcome to Dify')
screen.getByText(/loading/i)
// 5. getByDisplayValue - Current input value
screen.getByDisplayValue('current value')
// 6. getByAltText - Images
screen.getByAltText('Company logo')
// 7. getByTitle - Tooltip elements
screen.getByTitle('Close')
// 8. getByTestId - Last resort only!
screen.getByTestId('custom-element')
```
## Event Handling Patterns
### Click Events
```typescript
// Basic click
fireEvent.click(screen.getByRole('button'))
// With userEvent (preferred for realistic interaction)
const user = userEvent.setup()
await user.click(screen.getByRole('button'))
// Double click
await user.dblClick(screen.getByRole('button'))
// Right click
await user.pointer({ keys: '[MouseRight]', target: screen.getByRole('button') })
```
### Form Input
```typescript
const user = userEvent.setup()
// Type in input
await user.type(screen.getByRole('textbox'), 'Hello World')
// Clear and type
await user.clear(screen.getByRole('textbox'))
await user.type(screen.getByRole('textbox'), 'New value')
// Select option
await user.selectOptions(screen.getByRole('combobox'), 'option-value')
// Check checkbox
await user.click(screen.getByRole('checkbox'))
// Upload file
const file = new File(['content'], 'test.pdf', { type: 'application/pdf' })
await user.upload(screen.getByLabelText(/upload/i), file)
```
### Keyboard Events
```typescript
const user = userEvent.setup()
// Press Enter
await user.keyboard('{Enter}')
// Press Escape
await user.keyboard('{Escape}')
// Keyboard shortcut
await user.keyboard('{Control>}a{/Control}') // Ctrl+A
// Tab navigation
await user.tab()
// Arrow keys
await user.keyboard('{ArrowDown}')
await user.keyboard('{ArrowUp}')
```
## Component State Testing
### Testing State Transitions
```typescript
describe('Counter', () => {
it('should increment count', async () => {
const user = userEvent.setup()
render(<Counter initialCount={0} />)
// Initial state
expect(screen.getByText('Count: 0')).toBeInTheDocument()
// Trigger transition
await user.click(screen.getByRole('button', { name: /increment/i }))
// New state
expect(screen.getByText('Count: 1')).toBeInTheDocument()
})
})
```
### Testing Controlled Components
```typescript
describe('ControlledInput', () => {
it('should call onChange with new value', async () => {
const user = userEvent.setup()
const handleChange = vi.fn()
render(<ControlledInput value="" onChange={handleChange} />)
await user.type(screen.getByRole('textbox'), 'a')
expect(handleChange).toHaveBeenCalledWith('a')
})
it('should display controlled value', () => {
render(<ControlledInput value="controlled" onChange={vi.fn()} />)
expect(screen.getByRole('textbox')).toHaveValue('controlled')
})
})
```
## Conditional Rendering Testing
```typescript
describe('ConditionalComponent', () => {
it('should show loading state', () => {
render(<DataDisplay isLoading={true} data={null} />)
expect(screen.getByText(/loading/i)).toBeInTheDocument()
expect(screen.queryByTestId('data-content')).not.toBeInTheDocument()
})
it('should show error state', () => {
render(<DataDisplay isLoading={false} data={null} error="Failed to load" />)
expect(screen.getByText(/failed to load/i)).toBeInTheDocument()
})
it('should show data when loaded', () => {
render(<DataDisplay isLoading={false} data={{ name: 'Test' }} />)
expect(screen.getByText('Test')).toBeInTheDocument()
})
it('should show empty state when no data', () => {
render(<DataDisplay isLoading={false} data={[]} />)
expect(screen.getByText(/no data/i)).toBeInTheDocument()
})
})
```
## List Rendering Testing
```typescript
describe('ItemList', () => {
const items = [
{ id: '1', name: 'Item 1' },
{ id: '2', name: 'Item 2' },
{ id: '3', name: 'Item 3' },
]
it('should render all items', () => {
render(<ItemList items={items} />)
expect(screen.getAllByRole('listitem')).toHaveLength(3)
items.forEach(item => {
expect(screen.getByText(item.name)).toBeInTheDocument()
})
})
it('should handle item selection', async () => {
const user = userEvent.setup()
const onSelect = vi.fn()
render(<ItemList items={items} onSelect={onSelect} />)
await user.click(screen.getByText('Item 2'))
expect(onSelect).toHaveBeenCalledWith(items[1])
})
it('should handle empty list', () => {
render(<ItemList items={[]} />)
expect(screen.getByText(/no items/i)).toBeInTheDocument()
})
})
```
## Modal/Dialog Testing
```typescript
describe('Modal', () => {
it('should not render when closed', () => {
render(<Modal isOpen={false} onClose={vi.fn()} />)
expect(screen.queryByRole('dialog')).not.toBeInTheDocument()
})
it('should render when open', () => {
render(<Modal isOpen={true} onClose={vi.fn()} />)
expect(screen.getByRole('dialog')).toBeInTheDocument()
})
it('should call onClose when clicking overlay', async () => {
const user = userEvent.setup()
const handleClose = vi.fn()
render(<Modal isOpen={true} onClose={handleClose} />)
await user.click(screen.getByTestId('modal-overlay'))
expect(handleClose).toHaveBeenCalled()
})
it('should call onClose when pressing Escape', async () => {
const user = userEvent.setup()
const handleClose = vi.fn()
render(<Modal isOpen={true} onClose={handleClose} />)
await user.keyboard('{Escape}')
expect(handleClose).toHaveBeenCalled()
})
it('should trap focus inside modal', async () => {
const user = userEvent.setup()
render(
<Modal isOpen={true} onClose={vi.fn()}>
<button>First</button>
<button>Second</button>
</Modal>
)
// Focus should cycle within modal
await user.tab()
expect(screen.getByText('First')).toHaveFocus()
await user.tab()
expect(screen.getByText('Second')).toHaveFocus()
await user.tab()
expect(screen.getByText('First')).toHaveFocus() // Cycles back
})
})
```
## Form Testing
```typescript
describe('LoginForm', () => {
it('should submit valid form', async () => {
const user = userEvent.setup()
const onSubmit = vi.fn()
render(<LoginForm onSubmit={onSubmit} />)
await user.type(screen.getByLabelText(/email/i), 'test@example.com')
await user.type(screen.getByLabelText(/password/i), 'password123')
await user.click(screen.getByRole('button', { name: /sign in/i }))
expect(onSubmit).toHaveBeenCalledWith({
email: 'test@example.com',
password: 'password123',
})
})
it('should show validation errors', async () => {
const user = userEvent.setup()
render(<LoginForm onSubmit={vi.fn()} />)
// Submit empty form
await user.click(screen.getByRole('button', { name: /sign in/i }))
expect(screen.getByText(/email is required/i)).toBeInTheDocument()
expect(screen.getByText(/password is required/i)).toBeInTheDocument()
})
it('should validate email format', async () => {
const user = userEvent.setup()
render(<LoginForm onSubmit={vi.fn()} />)
await user.type(screen.getByLabelText(/email/i), 'invalid-email')
await user.click(screen.getByRole('button', { name: /sign in/i }))
expect(screen.getByText(/invalid email/i)).toBeInTheDocument()
})
it('should disable submit button while submitting', async () => {
const user = userEvent.setup()
const onSubmit = vi.fn(() => new Promise(resolve => setTimeout(resolve, 100)))
render(<LoginForm onSubmit={onSubmit} />)
await user.type(screen.getByLabelText(/email/i), 'test@example.com')
await user.type(screen.getByLabelText(/password/i), 'password123')
await user.click(screen.getByRole('button', { name: /sign in/i }))
expect(screen.getByRole('button', { name: /signing in/i })).toBeDisabled()
await waitFor(() => {
expect(screen.getByRole('button', { name: /sign in/i })).toBeEnabled()
})
})
})
```
## Data-Driven Tests with test.each
```typescript
describe('StatusBadge', () => {
test.each([
['success', 'bg-green-500'],
['warning', 'bg-yellow-500'],
['error', 'bg-red-500'],
['info', 'bg-blue-500'],
])('should apply correct class for %s status', (status, expectedClass) => {
render(<StatusBadge status={status} />)
expect(screen.getByTestId('status-badge')).toHaveClass(expectedClass)
})
test.each([
{ input: null, expected: 'Unknown' },
{ input: undefined, expected: 'Unknown' },
{ input: '', expected: 'Unknown' },
{ input: 'invalid', expected: 'Unknown' },
])('should show "Unknown" for invalid input: $input', ({ input, expected }) => {
render(<StatusBadge status={input} />)
expect(screen.getByText(expected)).toBeInTheDocument()
})
})
```
## Debugging Tips
```typescript
// Print entire DOM
screen.debug()
// Print specific element
screen.debug(screen.getByRole('button'))
// Log testing playground URL
screen.logTestingPlaygroundURL()
// Pretty print DOM
import { prettyDOM } from '@testing-library/react'
console.log(prettyDOM(screen.getByRole('dialog')))
// Check available roles
import { getRoles } from '@testing-library/react'
console.log(getRoles(container))
```
## Common Mistakes to Avoid
### ❌ Don't Use Implementation Details
```typescript
// Bad - testing implementation
expect(component.state.isOpen).toBe(true)
expect(wrapper.find('.internal-class').length).toBe(1)
// Good - testing behavior
expect(screen.getByRole('dialog')).toBeInTheDocument()
```
### ❌ Don't Forget Cleanup
```typescript
// Bad - may leak state between tests
it('test 1', () => {
render(<Component />)
})
// Good - cleanup is automatic with RTL, but reset mocks
beforeEach(() => {
vi.clearAllMocks()
})
```
### ❌ Don't Use Exact String Matching (Prefer Black-Box Assertions)
```typescript
// ❌ Bad - hardcoded strings are brittle
expect(screen.getByText('Submit Form')).toBeInTheDocument()
expect(screen.getByText('Loading...')).toBeInTheDocument()
// ✅ Good - role-based queries (most semantic)
expect(screen.getByRole('button', { name: /submit/i })).toBeInTheDocument()
expect(screen.getByRole('status')).toBeInTheDocument()
// ✅ Good - pattern matching (flexible)
expect(screen.getByText(/submit/i)).toBeInTheDocument()
expect(screen.getByText(/loading/i)).toBeInTheDocument()
// ✅ Good - test behavior, not exact UI text
expect(screen.getByRole('button')).toBeDisabled()
expect(screen.getByRole('alert')).toBeInTheDocument()
```
**Why prefer black-box assertions?**
- Text content may change (i18n, copy updates)
- Role-based queries test accessibility
- Pattern matching is resilient to minor changes
- Tests focus on behavior, not implementation details
### ❌ Don't Assert on Absence Without Query
```typescript
// Bad - throws if not found
expect(screen.getByText('Error')).not.toBeInTheDocument() // Error!
// Good - use queryBy for absence assertions
expect(screen.queryByText('Error')).not.toBeInTheDocument()
```

# Domain-Specific Component Testing
This guide covers testing patterns for Dify's domain-specific components.
## Workflow Components (`workflow/`)
Workflow components handle node configuration, data flow, and graph operations.
### Key Test Areas
1. **Node Configuration**
1. **Data Validation**
1. **Variable Passing**
1. **Edge Connections**
1. **Error Handling**
### Example: Node Configuration Panel
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import NodeConfigPanel from './node-config-panel'
import { createMockNode, createMockWorkflowContext } from '@/__mocks__/workflow'
// Mock workflow context
vi.mock('@/app/components/workflow/hooks', () => ({
useWorkflowStore: () => mockWorkflowStore,
useNodesInteractions: () => mockNodesInteractions,
}))
let mockWorkflowStore = {
nodes: [],
edges: [],
updateNode: vi.fn(),
}
let mockNodesInteractions = {
handleNodeSelect: vi.fn(),
handleNodeDelete: vi.fn(),
}
describe('NodeConfigPanel', () => {
beforeEach(() => {
vi.clearAllMocks()
mockWorkflowStore = {
nodes: [],
edges: [],
updateNode: vi.fn(),
}
})
describe('Node Configuration', () => {
it('should render node type selector', () => {
const node = createMockNode({ type: 'llm' })
render(<NodeConfigPanel node={node} />)
expect(screen.getByLabelText(/model/i)).toBeInTheDocument()
})
it('should update node config on change', async () => {
const user = userEvent.setup()
const node = createMockNode({ type: 'llm' })
render(<NodeConfigPanel node={node} />)
await user.selectOptions(screen.getByLabelText(/model/i), 'gpt-4')
expect(mockWorkflowStore.updateNode).toHaveBeenCalledWith(
node.id,
expect.objectContaining({ model: 'gpt-4' })
)
})
})
describe('Data Validation', () => {
it('should show error for invalid input', async () => {
const user = userEvent.setup()
const node = createMockNode({ type: 'code' })
render(<NodeConfigPanel node={node} />)
// Enter invalid code
const codeInput = screen.getByLabelText(/code/i)
await user.clear(codeInput)
await user.type(codeInput, 'invalid syntax {{{')
await waitFor(() => {
expect(screen.getByText(/syntax error/i)).toBeInTheDocument()
})
})
it('should validate required fields', async () => {
const node = createMockNode({ type: 'http', data: { url: '' } })
render(<NodeConfigPanel node={node} />)
fireEvent.click(screen.getByRole('button', { name: /save/i }))
await waitFor(() => {
expect(screen.getByText(/url is required/i)).toBeInTheDocument()
})
})
})
describe('Variable Passing', () => {
it('should display available variables from upstream nodes', () => {
const upstreamNode = createMockNode({
id: 'node-1',
type: 'start',
data: { outputs: [{ name: 'user_input', type: 'string' }] },
})
const currentNode = createMockNode({
id: 'node-2',
type: 'llm',
})
mockWorkflowStore.nodes = [upstreamNode, currentNode]
mockWorkflowStore.edges = [{ source: 'node-1', target: 'node-2' }]
render(<NodeConfigPanel node={currentNode} />)
// Variable selector should show upstream variables
fireEvent.click(screen.getByRole('button', { name: /add variable/i }))
expect(screen.getByText('user_input')).toBeInTheDocument()
})
it('should insert variable into prompt template', async () => {
const user = userEvent.setup()
const node = createMockNode({ type: 'llm' })
render(<NodeConfigPanel node={node} />)
// Click variable button
await user.click(screen.getByRole('button', { name: /insert variable/i }))
await user.click(screen.getByText('user_input'))
const promptInput = screen.getByLabelText<HTMLTextAreaElement>(/prompt/i)
expect(promptInput.value).toContain('{{user_input}}')
})
})
})
```
## Dataset Components (`dataset/`)
Dataset components handle file uploads, data display, and search/filter operations.
### Key Test Areas
1. **File Upload**
1. **File Type Validation**
1. **Pagination**
1. **Search & Filtering**
1. **Data Format Handling**
### Example: Document Uploader
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import DocumentUploader from './document-uploader'
vi.mock('@/service/datasets', () => ({
uploadDocument: vi.fn(),
parseDocument: vi.fn(),
}))
import * as datasetService from '@/service/datasets'
const mockedService = vi.mocked(datasetService)
describe('DocumentUploader', () => {
beforeEach(() => {
vi.clearAllMocks()
})
describe('File Upload', () => {
it('should accept valid file types', async () => {
const user = userEvent.setup()
const onUpload = vi.fn()
mockedService.uploadDocument.mockResolvedValue({ id: 'doc-1' })
render(<DocumentUploader onUpload={onUpload} />)
const file = new File(['content'], 'test.pdf', { type: 'application/pdf' })
const input = screen.getByLabelText(/upload/i)
await user.upload(input, file)
await waitFor(() => {
expect(mockedService.uploadDocument).toHaveBeenCalledWith(
expect.any(FormData)
)
})
})
it('should reject invalid file types', async () => {
const user = userEvent.setup()
render(<DocumentUploader />)
const file = new File(['content'], 'test.exe', { type: 'application/x-msdownload' })
const input = screen.getByLabelText(/upload/i)
await user.upload(input, file)
expect(screen.getByText(/unsupported file type/i)).toBeInTheDocument()
expect(mockedService.uploadDocument).not.toHaveBeenCalled()
})
it('should show upload progress', async () => {
const user = userEvent.setup()
// Mock upload with progress
mockedService.uploadDocument.mockImplementation(() => {
return new Promise((resolve) => {
setTimeout(() => resolve({ id: 'doc-1' }), 100)
})
})
render(<DocumentUploader />)
const file = new File(['content'], 'test.pdf', { type: 'application/pdf' })
await user.upload(screen.getByLabelText(/upload/i), file)
expect(screen.getByRole('progressbar')).toBeInTheDocument()
await waitFor(() => {
expect(screen.queryByRole('progressbar')).not.toBeInTheDocument()
})
})
})
describe('Error Handling', () => {
it('should handle upload failure', async () => {
const user = userEvent.setup()
mockedService.uploadDocument.mockRejectedValue(new Error('Upload failed'))
render(<DocumentUploader />)
const file = new File(['content'], 'test.pdf', { type: 'application/pdf' })
await user.upload(screen.getByLabelText(/upload/i), file)
await waitFor(() => {
expect(screen.getByText(/upload failed/i)).toBeInTheDocument()
})
})
it('should allow retry after failure', async () => {
const user = userEvent.setup()
mockedService.uploadDocument
.mockRejectedValueOnce(new Error('Network error'))
.mockResolvedValueOnce({ id: 'doc-1' })
render(<DocumentUploader />)
const file = new File(['content'], 'test.pdf', { type: 'application/pdf' })
await user.upload(screen.getByLabelText(/upload/i), file)
await waitFor(() => {
expect(screen.getByRole('button', { name: /retry/i })).toBeInTheDocument()
})
await user.click(screen.getByRole('button', { name: /retry/i }))
await waitFor(() => {
expect(screen.getByText(/uploaded successfully/i)).toBeInTheDocument()
})
})
})
})
```
### Example: Document List with Pagination
```typescript
describe('DocumentList', () => {
describe('Pagination', () => {
it('should load first page on mount', async () => {
mockedService.getDocuments.mockResolvedValue({
data: [{ id: '1', name: 'Doc 1' }],
total: 50,
page: 1,
pageSize: 10,
})
render(<DocumentList datasetId="ds-1" />)
await waitFor(() => {
expect(screen.getByText('Doc 1')).toBeInTheDocument()
})
expect(mockedService.getDocuments).toHaveBeenCalledWith('ds-1', { page: 1 })
})
it('should navigate to next page', async () => {
const user = userEvent.setup()
mockedService.getDocuments.mockResolvedValue({
data: [{ id: '1', name: 'Doc 1' }],
total: 50,
page: 1,
pageSize: 10,
})
render(<DocumentList datasetId="ds-1" />)
await waitFor(() => {
expect(screen.getByText('Doc 1')).toBeInTheDocument()
})
mockedService.getDocuments.mockResolvedValue({
data: [{ id: '11', name: 'Doc 11' }],
total: 50,
page: 2,
pageSize: 10,
})
await user.click(screen.getByRole('button', { name: /next/i }))
await waitFor(() => {
expect(screen.getByText('Doc 11')).toBeInTheDocument()
})
})
})
describe('Search & Filtering', () => {
it('should filter by search query', async () => {
vi.useFakeTimers()
// userEvent must be told to advance fake timers, or typing never resolves
const user = userEvent.setup({ advanceTimers: vi.advanceTimersByTime })
render(<DocumentList datasetId="ds-1" />)
await user.type(screen.getByPlaceholderText(/search/i), 'test query')
// Debounce
vi.advanceTimersByTime(300)
await waitFor(() => {
expect(mockedService.getDocuments).toHaveBeenCalledWith(
'ds-1',
expect.objectContaining({ search: 'test query' })
)
})
vi.useRealTimers()
})
})
})
```
## Configuration Components (`app/configuration/`, `config/`)
Configuration components handle forms, validation, and data persistence.
### Key Test Areas
1. **Form Validation**
1. **Save/Reset**
1. **Required vs Optional Fields**
1. **Configuration Persistence**
1. **Error Feedback**
### Example: App Configuration Form
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import AppConfigForm from './app-config-form'
vi.mock('@/service/apps', () => ({
updateAppConfig: vi.fn(),
getAppConfig: vi.fn(),
}))
import * as appService from '@/service/apps'
const mockedService = vi.mocked(appService)
describe('AppConfigForm', () => {
const defaultConfig = {
name: 'My App',
description: '',
icon: 'default',
openingStatement: '',
}
beforeEach(() => {
vi.clearAllMocks()
mockedService.getAppConfig.mockResolvedValue(defaultConfig)
})
describe('Form Validation', () => {
it('should require app name', async () => {
const user = userEvent.setup()
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
// Clear name field
await user.clear(screen.getByLabelText(/name/i))
await user.click(screen.getByRole('button', { name: /save/i }))
expect(screen.getByText(/name is required/i)).toBeInTheDocument()
expect(mockedService.updateAppConfig).not.toHaveBeenCalled()
})
it('should validate name length', async () => {
const user = userEvent.setup()
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toBeInTheDocument()
})
// Enter very long name
await user.clear(screen.getByLabelText(/name/i))
await user.type(screen.getByLabelText(/name/i), 'a'.repeat(101))
expect(screen.getByText(/name must be less than 100 characters/i)).toBeInTheDocument()
})
it('should allow empty optional fields', async () => {
const user = userEvent.setup()
mockedService.updateAppConfig.mockResolvedValue({ success: true })
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
// Leave description empty (optional)
await user.click(screen.getByRole('button', { name: /save/i }))
await waitFor(() => {
expect(mockedService.updateAppConfig).toHaveBeenCalled()
})
})
})
describe('Save/Reset Functionality', () => {
it('should save configuration', async () => {
const user = userEvent.setup()
mockedService.updateAppConfig.mockResolvedValue({ success: true })
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
await user.clear(screen.getByLabelText(/name/i))
await user.type(screen.getByLabelText(/name/i), 'Updated App')
await user.click(screen.getByRole('button', { name: /save/i }))
await waitFor(() => {
expect(mockedService.updateAppConfig).toHaveBeenCalledWith(
'app-1',
expect.objectContaining({ name: 'Updated App' })
)
})
expect(screen.getByText(/saved successfully/i)).toBeInTheDocument()
})
it('should reset to default values', async () => {
const user = userEvent.setup()
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
// Make changes
await user.clear(screen.getByLabelText(/name/i))
await user.type(screen.getByLabelText(/name/i), 'Changed Name')
// Reset
await user.click(screen.getByRole('button', { name: /reset/i }))
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
it('should show unsaved changes warning', async () => {
const user = userEvent.setup()
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
// Make changes
await user.type(screen.getByLabelText(/name/i), ' Updated')
expect(screen.getByText(/unsaved changes/i)).toBeInTheDocument()
})
})
describe('Error Handling', () => {
it('should show error on save failure', async () => {
const user = userEvent.setup()
mockedService.updateAppConfig.mockRejectedValue(new Error('Server error'))
render(<AppConfigForm appId="app-1" />)
await waitFor(() => {
expect(screen.getByLabelText(/name/i)).toHaveValue('My App')
})
await user.click(screen.getByRole('button', { name: /save/i }))
await waitFor(() => {
expect(screen.getByText(/failed to save/i)).toBeInTheDocument()
})
})
})
})
```

# Mocking Guide for Dify Frontend Tests
## âš ī¸ Important: What NOT to Mock
### DO NOT Mock Base Components
**Never mock components from `@/app/components/base/`** such as:
- `Loading`, `Spinner`
- `Button`, `Input`, `Select`
- `Tooltip`, `Modal`, `Dropdown`
- `Icon`, `Badge`, `Tag`
**Why?**
- Base components will have their own dedicated tests
- Mocking them creates false positives (tests pass but real integration fails)
- Using real components tests actual integration behavior
```typescript
// ❌ WRONG: Don't mock base components
vi.mock('@/app/components/base/loading', () => () => <div>Loading</div>)
vi.mock('@/app/components/base/button', () => ({ children }: any) => <button>{children}</button>)
// ✅ CORRECT: Import and use real base components
import Loading from '@/app/components/base/loading'
import Button from '@/app/components/base/button'
// They will render normally in tests
```
### What TO Mock
Only mock these categories:
1. **API services** (`@/service/*`) - Network calls
1. **Complex context providers** - When setup is too difficult
1. **Third-party libraries with side effects** - `next/navigation`, external SDKs
1. **i18n** - Always mock to return keys
### Zustand Stores - DO NOT Mock Manually
**Zustand is globally mocked** in `web/vitest.setup.ts`. Use real stores with `setState()`:
```typescript
// ✅ CORRECT: Use real store, set test state
import { useAppStore } from '@/app/components/app/store'
useAppStore.setState({ appDetail: { id: 'test', name: 'Test' } })
render(<MyComponent />)
// ❌ WRONG: Don't mock the store module
vi.mock('@/app/components/app/store', () => ({ ... }))
```
See [Zustand Store Testing](#zustand-store-testing) section for full details.
## Mock Placement
| Location | Purpose |
|----------|---------|
| `web/vitest.setup.ts` | Global mocks shared by all tests (`react-i18next`, `next/image`, `zustand`) |
| `web/__mocks__/zustand.ts` | Zustand mock implementation (auto-resets stores after each test) |
| `web/__mocks__/` | Reusable mock factories shared across multiple test files |
| Test file | Test-specific mocks, inline with `vi.mock()` |
Modules are not mocked automatically. Use `vi.mock` in test files, or add global mocks in `web/vitest.setup.ts`.
**Note**: Zustand is special - it's globally mocked but you should NOT mock store modules manually. See [Zustand Store Testing](#zustand-store-testing).
## Essential Mocks
### 1. i18n (Auto-loaded via Global Mock)
A global mock is defined in `web/vitest.setup.ts` and is auto-loaded by Vitest setup.
The global mock provides:
- `useTranslation` - returns translation keys with namespace prefix
- `Trans` component - renders i18nKey and components
- `useMixedTranslation` (from `@/app/components/plugins/marketplace/hooks`)
- `useGetLanguage` (from `@/context/i18n`) - returns `'en-US'`
**Default behavior**: Most tests should use the global mock (no local override needed).
**For custom translations**: Use the helper function from `@/test/i18n-mock`:
```typescript
import { createReactI18nextMock } from '@/test/i18n-mock'
vi.mock('react-i18next', () => createReactI18nextMock({
'my.custom.key': 'Custom translation',
'button.save': 'Save',
}))
```
**Avoid**: Manually defining `useTranslation` mocks that just return the key - the global mock already does this.
### 2. Next.js Router
```typescript
const mockPush = vi.fn()
const mockReplace = vi.fn()
vi.mock('next/navigation', () => ({
useRouter: () => ({
push: mockPush,
replace: mockReplace,
back: vi.fn(),
prefetch: vi.fn(),
}),
usePathname: () => '/current-path',
useSearchParams: () => new URLSearchParams('?key=value'),
}))
describe('Component', () => {
beforeEach(() => {
vi.clearAllMocks()
})
it('should navigate on click', () => {
render(<Component />)
fireEvent.click(screen.getByRole('button'))
expect(mockPush).toHaveBeenCalledWith('/expected-path')
})
})
```
### 2.1 `nuqs` Query State (Preferred: Testing Adapter)
For tests that validate URL query behavior, use `NuqsTestingAdapter` instead of mocking `nuqs` directly.
```typescript
import { renderHookWithNuqs } from '@/test/nuqs-testing'
it('should sync query to URL with push history', async () => {
const { result, onUrlUpdate } = renderHookWithNuqs(() => useMyQueryState(), {
searchParams: '?page=1',
})
act(() => {
result.current.setQuery({ page: 2 })
})
await waitFor(() => expect(onUrlUpdate).toHaveBeenCalled())
const update = onUrlUpdate.mock.calls[onUrlUpdate.mock.calls.length - 1][0]
expect(update.options.history).toBe('push')
expect(update.searchParams.get('page')).toBe('2')
})
```
Use direct `vi.mock('nuqs')` only when URL synchronization is intentionally out of scope.
### 3. Portal Components (with Shared State)
```typescript
// âš ī¸ Important: Use shared state for components that depend on each other
let mockPortalOpenState = false
vi.mock('@/app/components/base/portal-to-follow-elem', () => ({
PortalToFollowElem: ({ children, open, ...props }: any) => {
mockPortalOpenState = open || false // Update shared state
return <div data-testid="portal" data-open={open}>{children}</div>
},
PortalToFollowElemContent: ({ children }: any) => {
// ✅ Matches actual: returns null when portal is closed
if (!mockPortalOpenState) return null
return <div data-testid="portal-content">{children}</div>
},
PortalToFollowElemTrigger: ({ children }: any) => (
<div data-testid="portal-trigger">{children}</div>
),
}))
describe('Component', () => {
beforeEach(() => {
vi.clearAllMocks()
mockPortalOpenState = false // ✅ Reset shared state
})
})
```
### 4. API Service Mocks
```typescript
import * as api from '@/service/api'
vi.mock('@/service/api')
const mockedApi = vi.mocked(api)
describe('Component', () => {
beforeEach(() => {
vi.clearAllMocks()
// Setup default mock implementation
mockedApi.fetchData.mockResolvedValue({ data: [] })
})
it('should show data on success', async () => {
mockedApi.fetchData.mockResolvedValue({ data: [{ id: 1 }] })
render(<Component />)
await waitFor(() => {
expect(screen.getByText('1')).toBeInTheDocument()
})
})
it('should show error on failure', async () => {
mockedApi.fetchData.mockRejectedValue(new Error('Network error'))
render(<Component />)
await waitFor(() => {
expect(screen.getByText(/error/i)).toBeInTheDocument()
})
})
})
```
### 5. HTTP Mocking with Nock
```typescript
import nock from 'nock'
const GITHUB_HOST = 'https://api.github.com'
const GITHUB_PATH = '/repos/owner/repo'
const mockGithubApi = (status: number, body: Record<string, unknown>, delayMs = 0) => {
return nock(GITHUB_HOST)
.get(GITHUB_PATH)
.delay(delayMs)
.reply(status, body)
}
describe('GithubComponent', () => {
afterEach(() => {
nock.cleanAll()
})
it('should display repo info', async () => {
mockGithubApi(200, { name: 'dify', stars: 1000 })
render(<GithubComponent />)
await waitFor(() => {
expect(screen.getByText('dify')).toBeInTheDocument()
})
})
it('should handle API error', async () => {
mockGithubApi(500, { message: 'Server error' })
render(<GithubComponent />)
await waitFor(() => {
expect(screen.getByText(/error/i)).toBeInTheDocument()
})
})
})
```
### 6. Context Providers
```typescript
import { ProviderContext } from '@/context/provider-context'
import { createMockProviderContextValue, createMockPlan } from '@/__mocks__/provider-context'
describe('Component with Context', () => {
it('should render for free plan', () => {
const mockContext = createMockPlan('sandbox')
render(
<ProviderContext.Provider value={mockContext}>
<Component />
</ProviderContext.Provider>
)
expect(screen.getByText('Upgrade')).toBeInTheDocument()
})
it('should render for pro plan', () => {
const mockContext = createMockPlan('professional')
render(
<ProviderContext.Provider value={mockContext}>
<Component />
</ProviderContext.Provider>
)
expect(screen.queryByText('Upgrade')).not.toBeInTheDocument()
})
})
```
### 7. React Query
```typescript
import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
const createTestQueryClient = () => new QueryClient({
defaultOptions: {
queries: { retry: false },
mutations: { retry: false },
},
})
const renderWithQueryClient = (ui: React.ReactElement) => {
const queryClient = createTestQueryClient()
return render(
<QueryClientProvider client={queryClient}>
{ui}
</QueryClientProvider>
)
}
```
## Mock Best Practices
### ✅ DO
1. **Use real base components** - Import from `@/app/components/base/` directly
2. **Use real project components** - Prefer importing over mocking
3. **Use real Zustand stores** - Set test state via `store.setState()`
4. **Reset mocks in `beforeEach`**, not `afterEach`
5. **Match actual component behavior** in mocks (when mocking is necessary)
6. **Use factory functions** for complex mock data
7. **Import actual types** for type safety
8. **Reset shared mock state** in `beforeEach`
### ❌ DON'T
1. **Don't mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
2. **Don't mock Zustand store modules** - Use real stores with `setState()`
3. Don't mock components you can import directly
4. Don't create overly simplified mocks that miss conditional logic
5. Don't forget to clean up nock after each test
6. Don't use `any` types in mocks without necessity
### Mock Decision Tree
```
Need to use a component in test?
│
├─ Is it from @/app/components/base/*?
│ └─ YES → Import real component, DO NOT mock
│
├─ Is it a project component?
│ └─ YES → Prefer importing real component
│ Only mock if setup is extremely complex
│
├─ Is it an API service (@/service/*)?
│ └─ YES → Mock it
│
├─ Is it a third-party lib with side effects?
│ └─ YES → Mock it (next/navigation, external SDKs)
│
├─ Is it a Zustand store?
│ └─ YES → DO NOT mock the module!
│ Use real store + setState() to set test state
│ (Global mock handles auto-reset)
│
└─ Is it i18n?
└─ YES → Uses shared mock (auto-loaded). Override only for custom translations
```
## Zustand Store Testing
### Global Zustand Mock (Auto-loaded)
Zustand is globally mocked in `web/vitest.setup.ts` following the [official Zustand testing guide](https://zustand.docs.pmnd.rs/guides/testing). The mock in `web/__mocks__/zustand.ts` provides:
- Real store behavior with `getState()`, `setState()`, `subscribe()` methods
- Automatic store reset after each test via `afterEach`
- Proper test isolation between tests
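The reset behavior can be illustrated with a minimal, framework-free sketch. This is **not** the actual mock implementation in `web/__mocks__/zustand.ts` — just a simplified model of the store API plus the recorded initial state that the global mock restores between tests:

```typescript
// Simplified sketch of what the global Zustand mock provides:
// a real store API (getState/setState/subscribe) plus a saved
// initial state that can be restored between tests.
type Listener<T> = (state: T) => void

const createResettableStore = <T extends object>(initialState: T) => {
  let state = { ...initialState }
  const listeners = new Set<Listener<T>>()
  return {
    getState: () => state,
    setState: (partial: Partial<T>) => {
      state = { ...state, ...partial }
      listeners.forEach(listener => listener(state))
    },
    subscribe: (listener: Listener<T>) => {
      listeners.add(listener)
      return () => listeners.delete(listener)
    },
    // In the real mock, this is registered for every created store
    // and invoked automatically in a global afterEach.
    reset: () => {
      state = { ...initialState }
    },
  }
}
```

Because every store created through `zustand`'s `create` gets this reset hook, tests need no manual cleanup — state set via `setState()` in one test never leaks into the next.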
### ✅ Recommended: Use Real Stores (Official Best Practice)
**DO NOT mock store modules manually.** Import and use the real store, then use `setState()` to set test state:
```typescript
// ✅ CORRECT: Use real store with setState
import { useAppStore } from '@/app/components/app/store'
describe('MyComponent', () => {
it('should render app details', () => {
// Arrange: Set test state via setState
useAppStore.setState({
appDetail: {
id: 'test-app',
name: 'Test App',
mode: 'chat',
},
})
// Act
render(<MyComponent />)
// Assert
expect(screen.getByText('Test App')).toBeInTheDocument()
// Can also verify store state directly
expect(useAppStore.getState().appDetail?.name).toBe('Test App')
})
// No cleanup needed - global mock auto-resets after each test
})
```
### ❌ Avoid: Manual Store Module Mocking
Manual mocking conflicts with the global Zustand mock and loses store functionality:
```typescript
// ❌ WRONG: Don't mock the store module
vi.mock('@/app/components/app/store', () => ({
useStore: (selector) => mockSelector(selector), // Missing getState, setState!
}))
// ❌ WRONG: This conflicts with global zustand mock
vi.mock('@/app/components/workflow/store', () => ({
useWorkflowStore: vi.fn(() => mockState),
}))
```
**Problems with manual mocking:**
1. Loses `getState()`, `setState()`, `subscribe()` methods
2. Conflicts with global Zustand mock behavior
3. Requires manual maintenance of store API
4. Tests don't reflect actual store behavior
### When Manual Store Mocking is Necessary
In rare cases where the store has complex initialization or side effects, you can mock it, but ensure you provide the full store API:
```typescript
// If you MUST mock (rare), include full store API
const mockStore = {
appDetail: { id: 'test', name: 'Test' },
setAppDetail: vi.fn(),
}
vi.mock('@/app/components/app/store', () => ({
useStore: Object.assign(
(selector: (state: typeof mockStore) => unknown) => selector(mockStore),
{
getState: () => mockStore,
setState: vi.fn(),
subscribe: vi.fn(),
},
),
}))
```
### Store Testing Decision Tree
```
Need to test a component using Zustand store?
│
├─ Can you use the real store?
│ └─ YES → Use real store + setState (RECOMMENDED)
│ useAppStore.setState({ ... })
│
├─ Does the store have complex initialization/side effects?
│ └─ YES → Consider mocking, but include full API
│ (getState, setState, subscribe)
│
└─ Are you testing the store itself (not a component)?
└─ YES → Test store directly with getState/setState
const store = useMyStore
store.setState({ count: 0 })
store.getState().increment()
expect(store.getState().count).toBe(1)
```
### Example: Testing Store Actions
```typescript
import { useCounterStore } from '@/stores/counter'
describe('Counter Store', () => {
it('should increment count', () => {
// Initial state (auto-reset by global mock)
expect(useCounterStore.getState().count).toBe(0)
// Call action
useCounterStore.getState().increment()
// Verify state change
expect(useCounterStore.getState().count).toBe(1)
})
it('should reset to initial state', () => {
// Set some state
useCounterStore.setState({ count: 100 })
expect(useCounterStore.getState().count).toBe(100)
// After this test, global mock will reset to initial state
})
})
```
## Factory Function Pattern
```typescript
// __mocks__/data-factories.ts
import type { User, Project } from '@/types'
export const createMockUser = (overrides: Partial<User> = {}): User => ({
id: 'user-1',
name: 'Test User',
email: 'test@example.com',
role: 'member',
createdAt: new Date().toISOString(),
...overrides,
})
export const createMockProject = (overrides: Partial<Project> = {}): Project => ({
id: 'project-1',
name: 'Test Project',
description: 'A test project',
owner: createMockUser(),
members: [],
createdAt: new Date().toISOString(),
...overrides,
})
// Usage in tests
it('should display project owner', () => {
const project = createMockProject({
owner: createMockUser({ name: 'John Doe' }),
})
render(<ProjectCard project={project} />)
expect(screen.getByText('John Doe')).toBeInTheDocument()
})
```

View File

@@ -1,269 +0,0 @@
# Testing Workflow Guide
This guide defines the workflow for generating tests, especially for complex components or directories with multiple files.
## Scope Clarification
This guide addresses the **multi-file workflow** (how to process multiple test files). For coverage requirements within a single test file, see `web/docs/test.md` § Coverage Goals.
| Scope | Rule |
|-------|------|
| **Single file** | Complete coverage in one generation (100% function, >95% branch) |
| **Multi-file directory** | Process one file at a time, verify each before proceeding |
## âš ī¸ Critical Rule: Incremental Approach for Multi-File Testing
When testing a **directory with multiple files**, **NEVER generate all test files at once.** Use an incremental, verify-as-you-go approach.
### Why Incremental?
| Batch Approach (❌) | Incremental Approach (✅) |
|---------------------|---------------------------|
| Generate 5+ tests at once | Generate 1 test at a time |
| Run tests only at the end | Run test immediately after each file |
| Multiple failures compound | Single point of failure, easy to debug |
| Hard to identify root cause | Clear cause-effect relationship |
| Mock issues affect many files | Mock issues caught early |
| Messy git history | Clean, atomic commits possible |
## Single File Workflow
When testing a **single component, hook, or utility**:
```
1. Read source code completely
2. Run `pnpm analyze-component <path>` (if available)
3. Check complexity score and features detected
4. Write the test file
5. Run test: `pnpm test <file>.spec.tsx`
6. Fix any failures
7. Verify coverage meets goals (100% function, >95% branch)
```
## Directory/Multi-File Workflow (MUST FOLLOW)
When testing a **directory or multiple files**, follow this strict workflow:
### Step 1: Analyze and Plan
1. **List all files** that need tests in the directory
2. **Categorize by complexity**:
   - 🟢 **Simple**: Utility functions, simple hooks, presentational components
   - 🟡 **Medium**: Components with state, effects, or event handlers
   - 🔴 **Complex**: Components with API calls, routing, or many dependencies
3. **Order by dependency**: Test dependencies before dependents
4. **Create a todo list** to track progress
### Step 2: Determine Processing Order
Process files in this recommended order:
```
1. Utility functions (simplest, no React)
2. Custom hooks (isolated logic)
3. Simple presentational components (few/no props)
4. Medium complexity components (state, effects)
5. Complex components (API, routing, many deps)
6. Container/index components (integration tests - last)
```
**Rationale**:
- Simpler files help establish mock patterns
- Hooks used by components should be tested first
- Integration tests (index files) depend on child components working
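The ordering above can be sketched as a small helper. The category names and file paths here are illustrative, not part of any project API:

```typescript
// Sort test targets into the recommended processing order.
// Rank values mirror the list above: utilities first, integration last.
type Category = 'utility' | 'hook' | 'simple' | 'medium' | 'complex' | 'integration'

const ORDER: Record<Category, number> = {
  utility: 0,
  hook: 1,
  simple: 2,
  medium: 3,
  complex: 4,
  integration: 5,
}

interface TestTarget {
  path: string
  category: Category
}

// Returns a new array; does not mutate the input.
const orderTargets = (targets: TestTarget[]): TestTarget[] =>
  [...targets].sort((a, b) => ORDER[a.category] - ORDER[b.category])
```

For example, given `index.tsx` (integration), `utils/helper.ts` (utility), and `item-card.tsx` (medium), the helper yields the utility first and the integration test last — matching the order you would process them in.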
### Step 3: Process Each File Incrementally
**For EACH file in the ordered list:**
```
┌─────────────────────────────────────────────┐
│ 1. Write test file │
│ 2. Run: pnpm test <file>.spec.tsx │
│ 3. If FAIL → Fix immediately, re-run │
│ 4. If PASS → Mark complete in todo list │
│ 5. ONLY THEN proceed to next file │
└─────────────────────────────────────────────┘
```
**DO NOT proceed to the next file until the current one passes.**
### Step 4: Final Verification
After all individual tests pass:
```bash
# Run all tests in the directory together
pnpm test path/to/directory/
# Check coverage
pnpm test:coverage path/to/directory/
```
## Component Complexity Guidelines
Use `pnpm analyze-component <path>` to assess complexity before testing.
### 🔴 Very Complex Components (Complexity > 50)
**Consider refactoring BEFORE testing:**
- Break component into smaller, testable pieces
- Extract complex logic into custom hooks
- Separate container and presentational layers
**If testing as-is:**
- Use integration tests for complex workflows
- Use `test.each()` for data-driven testing
- Multiple `describe` blocks for organization
- Consider testing major sections separately
### 🟡 Medium Complexity (Complexity 30-50)
- Group related tests in `describe` blocks
- Test integration scenarios between internal parts
- Focus on state transitions and side effects
- Use helper functions to reduce test complexity
### 🟢 Simple Components (Complexity < 30)
- Standard test structure
- Focus on props, rendering, and edge cases
- Usually straightforward to test
### 📏 Large Files (500+ lines)
Regardless of complexity score:
- **Strongly consider refactoring** before testing
- If testing as-is, test major sections separately
- Create helper functions for test setup
- May need multiple test files
## Todo List Format
When testing multiple files, use a todo list like this:
```
Testing: path/to/directory/
Ordered by complexity (simple → complex):
☐ utils/helper.ts [utility, simple]
☐ hooks/use-custom-hook.ts [hook, simple]
☐ empty-state.tsx [component, simple]
☐ item-card.tsx [component, medium]
☐ list.tsx [component, complex]
☐ index.tsx [integration]
Progress: 0/6 complete
```
Update status as you complete each:
- ☐ → âŗ (in progress)
- âŗ → ✅ (complete and verified)
- âŗ → ❌ (blocked, needs attention)
## When to Stop and Verify
**Always run tests after:**
- Completing a test file
- Making changes to fix a failure
- Modifying shared mocks
- Updating test utilities or helpers
**Signs you should pause:**
- More than 2 consecutive test failures
- Mock-related errors appearing
- Unclear why a test is failing
- Test passing but coverage unexpectedly low
## Common Pitfalls to Avoid
### ❌ Don't: Generate Everything First
```
# BAD: Writing all files then testing
Write component-a.spec.tsx
Write component-b.spec.tsx
Write component-c.spec.tsx
Write component-d.spec.tsx
Run pnpm test ← Multiple failures, hard to debug
```
### ✅ Do: Verify Each Step
```
# GOOD: Incremental with verification
Write component-a.spec.tsx
Run pnpm test component-a.spec.tsx ✅
Write component-b.spec.tsx
Run pnpm test component-b.spec.tsx ✅
...continue...
```
### ❌ Don't: Skip Verification for "Simple" Components
Even simple components can have:
- Import errors
- Missing mock setup
- Incorrect assumptions about props
**Always verify, regardless of perceived simplicity.**
### ❌ Don't: Continue When Tests Fail
Failing tests compound:
- A mock issue in file A affects files B, C, D
- Fixing A later requires revisiting all dependent tests
- Time wasted on debugging cascading failures
**Fix failures immediately before proceeding.**
## Integration with Claude's Todo Feature
When using Claude for multi-file testing:
1. **Ask Claude to create a todo list** before starting
2. **Request one file at a time** or ensure Claude processes incrementally
3. **Verify each test passes** before asking for the next
4. **Mark todos complete** as you progress
Example prompt:
```
Test all components in `path/to/directory/`.
First, analyze the directory and create a todo list ordered by complexity.
Then, process ONE file at a time, waiting for my confirmation that tests pass
before proceeding to the next.
```
## Summary Checklist
Before starting multi-file testing:
- [ ] Listed all files needing tests
- [ ] Ordered by complexity (simple → complex)
- [ ] Created todo list for tracking
- [ ] Understand dependencies between files
During testing:
- [ ] Processing ONE file at a time
- [ ] Running tests after EACH file
- [ ] Fixing failures BEFORE proceeding
- [ ] Updating todo list progress
After completion:
- [ ] All individual tests pass
- [ ] Full directory test run passes
- [ ] Coverage goals met
- [ ] Todo list shows all complete

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "npx -y block-no-verify@1.1.1"
}
]
}
]
}
}

View File

@@ -0,0 +1,19 @@
{
"permissions": {
"allow": [],
"deny": []
},
"env": {
"__comment": "Environment variables for MCP servers. Override in .claude/settings.local.json with actual values.",
"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"enabledMcpjsonServers": [
"context7",
"sequential-thinking",
"github",
"fetch",
"playwright",
"ide"
],
"enableAllProjectMcpServers": true
}

View File

@@ -1 +0,0 @@
../../.agents/skills/backend-code-review

View File

@@ -1 +0,0 @@
../../.agents/skills/component-refactoring

View File

@@ -1 +0,0 @@
../../.agents/skills/frontend-code-review

View File

@@ -1 +0,0 @@
../../.agents/skills/frontend-query-mutation

View File

@@ -1 +0,0 @@
../../.agents/skills/frontend-testing

View File

@@ -1,5 +0,0 @@
[run]
omit =
api/tests/*
api/migrations/*
api/core/rag/datasource/vdb/*

View File

@@ -1,4 +1,4 @@
FROM mcr.microsoft.com/devcontainers/python:3.12-bookworm
FROM mcr.microsoft.com/devcontainers/python:3.12
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install libgmp-dev libmpfr-dev libmpc-dev

View File

@@ -6,15 +6,12 @@
"context": "..",
"dockerfile": "Dockerfile"
},
"mounts": [
"source=dify-dev-tmp,target=/tmp,type=volume"
],
"features": {
"ghcr.io/devcontainers/features/node:1": {
"nodeGypDependencies": true,
"version": "lts"
},
"ghcr.io/devcontainers-extra/features/npm-package:1": {
"ghcr.io/devcontainers-contrib/features/npm-package:1": {
"package": "typescript",
"version": "latest"
},
@@ -37,13 +34,19 @@
},
"postStartCommand": "./.devcontainer/post_start_command.sh",
"postCreateCommand": "./.devcontainer/post_create_command.sh"
// Features to add to the dev container. More info: https://containers.dev/features.
// "features": {},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "python --version",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
}
// "remoteUser": "root"
}

View File

@@ -1,16 +1,15 @@
#!/bin/bash
WORKSPACE_ROOT=$(pwd)
export COREPACK_ENABLE_DOWNLOAD_PROMPT=0
corepack enable
cd web && pnpm install
pipx install uv
echo "alias start-api=\"cd $WORKSPACE_ROOT/api && uv run python -m flask run --host 0.0.0.0 --port=5001 --debug\"" >> ~/.bashrc
echo "alias start-worker=\"cd $WORKSPACE_ROOT/api && uv run python -m celery -A app.celery worker -P threads -c 1 --loglevel INFO -Q dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention\"" >> ~/.bashrc
echo "alias start-web=\"cd $WORKSPACE_ROOT/web && pnpm dev:inspect\"" >> ~/.bashrc
echo "alias start-web-prod=\"cd $WORKSPACE_ROOT/web && pnpm build && pnpm start\"" >> ~/.bashrc
echo "alias start-containers=\"cd $WORKSPACE_ROOT/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env up -d\"" >> ~/.bashrc
echo "alias stop-containers=\"cd $WORKSPACE_ROOT/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env down\"" >> ~/.bashrc
echo 'alias start-api="cd /workspaces/dify/api && uv run python -m flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && uv run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion,plugin,workflow_storage"' >> ~/.bashrc
echo 'alias start-web="cd /workspaces/dify/web && pnpm dev"' >> ~/.bashrc
echo 'alias start-web-prod="cd /workspaces/dify/web && pnpm build && pnpm start"' >> ~/.bashrc
echo 'alias start-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env up -d"' >> ~/.bashrc
echo 'alias stop-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env down"' >> ~/.bashrc
source /home/vscode/.bashrc

View File

@@ -29,7 +29,7 @@ trim_trailing_whitespace = false
# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,jsx,ts,tsx,mjs}]
[*.{js,tsx}]
indent_style = space
indent_size = 2

View File

@@ -1,13 +0,0 @@
have_fun: false
memory_config:
disabled: false
code_review:
disable: true
comment_severity_threshold: MEDIUM
max_review_comments: -1
pull_request_opened:
help: false
summary: false
code_review: false
include_drafts: false
ignore_patterns: []

258
.github/CODEOWNERS vendored
View File

@@ -1,258 +0,0 @@
# CODEOWNERS
# This file defines code ownership for the Dify project.
# Each line is a file pattern followed by one or more owners.
# Owners can be @username, @org/team-name, or email addresses.
# For more information, see: https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
* @crazywoola @laipz8200 @Yeuoly
# CODEOWNERS file
/.github/CODEOWNERS @laipz8200 @crazywoola
# Agents
/.agents/skills/ @hyoban
# Docs
/docs/ @crazywoola
# Backend (default owner, more specific rules below will override)
/api/ @QuantumGhost
# Backend - MCP
/api/core/mcp/ @Nov1c444
/api/core/entities/mcp_provider.py @Nov1c444
/api/services/tools/mcp_tools_manage_service.py @Nov1c444
/api/controllers/mcp/ @Nov1c444
/api/controllers/console/app/mcp_server.py @Nov1c444
# Backend - Tests
/api/tests/ @laipz8200 @QuantumGhost
/api/tests/**/*mcp* @Nov1c444
# Backend - Workflow - Engine (Core graph execution engine)
/api/core/workflow/graph_engine/ @laipz8200 @QuantumGhost
/api/core/workflow/runtime/ @laipz8200 @QuantumGhost
/api/core/workflow/graph/ @laipz8200 @QuantumGhost
/api/core/workflow/graph_events/ @laipz8200 @QuantumGhost
/api/core/workflow/node_events/ @laipz8200 @QuantumGhost
# Backend - Workflow - Nodes (Agent, Iteration, Loop, LLM)
/api/core/workflow/nodes/agent/ @Nov1c444
/api/core/workflow/nodes/iteration/ @Nov1c444
/api/core/workflow/nodes/loop/ @Nov1c444
/api/core/workflow/nodes/llm/ @Nov1c444
# Backend - RAG (Retrieval Augmented Generation)
/api/core/rag/ @JohnJyong
/api/services/rag_pipeline/ @JohnJyong
/api/services/dataset_service.py @JohnJyong
/api/services/knowledge_service.py @JohnJyong
/api/services/external_knowledge_service.py @JohnJyong
/api/services/hit_testing_service.py @JohnJyong
/api/services/metadata_service.py @JohnJyong
/api/services/vector_service.py @JohnJyong
/api/services/entities/knowledge_entities/ @JohnJyong
/api/services/entities/external_knowledge_entities/ @JohnJyong
/api/controllers/console/datasets/ @JohnJyong
/api/controllers/service_api/dataset/ @JohnJyong
/api/models/dataset.py @JohnJyong
/api/tasks/rag_pipeline/ @JohnJyong
/api/tasks/add_document_to_index_task.py @JohnJyong
/api/tasks/batch_clean_document_task.py @JohnJyong
/api/tasks/clean_document_task.py @JohnJyong
/api/tasks/clean_notion_document_task.py @JohnJyong
/api/tasks/document_indexing_task.py @JohnJyong
/api/tasks/document_indexing_sync_task.py @JohnJyong
/api/tasks/document_indexing_update_task.py @JohnJyong
/api/tasks/duplicate_document_indexing_task.py @JohnJyong
/api/tasks/recover_document_indexing_task.py @JohnJyong
/api/tasks/remove_document_from_index_task.py @JohnJyong
/api/tasks/retry_document_indexing_task.py @JohnJyong
/api/tasks/sync_website_document_indexing_task.py @JohnJyong
/api/tasks/batch_create_segment_to_index_task.py @JohnJyong
/api/tasks/create_segment_to_index_task.py @JohnJyong
/api/tasks/delete_segment_from_index_task.py @JohnJyong
/api/tasks/disable_segment_from_index_task.py @JohnJyong
/api/tasks/disable_segments_from_index_task.py @JohnJyong
/api/tasks/enable_segment_to_index_task.py @JohnJyong
/api/tasks/enable_segments_to_index_task.py @JohnJyong
/api/tasks/clean_dataset_task.py @JohnJyong
/api/tasks/deal_dataset_index_update_task.py @JohnJyong
/api/tasks/deal_dataset_vector_index_task.py @JohnJyong
# Backend - Plugins
/api/core/plugin/ @Mairuis @Yeuoly @Stream29
/api/services/plugin/ @Mairuis @Yeuoly @Stream29
/api/controllers/console/workspace/plugin.py @Mairuis @Yeuoly @Stream29
/api/controllers/inner_api/plugin/ @Mairuis @Yeuoly @Stream29
/api/tasks/process_tenant_plugin_autoupgrade_check_task.py @Mairuis @Yeuoly @Stream29
# Backend - Trigger/Schedule/Webhook
/api/controllers/trigger/ @Mairuis @Yeuoly
/api/controllers/console/app/workflow_trigger.py @Mairuis @Yeuoly
/api/controllers/console/workspace/trigger_providers.py @Mairuis @Yeuoly
/api/core/trigger/ @Mairuis @Yeuoly
/api/core/app/layers/trigger_post_layer.py @Mairuis @Yeuoly
/api/services/trigger/ @Mairuis @Yeuoly
/api/models/trigger.py @Mairuis @Yeuoly
/api/fields/workflow_trigger_fields.py @Mairuis @Yeuoly
/api/repositories/workflow_trigger_log_repository.py @Mairuis @Yeuoly
/api/repositories/sqlalchemy_workflow_trigger_log_repository.py @Mairuis @Yeuoly
/api/libs/schedule_utils.py @Mairuis @Yeuoly
/api/services/workflow/scheduler.py @Mairuis @Yeuoly
/api/schedule/trigger_provider_refresh_task.py @Mairuis @Yeuoly
/api/schedule/workflow_schedule_task.py @Mairuis @Yeuoly
/api/tasks/trigger_processing_tasks.py @Mairuis @Yeuoly
/api/tasks/trigger_subscription_refresh_tasks.py @Mairuis @Yeuoly
/api/tasks/workflow_schedule_tasks.py @Mairuis @Yeuoly
/api/tasks/workflow_cfs_scheduler/ @Mairuis @Yeuoly
/api/events/event_handlers/sync_plugin_trigger_when_app_created.py @Mairuis @Yeuoly
/api/events/event_handlers/update_app_triggers_when_app_published_workflow_updated.py @Mairuis @Yeuoly
/api/events/event_handlers/sync_workflow_schedule_when_app_published.py @Mairuis @Yeuoly
/api/events/event_handlers/sync_webhook_when_app_created.py @Mairuis @Yeuoly
# Backend - Async Workflow
/api/services/async_workflow_service.py @Mairuis @Yeuoly
/api/tasks/async_workflow_tasks.py @Mairuis @Yeuoly
# Backend - Billing
/api/services/billing_service.py @hj24 @zyssyz123
/api/controllers/console/billing/ @hj24 @zyssyz123
# Backend - Enterprise
/api/configs/enterprise/ @GarfieldDai @GareArc
/api/services/enterprise/ @GarfieldDai @GareArc
/api/services/feature_service.py @GarfieldDai @GareArc
/api/controllers/console/feature.py @GarfieldDai @GareArc
/api/controllers/web/feature.py @GarfieldDai @GareArc
# Backend - Database Migrations
/api/migrations/ @snakevash @laipz8200 @MRZHUH
# Backend - Vector DB Middleware
/api/configs/middleware/vdb/* @JohnJyong
# Frontend
/web/ @iamjoel
# Frontend - Web Tests
/.github/workflows/web-tests.yml @iamjoel
# Frontend - App - Orchestration
/web/app/components/workflow/ @iamjoel @zxhlyh
/web/app/components/workflow-app/ @iamjoel @zxhlyh
/web/app/components/app/configuration/ @iamjoel @zxhlyh
/web/app/components/app/app-publisher/ @iamjoel @zxhlyh
# Frontend - WebApp - Chat
/web/app/components/base/chat/ @iamjoel @zxhlyh
# Frontend - WebApp - Completion
/web/app/components/share/text-generation/ @iamjoel @zxhlyh
# Frontend - App - List and Creation
/web/app/components/apps/ @JzoNgKVO @iamjoel
/web/app/components/app/create-app-dialog/ @JzoNgKVO @iamjoel
/web/app/components/app/create-app-modal/ @JzoNgKVO @iamjoel
/web/app/components/app/create-from-dsl-modal/ @JzoNgKVO @iamjoel
# Frontend - App - API Documentation
/web/app/components/develop/ @JzoNgKVO @iamjoel
# Frontend - App - Logs and Annotations
/web/app/components/app/workflow-log/ @JzoNgKVO @iamjoel
/web/app/components/app/log/ @JzoNgKVO @iamjoel
/web/app/components/app/log-annotation/ @JzoNgKVO @iamjoel
/web/app/components/app/annotation/ @JzoNgKVO @iamjoel
# Frontend - App - Monitoring
/web/app/(commonLayout)/app/(appDetailLayout)/\[appId\]/overview/ @JzoNgKVO @iamjoel
/web/app/components/app/overview/ @JzoNgKVO @iamjoel
# Frontend - App - Settings
/web/app/components/app-sidebar/ @JzoNgKVO @iamjoel
# Frontend - RAG - Hit Testing
/web/app/components/datasets/hit-testing/ @JzoNgKVO @iamjoel
# Frontend - RAG - List and Creation
/web/app/components/datasets/list/ @iamjoel @WTW0313
/web/app/components/datasets/create/ @iamjoel @WTW0313
/web/app/components/datasets/create-from-pipeline/ @iamjoel @WTW0313
/web/app/components/datasets/external-knowledge-base/ @iamjoel @WTW0313
# Frontend - RAG - Orchestration (general rule first, specific rules below override)
/web/app/components/rag-pipeline/ @iamjoel @WTW0313
/web/app/components/rag-pipeline/components/rag-pipeline-main.tsx @iamjoel @zxhlyh
/web/app/components/rag-pipeline/store/ @iamjoel @zxhlyh
# Frontend - RAG - Documents List
/web/app/components/datasets/documents/list.tsx @iamjoel @WTW0313
/web/app/components/datasets/documents/create-from-pipeline/ @iamjoel @WTW0313
# Frontend - RAG - Segments List
/web/app/components/datasets/documents/detail/ @iamjoel @WTW0313
# Frontend - RAG - Settings
/web/app/components/datasets/settings/ @iamjoel @WTW0313
# Frontend - Ecosystem - Plugins
/web/app/components/plugins/ @iamjoel @zhsama
# Frontend - Ecosystem - Tools
/web/app/components/tools/ @iamjoel @Yessenia-d
# Frontend - Ecosystem - MarketPlace
/web/app/components/plugins/marketplace/ @iamjoel @Yessenia-d
# Frontend - Login and Registration
/web/app/signin/ @douxc @iamjoel
/web/app/signup/ @douxc @iamjoel
/web/app/reset-password/ @douxc @iamjoel
/web/app/install/ @douxc @iamjoel
/web/app/init/ @douxc @iamjoel
/web/app/forgot-password/ @douxc @iamjoel
/web/app/account/ @douxc @iamjoel
# Frontend - Service Authentication
/web/service/base.ts @douxc @iamjoel
# Frontend - WebApp Authentication and Access Control
/web/app/(shareLayout)/components/ @douxc @iamjoel
/web/app/(shareLayout)/webapp-signin/ @douxc @iamjoel
/web/app/(shareLayout)/webapp-reset-password/ @douxc @iamjoel
/web/app/components/app/app-access-control/ @douxc @iamjoel
# Frontend - Explore Page
/web/app/components/explore/ @CodingOnStar @iamjoel
# Frontend - Personal Settings
/web/app/components/header/account-setting/ @CodingOnStar @iamjoel
/web/app/components/header/account-dropdown/ @CodingOnStar @iamjoel
# Frontend - Analytics
/web/app/components/base/ga/ @CodingOnStar @iamjoel
# Frontend - Base Components
/web/app/components/base/ @iamjoel @zxhlyh
# Frontend - Base Components Tests
/web/app/components/base/**/*.spec.tsx @hyoban @CodingOnStar
# Frontend - Utils and Hooks
/web/utils/classnames.ts @iamjoel @zxhlyh
/web/utils/time.ts @iamjoel @zxhlyh
/web/utils/format.ts @iamjoel @zxhlyh
/web/utils/clipboard.ts @iamjoel @zxhlyh
/web/hooks/use-document-title.ts @iamjoel @zxhlyh
# Frontend - Billing and Education
/web/app/components/billing/ @iamjoel @zxhlyh
/web/app/education-apply/ @iamjoel @zxhlyh
# Frontend - Workspace
/web/app/components/header/account-dropdown/workplace-selector/ @iamjoel @zxhlyh
# Docker
/docker/* @laipz8200

View File

@@ -1,8 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: "\U0001F510 Security Vulnerabilities"
url: "https://github.com/langgenius/dify/security/advisories/new"
about: Report security vulnerabilities through GitHub Security Advisories to ensure responsible disclosure. 💡 Please do not report security vulnerabilities in public issues.
- name: "\U0001F4A1 Model Providers & Plugins"
url: "https://github.com/langgenius/dify-official-plugins/issues/new/choose"
about: Report issues with official plugins or model providers, you will need to provide the plugin version and other relevant details.

View File

@@ -1,6 +1,8 @@
name: "✨ Refactor or Chore"
description: Refactor existing code or perform maintenance chores to improve readability and reliability.
title: "[Refactor/Chore] "
name: "✨ Refactor"
description: Refactor existing code for improved readability and maintainability.
title: "[Chore/Refactor] "
labels:
- refactor
body:
- type: checkboxes
attributes:
@@ -9,7 +11,7 @@ body:
options:
- label: I have read the [Contributing Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) and [Language Policy](https://github.com/langgenius/dify/issues/1542).
required: true
- label: This is only for refactors or chores; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- label: This is only for refactoring, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
required: true
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
@@ -23,14 +25,14 @@ body:
id: description
attributes:
label: Description
placeholder: "Describe the refactor or chore you are proposing."
placeholder: "Describe the refactor you are proposing."
validations:
required: true
- type: textarea
id: motivation
attributes:
label: Motivation
placeholder: "Explain why this refactor or chore is necessary."
placeholder: "Explain why this refactor is necessary."
validations:
required: false
- type: textarea

13
.github/ISSUE_TEMPLATE/tracker.yml vendored Normal file
View File

@@ -0,0 +1,13 @@
name: "👾 Tracker"
description: For inner usages, please do not use this template.
title: "[Tracker] "
labels:
- tracker
body:
- type: textarea
id: content
attributes:
label: Blockers
placeholder: "- [ ] ..."
validations:
required: true

View File

@@ -1,11 +0,0 @@
name: Setup Web Environment
runs:
using: composite
steps:
- name: Setup Vite+
uses: voidzero-dev/setup-vp@20553a7a7429c429a74894104a2835d7fed28a72 # v1.3.0
with:
node-version-file: .nvmrc
cache: true
run-install: true

.github/dependabot.yml (deleted, 212 lines)

@@ -1,212 +0,0 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "/api"
open-pull-requests-limit: 10
schedule:
interval: "weekly"
groups:
flask:
patterns:
- "flask"
- "flask-*"
- "werkzeug"
- "gunicorn"
google:
patterns:
- "google-*"
- "googleapis-*"
opentelemetry:
patterns:
- "opentelemetry-*"
pydantic:
patterns:
- "pydantic"
- "pydantic-*"
llm:
patterns:
- "langfuse"
- "langsmith"
- "litellm"
- "mlflow*"
- "opik"
- "weave*"
- "arize*"
- "tiktoken"
- "transformers"
database:
patterns:
- "sqlalchemy"
- "psycopg2*"
- "psycogreen"
- "redis*"
- "alembic*"
storage:
patterns:
- "boto3*"
- "botocore*"
- "azure-*"
- "bce-*"
- "cos-python-*"
- "esdk-obs-*"
- "google-cloud-storage"
- "opendal"
- "oss2"
- "supabase*"
- "tos*"
vdb:
patterns:
- "alibabacloud*"
- "chromadb"
- "clickhouse-*"
- "clickzetta-*"
- "couchbase"
- "elasticsearch"
- "opensearch-py"
- "oracledb"
- "pgvect*"
- "pymilvus"
- "pymochow"
- "pyobvector"
- "qdrant-client"
- "intersystems-*"
- "tablestore"
- "tcvectordb"
- "tidb-vector"
- "upstash-*"
- "volcengine-*"
- "weaviate-*"
- "xinference-*"
- "mo-vector"
- "mysql-connector-*"
dev:
patterns:
- "coverage"
- "dotenv-linter"
- "faker"
- "lxml-stubs"
- "basedpyright"
- "ruff"
- "pytest*"
- "types-*"
- "boto3-stubs"
- "hypothesis"
- "pandas-stubs"
- "scipy-stubs"
- "import-linter"
- "celery-types"
- "mypy*"
- "pyrefly"
python-packages:
patterns:
- "*"
- package-ecosystem: "uv"
directory: "/api"
open-pull-requests-limit: 10
schedule:
interval: "weekly"
groups:
flask:
patterns:
- "flask"
- "flask-*"
- "werkzeug"
- "gunicorn"
google:
patterns:
- "google-*"
- "googleapis-*"
opentelemetry:
patterns:
- "opentelemetry-*"
pydantic:
patterns:
- "pydantic"
- "pydantic-*"
llm:
patterns:
- "langfuse"
- "langsmith"
- "litellm"
- "mlflow*"
- "opik"
- "weave*"
- "arize*"
- "tiktoken"
- "transformers"
database:
patterns:
- "sqlalchemy"
- "psycopg2*"
- "psycogreen"
- "redis*"
- "alembic*"
storage:
patterns:
- "boto3*"
- "botocore*"
- "azure-*"
- "bce-*"
- "cos-python-*"
- "esdk-obs-*"
- "google-cloud-storage"
- "opendal"
- "oss2"
- "supabase*"
- "tos*"
vdb:
patterns:
- "alibabacloud*"
- "chromadb"
- "clickhouse-*"
- "clickzetta-*"
- "couchbase"
- "elasticsearch"
- "opensearch-py"
- "oracledb"
- "pgvect*"
- "pymilvus"
- "pymochow"
- "pyobvector"
- "qdrant-client"
- "intersystems-*"
- "tablestore"
- "tcvectordb"
- "tidb-vector"
- "upstash-*"
- "volcengine-*"
- "weaviate-*"
- "xinference-*"
- "mo-vector"
- "mysql-connector-*"
dev:
patterns:
- "coverage"
- "dotenv-linter"
- "faker"
- "lxml-stubs"
- "basedpyright"
- "ruff"
- "pytest*"
- "types-*"
- "boto3-stubs"
- "hypothesis"
- "pandas-stubs"
- "scipy-stubs"
- "import-linter"
- "celery-types"
- "mypy*"
- "pyrefly"
python-packages:
patterns:
- "*"
- package-ecosystem: "github-actions"
directory: "/"
open-pull-requests-limit: 5
schedule:
interval: "weekly"
groups:
github-actions-dependencies:
patterns:
- "*"

.github/labeler.yml (deleted, 3 lines)

@@ -1,3 +0,0 @@
web:
- changed-files:
- any-glob-to-any-file: 'web/**'


@@ -20,4 +20,4 @@
- [x] I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!)
- [x] I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
- [x] I've updated the documentation accordingly.
- [x] I ran `make lint` and `make type-check` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods
- [x] I ran `dev/reformat` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods


@@ -1,19 +0,0 @@
name: Anti-Slop PR Check
on:
pull_request_target:
types: [opened, edited, synchronize]
permissions:
pull-requests: write
contents: read
jobs:
anti-slop:
runs-on: ubuntu-latest
steps:
- uses: peakoss/anti-slop@85daca1880e9e1af197fc06ea03349daf08f4202 # v0.2.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
close-pr: false
failure-add-pr-labels: "needs-revision"


@@ -2,40 +2,32 @@ name: Run Pytest
on:
workflow_call:
secrets:
CODECOV_TOKEN:
required: false
permissions:
contents: read
concurrency:
group: api-tests-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
api-unit:
name: API Unit Tests
test:
name: API Tests
runs-on: ubuntu-latest
env:
COVERAGE_FILE: coverage-unit
defaults:
run:
shell: bash
strategy:
matrix:
python-version:
- "3.11"
- "3.12"
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@@ -47,54 +39,41 @@ jobs:
- name: Install dependencies
run: uv sync --project api --dev
- name: Run Unit tests
run: |
uv run --project api bash dev/pytest/pytest_unit_tests.sh
- name: Run ty check
run: |
cd api
uv add --dev ty
uv run ty check || true
- name: Run pyrefly check
run: |
cd api
uv add --dev pyrefly
uv run pyrefly check || true
- name: Coverage Summary
run: |
set -x
# Extract coverage percentage and create a summary
TOTAL_COVERAGE=$(python -c 'import json; print(json.load(open("coverage.json"))["totals"]["percent_covered_display"])')
# Create a detailed coverage summary
echo "### Test Coverage Summary :test_tube:" >> $GITHUB_STEP_SUMMARY
echo "Total Coverage: ${TOTAL_COVERAGE}%" >> $GITHUB_STEP_SUMMARY
uv run --project api coverage report --format=markdown >> $GITHUB_STEP_SUMMARY
- name: Run dify config tests
run: uv run --project api dev/pytest/pytest_config_tests.py
- name: Run Unit Tests
run: uv run --project api bash dev/pytest/pytest_unit_tests.sh
- name: Upload unit coverage data
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
- name: MyPy Cache
uses: actions/cache@v4
with:
name: api-coverage-unit
path: coverage-unit
retention-days: 1
path: api/.mypy_cache
key: mypy-${{ matrix.python-version }}-${{ runner.os }}-${{ hashFiles('api/uv.lock') }}
api-integration:
name: API Integration Tests
runs-on: ubuntu-latest
env:
COVERAGE_FILE: coverage-integration
STORAGE_TYPE: opendal
OPENDAL_SCHEME: fs
OPENDAL_FS_ROOT: /tmp/dify-storage
defaults:
run:
shell: bash
strategy:
matrix:
python-version:
- "3.12"
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
cache-dependency-glob: api/uv.lock
- name: Check UV lockfile
run: uv lock --project api --check
- name: Install dependencies
run: uv sync --project api --dev
- name: Run MyPy Checks
run: dev/mypy-check
- name: Set up dotenvs
run: |
@@ -105,12 +84,12 @@ jobs:
run: sh .github/workflows/expose_service_ports.sh
- name: Set up Sandbox
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
db_postgres
db
redis
sandbox
ssrf_proxy
@@ -119,94 +98,11 @@ jobs:
run: |
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
- name: Run Integration Tests
run: |
uv run --project api pytest \
-n auto \
--timeout "${PYTEST_TIMEOUT:-180}" \
api/tests/integration_tests/workflow \
api/tests/integration_tests/tools \
api/tests/test_containers_integration_tests
- name: Run Workflow
run: uv run --project api bash dev/pytest/pytest_workflow.sh
- name: Upload integration coverage data
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: api-coverage-integration
path: coverage-integration
retention-days: 1
- name: Run Tool
run: uv run --project api bash dev/pytest/pytest_tools.sh
api-coverage:
name: API Coverage
runs-on: ubuntu-latest
needs:
- api-unit
- api-integration
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
COVERAGE_FILE: .coverage
defaults:
run:
shell: bash
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
cache-dependency-glob: api/uv.lock
- name: Install dependencies
run: uv sync --project api --dev
- name: Download coverage data
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
path: coverage-data
pattern: api-coverage-*
merge-multiple: true
- name: Combine coverage
run: |
set -euo pipefail
echo "### API Coverage" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
echo "Merged backend coverage report generated for Codecov project status." >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
unit_coverage="$(find coverage-data -type f -name coverage-unit -print -quit)"
integration_coverage="$(find coverage-data -type f -name coverage-integration -print -quit)"
: "${unit_coverage:?coverage-unit artifact not found}"
: "${integration_coverage:?coverage-integration artifact not found}"
report_file="$(mktemp)"
uv run --project api coverage combine "$unit_coverage" "$integration_coverage"
uv run --project api coverage report --show-missing | tee "$report_file"
echo "Summary: \`$(tail -n 1 "$report_file")\`" >> "$GITHUB_STEP_SUMMARY"
{
echo ""
echo "<details><summary>Coverage report</summary>"
echo ""
echo '```'
cat "$report_file"
echo '```'
echo "</details>"
} >> "$GITHUB_STEP_SUMMARY"
uv run --project api coverage xml -o coverage.xml
- name: Report coverage
if: ${{ env.CODECOV_TOKEN != '' }}
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
files: ./coverage.xml
disable_search: true
flags: api
env:
CODECOV_TOKEN: ${{ env.CODECOV_TOKEN }}
- name: Run TestContainers
run: uv run --project api bash dev/pytest/pytest_testcontainers.sh


@@ -2,11 +2,6 @@ name: autofix.ci
on:
pull_request:
branches: ["main"]
merge_group:
branches: ["main"]
types: [checks_requested]
push:
branches: ["main"]
permissions:
contents: read
@@ -15,111 +10,24 @@ jobs:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
steps:
- name: Complete merge group check
if: github.event_name == 'merge_group'
run: echo "autofix.ci updates pull request branches, not merge group refs."
- uses: actions/checkout@v4
- if: github.event_name != 'merge_group'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Check Docker Compose inputs
if: github.event_name != 'merge_group'
id: docker-compose-changes
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
# Use uv to ensure we have the same ruff version in CI and locally.
- uses: astral-sh/setup-uv@v6
with:
files: |
docker/generate_docker_compose
docker/.env.example
docker/docker-compose-template.yaml
docker/docker-compose.yaml
- name: Check web inputs
if: github.event_name != 'merge_group'
id: web-changes
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
web/**
package.json
pnpm-lock.yaml
pnpm-workspace.yaml
.nvmrc
- name: Check api inputs
if: github.event_name != 'merge_group'
id: api-changes
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
api/**
- if: github.event_name != 'merge_group'
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: "3.11"
- if: github.event_name != 'merge_group'
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
- name: Generate Docker Compose
if: github.event_name != 'merge_group' && steps.docker-compose-changes.outputs.any_changed == 'true'
run: |
cd docker
./generate_docker_compose
- if: github.event_name != 'merge_group' && steps.api-changes.outputs.any_changed == 'true'
run: |
python-version: "3.12"
- run: |
cd api
uv sync --dev
# fmt first to avoid line too long
uv run ruff format ..
# Fix lint errors
uv run ruff check --fix .
uv run ruff check --fix-only .
# Format code
uv run ruff format ..
- name: count migration progress
if: github.event_name != 'merge_group' && steps.api-changes.outputs.any_changed == 'true'
run: |
cd api
./cnt_base.sh
uv run ruff format .
- name: ast-grep
if: github.event_name != 'merge_group' && steps.api-changes.outputs.any_changed == 'true'
run: |
# ast-grep exits 1 if no matches are found; allow idempotent runs.
uvx --from ast-grep-cli ast-grep --pattern 'db.session.query($WHATEVER).filter($HERE)' --rewrite 'db.session.query($WHATEVER).where($HERE)' -l py --update-all || true
uvx --from ast-grep-cli ast-grep --pattern 'session.query($WHATEVER).filter($HERE)' --rewrite 'session.query($WHATEVER).where($HERE)' -l py --update-all || true
uvx --from ast-grep-cli ast-grep -p '$A = db.Column($$$B)' -r '$A = mapped_column($$$B)' -l py --update-all || true
uvx --from ast-grep-cli ast-grep -p '$A : $T = db.Column($$$B)' -r '$A : $T = mapped_column($$$B)' -l py --update-all || true
# Convert Optional[T] to T | None (ignoring quoted types)
cat > /tmp/optional-rule.yml << 'EOF'
id: convert-optional-to-union
language: python
rule:
kind: generic_type
all:
- has:
kind: identifier
pattern: Optional
- has:
kind: type_parameter
has:
kind: type
pattern: $T
fix: $T | None
EOF
uvx --from ast-grep-cli ast-grep scan . --inline-rules "$(cat /tmp/optional-rule.yml)" --update-all
# Fix forward references that were incorrectly converted (Python doesn't support "Type" | None syntax)
find . -name "*.py" -type f -exec sed -i.bak -E 's/"([^"]+)" \| None/Optional["\1"]/g; s/'"'"'([^'"'"']+)'"'"' \| None/Optional['"'"'\1'"'"']/g' {} \;
find . -name "*.py.bak" -type f -delete
- name: Setup web environment
if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true'
uses: ./.github/actions/setup-web
- name: ESLint autofix
if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true'
uvx --from ast-grep-cli sg --pattern 'db.session.query($WHATEVER).filter($HERE)' --rewrite 'db.session.query($WHATEVER).where($HERE)' -l py --update-all
uvx --from ast-grep-cli sg --pattern 'session.query($WHATEVER).filter($HERE)' --rewrite 'session.query($WHATEVER).where($HERE)' -l py --update-all
- name: mdformat
run: |
cd web
vp exec eslint --concurrency=2 --prune-suppressions --quiet || true
- if: github.event_name != 'merge_group'
uses: autofix-ci/action@7a166d7532b277f34e16238930461bf77f9d7ed8 # v1.3.3
uvx mdformat .
- uses: autofix-ci/action@635ffb0c9798bd160680f18fd73371e355b85f27


@@ -4,11 +4,10 @@ on:
push:
branches:
- "main"
- "deploy/**"
- "deploy/dev"
- "deploy/enterprise"
- "build/**"
- "release/e-*"
- "hotfix/**"
- "feat/hitl-backend"
tags:
- "*"
@@ -24,39 +23,27 @@ env:
jobs:
build:
runs-on: ${{ matrix.runs_on }}
runs-on: ${{ matrix.platform == 'linux/arm64' && 'arm64_runner' || 'ubuntu-latest' }}
if: github.repository == 'langgenius/dify'
strategy:
matrix:
include:
- service_name: "build-api-amd64"
image_name_env: "DIFY_API_IMAGE_NAME"
artifact_context: "api"
build_context: "{{defaultContext}}:api"
file: "Dockerfile"
context: "api"
platform: linux/amd64
runs_on: ubuntu-latest
- service_name: "build-api-arm64"
image_name_env: "DIFY_API_IMAGE_NAME"
artifact_context: "api"
build_context: "{{defaultContext}}:api"
file: "Dockerfile"
context: "api"
platform: linux/arm64
runs_on: ubuntu-24.04-arm
- service_name: "build-web-amd64"
image_name_env: "DIFY_WEB_IMAGE_NAME"
artifact_context: "web"
build_context: "{{defaultContext}}"
file: "web/Dockerfile"
context: "web"
platform: linux/amd64
runs_on: ubuntu-latest
- service_name: "build-web-arm64"
image_name_env: "DIFY_WEB_IMAGE_NAME"
artifact_context: "web"
build_context: "{{defaultContext}}"
file: "web/Dockerfile"
context: "web"
platform: linux/arm64
runs_on: ubuntu-24.04-arm
steps:
- name: Prepare
@@ -65,26 +52,28 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
- name: Login to Docker Hub
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@v3
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
uses: docker/setup-buildx-action@v3
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
uses: docker/metadata-action@v5
with:
images: ${{ env[matrix.image_name_env] }}
- name: Build Docker image
id: build
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
uses: docker/build-push-action@v6
with:
context: ${{ matrix.build_context }}
file: ${{ matrix.file }}
context: "{{defaultContext}}:${{ matrix.context }}"
platforms: ${{ matrix.platform }}
build-args: COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
labels: ${{ steps.meta.outputs.labels }}
@@ -101,9 +90,9 @@ jobs:
touch "/tmp/digests/${sanitized_digest}"
- name: Upload digest
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@v4
with:
name: digests-${{ matrix.artifact_context }}-${{ env.PLATFORM_PAIR }}
name: digests-${{ matrix.context }}-${{ env.PLATFORM_PAIR }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
@@ -123,21 +112,21 @@ jobs:
context: "web"
steps:
- name: Download digests
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-${{ matrix.context }}-*
merge-multiple: true
- name: Login to Docker Hub
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@v3
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_TOKEN }}
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
uses: docker/metadata-action@v5
with:
images: ${{ env[matrix.image_name_env] }}
tags: |


@@ -8,18 +8,18 @@ concurrency:
cancel-in-progress: true
jobs:
db-migration-test-postgres:
db-migration-test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: "3.12"
@@ -40,12 +40,12 @@ jobs:
cp middleware.env.example middleware.env
- name: Set up Middlewares
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
db_postgres
db
redis
- name: Prepare configs
@@ -57,60 +57,3 @@ jobs:
env:
DEBUG: true
run: uv run --directory api flask upgrade-db
db-migration-test-mysql:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
cache-dependency-glob: api/uv.lock
- name: Install dependencies
run: uv sync --project api
- name: Ensure Offline migration are supported
run: |
# upgrade
uv run --directory api flask db upgrade 'base:head' --sql
# downgrade
uv run --directory api flask db downgrade 'head:base' --sql
- name: Prepare middleware env for MySQL
run: |
cd docker
cp middleware.env.example middleware.env
sed -i 's/DB_TYPE=postgresql/DB_TYPE=mysql/' middleware.env
sed -i 's/DB_HOST=db_postgres/DB_HOST=db_mysql/' middleware.env
sed -i 's/DB_PORT=5432/DB_PORT=3306/' middleware.env
sed -i 's/DB_USERNAME=postgres/DB_USERNAME=mysql/' middleware.env
- name: Set up Middlewares
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
db_mysql
redis
- name: Prepare configs for MySQL
run: |
cd api
cp .env.example .env
sed -i 's/DB_TYPE=postgresql/DB_TYPE=mysql/' .env
sed -i 's/DB_PORT=5432/DB_PORT=3306/' .env
sed -i 's/DB_USERNAME=postgres/DB_USERNAME=root/' .env
- name: Run DB Migration
env:
DEBUG: true
run: uv run --directory api flask upgrade-db


@@ -1,28 +0,0 @@
name: Deploy Agent Dev
permissions:
contents: read
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "deploy/agent-dev"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/agent-dev'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@0ff4204d59e8e51228ff73bce53f80d53301dee2 # v1.2.5
with:
host: ${{ secrets.AGENT_DEV_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}


@@ -12,11 +12,10 @@ jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/dev'
github.event.workflow_run.conclusion == 'success'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@0ff4204d59e8e51228ff73bce53f80d53301dee2 # v1.2.5
uses: appleboy/ssh-action@v0.1.8
with:
host: ${{ secrets.SSH_HOST }}
username: ${{ secrets.SSH_USER }}


@@ -19,23 +19,11 @@ jobs:
github.event.workflow_run.head_branch == 'deploy/enterprise'
steps:
- name: trigger deployments
env:
DEV_ENV_ADDRS: ${{ vars.DEV_ENV_ADDRS }}
DEPLOY_SECRET: ${{ secrets.DEPLOY_SECRET }}
run: |
IFS=',' read -ra ENDPOINTS <<< "${DEV_ENV_ADDRS:-}"
BODY='{"project":"dify-api","tag":"deploy-enterprise"}'
for ENDPOINT in "${ENDPOINTS[@]}"; do
ENDPOINT="$(echo "$ENDPOINT" | xargs)"
[ -z "$ENDPOINT" ] && continue
API_SIGNATURE=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$DEPLOY_SECRET" | awk '{print "sha256="$2}')
curl -sSf -X POST \
-H "Content-Type: application/json" \
-H "X-Hub-Signature-256: $API_SIGNATURE" \
-d "$BODY" \
"$ENDPOINT"
done
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
with:
host: ${{ secrets.ENTERPRISE_SSH_HOST }}
username: ${{ secrets.ENTERPRISE_SSH_USER }}
password: ${{ secrets.ENTERPRISE_SSH_PASSWORD }}
script: |
${{ vars.ENTERPRISE_SSH_SCRIPT || secrets.ENTERPRISE_SSH_SCRIPT }}

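The deploy-trigger step above signs its JSON body with `openssl dgst -sha256 -hmac` and sends the result as an `X-Hub-Signature-256` header. A minimal Python sketch of the same signing scheme (the secret and body values here are placeholders, not the workflow's real secrets):

```python
import hashlib
import hmac


def sign_payload(secret: str, body: str) -> str:
    # Mirrors: printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$DEPLOY_SECRET"
    digest = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    return f"sha256={digest}"


# Placeholder values; the workflow reads these from repository secrets/vars.
signature = sign_payload("example-secret", '{"project":"dify-api","tag":"deploy-enterprise"}')
print(signature)
```

A receiving endpoint would recompute the digest over the raw request body with the shared secret and compare it against the header (ideally with a constant-time comparison such as `hmac.compare_digest`).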

@@ -1,25 +0,0 @@
name: Deploy HITL
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "build/feat/hitl"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'build/feat/hitl'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@0ff4204d59e8e51228ff73bce53f80d53301dee2 # v1.2.5
with:
host: ${{ secrets.HITL_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}

.github/workflows/deploy-rag-dev.yml (new file, 28 lines)

@@ -0,0 +1,28 @@
name: Deploy RAG Dev
permissions:
contents: read
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "deploy/rag-dev"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/rag-dev'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
with:
host: ${{ secrets.RAG_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}


@@ -6,12 +6,7 @@ on:
- "main"
paths:
- api/Dockerfile
- web/docker/**
- web/Dockerfile
- package.json
- pnpm-lock.yaml
- pnpm-workspace.yaml
- .nvmrc
concurrency:
group: docker-build-${{ github.head_ref || github.run_id }}
@@ -19,40 +14,35 @@ concurrency:
jobs:
build-docker:
runs-on: ${{ matrix.runs_on }}
runs-on: ubuntu-latest
strategy:
matrix:
include:
- service_name: "api-amd64"
platform: linux/amd64
runs_on: ubuntu-latest
context: "{{defaultContext}}:api"
file: "Dockerfile"
context: "api"
- service_name: "api-arm64"
platform: linux/arm64
runs_on: ubuntu-24.04-arm
context: "{{defaultContext}}:api"
file: "Dockerfile"
context: "api"
- service_name: "web-amd64"
platform: linux/amd64
runs_on: ubuntu-latest
context: "{{defaultContext}}"
file: "web/Dockerfile"
context: "web"
- service_name: "web-arm64"
platform: linux/arm64
runs_on: ubuntu-24.04-arm
context: "{{defaultContext}}"
file: "web/Dockerfile"
context: "web"
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
uses: docker/setup-buildx-action@v3
- name: Build Docker Image
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
uses: docker/build-push-action@v6
with:
push: false
context: ${{ matrix.context }}
file: ${{ matrix.file }}
context: "{{defaultContext}}:${{ matrix.context }}"
file: "${{ matrix.file }}"
platforms: ${{ matrix.platform }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -1,7 +1,6 @@
#!/bin/bash
yq eval '.services.weaviate.ports += ["8080:8080"]' -i docker/docker-compose.yaml
yq eval '.services.weaviate.ports += ["50051:50051"]' -i docker/docker-compose.yaml
yq eval '.services.qdrant.ports += ["6333:6333"]' -i docker/docker-compose.yaml
yq eval '.services.chroma.ports += ["8000:8000"]' -i docker/docker-compose.yaml
yq eval '.services["milvus-standalone"].ports += ["19530:19530"]' -i docker/docker-compose.yaml
@@ -14,4 +13,4 @@ yq eval '.services.tidb.ports += ["4000:4000"]' -i docker/tidb/docker-compose.ya
yq eval '.services.oceanbase.ports += ["2881:2881"]' -i docker/docker-compose.yaml
yq eval '.services.opengauss.ports += ["6600:6600"]' -i docker/docker-compose.yaml
echo "Ports exposed for sandbox, weaviate (HTTP 8080, gRPC 50051), tidb, qdrant, chroma, milvus, pgvector, pgvecto-rs, elasticsearch, couchbase, opengauss"
echo "Ports exposed for sandbox, weaviate, tidb, qdrant, chroma, milvus, pgvector, pgvecto-rs, elasticsearch, couchbase, opengauss"


@@ -1,14 +0,0 @@
name: "Pull Request Labeler"
on:
pull_request_target:
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
with:
sync-labels: true


@@ -3,14 +3,10 @@ name: Main CI Pipeline
on:
pull_request:
branches: ["main"]
merge_group:
branches: ["main"]
types: [checks_requested]
push:
branches: ["main"]
permissions:
actions: write
contents: write
pull-requests: write
checks: write
@@ -21,405 +17,62 @@ concurrency:
cancel-in-progress: true
jobs:
pre_job:
name: Skip Duplicate Checks
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip || 'false' }}
steps:
- id: skip_check
continue-on-error: true
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
cancel_others: 'true'
concurrent_skipping: same_content_newer
# Check which paths were changed to determine which tests to run
check-changes:
name: Check Changed Files
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
runs-on: ubuntu-latest
outputs:
api-changed: ${{ steps.changes.outputs.api }}
e2e-changed: ${{ steps.changes.outputs.e2e }}
web-changed: ${{ steps.changes.outputs.web }}
vdb-changed: ${{ steps.changes.outputs.vdb }}
migration-changed: ${{ steps.changes.outputs.migration }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: dorny/paths-filter@fbd0ab8f3e69293af611ebaee6363fc25e6d187d # v4.0.1
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v3
id: changes
with:
filters: |
api:
- 'api/**'
- 'docker/**'
- '.github/workflows/api-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/ssrf_proxy/**'
- 'docker/volumes/sandbox/conf/**'
web:
- 'web/**'
- 'package.json'
- 'pnpm-lock.yaml'
- 'pnpm-workspace.yaml'
- '.nvmrc'
- '.github/workflows/web-tests.yml'
- '.github/actions/setup-web/**'
e2e:
- 'api/**'
- 'api/pyproject.toml'
- 'api/uv.lock'
- 'e2e/**'
- 'web/**'
- 'package.json'
- 'pnpm-lock.yaml'
- 'pnpm-workspace.yaml'
- '.nvmrc'
- 'docker/docker-compose.middleware.yaml'
- 'docker/middleware.env.example'
- '.github/workflows/web-e2e.yml'
- '.github/actions/setup-web/**'
vdb:
- 'api/core/rag/datasource/**'
- 'api/tests/integration_tests/vdb/**'
- 'docker/**'
- '.github/workflows/vdb-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/certbot/**'
- 'docker/couchbase-server/**'
- 'docker/elasticsearch/**'
- 'docker/iris/**'
- 'docker/nginx/**'
- 'docker/pgvector/**'
- 'docker/ssrf_proxy/**'
- 'docker/startupscripts/**'
- 'docker/tidb/**'
- 'docker/volumes/**'
- 'api/uv.lock'
- 'api/pyproject.toml'
migration:
- 'api/migrations/**'
- 'api/.env.example'
- '.github/workflows/db-migration-test.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
- 'docker/ssrf_proxy/**'
- 'docker/volumes/sandbox/conf/**'
# Run tests in parallel while always emitting stable required checks.
api-tests-run:
name: Run API Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.api-changed == 'true'
uses: ./.github/workflows/api-tests.yml
secrets: inherit
api-tests-skip:
name: Skip API Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.api-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped API tests
run: echo "No API-related changes detected; skipping API tests."
# Run tests in parallel
api-tests:
name: API Tests
if: ${{ always() }}
needs:
- pre_job
- check-changes
- api-tests-run
- api-tests-skip
runs-on: ubuntu-latest
steps:
- name: Finalize API Tests status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.api-changed }}
RUN_RESULT: ${{ needs.api-tests-run.result }}
SKIP_RESULT: ${{ needs.api-tests-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "API tests were skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "API tests ran successfully."
exit 0
fi
echo "API tests were required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "API tests were skipped because no API-related files changed."
exit 0
fi
echo "API tests were not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
web-tests-run:
name: Run Web Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.web-changed == 'true'
uses: ./.github/workflows/web-tests.yml
secrets: inherit
web-tests-skip:
name: Skip Web Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.web-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped web tests
run: echo "No web-related changes detected; skipping web tests."
web-tests:
name: Web Tests
if: ${{ always() }}
needs:
- pre_job
- check-changes
- web-tests-run
- web-tests-skip
runs-on: ubuntu-latest
steps:
- name: Finalize Web Tests status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.web-changed }}
RUN_RESULT: ${{ needs.web-tests-run.result }}
SKIP_RESULT: ${{ needs.web-tests-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "Web tests were skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "Web tests ran successfully."
exit 0
fi
echo "Web tests were required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "Web tests were skipped because no web-related files changed."
exit 0
fi
echo "Web tests were not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
web-e2e-run:
name: Run Web Full-Stack E2E
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.e2e-changed == 'true'
uses: ./.github/workflows/web-e2e.yml
web-e2e-skip:
name: Skip Web Full-Stack E2E
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.e2e-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped web full-stack e2e
run: echo "No E2E-related changes detected; skipping web full-stack E2E."
web-e2e:
name: Web Full-Stack E2E
if: ${{ always() }}
needs:
- pre_job
- check-changes
- web-e2e-run
- web-e2e-skip
runs-on: ubuntu-latest
steps:
- name: Finalize Web Full-Stack E2E status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.e2e-changed }}
RUN_RESULT: ${{ needs.web-e2e-run.result }}
SKIP_RESULT: ${{ needs.web-e2e-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "Web full-stack E2E was skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "Web full-stack E2E ran successfully."
exit 0
fi
echo "Web full-stack E2E was required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "Web full-stack E2E was skipped because no E2E-related files changed."
exit 0
fi
echo "Web full-stack E2E was not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
style-check:
name: Style Check
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
uses: ./.github/workflows/style.yml
vdb-tests-run:
name: Run VDB Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.vdb-changed == 'true'
uses: ./.github/workflows/vdb-tests.yml
vdb-tests-skip:
name: Skip VDB Tests
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.vdb-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped VDB tests
run: echo "No VDB-related changes detected; skipping VDB tests."
vdb-tests:
name: VDB Tests
if: ${{ always() }}
needs:
- pre_job
- check-changes
- vdb-tests-run
- vdb-tests-skip
runs-on: ubuntu-latest
steps:
- name: Finalize VDB Tests status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.vdb-changed }}
RUN_RESULT: ${{ needs.vdb-tests-run.result }}
SKIP_RESULT: ${{ needs.vdb-tests-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "VDB tests were skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "VDB tests ran successfully."
exit 0
fi
echo "VDB tests were required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "VDB tests were skipped because no VDB-related files changed."
exit 0
fi
echo "VDB tests were not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1
db-migration-test-run:
name: Run DB Migration Test
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.migration-changed == 'true'
uses: ./.github/workflows/db-migration-test.yml
db-migration-test-skip:
name: Skip DB Migration Test
needs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.migration-changed != 'true'
runs-on: ubuntu-latest
steps:
- name: Report skipped DB migration tests
run: echo "No migration-related changes detected; skipping DB migration tests."
db-migration-test:
name: DB Migration Test
if: ${{ always() }}
needs:
- pre_job
- check-changes
- db-migration-test-run
- db-migration-test-skip
runs-on: ubuntu-latest
steps:
- name: Finalize DB Migration Test status
env:
SHOULD_SKIP_WORKFLOW: ${{ needs.pre_job.outputs.should_skip }}
TESTS_CHANGED: ${{ needs.check-changes.outputs.migration-changed }}
RUN_RESULT: ${{ needs.db-migration-test-run.result }}
SKIP_RESULT: ${{ needs.db-migration-test-skip.result }}
run: |
if [[ "$SHOULD_SKIP_WORKFLOW" == 'true' ]]; then
echo "DB migration tests were skipped because this workflow run duplicated a successful or newer run."
exit 0
fi
if [[ "$TESTS_CHANGED" == 'true' ]]; then
if [[ "$RUN_RESULT" == 'success' ]]; then
echo "DB migration tests ran successfully."
exit 0
fi
echo "DB migration tests were required but finished with result: $RUN_RESULT" >&2
exit 1
fi
if [[ "$SKIP_RESULT" == 'success' ]]; then
echo "DB migration tests were skipped because no migration-related files changed."
exit 0
fi
echo "DB migration tests were not required, but the skip job finished with result: $SKIP_RESULT" >&2
exit 1


@@ -1,88 +0,0 @@
name: Comment with Pyrefly Diff
on:
workflow_run:
workflows:
- Pyrefly Diff Check
types:
- completed
permissions: {}
jobs:
comment:
name: Comment PR with pyrefly diff
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
issues: write
pull-requests: write
if: ${{ github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.pull_requests[0].head.repo.full_name != github.repository }}
steps:
- name: Download pyrefly diff artifact
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const fs = require('fs');
const artifacts = await github.rest.actions.listWorkflowRunArtifacts({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }},
});
const match = artifacts.data.artifacts.find((artifact) =>
artifact.name === 'pyrefly_diff'
);
if (!match) {
throw new Error('pyrefly_diff artifact not found');
}
const download = await github.rest.actions.downloadArtifact({
owner: context.repo.owner,
repo: context.repo.repo,
artifact_id: match.id,
archive_format: 'zip',
});
fs.writeFileSync('pyrefly_diff.zip', Buffer.from(download.data));
- name: Unzip artifact
run: unzip -o pyrefly_diff.zip
- name: Post comment
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const fs = require('fs');
let diff = fs.readFileSync('pyrefly_diff.txt', { encoding: 'utf8' });
let prNumber = null;
try {
prNumber = parseInt(fs.readFileSync('pr_number.txt', { encoding: 'utf8' }), 10);
} catch (err) {
// Fallback to workflow_run payload if artifact is missing or incomplete.
const prs = context.payload.workflow_run.pull_requests || [];
if (prs.length > 0 && prs[0].number) {
prNumber = prs[0].number;
}
}
if (!prNumber) {
throw new Error('PR number not found in artifact or workflow_run payload');
}
const MAX_CHARS = 65000;
if (diff.length > MAX_CHARS) {
diff = diff.slice(0, MAX_CHARS);
diff = diff.slice(0, diff.lastIndexOf('\n'));
diff += '\n\n... (truncated) ...';
}
const body = diff.trim()
? '### Pyrefly Diff\n<details>\n<summary>base → PR</summary>\n\n```diff\n' + diff + '\n```\n</details>'
: '### Pyrefly Diff\nNo changes detected.';
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});


@@ -1,111 +0,0 @@
name: Pyrefly Diff Check
on:
pull_request:
paths:
- 'api/**/*.py'
permissions:
contents: read
jobs:
pyrefly-diff:
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
pull-requests: write
steps:
- name: Checkout PR branch
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Setup Python & UV
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
- name: Install dependencies
run: uv sync --project api --dev
- name: Prepare diagnostics extractor
run: |
git show ${{ github.event.pull_request.head.sha }}:api/libs/pyrefly_diagnostics.py > /tmp/pyrefly_diagnostics.py
- name: Run pyrefly on PR branch
run: |
uv run --directory api --dev pyrefly check 2>&1 \
| uv run --directory api python /tmp/pyrefly_diagnostics.py > /tmp/pyrefly_pr.txt || true
- name: Checkout base branch
run: git checkout ${{ github.base_ref }}
- name: Run pyrefly on base branch
run: |
uv run --directory api --dev pyrefly check 2>&1 \
| uv run --directory api python /tmp/pyrefly_diagnostics.py > /tmp/pyrefly_base.txt || true
- name: Compute diff
run: |
diff -u /tmp/pyrefly_base.txt /tmp/pyrefly_pr.txt > pyrefly_diff.txt || true
- name: Check if line counts match
id: line_count_check
run: |
base_lines=$(wc -l < /tmp/pyrefly_base.txt)
pr_lines=$(wc -l < /tmp/pyrefly_pr.txt)
if [ "$base_lines" -eq "$pr_lines" ]; then
echo "same=true" >> $GITHUB_OUTPUT
else
echo "same=false" >> $GITHUB_OUTPUT
fi
- name: Save PR number
run: |
echo ${{ github.event.pull_request.number }} > pr_number.txt
- name: Upload pyrefly diff
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: pyrefly_diff
path: |
pyrefly_diff.txt
pr_number.txt
- name: Comment PR with pyrefly diff
if: ${{ github.event.pull_request.head.repo.full_name == github.repository && steps.line_count_check.outputs.same == 'false' }}
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const fs = require('fs');
let diff = fs.readFileSync('pyrefly_diff.txt', { encoding: 'utf8' });
const prNumber = context.payload.pull_request.number;
const MAX_CHARS = 65000;
if (diff.length > MAX_CHARS) {
diff = diff.slice(0, MAX_CHARS);
diff = diff.slice(0, diff.lastIndexOf('\n'));
diff += '\n\n... (truncated) ...';
}
const body = diff.trim()
? [
'### Pyrefly Diff',
'<details>',
'<summary>base → PR</summary>',
'',
'```diff',
diff,
'```',
'</details>',
].join('\n')
: '### Pyrefly Diff\nNo changes detected.';
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});


@@ -1,28 +0,0 @@
name: Semantic Pull Request
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
merge_group:
branches: ["main"]
types: [checks_requested]
jobs:
lint:
name: Validate PR title
permissions:
pull-requests: read
runs-on: ubuntu-latest
steps:
- name: Complete merge group check
if: github.event_name == 'merge_group'
run: echo "Semantic PR title validation is handled on pull requests."
- name: Check title
if: github.event_name == 'pull_request'
uses: amannn/action-semantic-pull-request@48f256284bd46cdaab1048c3721360e808335d50 # v6.1.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
- uses: actions/stale@v5
with:
days-before-issue-stale: 15
days-before-issue-close: 3


@@ -12,6 +12,7 @@ permissions:
statuses: write
contents: read
jobs:
python-style:
name: Python Style
@@ -19,13 +20,13 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
uses: tj-actions/changed-files@v46
with:
files: |
api/**
@@ -33,7 +34,7 @@ jobs:
- name: Setup UV and Python
if: steps.changed-files.outputs.any_changed == 'true'
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
uses: astral-sh/setup-uv@v6
with:
enable-cache: false
python-version: "3.12"
@@ -43,14 +44,6 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
run: uv sync --project api --dev
- name: Run Import Linter
if: steps.changed-files.outputs.any_changed == 'true'
run: uv run --directory api --dev lint-imports
- name: Run Type Checks
if: steps.changed-files.outputs.any_changed == 'true'
run: make type-check-core
- name: Dotenv check
if: steps.changed-files.outputs.any_changed == 'true'
run: uv run --project api dotenv-linter ./api/.env.example ./web/.env.example
@@ -61,69 +54,72 @@ jobs:
defaults:
run:
working-directory: ./web
permissions:
checks: write
pull-requests: read
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
uses: tj-actions/changed-files@v46
with:
files: |
web/**
package.json
pnpm-lock.yaml
pnpm-workspace.yaml
.nvmrc
.github/workflows/style.yml
.github/actions/setup-web/**
files: web/**
- name: Setup web environment
if: steps.changed-files.outputs.any_changed == 'true'
uses: ./.github/actions/setup-web
- name: Restore ESLint cache
if: steps.changed-files.outputs.any_changed == 'true'
id: eslint-cache-restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
path: web/.eslintcache
key: ${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-
package_json_file: web/package.json
run_install: false
- name: Setup NodeJS
uses: actions/setup-node@v4
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 22
cache: pnpm
cache-dependency-path: ./web/package.json
- name: Web dependencies
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm install --frozen-lockfile
- name: Web style check
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: vp run lint:ci
run: pnpm run lint
- name: Web tsslint
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: vp run lint:tss
docker-compose-template:
name: Docker Compose Template
runs-on: ubuntu-latest
- name: Web type check
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: vp run type-check
- name: Web dead code check
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: vp run knip
- name: Save ESLint cache
if: steps.changed-files.outputs.any_changed == 'true' && success() && steps.eslint-cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
path: web/.eslintcache
key: ${{ steps.eslint-cache-restore.outputs.cache-primary-key }}
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v46
with:
files: |
docker/generate_docker_compose
docker/.env.example
docker/docker-compose-template.yaml
docker/docker-compose.yaml
- name: Generate Docker Compose
if: steps.changed-files.outputs.any_changed == 'true'
run: |
cd docker
./generate_docker_compose
- name: Check for changes
if: steps.changed-files.outputs.any_changed == 'true'
run: git diff --exit-code
superlinter:
name: SuperLinter
@@ -131,14 +127,14 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
uses: tj-actions/changed-files@v46
with:
files: |
**.sh
@@ -149,7 +145,7 @@ jobs:
.editorconfig
- name: Super-linter
uses: super-linter/super-linter/slim@61abc07d755095a68f4987d1c2c3d1d64408f1f9 # v8.5.0
uses: super-linter/super-linter/slim@v8
if: steps.changed-files.outputs.any_changed == 'true'
env:
BASH_SEVERITY: warning


@@ -6,9 +6,6 @@ on:
- main
paths:
- sdks/**
- package.json
- pnpm-lock.yaml
- pnpm-workspace.yaml
concurrency:
group: sdk-tests-${{ github.head_ref || github.run_id }}
@@ -19,19 +16,23 @@ jobs:
name: unit test for Node.js SDK
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [16, 18, 20, 22]
defaults:
run:
working-directory: sdks/nodejs-client
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: Use Node.js
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: 22
node-version: ${{ matrix.node-version }}
cache: ''
cache-dependency-path: 'pnpm-lock.yaml'


@@ -0,0 +1,78 @@
name: Check i18n Files and Create PR
on:
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.ts'
permissions:
contents: write
pull-requests: write
jobs:
check-and-update:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
defaults:
run:
working-directory: web
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
token: ${{ secrets.GITHUB_TOKEN }}
- name: Check for file changes in i18n/en-US
id: check_files
run: |
recent_commit_sha=$(git rev-parse HEAD)
second_recent_commit_sha=$(git rev-parse HEAD~1)
changed_files=$(git diff --name-only $recent_commit_sha $second_recent_commit_sha -- 'i18n/en-US/*.ts')
echo "Changed files: $changed_files"
if [ -n "$changed_files" ]; then
echo "FILES_CHANGED=true" >> $GITHUB_ENV
file_args=""
for file in $changed_files; do
filename=$(basename "$file" .ts)
file_args="$file_args --file=$filename"
done
echo "FILE_ARGS=$file_args" >> $GITHUB_ENV
echo "File arguments: $file_args"
else
echo "FILES_CHANGED=false" >> $GITHUB_ENV
fi
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
package_json_file: web/package.json
run_install: false
- name: Set up Node.js
if: env.FILES_CHANGED == 'true'
uses: actions/setup-node@v4
with:
node-version: 'lts/*'
cache: pnpm
cache-dependency-path: ./web/package.json
- name: Install dependencies
if: env.FILES_CHANGED == 'true'
working-directory: ./web
run: pnpm install --frozen-lockfile
- name: Generate i18n translations
if: env.FILES_CHANGED == 'true'
working-directory: ./web
run: pnpm run auto-gen-i18n ${{ env.FILE_ARGS }}
- name: Create Pull Request
if: env.FILES_CHANGED == 'true'
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: Update i18n files based on en-US changes
title: 'chore: translate i18n files'
body: This PR was automatically created to update i18n files based on changes in en-US locale.
branch: chore/automated-i18n-updates


@@ -1,426 +0,0 @@
name: Translate i18n Files with Claude Code
# Note: claude-code-action doesn't support push events directly.
# Push events are bridged by trigger-i18n-sync.yml via repository_dispatch.
on:
repository_dispatch:
types: [i18n-sync]
workflow_dispatch:
inputs:
files:
description: 'Specific files to translate (space-separated, e.g., "app common"). Required for full mode; leave empty in incremental mode to use en-US files changed since HEAD~1.'
required: false
type: string
languages:
description: 'Specific languages to translate (space-separated, e.g., "zh-Hans ja-JP"). Leave empty for all supported target languages except en-US.'
required: false
type: string
mode:
description: 'Sync mode: incremental (compare with previous en-US revision) or full (sync all keys in scope)'
required: false
default: incremental
type: choice
options:
- incremental
- full
permissions:
contents: write
pull-requests: write
concurrency:
group: translate-i18n-${{ github.event_name }}-${{ github.ref }}
cancel-in-progress: false
jobs:
translate:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
timeout-minutes: 120
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Configure Git
run: |
git config --global user.name "github-actions[bot]"
git config --global user.email "github-actions[bot]@users.noreply.github.com"
- name: Setup web environment
uses: ./.github/actions/setup-web
- name: Prepare sync context
id: context
shell: bash
run: |
DEFAULT_TARGET_LANGS=$(awk "
/value: '/ {
value=\$2
gsub(/[',]/, \"\", value)
}
/supported: true/ && value != \"en-US\" {
printf \"%s \", value
}
" web/i18n-config/languages.ts | sed 's/[[:space:]]*$//')
generate_changes_json() {
node <<'NODE'
const { execFileSync } = require('node:child_process')
const fs = require('node:fs')
const path = require('node:path')
const repoRoot = process.cwd()
const baseSha = process.env.BASE_SHA || ''
const headSha = process.env.HEAD_SHA || ''
const files = (process.env.CHANGED_FILES || '').split(/\s+/).filter(Boolean)
const englishPath = fileStem => path.join(repoRoot, 'web', 'i18n', 'en-US', `${fileStem}.json`)
const readCurrentJson = (fileStem) => {
const filePath = englishPath(fileStem)
if (!fs.existsSync(filePath))
return null
return JSON.parse(fs.readFileSync(filePath, 'utf8'))
}
const readBaseJson = (fileStem) => {
if (!baseSha)
return null
try {
const relativePath = `web/i18n/en-US/${fileStem}.json`
const content = execFileSync('git', ['show', `${baseSha}:${relativePath}`], { encoding: 'utf8' })
return JSON.parse(content)
}
catch (error) {
return null
}
}
const compareJson = (beforeValue, afterValue) => JSON.stringify(beforeValue) === JSON.stringify(afterValue)
const changes = {}
for (const fileStem of files) {
const currentJson = readCurrentJson(fileStem)
const beforeJson = readBaseJson(fileStem) || {}
const afterJson = currentJson || {}
const added = {}
const updated = {}
const deleted = []
for (const [key, value] of Object.entries(afterJson)) {
if (!(key in beforeJson)) {
added[key] = value
continue
}
if (!compareJson(beforeJson[key], value)) {
updated[key] = {
before: beforeJson[key],
after: value,
}
}
}
for (const key of Object.keys(beforeJson)) {
if (!(key in afterJson))
deleted.push(key)
}
changes[fileStem] = {
fileDeleted: currentJson === null,
added,
updated,
deleted,
}
}
fs.writeFileSync(
'/tmp/i18n-changes.json',
JSON.stringify({
baseSha,
headSha,
files,
changes,
})
)
NODE
}
if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
BASE_SHA="${{ github.event.client_payload.base_sha }}"
HEAD_SHA="${{ github.event.client_payload.head_sha }}"
CHANGED_FILES="${{ github.event.client_payload.changed_files }}"
TARGET_LANGS="$DEFAULT_TARGET_LANGS"
SYNC_MODE="${{ github.event.client_payload.sync_mode || 'incremental' }}"
if [ -n "${{ github.event.client_payload.changes_base64 }}" ]; then
printf '%s' '${{ github.event.client_payload.changes_base64 }}' | base64 -d > /tmp/i18n-changes.json
CHANGES_AVAILABLE="true"
CHANGES_SOURCE="embedded"
elif [ -n "$BASE_SHA" ] && [ -n "$CHANGED_FILES" ]; then
export BASE_SHA HEAD_SHA CHANGED_FILES
generate_changes_json
CHANGES_AVAILABLE="true"
CHANGES_SOURCE="recomputed"
else
printf '%s' '{"baseSha":"","headSha":"","files":[],"changes":{}}' > /tmp/i18n-changes.json
CHANGES_AVAILABLE="false"
CHANGES_SOURCE="unavailable"
fi
else
BASE_SHA=""
HEAD_SHA=$(git rev-parse HEAD)
if [ -n "${{ github.event.inputs.languages }}" ]; then
TARGET_LANGS="${{ github.event.inputs.languages }}"
else
TARGET_LANGS="$DEFAULT_TARGET_LANGS"
fi
SYNC_MODE="${{ github.event.inputs.mode || 'incremental' }}"
if [ -n "${{ github.event.inputs.files }}" ]; then
CHANGED_FILES="${{ github.event.inputs.files }}"
elif [ "$SYNC_MODE" = "incremental" ]; then
BASE_SHA=$(git rev-parse HEAD~1 2>/dev/null || true)
if [ -n "$BASE_SHA" ]; then
CHANGED_FILES=$(git diff --name-only "$BASE_SHA" "$HEAD_SHA" -- 'web/i18n/en-US/*.json' 2>/dev/null | sed -n 's@^.*/@@p' | sed 's/\.json$//' | tr '\n' ' ' | sed 's/[[:space:]]*$//')
else
CHANGED_FILES=$(find web/i18n/en-US -maxdepth 1 -type f -name '*.json' -print | sed -n 's@^.*/@@p' | sed 's/\.json$//' | sort | tr '\n' ' ' | sed 's/[[:space:]]*$//')
fi
elif [ "$SYNC_MODE" = "full" ]; then
echo "workflow_dispatch full mode requires the files input to stay within CI limits." >&2
exit 1
else
CHANGED_FILES=""
fi
if [ "$SYNC_MODE" = "incremental" ] && [ -n "$CHANGED_FILES" ]; then
export BASE_SHA HEAD_SHA CHANGED_FILES
generate_changes_json
CHANGES_AVAILABLE="true"
CHANGES_SOURCE="local"
else
printf '%s' '{"baseSha":"","headSha":"","files":[],"changes":{}}' > /tmp/i18n-changes.json
CHANGES_AVAILABLE="false"
CHANGES_SOURCE="unavailable"
fi
fi
FILE_ARGS=""
if [ -n "$CHANGED_FILES" ]; then
FILE_ARGS="--file $CHANGED_FILES"
fi
LANG_ARGS=""
if [ -n "$TARGET_LANGS" ]; then
LANG_ARGS="--lang $TARGET_LANGS"
fi
{
echo "DEFAULT_TARGET_LANGS=$DEFAULT_TARGET_LANGS"
echo "BASE_SHA=$BASE_SHA"
echo "HEAD_SHA=$HEAD_SHA"
echo "CHANGED_FILES=$CHANGED_FILES"
echo "TARGET_LANGS=$TARGET_LANGS"
echo "SYNC_MODE=$SYNC_MODE"
echo "CHANGES_AVAILABLE=$CHANGES_AVAILABLE"
echo "CHANGES_SOURCE=$CHANGES_SOURCE"
echo "FILE_ARGS=$FILE_ARGS"
echo "LANG_ARGS=$LANG_ARGS"
} >> "$GITHUB_OUTPUT"
echo "Files: ${CHANGED_FILES:-<none>}"
echo "Languages: ${TARGET_LANGS:-<none>}"
echo "Mode: $SYNC_MODE"
- name: Run Claude Code for Translation Sync
if: steps.context.outputs.CHANGED_FILES != ''
uses: anthropics/claude-code-action@88c168b39e7e64da0286d812b6e9fbebb6708185 # v1.0.82
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_bots: 'github-actions[bot]'
show_full_output: ${{ github.event_name == 'workflow_dispatch' }}
prompt: |
You are the i18n sync agent for the Dify repository.
Your job is to keep translations synchronized with the English source files under `${{ github.workspace }}/web/i18n/en-US/`.
Use absolute paths at all times:
- Repo root: `${{ github.workspace }}`
- Web directory: `${{ github.workspace }}/web`
- Language config: `${{ github.workspace }}/web/i18n-config/languages.ts`
Inputs:
- Files in scope: `${{ steps.context.outputs.CHANGED_FILES }}`
- Target languages: `${{ steps.context.outputs.TARGET_LANGS }}`
- Sync mode: `${{ steps.context.outputs.SYNC_MODE }}`
- Base SHA: `${{ steps.context.outputs.BASE_SHA }}`
- Head SHA: `${{ steps.context.outputs.HEAD_SHA }}`
- Scoped file args: `${{ steps.context.outputs.FILE_ARGS }}`
- Scoped language args: `${{ steps.context.outputs.LANG_ARGS }}`
- Structured change set available: `${{ steps.context.outputs.CHANGES_AVAILABLE }}`
- Structured change set source: `${{ steps.context.outputs.CHANGES_SOURCE }}`
- Structured change set file: `/tmp/i18n-changes.json`
Tool rules:
- Use Read for repository files.
- Use Edit for JSON updates.
- Use Bash only for `pnpm`.
- Do not use Bash for `git`, `gh`, or branch management.
Required execution plan:
1. Resolve target languages.
- Use the provided `Target languages` value as the source of truth.
- If it is unexpectedly empty, read `${{ github.workspace }}/web/i18n-config/languages.ts` and use every language with `supported: true` except `en-US`.
2. Stay strictly in scope.
- Only process the files listed in `Files in scope`.
- Only process the resolved target languages, never `en-US`.
- Do not touch unrelated i18n files.
- Do not modify `${{ github.workspace }}/web/i18n/en-US/`.
3. Resolve source changes.
- If `Structured change set available` is `true`, read `/tmp/i18n-changes.json` and use it as the source of truth for file-level and key-level changes.
- For each file entry:
- `added` contains new English keys that need translations.
- `updated` contains stale keys whose English source changed; re-translate using the `after` value.
- `deleted` contains keys that should be removed from locale files.
- `fileDeleted: true` means the English file no longer exists; remove the matching locale file if present.
- Read the current English JSON file for any file that still exists so wording, placeholders, and surrounding terminology stay accurate.
- If `Structured change set available` is `false`, treat this as a scoped full sync and use the current English files plus scoped checks as the source of truth.
4. Run a scoped pre-check before editing:
- `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}`
- Use this command as the source of truth for missing and extra keys inside the current scope.
5. Apply translations.
- For every target language and scoped file:
- If `fileDeleted` is `true`, remove the locale file if it exists and skip the rest of that file.
- If the locale file does not exist yet, create it with `Write` and then continue with `Edit` as needed.
- ADD missing keys.
- UPDATE stale translations when the English value changed.
- DELETE removed keys. Prefer `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }} --auto-remove` for extra keys so deletions stay in scope.
- Preserve placeholders exactly: `{{variable}}`, `${variable}`, HTML tags, component tags, and variable names.
- Match the existing terminology and register used by each locale.
- Prefer one Edit per file when stable, but prioritize correctness over batching.
6. Verify only the edited files.
- Run `pnpm --dir ${{ github.workspace }}/web lint:fix --quiet -- <relative edited i18n file paths>`
- Run `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}`
- If verification fails, fix the remaining problems before continuing.
7. Stop after the scoped locale files are updated and verification passes.
- Do not create branches, commits, or pull requests.
claude_args: |
--max-turns 120
--allowedTools "Read,Write,Edit,Bash(pnpm *),Bash(pnpm:*),Glob,Grep"
- name: Prepare branch metadata
id: pr_meta
if: steps.context.outputs.CHANGED_FILES != ''
shell: bash
run: |
if [ -z "$(git -C "${{ github.workspace }}" status --porcelain -- web/i18n/)" ]; then
echo "has_changes=false" >> "$GITHUB_OUTPUT"
exit 0
fi
SCOPE_HASH=$(printf '%s|%s|%s' "${{ steps.context.outputs.CHANGED_FILES }}" "${{ steps.context.outputs.TARGET_LANGS }}" "${{ steps.context.outputs.SYNC_MODE }}" | sha256sum | cut -c1-8)
HEAD_SHORT=$(printf '%s' "${{ steps.context.outputs.HEAD_SHA }}" | cut -c1-12)
BRANCH_NAME="chore/i18n-sync-${HEAD_SHORT}-${SCOPE_HASH}"
{
echo "has_changes=true"
echo "branch_name=$BRANCH_NAME"
} >> "$GITHUB_OUTPUT"
- name: Commit translation changes
if: steps.pr_meta.outputs.has_changes == 'true'
shell: bash
run: |
git -C "${{ github.workspace }}" checkout -B "${{ steps.pr_meta.outputs.branch_name }}"
git -C "${{ github.workspace }}" add web/i18n/
git -C "${{ github.workspace }}" commit -m "chore(i18n): sync translations with en-US"
- name: Push translation branch
if: steps.pr_meta.outputs.has_changes == 'true'
shell: bash
run: |
if git -C "${{ github.workspace }}" ls-remote --exit-code --heads origin "${{ steps.pr_meta.outputs.branch_name }}" >/dev/null 2>&1; then
git -C "${{ github.workspace }}" push --force-with-lease origin "${{ steps.pr_meta.outputs.branch_name }}"
else
git -C "${{ github.workspace }}" push --set-upstream origin "${{ steps.pr_meta.outputs.branch_name }}"
fi
- name: Create or update translation PR
if: steps.pr_meta.outputs.has_changes == 'true'
env:
BRANCH_NAME: ${{ steps.pr_meta.outputs.branch_name }}
FILES_IN_SCOPE: ${{ steps.context.outputs.CHANGED_FILES }}
TARGET_LANGS: ${{ steps.context.outputs.TARGET_LANGS }}
SYNC_MODE: ${{ steps.context.outputs.SYNC_MODE }}
CHANGES_SOURCE: ${{ steps.context.outputs.CHANGES_SOURCE }}
BASE_SHA: ${{ steps.context.outputs.BASE_SHA }}
HEAD_SHA: ${{ steps.context.outputs.HEAD_SHA }}
REPO_NAME: ${{ github.repository }}
shell: bash
run: |
PR_BODY_FILE=/tmp/i18n-pr-body.md
LANG_COUNT=$(printf '%s\n' "$TARGET_LANGS" | wc -w | tr -d ' ')
LANG_COUNT=${LANG_COUNT:-0}
export LANG_COUNT
node <<'NODE' > "$PR_BODY_FILE"
const fs = require('node:fs')
const changesPath = '/tmp/i18n-changes.json'
const changes = fs.existsSync(changesPath)
? JSON.parse(fs.readFileSync(changesPath, 'utf8'))
: { changes: {} }
const filesInScope = (process.env.FILES_IN_SCOPE || '').split(/\s+/).filter(Boolean)
const lines = [
'## Summary',
'',
`- **Files synced**: \`${process.env.FILES_IN_SCOPE || '<none>'}\``,
`- **Languages updated**: ${process.env.TARGET_LANGS || '<none>'} (${process.env.LANG_COUNT} languages)`,
`- **Sync mode**: ${process.env.SYNC_MODE}${process.env.BASE_SHA ? ` (base: \`${process.env.BASE_SHA.slice(0, 10)}\`, head: \`${process.env.HEAD_SHA.slice(0, 10)}\`)` : ` (head: \`${process.env.HEAD_SHA.slice(0, 10)}\`)`}`,
'',
'### Key changes',
]
for (const fileName of filesInScope) {
const fileChange = changes.changes?.[fileName] || { added: {}, updated: {}, deleted: [], fileDeleted: false }
const addedKeys = Object.keys(fileChange.added || {})
const updatedKeys = Object.keys(fileChange.updated || {})
const deletedKeys = fileChange.deleted || []
lines.push(`- \`${fileName}\`: +${addedKeys.length} / ~${updatedKeys.length} / -${deletedKeys.length}${fileChange.fileDeleted ? ' (file deleted in en-US)' : ''}`)
}
lines.push(
'',
'## Verification',
'',
`- \`pnpm --dir web run i18n:check --file ${process.env.FILES_IN_SCOPE} --lang ${process.env.TARGET_LANGS}\``,
`- \`pnpm --dir web lint:fix --quiet -- <edited i18n files>\``,
'',
'## Notes',
'',
'- This PR was generated from structured en-US key changes produced by `trigger-i18n-sync.yml`.',
`- Structured change source: ${process.env.CHANGES_SOURCE || 'unknown'}.`,
'- Branch name is deterministic for the head SHA and scope, so reruns update the same PR instead of opening duplicates.',
'',
'🤖 Generated with [Claude Code](https://claude.com/claude-code)'
)
process.stdout.write(lines.join('\n'))
NODE
EXISTING_PR_NUMBER=$(gh pr list --repo "$REPO_NAME" --head "$BRANCH_NAME" --state open --json number --jq '.[0].number')
if [ -n "$EXISTING_PR_NUMBER" ] && [ "$EXISTING_PR_NUMBER" != "null" ]; then
gh pr edit "$EXISTING_PR_NUMBER" --repo "$REPO_NAME" --title "chore(i18n): sync translations with en-US" --body-file "$PR_BODY_FILE"
else
gh pr create --repo "$REPO_NAME" --head "$BRANCH_NAME" --base main --title "chore(i18n): sync translations with en-US" --body-file "$PR_BODY_FILE"
fi


@@ -1,171 +0,0 @@
name: Trigger i18n Sync on Push
on:
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.json'
permissions:
contents: write
concurrency:
group: trigger-i18n-sync-${{ github.ref }}
cancel-in-progress: true
jobs:
trigger:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Detect changed files and build structured change set
id: detect
shell: bash
run: |
BASE_SHA="${{ github.event.before }}"
if [ -z "$BASE_SHA" ] || [ "$BASE_SHA" = "0000000000000000000000000000000000000000" ]; then
BASE_SHA=$(git rev-parse HEAD~1 2>/dev/null || true)
fi
HEAD_SHA="${{ github.sha }}"
if [ -n "$BASE_SHA" ]; then
CHANGED_FILES=$(git diff --name-only "$BASE_SHA" "$HEAD_SHA" -- 'web/i18n/en-US/*.json' 2>/dev/null | sed -n 's@^.*/@@p' | sed 's/\.json$//' | tr '\n' ' ' | sed 's/[[:space:]]*$//')
else
CHANGED_FILES=$(find web/i18n/en-US -maxdepth 1 -type f -name '*.json' -print | sed -n 's@^.*/@@p' | sed 's/\.json$//' | sort | tr '\n' ' ' | sed 's/[[:space:]]*$//')
fi
export BASE_SHA HEAD_SHA CHANGED_FILES
node <<'NODE'
const { execFileSync } = require('node:child_process')
const fs = require('node:fs')
const path = require('node:path')
const repoRoot = process.cwd()
const baseSha = process.env.BASE_SHA || ''
const headSha = process.env.HEAD_SHA || ''
const files = (process.env.CHANGED_FILES || '').split(/\s+/).filter(Boolean)
const englishPath = fileStem => path.join(repoRoot, 'web', 'i18n', 'en-US', `${fileStem}.json`)
const readCurrentJson = (fileStem) => {
const filePath = englishPath(fileStem)
if (!fs.existsSync(filePath))
return null
return JSON.parse(fs.readFileSync(filePath, 'utf8'))
}
const readBaseJson = (fileStem) => {
if (!baseSha)
return null
try {
const relativePath = `web/i18n/en-US/${fileStem}.json`
const content = execFileSync('git', ['show', `${baseSha}:${relativePath}`], { encoding: 'utf8' })
return JSON.parse(content)
}
catch (error) {
return null
}
}
const compareJson = (beforeValue, afterValue) => JSON.stringify(beforeValue) === JSON.stringify(afterValue)
const changes = {}
for (const fileStem of files) {
const beforeJson = readBaseJson(fileStem) || {}
const afterJson = readCurrentJson(fileStem) || {}
const added = {}
const updated = {}
const deleted = []
for (const [key, value] of Object.entries(afterJson)) {
if (!(key in beforeJson)) {
added[key] = value
continue
}
if (!compareJson(beforeJson[key], value)) {
updated[key] = {
before: beforeJson[key],
after: value,
}
}
}
for (const key of Object.keys(beforeJson)) {
if (!(key in afterJson))
deleted.push(key)
}
changes[fileStem] = {
fileDeleted: readCurrentJson(fileStem) === null,
added,
updated,
deleted,
}
}
fs.writeFileSync(
'/tmp/i18n-changes.json',
JSON.stringify({
baseSha,
headSha,
files,
changes,
})
)
NODE
if [ -n "$CHANGED_FILES" ]; then
echo "has_changes=true" >> "$GITHUB_OUTPUT"
else
echo "has_changes=false" >> "$GITHUB_OUTPUT"
fi
echo "base_sha=$BASE_SHA" >> "$GITHUB_OUTPUT"
echo "head_sha=$HEAD_SHA" >> "$GITHUB_OUTPUT"
echo "changed_files=$CHANGED_FILES" >> "$GITHUB_OUTPUT"
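The Node heredoc in the detect step boils down to a key-level, three-way classification per file: added, updated, and deleted top-level keys. A minimal standalone sketch of that logic (the sample data is illustrative):

```javascript
// Sketch of the key-level diff computed by the detect step:
// classify each top-level key as added, updated, or deleted.
function diffKeys(beforeJson, afterJson) {
  const added = {}
  const updated = {}
  const deleted = []
  for (const [key, value] of Object.entries(afterJson)) {
    if (!(key in beforeJson)) {
      added[key] = value
    } else if (JSON.stringify(beforeJson[key]) !== JSON.stringify(value)) {
      // Record both sides so the downstream PR body can show the change.
      updated[key] = { before: beforeJson[key], after: value }
    }
  }
  for (const key of Object.keys(beforeJson)) {
    if (!(key in afterJson)) deleted.push(key)
  }
  return { added, updated, deleted }
}

console.log(diffKeys(
  { title: 'Home', save: 'Save', remove: 'Remove' },
  { title: 'Home', save: 'Save changes', create: 'Create' },
))
```

As in the workflow, nested values are compared via `JSON.stringify`, so two objects with the same entries in a different key order count as an update.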
- name: Trigger i18n sync workflow
if: steps.detect.outputs.has_changes == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
env:
BASE_SHA: ${{ steps.detect.outputs.base_sha }}
HEAD_SHA: ${{ steps.detect.outputs.head_sha }}
CHANGED_FILES: ${{ steps.detect.outputs.changed_files }}
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const fs = require('fs')
const changesJson = fs.readFileSync('/tmp/i18n-changes.json', 'utf8')
const changesBase64 = Buffer.from(changesJson).toString('base64')
const maxEmbeddedChangesChars = 48000
const changesEmbedded = changesBase64.length <= maxEmbeddedChangesChars
if (!changesEmbedded) {
console.log(`Structured change set too large to embed safely (${changesBase64.length} chars). Downstream workflow will regenerate it from git history.`)
}
await github.rest.repos.createDispatchEvent({
owner: context.repo.owner,
repo: context.repo.repo,
event_type: 'i18n-sync',
client_payload: {
changed_files: process.env.CHANGED_FILES,
changes_base64: changesEmbedded ? changesBase64 : '',
changes_embedded: changesEmbedded,
sync_mode: 'incremental',
base_sha: process.env.BASE_SHA,
head_sha: process.env.HEAD_SHA,
},
})
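The dispatch step embeds the base64-encoded change set only when it fits under the 48,000-character cap; otherwise the downstream workflow regenerates it from git history. A sketch of that guard (the function name and sample values are illustrative):

```javascript
// Sketch of the payload-size guard used before repository_dispatch:
// embed the base64 change set only when it fits under the cap.
function buildPayload(changesJson, maxEmbeddedChangesChars = 48000) {
  const changesBase64 = Buffer.from(changesJson).toString('base64')
  const changesEmbedded = changesBase64.length <= maxEmbeddedChangesChars
  return {
    changes_base64: changesEmbedded ? changesBase64 : '',
    changes_embedded: changesEmbedded,
  }
}

console.log(buildPayload('{"changes":{}}').changes_embedded)
```

The cap guards against GitHub's `client_payload` size limits; a flag rather than a hard failure lets the consumer fall back gracefully.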


@@ -1,95 +0,0 @@
name: Run Full VDB Tests
on:
schedule:
- cron: '0 3 * * 1'
workflow_dispatch:
permissions:
contents: read
concurrency:
group: vdb-tests-full-${{ github.ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
name: Full VDB Tests
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.12"
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Free Disk Space
uses: endersonmenezes/free-disk-space@7901478139cff6e9d44df5972fd8ab8fcade4db1 # v3.2.2
with:
remove_dotnet: true
remove_haskell: true
remove_tool_cache: true
- name: Setup UV and Python
uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
cache-dependency-glob: api/uv.lock
- name: Check UV lockfile
run: uv lock --project api --check
- name: Install dependencies
run: uv sync --project api --dev
- name: Set up dotenvs
run: |
cp docker/.env.example docker/.env
cp docker/middleware.env.example docker/middleware.env
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
# - name: Set up Vector Store (TiDB)
# uses: hoverkraft-tech/compose-action@v2.0.2
# with:
# compose-file: docker/tidb/docker-compose.yaml
# services: |
# tidb
# tiflash
- name: Set up Full Vector Store Matrix
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.yaml
services: |
weaviate
qdrant
couchbase-server
etcd
minio
milvus-standalone
pgvecto-rs
pgvector
chroma
elasticsearch
oceanbase
- name: setup test config
run: |
echo $(pwd)
ls -lah .
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
# - name: Check VDB Ready (TiDB)
# run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py
- name: Test Vector Stores
run: uv run --project api bash dev/pytest/pytest_vdb.sh


@@ -1,39 +1,37 @@
name: Run VDB Smoke Tests
name: Run VDB Tests
on:
workflow_call:
permissions:
contents: read
concurrency:
group: vdb-tests-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
name: VDB Smoke Tests
name: VDB Tests
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.11"
- "3.12"
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Free Disk Space
uses: endersonmenezes/free-disk-space@7901478139cff6e9d44df5972fd8ab8fcade4db1 # v3.2.2
uses: endersonmenezes/free-disk-space@v2
with:
remove_dotnet: true
remove_haskell: true
remove_tool_cache: true
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@@ -53,26 +51,31 @@ jobs:
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
# - name: Set up Vector Store (TiDB)
# uses: hoverkraft-tech/compose-action@v2.0.2
# with:
# compose-file: docker/tidb/docker-compose.yaml
# services: |
# tidb
# tiflash
- name: Set up Vector Store (TiDB)
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: docker/tidb/docker-compose.yaml
services: |
tidb
tiflash
- name: Set up Vector Stores for Smoke Coverage
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma, MyScale, ElasticSearch, Couchbase, OceanBase)
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: |
docker/docker-compose.yaml
services: |
db_postgres
redis
weaviate
qdrant
couchbase-server
etcd
minio
milvus-standalone
pgvecto-rs
pgvector
chroma
elasticsearch
oceanbase
- name: setup test config
run: |
@@ -80,13 +83,8 @@ jobs:
ls -lah .
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
# - name: Check VDB Ready (TiDB)
# run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py
- name: Check VDB Ready (TiDB)
run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py
- name: Test Vector Stores
run: |
uv run --project api pytest --timeout "${PYTEST_TIMEOUT:-180}" \
api/tests/integration_tests/vdb/chroma \
api/tests/integration_tests/vdb/pgvector \
api/tests/integration_tests/vdb/qdrant \
api/tests/integration_tests/vdb/weaviate
run: uv run --project api bash dev/pytest/pytest_vdb.sh


@@ -1,68 +0,0 @@
name: Web Full-Stack E2E
on:
workflow_call:
permissions:
contents: read
concurrency:
group: web-e2e-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
name: Web Full-Stack E2E
runs-on: ubuntu-latest
defaults:
run:
shell: bash
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Setup web dependencies
uses: ./.github/actions/setup-web
- name: Setup UV and Python
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
cache-dependency-glob: api/uv.lock
- name: Install API dependencies
run: uv sync --project api --dev
- name: Install Playwright browser
working-directory: ./e2e
run: vp run e2e:install
- name: Run isolated source-api and built-web Cucumber E2E tests
working-directory: ./e2e
env:
E2E_ADMIN_EMAIL: e2e-admin@example.com
E2E_ADMIN_NAME: E2E Admin
E2E_ADMIN_PASSWORD: E2eAdmin12345
E2E_FORCE_WEB_BUILD: "1"
E2E_INIT_PASSWORD: E2eInit12345
run: vp run e2e:full
- name: Upload Cucumber report
if: ${{ !cancelled() }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: cucumber-report
path: e2e/cucumber-report
retention-days: 7
- name: Upload E2E logs
if: ${{ !cancelled() }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: e2e-logs
path: e2e/.logs
retention-days: 7


@@ -2,12 +2,6 @@ name: Web Tests
on:
workflow_call:
secrets:
CODECOV_TOKEN:
required: false
permissions:
contents: read
concurrency:
group: web-tests-${{ github.head_ref || github.run_id }}
@@ -15,77 +9,45 @@ concurrency:
jobs:
test:
name: Web Tests (${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
name: Web Tests
runs-on: ubuntu-latest
env:
VITEST_COVERAGE_SCOPE: app-components
strategy:
fail-fast: false
matrix:
shardIndex: [1, 2, 3, 4]
shardTotal: [4]
defaults:
run:
shell: bash
working-directory: ./web
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Setup web environment
uses: ./.github/actions/setup-web
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v46
with:
files: web/**
- name: Install pnpm
if: steps.changed-files.outputs.any_changed == 'true'
uses: pnpm/action-setup@v4
with:
package_json_file: web/package.json
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v4
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 22
cache: pnpm
cache-dependency-path: ./web/package.json
- name: Install dependencies
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm install --frozen-lockfile
- name: Run tests
run: vp test run --reporter=blob --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} --coverage
- name: Upload blob report
if: ${{ !cancelled() }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: blob-report-${{ matrix.shardIndex }}
path: web/.vitest-reports/*
include-hidden-files: true
retention-days: 1
merge-reports:
name: Merge Test Reports
if: ${{ !cancelled() }}
needs: [test]
runs-on: ubuntu-latest
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
defaults:
run:
shell: bash
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Setup web environment
uses: ./.github/actions/setup-web
- name: Download blob reports
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
path: web/.vitest-reports
pattern: blob-report-*
merge-multiple: true
- name: Merge reports
run: vp test --merge-reports --coverage --silent=passed-only
- name: Report coverage
if: ${{ env.CODECOV_TOKEN != '' }}
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
directory: web/coverage
flags: web
env:
CODECOV_TOKEN: ${{ env.CODECOV_TOKEN }}
run: pnpm test

.gitignore vendored

@@ -6,9 +6,6 @@ __pycache__/
# C extensions
*.so
# *db files
*.db
# Distribution / packaging
.Python
build/
@@ -100,7 +97,6 @@ __pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat-schedule.db
celerybeat.pid
# SageMath parsed files
@@ -127,18 +123,17 @@ venv.bak/
# mkdocs documentation
/site
# type checking
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
pyrightconfig.json
!api/pyrightconfig.json
# Pyre type checker
.pyre/
.idea/'
.DS_Store
web/.vscode/settings.json
# Intellij IDEA Files
.idea/*
@@ -185,17 +180,13 @@ docker/volumes/couchbase/*
docker/volumes/oceanbase/*
docker/volumes/plugin_daemon/*
docker/volumes/matrixone/*
docker/volumes/mysql/*
docker/volumes/seekdb/*
!docker/volumes/oceanbase/init.d
docker/volumes/iris/*
docker/nginx/conf.d/default.conf
docker/nginx/ssl/*
!docker/nginx/ssl/.gitkeep
docker/middleware.env
docker/docker-compose.override.yaml
docker/env-backup/*
sdks/python-client/build
sdks/python-client/dist
@@ -204,6 +195,7 @@ sdks/python-client/dify_client.egg-info
.vscode/*
!.vscode/launch.json.template
!.vscode/README.md
pyrightconfig.json
api/.vscode
# vscode Code History Extension
.history
@@ -212,8 +204,6 @@ api/.vscode
# pnpm
/.pnpm-store
node_modules
.vite-hooks/_
# plugin migrate
plugins.jsonl
@@ -221,24 +211,10 @@ plugins.jsonl
# mise
mise.toml
# Next.js build output
.next/
# AI Assistant
.roo/
/.claude/worktrees/
api/.env.backup
/clickzetta
# Benchmark
scripts/stress-test/setup/config/
scripts/stress-test/reports/
# mcp
.playwright-mcp/
.serena/
# settings
*.local.json
*.local.md
# Code Agent Folder
.qoder/*

.mcp.json Normal file

@@ -0,0 +1,34 @@
{
"mcpServers": {
"context7": {
"type": "http",
"url": "https://mcp.context7.com/mcp"
},
"sequential-thinking": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
"env": {}
},
"github": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}"
}
},
"fetch": {
"type": "stdio",
"command": "uvx",
"args": ["mcp-server-fetch"],
"env": {}
},
"playwright": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@playwright/mcp@latest"],
"env": {}
}
}
}

.npmrc

@@ -1 +0,0 @@
save-exact=true

.nvmrc

@@ -1 +0,0 @@
22


@@ -1,119 +0,0 @@
#!/bin/sh
# get the list of modified files
files=$(git diff --cached --name-only)
# check if api or web directory is modified
api_modified=false
web_modified=false
skip_web_checks=false
git_path() {
git rev-parse --git-path "$1"
}
if [ -f "$(git_path MERGE_HEAD)" ] || \
[ -f "$(git_path CHERRY_PICK_HEAD)" ] || \
[ -f "$(git_path REVERT_HEAD)" ] || \
[ -f "$(git_path SQUASH_MSG)" ] || \
[ -d "$(git_path rebase-merge)" ] || \
[ -d "$(git_path rebase-apply)" ]; then
skip_web_checks=true
fi
for file in $files
do
# Use POSIX compliant pattern matching
case "$file" in
api/*.py)
# set api_modified flag to true
api_modified=true
;;
web/*)
# set web_modified flag to true
web_modified=true
;;
esac
done
# run linters based on the modified modules
if $api_modified; then
echo "Running Ruff linter on api module"
# run Ruff linter auto-fixing
uv run --project api --dev ruff check --fix ./api
# run Ruff linter checks
uv run --project api --dev ruff check ./api || status=$?
status=${status:-0}
if [ $status -ne 0 ]; then
echo "Ruff linter on api module error, exit code: $status"
echo "Please run 'dev/reformat' to fix the fixable linting errors."
exit 1
fi
fi
if $web_modified; then
if $skip_web_checks; then
echo "Git operation in progress, skipping web checks"
exit 0
fi
echo "Running ESLint on web module"
if git diff --cached --quiet -- 'web/**/*.ts' 'web/**/*.tsx'; then
web_ts_modified=false
else
ts_diff_status=$?
if [ $ts_diff_status -eq 1 ]; then
web_ts_modified=true
else
echo "Unable to determine staged TypeScript changes (git exit code: $ts_diff_status)."
exit $ts_diff_status
fi
fi
cd ./web || exit 1
vp staged
if $web_ts_modified; then
echo "Running TypeScript type-check:tsgo"
if ! pnpm run type-check:tsgo; then
echo "Type check failed. Please run 'pnpm run type-check:tsgo' to fix the errors."
exit 1
fi
else
echo "No staged TypeScript changes detected, skipping type-check:tsgo"
fi
echo "Running unit tests check"
modified_files=$(git diff --cached --name-only -- utils | grep -v '\.spec\.ts$' || true)
if [ -n "$modified_files" ]; then
for file in $modified_files; do
test_file="${file%.*}.spec.ts"
echo "Checking for test file: $test_file"
# check if the test file exists
if [ -f "../$test_file" ]; then
echo "Detected changes in $file, running corresponding unit tests..."
pnpm run test "../$test_file"
if [ $? -ne 0 ]; then
echo "Unit tests failed. Please fix the errors before committing."
exit 1
fi
echo "Unit tests for $file passed."
else
echo "Warning: $file does not have a corresponding test file."
fi
done
echo "All unit tests for modified web/utils files have passed."
fi
cd ../
fi
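The hook's test-file lookup relies on the shell parameter expansion `${file%.*}.spec.ts`, which swaps a file's last extension for `.spec.ts`. In JavaScript terms (a sketch covering typical `web/utils` paths):

```javascript
// Sketch: map a changed source file to the spec file the hook expects,
// mirroring the shell expansion "${file%.*}.spec.ts" for typical paths.
function specPathFor(file) {
  // Strip the final extension (no '.' or '/' allowed inside it).
  return file.replace(/\.[^./]+$/, '') + '.spec.ts'
}

console.log(specPathFor('web/utils/format.ts'))
```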


@@ -8,7 +8,8 @@
"module": "flask",
"env": {
"FLASK_APP": "app.py",
"FLASK_ENV": "development"
"FLASK_ENV": "development",
"GEVENT_SUPPORT": "True"
},
"args": [
"run",
@@ -27,7 +28,9 @@
"type": "debugpy",
"request": "launch",
"module": "celery",
"env": {},
"env": {
"GEVENT_SUPPORT": "True"
},
"args": [
"-A",
"app.celery",
@@ -37,7 +40,7 @@
"-c",
"1",
"-Q",
"dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution",
"dataset,generation,mail,ops_trace",
"--loglevel",
"INFO"
],


@@ -1,45 +0,0 @@
# AGENTS.md
## Project Overview
Dify is an open-source platform for developing LLM applications with an intuitive interface combining agentic AI workflows, RAG pipelines, agent capabilities, and model management.
The codebase is split into:
- **Backend API** (`/api`): Python Flask application organized with Domain-Driven Design
- **Frontend Web** (`/web`): Next.js application using TypeScript and React
- **Docker deployment** (`/docker`): Containerized deployment configurations
## Backend Workflow
- Read `api/AGENTS.md` for details
- Run backend CLI commands through `uv run --project api <command>`.
- Integration tests are CI-only and are not expected to run in the local environment.
## Frontend Workflow
- Read `web/AGENTS.md` for details
## Testing & Quality Practices
- Follow TDD: red → green → refactor.
- Use `pytest` for backend tests with Arrange-Act-Assert structure.
- Enforce strong typing; avoid `Any` and prefer explicit type annotations.
- Write self-documenting code; only add comments that explain intent.
## Language Style
- **Python**: Keep type hints on functions and attributes, and implement relevant special methods (e.g., `__repr__`, `__str__`). Prefer `TypedDict` over `dict` or `Mapping` for type safety and better code documentation.
- **TypeScript**: Use the strict config, rely on ESLint (`pnpm lint:fix` preferred) plus `pnpm type-check:tsgo`, and avoid `any` types.
## General Practices
- Prefer editing existing files; add new documentation only when requested.
- Inject dependencies through constructors and preserve clean architecture boundaries.
- Handle errors with domain-specific exceptions at the correct layer.
## Project Conventions
- Backend architecture adheres to DDD and Clean Architecture principles.
- Async work runs through Celery with Redis as the broker.
- Frontend user-facing strings must use `web/i18n/en-US/`; avoid hardcoded text.

AGENTS.md Symbolic link

@@ -0,0 +1 @@
CLAUDE.md


@@ -1 +0,0 @@
AGENTS.md

CLAUDE.md Normal file

@@ -0,0 +1,89 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Dify is an open-source platform for developing LLM applications with an intuitive interface combining agentic AI workflows, RAG pipelines, agent capabilities, and model management.
The codebase consists of:
- **Backend API** (`/api`): Python Flask application with Domain-Driven Design architecture
- **Frontend Web** (`/web`): Next.js 15 application with TypeScript and React 19
- **Docker deployment** (`/docker`): Containerized deployment configurations
## Development Commands
### Backend (API)
All Python commands must be prefixed with `uv run --project api`:
```bash
# Start development servers
./dev/start-api # Start API server
./dev/start-worker # Start Celery worker
# Run tests
uv run --project api pytest # Run all tests
uv run --project api pytest tests/unit_tests/ # Unit tests only
uv run --project api pytest tests/integration_tests/ # Integration tests
# Code quality
./dev/reformat # Run all formatters and linters
uv run --project api ruff check --fix ./ # Fix linting issues
uv run --project api ruff format ./ # Format code
uv run --project api mypy . # Type checking
```
### Frontend (Web)
```bash
cd web
pnpm lint # Run ESLint
pnpm eslint-fix # Fix ESLint issues
pnpm test # Run Jest tests
```
## Testing Guidelines
### Backend Testing
- Use `pytest` for all backend tests
- Write tests first (TDD approach)
- Test structure: Arrange-Act-Assert
## Code Style Requirements
### Python
- Use type hints for all functions and class attributes
- No `Any` types unless absolutely necessary
- Implement special methods (`__repr__`, `__str__`) appropriately
### TypeScript/JavaScript
- Strict TypeScript configuration
- ESLint with Prettier integration
- Avoid `any` type
## Important Notes
- **Environment Variables**: Always use UV for Python commands: `uv run --project api <command>`
- **Comments**: Only write meaningful comments that explain "why", not "what"
- **File Creation**: Always prefer editing existing files over creating new ones
- **Documentation**: Don't create documentation files unless explicitly requested
- **Code Quality**: Always run `./dev/reformat` before committing backend changes
## Common Development Tasks
### Adding a New API Endpoint
1. Create controller in `/api/controllers/`
1. Add service logic in `/api/services/`
1. Update routes in controller's `__init__.py`
1. Write tests in `/api/tests/`
## Project-Specific Conventions
- All async tasks use Celery with Redis as broker
- **Internationalization**: Frontend supports multiple languages with English (`web/i18n/en-US/`) as the source. All user-facing text must use i18n keys, no hardcoded strings. Edit corresponding module files in `en-US/` directory for translations.


@@ -77,8 +77,6 @@ How we prioritize:
For setting up the frontend service, please refer to our comprehensive [guide](https://github.com/langgenius/dify/blob/main/web/README.md) in the `web/README.md` file. This document provides detailed instructions to help you set up the frontend environment properly.
**Testing**: All React components must have comprehensive test coverage. See [web/docs/test.md](https://github.com/langgenius/dify/blob/main/web/docs/test.md) for the canonical frontend testing guidelines and follow every requirement described there.
#### Backend
For setting up the backend service, kindly refer to our detailed [instructions](https://github.com/langgenius/dify/blob/main/api/README.md) in the `api/README.md` file. This document contains step-by-step guidance to help you get the backend up and running smoothly.
@@ -97,8 +95,3 @@ Feel free to reach out if you encounter any issues during the setup process.
## Getting Help
If you ever get stuck or get a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat.
## Automated Agent Contributions
> [!NOTE]
> If you are an automated agent, just add 🤖🤖🤖 to the end of the PR title to opt-in.

CONTRIBUTING_CN.md Normal file

@@ -0,0 +1,97 @@
# Contribution Guide
Thank you so much for considering contributing to Dify! As a startup with limited resources, we aspire to build the most intuitive workflow for developing and managing LLM applications. Every contribution from the community truly counts.
We need to stay nimble and iterate fast, but we also want to make sure contributors get as smooth an experience as possible. This contribution guide is meant to familiarize you with the codebase and how we work, so you can move on to the fun part as quickly as possible.
This guide, like Dify itself, is a constant work in progress. We appreciate your understanding if parts of it lag behind the actual project, and we welcome any suggestions for improvement.
Regarding licensing, please take a minute to read our short [License and Contributor Agreement](./LICENSE). The community also adheres to the [code of conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Before you start
Looking for something to tackle? Browse our [good first issues](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) and pick one to get started!
Got a cool new model runtime or tool to add? Open a PR in our [plugin repository](https://github.com/langgenius/dify-plugins) and show us what you have built.
Need to update an existing model runtime or tool, or fix a bug? Head over to our [official plugin repository](https://github.com/langgenius/dify-official-plugins) and work your magic!
Join us, contribute, and let's build something great together! 💡✨
Remember to link an existing issue or open a new one in the PR description.
### Bug reports
> [!IMPORTANT]
> Please be sure to include the following information when submitting a bug report:
- A clear, descriptive title
- A detailed description of the bug, including any error messages
- Steps to reproduce
- Expected behavior
- **Logs**, if the issue is backend-related; this is really important and can be found in the docker-compose logs
- Screenshots or videos, if applicable
How we prioritize:
| Issue Type | Priority |
| -------------------------------------------------- | ---------- |
| Bugs in core functions (cloud service, failing to log in, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, confusing but working UI) | Low Priority |
### Feature requests
> [!NOTE]
> Please be sure to include the following information when submitting a feature request:
- A clear, descriptive title
- A detailed description of the feature
- A use case for the feature
- Any other context or screenshots about the feature request
How we prioritize:
| Feature Type | Priority |
| -------------------------------------------------- | ---------- |
| Features labeled as high priority by a team member | High Priority |
| Popular feature requests from our [community feedback board](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
## Submitting your PR
### Project setup
### PR submission process
1. Fork this repository
1. Before drafting a PR, please create an issue to discuss the changes you want to make
1. Create a new branch for your changes
1. Please add tests for your changes accordingly
1. Ensure your code passes the existing tests
1. Please link the issue in the PR description, in the format `fixes #<issue number>`
1. Get merged!
#### Frontend
For setting up the frontend service, please refer to our comprehensive [guide](https://github.com/langgenius/dify/blob/main/web/README.md) in the `web/README.md` file. This document provides detailed instructions to help you set up the frontend environment properly.
#### Backend
For setting up the backend service, kindly refer to our detailed [instructions](https://github.com/langgenius/dify/blob/main/api/README.md) in the `api/README.md` file. This document contains step-by-step guidance to help you get the backend up and running smoothly.
#### Other things to note
We recommend reviewing this document carefully before getting set up, as it contains essential information about:
- Prerequisites and dependencies
- Installation steps
- Configuration details
- Common troubleshooting tips
Feel free to reach out if you encounter any issues during the setup process.
## Getting help
If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat.

CONTRIBUTING_DE.md Normal file
@@ -0,0 +1,95 @@
# MITWIRKEN
Sie mÃļchten also zu Dify beitragen - das ist großartig, wir kÃļnnen es kaum erwarten zu sehen, was Sie entwickeln. Als Startup mit begrenztem Personal und Finanzierung haben wir große Ambitionen, den intuitivsten Workflow fÃŧr die Entwicklung und Verwaltung von LLM-Anwendungen zu gestalten. Jede Hilfe aus der Community zählt wirklich.
Wir mÃŧssen wendig sein und schnell liefern, aber wir mÃļchten auch sicherstellen, dass Mitwirkende wie Sie eine mÃļglichst reibungslose Erfahrung beim Beitragen haben. Wir haben diesen Leitfaden zusammengestellt, damit Sie sich schnell mit der Codebasis und unserer Arbeitsweise mit Mitwirkenden vertraut machen kÃļnnen.
Dieser Leitfaden ist, wie Dify selbst, in ständiger Entwicklung. Wir sind dankbar fÃŧr Ihr Verständnis, falls er manchmal hinter dem eigentlichen Projekt zurÃŧckbleibt, und begrÃŧßen jedes Feedback zur Verbesserung.
Bitte nehmen Sie sich einen Moment Zeit, um unsere [Lizenz- und Mitwirkungsvereinbarung](./LICENSE) zu lesen. Die Community hält sich außerdem an den [Verhaltenskodex](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Bevor Sie loslegen
Suchen Sie nach einer Aufgabe? DurchstÃļbern Sie unsere [Einsteiger-Issues](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) und wählen Sie eines zum Einstieg!
Haben Sie eine neue Modell-Runtime oder ein Tool hinzuzufÃŧgen? Öffnen Sie einen PR in unserem [Plugin-Repository](https://github.com/langgenius/dify-plugins).
MÃļchten Sie eine bestehende Modell-Runtime oder ein Tool aktualisieren oder Bugs beheben? Besuchen Sie unser [offizielles Plugin-Repository](https://github.com/langgenius/dify-official-plugins)!
Vergessen Sie nicht, in der PR-Beschreibung ein bestehendes Issue zu verlinken oder ein neues zu erstellen.
### Fehlermeldungen
> [!IMPORTANT]
> Bitte stellen Sie sicher, dass Sie folgende Informationen bei der Einreichung eines Fehlerberichts angeben:
- Ein klarer und beschreibender Titel
- Eine detaillierte Beschreibung des Fehlers, einschließlich Fehlermeldungen
- Schritte zur Reproduktion des Fehlers
- Erwartetes Verhalten
- **Logs** bei Backend-Problemen (sehr wichtig, zu finden in docker-compose logs)
- Screenshots oder Videos, falls zutreffend
Unsere Priorisierung:
| Fehlertyp | Priorität |
| ------------------------------------------------------------ | --------------- |
| Fehler in Kernfunktionen (Cloud-Service, Login nicht mÃļglich, Anwendungen funktionieren nicht, SicherheitslÃŧcken) | Kritisch |
| Nicht-kritische Fehler, Leistungsverbesserungen | Mittlere Priorität |
| Kleinere Korrekturen (Tippfehler, verwirrende aber funktionierende UI) | Niedrige Priorität |
### Feature-Anfragen
> [!NOTE]
> Bitte stellen Sie sicher, dass Sie folgende Informationen bei der Einreichung einer Feature-Anfrage angeben:
- Ein klarer und beschreibender Titel
- Eine detaillierte Beschreibung des Features
- Ein Anwendungsfall fÃŧr das Feature
- Zusätzlicher Kontext oder Screenshots zur Feature-Anfrage
Unsere Priorisierung:
| Feature-Typ | Priorität |
| ------------------------------------------------------------ | --------------- |
| Hochprioritäre Features (durch Teammitglied gekennzeichnet) | Hohe Priorität |
| Beliebte Feature-Anfragen aus unserem [Community-Feedback-Board](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Mittlere Priorität |
| Nicht-Kernfunktionen und kleinere Verbesserungen | Niedrige Priorität |
| Wertvoll, aber nicht dringend | Zukunfts-Feature |
## Einreichen Ihres PRs
### Pull-Request-Prozess
1. Repository forken
1. Vor dem Erstellen eines PRs bitte ein Issue zur Diskussion der Änderungen erstellen
1. Einen neuen Branch fÃŧr Ihre Änderungen erstellen
1. Tests fÃŧr Ihre Änderungen hinzufÃŧgen
1. Sicherstellen, dass Ihr Code die bestehenden Tests besteht
1. Issue in der PR-Beschreibung verlinken (`fixes #<issue_number>`)
1. Auf den Merge warten!
### Projekt einrichten
#### Frontend
FÃŧr die Einrichtung des Frontend-Service folgen Sie bitte unserer ausfÃŧhrlichen [Anleitung](https://github.com/langgenius/dify/blob/main/web/README.md) in der Datei `web/README.md`.
#### Backend
FÃŧr die Einrichtung des Backend-Service folgen Sie bitte unseren detaillierten [Anweisungen](https://github.com/langgenius/dify/blob/main/api/README.md) in der Datei `api/README.md`.
#### Weitere Hinweise
Wir empfehlen, dieses Dokument sorgfältig zu lesen, da es wichtige Informationen enthält Ãŧber:
- Voraussetzungen und Abhängigkeiten
- Installationsschritte
- Konfigurationsdetails
- Häufige ProblemlÃļsungen
Bei Problemen während der Einrichtung kÃļnnen Sie sich gerne an uns wenden.
## Hilfe bekommen
Wenn Sie beim Mitwirken Fragen haben oder nicht weiterkommen, stellen Sie Ihre Fragen einfach im entsprechenden GitHub Issue oder besuchen Sie unseren [Discord](https://discord.gg/8Tpq4AcN9c) fÃŧr einen schnellen Austausch.

CONTRIBUTING_ES.md Normal file
@@ -0,0 +1,97 @@
# CONTRIBUIR
Así que estÃĄs buscando contribuir a Dify - eso es fantÃĄstico, estamos ansiosos por ver lo que haces. Como una startup con personal y financiaciÃŗn limitados, tenemos grandes ambiciones de diseÃąar el flujo de trabajo mÃĄs intuitivo para construir y gestionar aplicaciones LLM. Cualquier ayuda de la comunidad cuenta, realmente.
Necesitamos ser ÃĄgiles y enviar rÃĄpidamente dado donde estamos, pero tambiÊn queremos asegurarnos de que colaboradores como tÃē obtengan una experiencia lo mÃĄs fluida posible al contribuir. Hemos elaborado esta guía de contribuciÃŗn con ese propÃŗsito, con el objetivo de familiarizarte con la base de cÃŗdigo y cÃŗmo trabajamos con los colaboradores, para que puedas pasar rÃĄpidamente a la parte divertida.
Esta guía, como Dify mismo, es un trabajo en constante progreso. Agradecemos mucho tu comprensiÃŗn si a veces se queda atrÃĄs del proyecto real, y damos la bienvenida a cualquier comentario para que podamos mejorar.
En tÊrminos de licencia, por favor tÃŗmate un minuto para leer nuestro breve [Acuerdo de Licencia y Colaborador](./LICENSE). La comunidad tambiÊn se adhiere al [cÃŗdigo de conducta](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Antes de empezar
ÂŋBuscas algo en lo que trabajar? Explora nuestros [buenos primeros issues](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) y elige uno para comenzar.
ÂŋTienes un nuevo modelo o herramienta genial para aÃąadir? Abre un PR en nuestro [repositorio de plugins](https://github.com/langgenius/dify-plugins) y muÊstranos lo que has construido.
ÂŋNecesitas actualizar un modelo existente, herramienta o corregir algunos errores? Dirígete a nuestro [repositorio oficial de plugins](https://github.com/langgenius/dify-official-plugins) y haz tu magia.
ÂĄÃšnete a la diversiÃŗn, contribuye y construyamos algo increíble juntos! 💡✨
No olvides vincular un issue existente o abrir uno nuevo en la descripciÃŗn del PR.
### Informes de errores
> [!IMPORTANT]
> Por favor, asegÃērate de incluir la siguiente informaciÃŗn al enviar un informe de error:
- Un título claro y descriptivo
- Una descripciÃŗn detallada del error, incluyendo cualquier mensaje de error
- Pasos para reproducir el error
- Comportamiento esperado
- **Logs**, si estÃĄn disponibles, para problemas del backend, esto es realmente importante, puedes encontrarlos en los logs de docker-compose
- Capturas de pantalla o videos, si es aplicable
CÃŗmo priorizamos:
| Tipo de Issue | Prioridad |
| ------------------------------------------------------------ | --------------- |
| Errores en funciones principales (servicio en la nube, no poder iniciar sesiÃŗn, aplicaciones que no funcionan, fallos de seguridad) | Crítica |
| Errores no críticos, mejoras de rendimiento | Prioridad Media |
| Correcciones menores (errores tipogrÃĄficos, UI confusa pero funcional) | Prioridad Baja |
### Solicitudes de funcionalidades
> [!NOTE]
> Por favor, asegÃērate de incluir la siguiente informaciÃŗn al enviar una solicitud de funcionalidad:
- Un título claro y descriptivo
- Una descripciÃŗn detallada de la funcionalidad
- Un caso de uso para la funcionalidad
- Cualquier otro contexto o capturas de pantalla sobre la solicitud de funcionalidad
CÃŗmo priorizamos:
| Tipo de Funcionalidad | Prioridad |
| ------------------------------------------------------------ | --------------- |
| Funcionalidades de alta prioridad etiquetadas por un miembro del equipo | Prioridad Alta |
| Solicitudes populares de funcionalidades de nuestro [tablero de comentarios de la comunidad](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Prioridad Media |
| Funcionalidades no principales y mejoras menores | Prioridad Baja |
| Valiosas pero no inmediatas | Futura-Funcionalidad |
## Enviando tu PR
### Proceso de Pull Request
1. Haz un fork del repositorio
1. Antes de redactar un PR, por favor crea un issue para discutir los cambios que quieres hacer
1. Crea una nueva rama para tus cambios
1. Por favor aÃąade pruebas para tus cambios en consecuencia
1. AsegÃērate de que tu cÃŗdigo pasa las pruebas existentes
1. Por favor vincula el issue en la descripciÃŗn del PR, `fixes #<nÃēmero_del_issue>`
1. ÂĄFusiona tu cÃŗdigo!
### ConfiguraciÃŗn del proyecto
#### Frontend
Para configurar el servicio frontend, por favor consulta nuestra [guía completa](https://github.com/langgenius/dify/blob/main/web/README.md) en el archivo `web/README.md`. Este documento proporciona instrucciones detalladas para ayudarte a configurar el entorno frontend correctamente.
#### Backend
Para configurar el servicio backend, por favor consulta nuestras [instrucciones detalladas](https://github.com/langgenius/dify/blob/main/api/README.md) en el archivo `api/README.md`. Este documento contiene una guía paso a paso para ayudarte a poner en marcha el backend sin problemas.
#### Otras cosas a tener en cuenta
Recomendamos revisar este documento cuidadosamente antes de proceder con la configuraciÃŗn, ya que contiene informaciÃŗn esencial sobre:
- Requisitos previos y dependencias
- Pasos de instalaciÃŗn
- Detalles de configuraciÃŗn
- Consejos comunes de soluciÃŗn de problemas
No dudes en contactarnos si encuentras algÃēn problema durante el proceso de configuraciÃŗn.
## Obteniendo Ayuda
Si alguna vez te quedas atascado o tienes una pregunta urgente mientras contribuyes, simplemente envíanos tus consultas a travÊs del issue relacionado de GitHub, o Ãēnete a nuestro [Discord](https://discord.gg/8Tpq4AcN9c) para una charla rÃĄpida.

CONTRIBUTING_FR.md Normal file
@@ -0,0 +1,97 @@
# CONTRIBUER
Vous cherchez donc à contribuer à Dify - c'est fantastique, nous avons hÃĸte de voir ce que vous allez faire. En tant que startup avec un personnel et un financement limitÊs, nous avons de grandes ambitions pour concevoir le flux de travail le plus intuitif pour construire et gÊrer des applications LLM. Toute aide de la communautÊ compte, vraiment.
Nous devons ÃĒtre agiles et livrer rapidement compte tenu de notre position, mais nous voulons aussi nous assurer que des contributeurs comme vous obtiennent une expÊrience aussi fluide que possible lors de leur contribution. Nous avons ÊlaborÊ ce guide de contribution dans ce but, visant à vous familiariser avec la base de code et comment nous travaillons avec les contributeurs, afin que vous puissiez rapidement passer à la partie amusante.
Ce guide, comme Dify lui-mÃĒme, est un travail en constante Êvolution. Nous apprÊcions grandement votre comprÊhension si parfois il est en retard par rapport au projet rÊel, et nous accueillons tout commentaire pour nous aider à nous amÊliorer.
En termes de licence, veuillez prendre une minute pour lire notre bref [Accord de Licence et de Contributeur](./LICENSE). La communautÊ adhère Êgalement au [code de conduite](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Avant de vous lancer
Vous cherchez quelque chose à rÊaliser ? Parcourez nos [problèmes pour dÊbutants](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) et choisissez-en un pour commencer !
Vous avez un nouveau modèle ou un nouvel outil à ajouter ? Ouvrez une PR dans notre [dÊpôt de plugins](https://github.com/langgenius/dify-plugins) et montrez-nous ce que vous avez crÊÊ.
Vous devez mettre à jour un modèle existant, un outil ou corriger des bugs ? Rendez-vous sur notre [dÊpôt officiel de plugins](https://github.com/langgenius/dify-official-plugins) et faites votre magie !
Rejoignez l'aventure, contribuez, et construisons ensemble quelque chose d'extraordinaire ! 💡✨
N'oubliez pas de lier un problème existant ou d'ouvrir un nouveau problème dans la description de votre PR.
### Rapports de bugs
> [!IMPORTANT]
> Veuillez vous assurer d'inclure les informations suivantes lors de la soumission d'un rapport de bug :
- Un titre clair et descriptif
- Une description dÊtaillÊe du bug, y compris tous les messages d'erreur
- Les Êtapes pour reproduire le bug
- Comportement attendu
- **Logs**, si disponibles, pour les problèmes de backend, c'est vraiment important, vous pouvez les trouver dans les logs de docker-compose
- Captures d'Êcran ou vidÊos, si applicable
Comment nous priorisons :
| Type de Problème | PrioritÊ |
| ------------------------------------------------------------ | --------------- |
| Bugs dans les fonctions principales (service cloud, impossibilitÊ de se connecter, applications qui ne fonctionnent pas, failles de sÊcuritÊ) | Critique |
| Bugs non critiques, amÊliorations de performance | PrioritÊ Moyenne |
| Corrections mineures (fautes de frappe, UI confuse mais fonctionnelle) | PrioritÊ Basse |
### Demandes de fonctionnalitÊs
> [!NOTE]
> Veuillez vous assurer d'inclure les informations suivantes lors de la soumission d'une demande de fonctionnalitÊ :
- Un titre clair et descriptif
- Une description dÊtaillÊe de la fonctionnalitÊ
- Un cas d'utilisation pour la fonctionnalitÊ
- Tout autre contexte ou captures d'Êcran concernant la demande de fonctionnalitÊ
Comment nous priorisons :
| Type de FonctionnalitÊ | PrioritÊ |
| ------------------------------------------------------------ | --------------- |
| FonctionnalitÊs hautement prioritaires ÊtiquetÊes par un membre de l'Êquipe | PrioritÊ Haute |
| Demandes populaires de fonctionnalitÊs de notre [tableau de feedback communautaire](https://github.com/langgenius/dify/discussions/categories/feedbacks) | PrioritÊ Moyenne |
| FonctionnalitÊs non essentielles et amÊliorations mineures | PrioritÊ Basse |
| PrÊcieuses mais non immÊdiates | FonctionnalitÊ Future |
## Soumettre votre PR
### Processus de Pull Request
1. Forkez le dÊpôt
1. Avant de rÊdiger une PR, veuillez crÊer un problème pour discuter des changements que vous souhaitez apporter
1. CrÊez une nouvelle branche pour vos changements
1. Veuillez ajouter des tests pour vos changements en consÊquence
1. Assurez-vous que votre code passe les tests existants
1. Veuillez lier le problème dans la description de la PR, `fixes #<numÊro_du_problème>`
1. Faites fusionner votre code !
### Configuration du projet
#### Frontend
Pour configurer le service frontend, veuillez consulter notre [guide complet](https://github.com/langgenius/dify/blob/main/web/README.md) dans le fichier `web/README.md`. Ce document fournit des instructions dÊtaillÊes pour vous aider à configurer correctement l'environnement frontend.
#### Backend
Pour configurer le service backend, veuillez consulter nos [instructions dÊtaillÊes](https://github.com/langgenius/dify/blob/main/api/README.md) dans le fichier `api/README.md`. Ce document contient un guide Êtape par Êtape pour vous aider à faire fonctionner le backend sans problème.
#### Autres choses à noter
Nous recommandons de revoir attentivement ce document avant de procÊder à la configuration, car il contient des informations essentielles sur :
- PrÊrequis et dÊpendances
- Étapes d'installation
- DÊtails de configuration
- Conseils courants de dÊpannage
N'hÊsitez pas à nous contacter si vous rencontrez des problèmes pendant le processus de configuration.
## Obtenir de l'aide
Si jamais vous ÃĒtes bloquÊ ou avez une question urgente en contribuant, envoyez-nous simplement vos questions via le problème GitHub concernÊ, ou rejoignez notre [Discord](https://discord.gg/8Tpq4AcN9c) pour une discussion rapide.

CONTRIBUTING_JA.md Normal file
@@ -0,0 +1,97 @@
# č˛ĸįŒŽã‚Ŧイド
DifyãĢč˛ĸįŒŽã—ã‚ˆã†ã¨ãŠč€ƒãˆã§ã™ã‹īŧŸį´ æ™´ã‚‰ã—ã„ã§ã™ã­ã€‚į§ãŸãĄã¯ã€ã‚ãĒたがおぎようãĒč˛ĸįŒŽã‚’ã—ãĻくださるぎか、とãĻもæĨŊしãŋãĢしãĻいぞす。゚ã‚ŋãƒŧトã‚ĸップとしãĻ限られたäēēå“Ąã¨čŗ‡é‡‘ãŽä¸­ã§ã€LLMã‚ĸプãƒĒã‚ąãƒŧã‚ˇãƒ§ãƒŗãŽæ§‹į¯‰ã¨įŽĄį†ãŽãŸã‚ãŽæœ€ã‚‚į›´æ„Ÿįš„ãĒワãƒŧクフロãƒŧã‚’č¨­č¨ˆã™ã‚‹ã¨ã„ã†å¤§ããĒį›Žæ¨™ã‚’æŒãŖãĻã„ãžã™ã€‚ã‚ŗãƒŸãƒĨãƒ‹ãƒ†ã‚Ŗã‹ã‚‰ãŽã‚ã‚‰ã‚†ã‚‹æ”¯æ´ãŒã€æœŦåŊ“ãĢ重čρãĒæ„å‘ŗã‚’æŒãĄãžã™ã€‚
į§ãŸãĄã¯čŋ…速ãĢ開į™ēã‚’é€˛ã‚ã‚‹åŋ…čĻãŒã‚ã‚Šãžã™ãŒã€åŒæ™‚ãĢč˛ĸįŒŽč€…ãŽįš†æ§˜ãĢã¨ãŖãĻ゚ムãƒŧã‚ēãĒįĩŒé¨“ã‚’æäž›ã—ãŸã„ã¨č€ƒãˆãĻいぞす。こぎã‚Ŧã‚¤ãƒ‰ã¯ã€ã‚ŗãƒŧドベãƒŧã‚šã¨į§ãŸãĄãŽč˛ĸįŒŽč€…ã¨ãŽå”åƒæ–šæŗ•ã‚’į†č§Ŗã—ãĻいただき、すぐãĢæĨŊしい開į™ēãĢ取り掛かれるようãĢã™ã‚‹ã“ã¨ã‚’į›Žįš„ã¨ã—ãĻいぞす。
こぎã‚Ŧイドは、Difyč‡ĒäŊ“と同様ãĢ、常ãĢé€˛åŒ–ã—įļšã‘ãĻã„ãžã™ã€‚åŽŸéš›ãŽãƒ—ãƒ­ã‚¸ã‚§ã‚¯ãƒˆãŽé€˛čĄŒįŠļæŗã¨å¤šå°‘ãŽãšã‚ŒãŒį”Ÿã˜ã‚‹å ´åˆã‚‚ã”ã–ã„ãžã™ãŒã€ã”į†č§Ŗã„ãŸã ã‘ãžã™ã¨åš¸ã„ã§ã™ã€‚æ”šå–„ãŽãŸã‚ãŽãƒ•ã‚Ŗãƒŧドバックも歓čŋŽã„たしぞす。
ナイã‚ģãƒŗã‚šãĢついãĻは、[ナイã‚ģãƒŗã‚šã¨č˛ĸįŒŽč€…åŒæ„æ›¸](./LICENSE)をご一čĒ­ãã ã•ã„ã€‚ãžãŸã€ã‚ŗãƒŸãƒĨãƒ‹ãƒ†ã‚Ŗã¯[čĄŒå‹•čĻį¯„](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md)ãĢåž“ãŖãĻいぞす。
## 始める前ãĢ
取りįĩ„むずきčĒ˛éĄŒã‚’ãŠæŽĸしですかīŧŸ[初åŋƒč€…向けぎčĒ˛éĄŒ](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22)から選んで始めãĻãŋぞしょうīŧ
新しいãƒĸデãƒĢãƒŠãƒŗã‚ŋイムやツãƒŧãƒĢをčŋŊ加したいですかīŧŸ[ãƒ—ãƒŠã‚°ã‚¤ãƒŗãƒĒポジトãƒĒ](https://github.com/langgenius/dify-plugins)でPRをäŊœæˆã—、あãĒたぎ成果をčĻ‹ã›ãĻください。
æ—ĸ存ぎãƒĸデãƒĢãƒŠãƒŗã‚ŋイムやツãƒŧãƒĢぎ更新、バグäŋŽæ­Ŗã‚’したいですかīŧŸ[å…Ŧåŧãƒ—ãƒŠã‚°ã‚¤ãƒŗãƒĒポジトãƒĒ](https://github.com/langgenius/dify-official-plugins)でäŊœæĨ­ã‚’é€˛ã‚ãĻください。
参加しãĻ、č˛ĸįŒŽã—ãĻã€ä¸€įˇ’ãĢį´ æ™´ã‚‰ã—ã„ã‚‚ãŽã‚’äŊœã‚Šãžã—ょうīŧđŸ’Ąâœ¨
PRぎčĒŦ明ãĢは、æ—ĸå­˜ãŽã‚¤ã‚ˇãƒĨãƒŧへぎãƒĒãƒŗã‚¯ã‚’åĢã‚ã‚‹ã‹ã€æ–°ã—ã„ã‚¤ã‚ˇãƒĨãƒŧをäŊœæˆã™ã‚‹ã“とをåŋ˜ã‚ŒãĒいでください。
### ãƒã‚°å ąå‘Š
> [!IMPORTANT]
> ãƒã‚°å ąå‘Šæ™‚ãĢは、äģĨä¸‹ãŽæƒ…å ąã‚’åŋ…ずåĢめãĻくださいīŧš
- 明įĸēで分かりやすいã‚ŋイトãƒĢ
- エナãƒŧãƒĄãƒƒã‚ģãƒŧジをåĢã‚€čŠŗį´°ãĒバグぎčĒŦ明
- ãƒã‚°ãŽå†įžæ‰‹é †
- 期垅される動äŊœ
- ãƒãƒƒã‚¯ã‚¨ãƒŗãƒ‰ãŽå•éĄŒãŽå ´åˆã¯**ログ**īŧˆdocker-composeぎログでįĸēčĒå¯čƒŊīŧ‰ãŒéžå¸¸ãĢ重čĻã§ã™
- 芲åŊ“する場合ぱクãƒĒãƒŧãƒŗã‚ˇãƒ§ãƒƒãƒˆã‚„å‹•į”ģ
å„Ē先順äŊãŽäģ˜ã‘æ–šīŧš
| å•éĄŒãŽį¨ŽéĄž | å„Ē先åēĻ |
| ------------------------------------------------------------ | --------- |
| ã‚ŗã‚ĸ抟čƒŊぎバグīŧˆã‚¯ãƒŠã‚Ļドã‚ĩãƒŧãƒ“ã‚šã€ãƒ­ã‚°ã‚¤ãƒŗä¸å¯ã€ã‚ĸプãƒĒã‚ąãƒŧã‚ˇãƒ§ãƒŗä¸å…ˇåˆã€ã‚ģキãƒĨãƒĒãƒ†ã‚Ŗč„†åŧ࿀§īŧ‰ | 最重čρ |
| 重čρåēĻぎäŊŽã„バグ、パフりãƒŧãƒžãƒŗã‚šæ”šå–„ | 䏭ፋåēĻ |
| čģŊ垎ãĒäŋŽæ­Ŗīŧˆã‚ŋイプミ゚、分かりãĢくいが動äŊœã™ã‚‹UIīŧ‰ | äŊŽ |
### 抟čƒŊãƒĒクエ゚ト
> [!NOTE]
> 抟čƒŊãƒĒクエ゚ト時ãĢは、äģĨä¸‹ãŽæƒ…å ąã‚’åŋ…ずåĢめãĻくださいīŧš
- 明įĸēで分かりやすいã‚ŋイトãƒĢ
- 抟čƒŊãŽčŠŗį´°ãĒčĒŦ明
- äŊŋᔍäē‹äž‹
- そぎäģ–ãŽæ–‡č„ˆã‚„į”ģéĸぎ゚クãƒĒãƒŧãƒŗã‚ˇãƒ§ãƒƒãƒˆ
å„Ē先順äŊãŽäģ˜ã‘æ–šīŧš
| 抟čƒŊãŽį¨ŽéĄž | å„Ē先åēĻ |
| ------------------------------------------------------------ | --------- |
| チãƒŧãƒ ãƒĄãƒŗãƒãƒŧãĢã‚ˆãŖãĻé̘å„Ē先åēĻとナベãƒĢäģ˜ã‘された抟čƒŊ | é̘ |
| [ã‚ŗãƒŸãƒĨãƒ‹ãƒ†ã‚Ŗãƒ•ã‚Ŗãƒŧドボãƒŧド](https://github.com/langgenius/dify/discussions/categories/feedbacks)でぎäēēæ°—ぎ抟čƒŊãƒĒクエ゚ト | 䏭ፋåēĻ |
| éžã‚ŗã‚ĸ抟čƒŊとčģŊ垎ãĒ攚善 | äŊŽ |
| äžĄå€¤ã¯ã‚ã‚‹ãŒįˇŠæ€Ĩ性ぎäŊŽã„もぎ | 将æĨ寞åŋœ |
## PRぎ提å‡ē
### プãƒĢãƒĒクエ゚トぎプロã‚ģ゚
1. ãƒĒポジトãƒĒをフりãƒŧクする
1. PRをäŊœæˆã™ã‚‹å‰ãĢ、変更内厚ãĢついãĻã‚¤ã‚ˇãƒĨãƒŧã§č­°čĢ–ã™ã‚‹
1. å¤‰æ›´į”¨ãŽæ–°ã—ã„ãƒ–ãƒŠãƒŗãƒã‚’äŊœæˆã™ã‚‹
1. 変更ãĢåŋœã˜ãŸãƒ†ã‚šãƒˆã‚’čŋŊ加する
1. æ—ĸ存ぎテ゚トをパ゚することをįĸēčĒã™ã‚‹
1. PRぎčĒŦ明文ãĢã‚¤ã‚ˇãƒĨãƒŧをãƒĒãƒŗã‚¯ã™ã‚‹īŧˆ`fixes #<issue_number>`īŧ‰
1. マãƒŧジ厌äē†īŧ
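上ぎプãƒĢãƒĒクエ゚ト手順ぎうãĄã€ãƒ–ãƒŠãƒŗãƒäŊœæˆã‹ã‚‰ã‚ŗミットぞでをã‚ˇã‚§ãƒĢã§ã‚šã‚ąãƒƒãƒã™ã‚‹ã¨æŦĄãŽã‚ˆã†ãĢãĒりぞす。ブãƒŠãƒŗãƒåã‚„ã‚ŗãƒŸãƒƒãƒˆãƒĄãƒƒã‚ģãƒŧジはäģŽãŽäž‹ã§ã€ãƒ‡ãƒĸとしãĻ一時ディレクトãƒĒãĢäŊŋい捨ãĻぎãƒĒポジトãƒĒをäŊœãŖãĻã„ãžã™īŧˆåŽŸéš›ã¯ fork しãĻ clone したãƒĒãƒã‚¸ãƒˆãƒĒå†…ã§åŽŸčĄŒã—ãžã™īŧ‰īŧš

```shell
# デãƒĸᔍぎ一時ãƒĒポジトãƒĒをäŊœæˆīŧˆåŽŸéš›ã¯ fork/clone したãƒĒãƒã‚¸ãƒˆãƒĒå†…ã§åŽŸčĄŒīŧ‰
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "init"
# å¤‰æ›´į”¨ãŽæ–°ã—ã„ãƒ–ãƒŠãƒŗãƒã‚’äŊœæˆīŧˆãƒ–ãƒŠãƒŗãƒåã¯äģŽãŽäž‹īŧ‰
git checkout -q -b fix/issue-1234
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "fix: resolve issue #1234"
# į›´čŋ‘ãŽã‚ŗãƒŸãƒƒãƒˆã‚’įĸēčĒ
git log --oneline -1
```

こぎあと `git push` でč‡Ē分ぎ fork へプッã‚ˇãƒĨし、GitHub 上で PR ã‚’é–‹ããžã™ã€‚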
### プロジェクトぎã‚ģットã‚ĸップ
#### ãƒ•ãƒ­ãƒŗãƒˆã‚¨ãƒŗãƒ‰
ãƒ•ãƒ­ãƒŗãƒˆã‚¨ãƒŗãƒ‰ã‚ĩãƒŧビ゚ぎã‚ģットã‚ĸップãĢついãĻは、`web/README.md`ぎ[ã‚Ŧイド](https://github.com/langgenius/dify/blob/main/web/README.md)ã‚’å‚į…§ã—ãĻください。こぎドキãƒĨãƒĄãƒŗãƒˆãĢã¯ã€ãƒ•ãƒ­ãƒŗãƒˆã‚¨ãƒŗãƒ‰į’°åĸƒã‚’遊切ãĢã‚ģットã‚ĸãƒƒãƒ—ã™ã‚‹ãŸã‚ãŽčŠŗį´°ãĒæ‰‹é †ãŒč¨˜čŧ‰ã•れãĻいぞす。
#### ãƒãƒƒã‚¯ã‚¨ãƒŗãƒ‰
ãƒãƒƒã‚¯ã‚¨ãƒŗãƒ‰ã‚ĩãƒŧビ゚ぎã‚ģットã‚ĸップãĢついãĻは、`api/README.md`ぎ[手順](https://github.com/langgenius/dify/blob/main/api/README.md)ã‚’å‚į…§ã—ãĻください。こぎドキãƒĨãƒĄãƒŗãƒˆãĢã¯ã€ãƒãƒƒã‚¯ã‚¨ãƒŗãƒ‰ã‚’æ­Ŗã—ãå‹•äŊœã•せるためぎ゚テップバイ゚テップぎã‚ŦイドがåĢぞれãĻいぞす。
#### そぎäģ–ãŽæŗ¨æ„į‚š
ã‚ģットã‚ĸãƒƒãƒ—ã‚’é€˛ã‚ã‚‹å‰ãĢ、äģĨ下ぎ重čρãĒæƒ…å ąãŒåĢぞれãĻいるため、こぎドキãƒĨãƒĄãƒŗãƒˆã‚’æŗ¨æ„æˇąãįĸēčĒã™ã‚‹ã“ã¨ã‚’ãŠå‹§ã‚ã—ãžã™īŧš
- å‰ææĄäģļと䞝存é–ĸäŋ‚
- ã‚¤ãƒŗã‚šãƒˆãƒŧãƒĢ手順
- č¨­åŽšãŽčŠŗį´°
- 一čˆŦįš„ãĒトナブãƒĢã‚ˇãƒĨãƒŧãƒ†ã‚Ŗãƒŗã‚°ãŽãƒ’ãƒŗãƒˆ
ã‚ģットã‚ĸップ中ãĢå•éĄŒãŒį™ēį”Ÿã—ãŸå ´åˆã¯ã€ãŠæ°—čģŊãĢお問い合わせください。
## ã‚ĩポãƒŧトを受ける
č˛ĸįŒŽä¸­ãĢčĄŒãčŠ°ãžãŖãŸã‚Šã€įˇŠæ€ĨぎčŗĒ問がある場合は、é–ĸé€Ŗã™ã‚‹GitHubã‚¤ã‚ˇãƒĨãƒŧでčŗĒ問するか、[Discord](https://discord.gg/8Tpq4AcN9c)で気čģŊãĢãƒãƒŖãƒƒãƒˆã—ãĻください。

CONTRIBUTING_KR.md Normal file
@@ -0,0 +1,97 @@
# 기ė—Ŧ하기
Dify뗐 기ė—Ŧí•˜ë ¤ęŗ  í•˜ė‹œëŠ”ęĩ°ėš” - ė •ë§ ëŠ‹ė§‘ë‹ˆë‹¤, ë‹šė‹ ė´ ëŦ´ė—‡ė„ í• ė§€ 기대가 됩니다. ė¸ë Ĩęŗŧ ėžę¸ˆė´ ė œí•œëœ ėŠ¤íƒ€íŠ¸ė—…ėœŧëĄœė„œ, 뚰ëĻŦ는 LLM ė• í”ŒëĻŦėŧ€ė´ė…˜ė„ ęĩŦėļ•í•˜ęŗ  관ëĻŦ하기 ėœ„í•œ 가ėžĨ ė§ę´€ė ė¸ ė›ŒíŦí”ŒëĄœėš°ëĨŧ ė„¤ęŗ„í•˜ęŗ ėž 하는 큰 ė•ŧë§ė„ ę°€ė§€ęŗ  ėžˆėŠĩ니다. ėģ¤ëŽ¤ë‹ˆí‹°ė˜ ëĒ¨ë“  ë„ė›€ė€ ė •ë§ ė¤‘ėš”í•Šë‹ˆë‹¤.
뚰ëĻŦ는 현ėžŦ ėƒí™Šė—ė„œ ë¯ŧė˛Ší•˜ę˛Œ ëš ëĨ´ę˛Œ ë°°íŦ해ė•ŧ í•˜ė§€ë§Œ, ë™ė‹œė— ë‹šė‹ ęŗŧ ę°™ė€ 기ė—Ŧėžë“¤ė´ 기ė—Ŧ하는 ęŗŧė •ė—ė„œ ėĩœëŒ€í•œ ė›í™œí•œ ę˛Ŋí—˜ė„ ė–ģė„ 눘 ėžˆë„ëĄ í•˜ęŗ  ė‹ļėŠĩ니다. 뚰ëĻŦ는 ė´ëŸŦ한 ëĒŠė ėœŧ로 ė´ 기ė—Ŧ ę°€ė´ë“œëĨŧ ėž‘ė„ąí–ˆėœŧ늰, ė—ŦëŸŦëļ„ė´ ėŊ”ë“œë˛ ė´ėŠ¤ė™€ 뚰ëĻŦ가 기ė—Ŧėžë“¤ęŗŧ ė–´ë–ģ枌 í˜‘ė—…í•˜ëŠ”ė§€ė— 대해 ėšœėˆ™í•´ė§ˆ 눘 ėžˆë„ëĄ ë•ęŗ , ëš ëĨ´ę˛Œ ėžŦë¯¸ėžˆëŠ” ëļ€ëļ„ėœŧ로 ë„˜ė–´ę°ˆ 눘 ėžˆë„ëĄ í•˜ęŗ ėž 합니다.
ė´ ę°€ė´ë“œëŠ” Dify ėžė˛´ė™€ 마ė°Ŧę°€ė§€ëĄœ ëŠėž„ė—†ė´ ė§„í–‰ ė¤‘ė¸ ėž‘ė—…ėž…ë‹ˆë‹¤. 때로는 ė‹¤ė œ í”„ëĄœė íŠ¸ëŗ´ë‹¤ ë’¤ė˛˜ė§ˆ 눘 ėžˆë‹¤ëŠ” ė ė„ ė´í•´í•´ ėŖŧė‹œëŠ´ 감ė‚Ŧ하겠ėœŧ늰, ę°œė„ ė„ ėœ„í•œ í”ŧë“œë°ąė€ ė–¸ė œë“ ė§€ í™˜ė˜í•Šë‹ˆë‹¤.
ëŧė´ė„ŧ늤 ė¸ĄëŠ´ė—ė„œ, 간ëžĩ한 [ëŧė´ė„ŧ늤 및 기ė—Ŧėž ë™ė˜ė„œ](./LICENSE)ëĨŧ ėŊė–´ëŗ´ëŠ” ė‹œę°„ė„ 氀렏ėŖŧė„¸ėš”. ėģ¤ëŽ¤ë‹ˆí‹°ëŠ” 또한 [행동 강령](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md)ė„ ė¤€ėˆ˜í•Šë‹ˆë‹¤.
## ė‹œėž‘í•˜ę¸° ė „ė—
ė˛˜ëĻŦ할 ėž‘ė—…ė„ ė°žęŗ  ęŗ„ė‹ ę°€ėš”? [ė´ˆëŗ´ėžëĨŧ ėœ„í•œ ė´ėŠˆ](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22)ëĨŧ ė‚´íŽ´ëŗ´ęŗ  ė‹œėž‘í•  ę˛ƒė„ ė„ íƒí•˜ė„¸ėš”!
ėļ”가할 ėƒˆëĄœėš´ ëĒ¨ë¸ ëŸ°íƒ€ėž„ė´ë‚˜ 도ęĩŦ가 ėžˆë‚˜ėš”? 뚰ëĻŦė˜ [플ëŸŦęˇ¸ė¸ ė €ėžĨė†Œ](https://github.com/langgenius/dify-plugins)뗐 PRė„ ė—´ęŗ  ë‹šė‹ ė´ 만든 ę˛ƒė„ ëŗ´ė—ŦėŖŧė„¸ėš”.
ę¸°ėĄ´ ëĒ¨ë¸ ëŸ°íƒ€ėž„, 도ęĩŦëĨŧ ė—…ë°ė´íŠ¸í•˜ęą°ë‚˜ 버그ëĨŧ ėˆ˜ė •í•´ė•ŧ í•˜ë‚˜ėš”? 뚰ëĻŦė˜ [ęŗĩė‹ 플ëŸŦęˇ¸ė¸ ė €ėžĨė†Œ](https://github.com/langgenius/dify-official-plugins)로 ę°€ė„œ ë‹šė‹ ė˜ ë§ˆë˛•ė„ íŽŧėš˜ė„¸ėš”!
함ęģ˜ ė°¸ė—Ŧí•˜ęŗ , 기ė—Ŧí•˜ęŗ , ëŠ‹ė§„ ę˛ƒė„ 함ęģ˜ ë§Œë“¤ė–´ ë´…ė‹œë‹¤! 💡✨
PR ė„¤ëĒ…ė— ę¸°ėĄ´ ė´ėŠˆëĨŧ ė—°ę˛°í•˜ęą°ë‚˜ ėƒˆ ė´ėŠˆëĨŧ ė—Ŧ는 ę˛ƒė„ ėžŠė§€ ë§ˆė„¸ėš”.
### 버그 ëŗ´ęŗ 
> [!IMPORTANT]
> 버그 ëŗ´ęŗ ė„œëĨŧ ė œėļœí•  때 ë‹¤ėŒ ė •ëŗ´ëĨŧ íŦ함해 ėŖŧė„¸ėš”:
- ëĒ…í™•í•˜ęŗ  ė„¤ëĒ…ė ė¸ ė œëĒŠ
- 똤ëĨ˜ ëŠ”ė‹œė§€ëĨŧ íŦ함한 ë˛„ęˇ¸ė— 대한 ėƒė„¸í•œ ė„¤ëĒ…
- 버그ëĨŧ ėžŦ현하는 ë‹¨ęŗ„
- ė˜ˆėƒë˜ëŠ” ë™ėž‘
- 가ëŠĨ한 ę˛Ŋ뚰 **로그**, ë°ąė—”ë“œ ė´ėŠˆė˜ ę˛Ŋ뚰 ë§¤ėš° ė¤‘ėš”í•Šë‹ˆë‹¤. docker-compose ëĄœęˇ¸ė—ė„œ ė°žė„ 눘 ėžˆėŠĩ니다
- 해당되는 ę˛Ŋ뚰 늤íŦëĻ°ėƒˇ 또는 ëš„ë””ė˜¤
ėš°ė„ ėˆœėœ„ 결렕 방법:
| ė´ėŠˆ ėœ í˜• | ėš°ė„ ėˆœėœ„ |
| ------------------------------------------------------------ | --------------- |
| í•ĩė‹Ŧ 기ëŠĨė˜ 버그(클ëŧėš°ë“œ ė„œëš„ėŠ¤, ëĄœęˇ¸ė¸ ëļˆę°€, ė• í”ŒëĻŦėŧ€ė´ė…˜ ėž‘ë™ ëļˆëŠĨ, ëŗ´ė•ˆ ėˇ¨ė•Ŋ점) | ė¤‘ëŒ€ |
| ëš„ė¤‘ėš” 버그, ė„ąëŠĨ í–Ĩėƒ | ė¤‘ę°„ ėš°ė„ ėˆœėœ„ |
| ė‚Ŧė†Œí•œ ėˆ˜ė •(ė˜¤íƒ€, í˜ŧëž€ėŠ¤ëŸŊė§€ë§Œ ėž‘ë™í•˜ëŠ” UI) | ë‚Žė€ ėš°ė„ ėˆœėœ„ |
### 기ëŠĨ ėš”ė˛­
> [!NOTE]
> 기ëŠĨ ėš”ė˛­ė„ ė œėļœí•  때 ë‹¤ėŒ ė •ëŗ´ëĨŧ íŦ함해 ėŖŧė„¸ėš”:
- ëĒ…í™•í•˜ęŗ  ė„¤ëĒ…ė ė¸ ė œëĒŠ
- 기ëŠĨ뗐 대한 ėƒė„¸í•œ ė„¤ëĒ…
- 해당 기ëŠĨė˜ ė‚ŦėšŠ ė‚Ŧ례
- 기ëŠĨ ėš”ė˛­ė— 관한 기타 ėģ¨í…ėŠ¤íŠ¸ 또는 늤íŦëĻ°ėƒˇ
ėš°ė„ ėˆœėœ„ 결렕 방법:
| 기ëŠĨ ėœ í˜• | ėš°ė„ ėˆœėœ„ |
| ------------------------------------------------------------ | --------------- |
| 팀 ęĩŦė„ąė›ė— ė˜í•´ ë ˆė´ë¸”ė´ ė§€ė •ëœ ęŗ ėš°ė„ ėˆœėœ„ 기ëŠĨ | ë†’ė€ ėš°ė„ ėˆœėœ„ |
| 뚰ëĻŦė˜ [ėģ¤ëŽ¤ë‹ˆí‹° í”ŧ드백 ëŗ´ë“œ](https://github.com/langgenius/dify/discussions/categories/feedbacks)ė—ė„œ ė¸ę¸° ėžˆëŠ” 기ëŠĨ ėš”ė˛­ | 뤑氄 ėš°ė„ ėˆœėœ„ |
| 비í•ĩė‹Ŧ 기ëŠĨ 및 ė‚Ŧė†Œí•œ ę°œė„  | ë‚Žė€ ėš°ė„ ėˆœėœ„ |
| ę°€ėš˜ ėžˆė§€ë§Œ ėĻ‰ė‹œ í•„ėš”í•˜ė§€ ė•Šė€ 기ëŠĨ | 미래 기ëŠĨ |
## PR ė œėļœí•˜ę¸°
### Pull Request í”„ëĄœė„¸ėŠ¤
1. ė €ėžĨė†ŒëĨŧ íŦíŦí•˜ė„¸ėš”
1. PRė„ ėž‘ė„ąí•˜ę¸° 렄뗐, ëŗ€ę˛Ŋí•˜ęŗ ėž 하는 ë‚´ėšŠė— 대해 ë…ŧė˜í•˜ę¸° ėœ„í•œ ė´ėŠˆëĨŧ ėƒė„ąí•´ ėŖŧė„¸ėš”
1. ëŗ€ę˛Ŋ ė‚Ŧí•­ė„ ėœ„í•œ 냈 ë¸Œëžœėš˜ëĨŧ ë§Œë“œė„¸ėš”
1. ëŗ€ę˛Ŋ ė‚Ŧí•­ė— 대한 í…ŒėŠ¤íŠ¸ëĨŧ ė ė ˆížˆ ėļ”가해 ėŖŧė„¸ėš”
1. ėŊ”ë“œę°€ ę¸°ėĄ´ í…ŒėŠ¤íŠ¸ëĨŧ í†ĩęŗŧí•˜ëŠ”ė§€ í™•ė¸í•˜ė„¸ėš”
1. PR ė„¤ëĒ…ė— ė´ėŠˆëĨŧ ė—°ę˛°í•´ ėŖŧė„¸ėš”, `fixes #<ė´ėŠˆ_번호>`
1. ëŗ‘í•Š ė™„ëŖŒ!
### í”„ëĄœė íŠ¸ ė„¤ė •í•˜ę¸°
#### í”„ëĄ íŠ¸ė—”ë“œ
í”„ëĄ íŠ¸ė—”ë“œ ė„œëš„ėŠ¤ëĨŧ ė„¤ė •í•˜ë ¤ëŠ´, `web/README.md` 파ėŧ뗐 ėžˆëŠ” 뚰ëĻŦė˜ [ėĸ…í•Š ę°€ė´ë“œ](https://github.com/langgenius/dify/blob/main/web/README.md)ëĨŧ ė°¸ėĄ°í•˜ė„¸ėš”. ė´ ëŦ¸ė„œëŠ” í”„ëĄ íŠ¸ė—”ë“œ 환ę˛Ŋė„ ė ė ˆížˆ ė„¤ė •í•˜ëŠ” 데 ë„ė›€ė´ 되는 ėžė„¸í•œ ė§€ėš¨ė„ ė œęŗĩ합니다.
#### ë°ąė—”ë“œ
ë°ąė—”ë“œ ė„œëš„ėŠ¤ëĨŧ ė„¤ė •í•˜ë ¤ëŠ´, `api/README.md` 파ėŧ뗐 ėžˆëŠ” 뚰ëĻŦė˜ [ėƒė„¸ ė§€ėš¨](https://github.com/langgenius/dify/blob/main/api/README.md)ė„ ė°¸ėĄ°í•˜ė„¸ėš”. ė´ ëŦ¸ė„œëŠ” ë°ąė—”ë“œëĨŧ ė›í™œí•˜ę˛Œ ė‹¤í–‰í•˜ëŠ” 데 ë„ė›€ė´ 되는 ë‹¨ęŗ„ëŗ„ ę°€ė´ë“œëĨŧ íŦí•¨í•˜ęŗ  ėžˆėŠĩ니다.
#### 기타 ėœ ė˜ ė‚Ŧ항
ė„¤ė •ė„ ė§„í–‰í•˜ę¸° ė „ė— ė´ ëŦ¸ė„œëĨŧ ėŖŧė˜ 깊게 검토하는 ę˛ƒė„ ęļŒėžĨ합니다. ë‹¤ėŒęŗŧ ę°™ė€ í•„ėˆ˜ ė •ëŗ´ę°€ íŦí•¨ë˜ė–´ ėžˆėŠĩ니다:
- í•„ėˆ˜ ėĄ°ęą´ 및 ėĸ…ė†ė„ą
- ė„¤ėš˜ ë‹¨ęŗ„
- ęĩŦė„ą ė„¸ëļ€ ė •ëŗ´
- ėŧë°˜ė ė¸ ëŦ¸ė œ 해결 팁
ė„¤ė • ęŗŧė •ė—ė„œ ëŦ¸ė œę°€ ë°œėƒí•˜ëŠ´ ė–¸ė œë“ ė§€ ė—°ëŊ해 ėŖŧė„¸ėš”.
## ë„ė›€ 받기
기ė—Ŧ하는 ë™ė•ˆ 막히거나 긴급한 ė§ˆëŦ¸ė´ ėžˆėœŧ늴, 관련 GitHub ė´ėŠˆëĨŧ í†ĩ해 ė§ˆëŦ¸ė„ ëŗ´ë‚´ęą°ë‚˜, ëš ëĨ¸ 대화ëĨŧ ėœ„í•´ 뚰ëĻŦė˜ [Discord](https://discord.gg/8Tpq4AcN9c)뗐 ė°¸ė—Ŧí•˜ė„¸ėš”.

CONTRIBUTING_PT.md Normal file
@@ -0,0 +1,97 @@
# CONTRIBUINDO
EntÃŖo vocÃĒ estÃĄ procurando contribuir para o Dify - isso Ê incrível, mal podemos esperar para ver o que vocÃĒ vai fazer. Como uma startup com equipe e financiamento limitados, temos grandes ambiçÃĩes de projetar o fluxo de trabalho mais intuitivo para construir e gerenciar aplicaçÃĩes LLM. Qualquer ajuda da comunidade conta, verdadeiramente.
Precisamos ser ÃĄgeis e entregar rapidamente considerando onde estamos, mas tambÊm queremos garantir que colaboradores como vocÃĒ tenham uma experiÃĒncia o mais tranquila possível ao contribuir. Montamos este guia de contribuiÃ§ÃŖo com esse propÃŗsito, visando familiarizÃĄ-lo com a base de cÃŗdigo e como trabalhamos com os colaboradores, para que vocÃĒ possa rapidamente passar para a parte divertida.
Este guia, como o prÃŗprio Dify, Ê um trabalho em constante evoluÃ§ÃŖo. Agradecemos muito a sua compreensÃŖo se às vezes ele ficar atrasado em relaÃ§ÃŖo ao projeto real, e damos as boas-vindas a qualquer feedback para que possamos melhorar.
Em termos de licenciamento, por favor, dedique um minuto para ler nosso breve [Acordo de Licença e Contribuidor](./LICENSE). A comunidade tambÊm adere ao [cÃŗdigo de conduta](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Antes de começar
Procurando algo para resolver? Navegue por nossos [problemas para iniciantes](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) e escolha um para começar!
Tem um novo modelo ou ferramenta para adicionar? Abra um PR em nosso [repositÃŗrio de plugins](https://github.com/langgenius/dify-plugins) e mostre-nos o que vocÃĒ construiu.
Precisa atualizar um modelo existente, ferramenta ou corrigir alguns bugs? VÃĄ para nosso [repositÃŗrio oficial de plugins](https://github.com/langgenius/dify-official-plugins) e faça sua mÃĄgica!
Junte-se à diversÃŖo, contribua e vamos construir algo incrível juntos! 💡✨
NÃŖo se esqueça de vincular um problema existente ou abrir um novo problema na descriÃ§ÃŖo do PR.
### RelatÃŗrios de bugs
> [!IMPORTANT]
> Por favor, certifique-se de incluir as seguintes informaçÃĩes ao enviar um relatÃŗrio de bug:
- Um título claro e descritivo
- Uma descriÃ§ÃŖo detalhada do bug, incluindo quaisquer mensagens de erro
- Passos para reproduzir o bug
- Comportamento esperado
- **Logs**, se disponíveis, para problemas de backend, isso Ê realmente importante, vocÃĒ pode encontrÃĄ-los nos logs do docker-compose
- Capturas de tela ou vídeos, se aplicÃĄvel
Como priorizamos:
| Tipo de Problema | Prioridade |
| ------------------------------------------------------------ | --------------- |
| Bugs em funçÃĩes centrais (serviço em nuvem, nÃŖo conseguir fazer login, aplicaçÃĩes nÃŖo funcionando, falhas de segurança) | Crítica |
| Bugs nÃŖo críticos, melhorias de desempenho | Prioridade MÊdia |
| CorreçÃĩes menores (erros de digitaÃ§ÃŖo, interface confusa mas funcional) | Prioridade Baixa |
### SolicitaçÃĩes de recursos
> [!NOTE]
> Por favor, certifique-se de incluir as seguintes informaçÃĩes ao enviar uma solicitaÃ§ÃŖo de recurso:
- Um título claro e descritivo
- Uma descriÃ§ÃŖo detalhada do recurso
- Um caso de uso para o recurso
- Qualquer outro contexto ou capturas de tela sobre a solicitaÃ§ÃŖo de recurso
Como priorizamos:
| Tipo de Recurso | Prioridade |
| ------------------------------------------------------------ | --------------- |
| Recursos de alta prioridade conforme rotulado por um membro da equipe | Prioridade Alta |
| SolicitaçÃĩes populares de recursos do nosso [quadro de feedback da comunidade](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Prioridade MÊdia |
| Recursos nÃŖo essenciais e melhorias menores | Prioridade Baixa |
| Valiosos mas nÃŖo imediatos | Recurso Futuro |
## Enviando seu PR
### Processo de Pull Request
1. Faça um fork do repositÃŗrio
1. Antes de elaborar um PR, por favor crie um problema para discutir as mudanças que vocÃĒ quer fazer
1. Crie um novo branch para suas alteraçÃĩes
1. Por favor, adicione testes para suas alteraçÃĩes conforme apropriado
1. Certifique-se de que seu cÃŗdigo passa nos testes existentes
1. Por favor, vincule o problema na descriÃ§ÃŖo do PR, `fixes #<nÃēmero_do_problema>`
1. Faça o merge do seu cÃŗdigo!
### Configurando o projeto
#### Frontend
Para configurar o serviço frontend, por favor consulte nosso [guia abrangente](https://github.com/langgenius/dify/blob/main/web/README.md) no arquivo `web/README.md`. Este documento fornece instruçÃĩes detalhadas para ajudÃĄ-lo a configurar o ambiente frontend adequadamente.
#### Backend
Para configurar o serviço backend, por favor consulte nossas [instruçÃĩes detalhadas](https://github.com/langgenius/dify/blob/main/api/README.md) no arquivo `api/README.md`. Este documento contÊm um guia passo a passo para ajudÃĄ-lo a colocar o backend em funcionamento sem problemas.
#### Outras coisas a observar
Recomendamos revisar este documento cuidadosamente antes de prosseguir com a configuração, pois ele contém informações essenciais sobre:
- Pré-requisitos e dependências
- Etapas de instalação
- Detalhes de configuração
- Dicas comuns de solução de problemas
Sinta-se à vontade para entrar em contato se encontrar quaisquer problemas durante o processo de configuração.
## Obtendo Ajuda
Se você ficar preso ou tiver uma dúvida urgente enquanto contribui, simplesmente envie suas perguntas através do problema relacionado no GitHub, ou entre no nosso [Discord](https://discord.gg/8Tpq4AcN9c) para uma conversa rápida.

97
CONTRIBUTING_TR.md Normal file

@@ -0,0 +1,97 @@
# KATKIDA BULUNMAK
Demek Dify'a katkıda bulunmak istiyorsunuz - bu harika, ne yapacağınızı görmek için sabırsızlanıyoruz. Sınırlı personel ve finansmana sahip bir startup olarak, LLM uygulamaları oluşturmak ve yönetmek için en sezgisel iş akışını tasarlama konusunda büyük hedeflerimiz var. Topluluktan gelen her türlü yardım gerçekten önemli.
Bulunduğumuz noktada çevik olmamız ve hızlı hareket etmemiz gerekiyor, ancak sizin gibi katkıda bulunanların mümkün olduğunca sorunsuz bir deneyim yaşamasını da sağlamak istiyoruz. Bu katkı rehberini bu amaçla hazırladık; sizi kod tabanıyla ve katkıda bulunanlarla nasıl çalıştığımızla tanıştırmayı, böylece hızlıca eğlenceli kısma geçebilmenizi hedefliyoruz.
Bu rehber, Dify'ın kendisi gibi, sürekli gelişen bir çalışmadır. Bazen gerçek projenin gerisinde kalırsa anlayışınız için çok minnettarız ve gelişmemize yardımcı olacak her türlü geri bildirimi memnuniyetle karşılıyoruz.
Lisanslama konusunda, lütfen kısa [Lisans ve Katkıda Bulunan Anlaşmamızı](./LICENSE) okumak için bir dakikanızı ayırın. Topluluk ayrıca [davranış kurallarına](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md) da uyar.
## Başlamadan Önce
Üzerinde çalışacak bir şey mi arıyorsunuz? [İlk katkıda bulunanlar için iyi sorunlarımıza](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) göz atın ve başlamak için birini seçin!
Eklenecek harika bir yeni model runtime'ı veya aracınız mı var? [Eklenti depomuzda](https://github.com/langgenius/dify-plugins) bir PR açın ve ne yaptığınızı bize gösterin.
Mevcut bir model runtime'ını, aracı güncellemek veya bazı hataları düzeltmek mi istiyorsunuz? [Resmi eklenti depomuza](https://github.com/langgenius/dify-official-plugins) gidin ve sihrinizi gösterin!
Eğlenceye katılın, katkıda bulunun ve birlikte harika bir şeyler inşa edelim! 💡✨
PR açıklamasında mevcut bir sorunu bağlamayı veya yeni bir sorun açmayı unutmayın.
### Hata RaporlarÄą
> [!IMPORTANT]
> Lütfen bir hata raporu gönderirken aşağıdaki bilgileri dahil ettiğinizden emin olun:
- Net ve açıklayıcı bir başlık
- Hata mesajları dahil hatanın ayrıntılı bir açıklaması
- Hatayı tekrarlamak için adımlar
- Beklenen davranış
- Mümkünse **Loglar**, backend sorunları için, bu gerçekten önemlidir, bunları docker-compose loglarında bulabilirsiniz
- Uygunsa ekran görüntüleri veya videolar
Nasıl önceliklendiriyoruz:
| Sorun Türü | Öncelik |
| ------------------------------------------------------------ | --------------- |
| Temel işlevlerdeki hatalar (bulut hizmeti, giriş yapamama, çalışmayan uygulamalar, güvenlik açıkları) | Kritik |
| Kritik olmayan hatalar, performans artışları | Orta Öncelik |
| Küçük düzeltmeler (yazım hataları, kafa karıştırıcı ama çalışan UI) | Düşük Öncelik |
### Özellik İstekleri
> [!NOTE]
> Lütfen bir özellik isteği gönderirken aşağıdaki bilgileri dahil ettiğinizden emin olun:
- Net ve açıklayıcı bir başlık
- Özelliğin ayrıntılı bir açıklaması
- Özellik için bir kullanım durumu
- Özellik isteği hakkında diğer bağlamlar veya ekran görüntüleri
Nasıl önceliklendiriyoruz:
| Özellik Türü | Öncelik |
| ------------------------------------------------------------ | --------------- |
| Bir ekip üyesi tarafından etiketlenen Yüksek Öncelikli Özellikler | Yüksek Öncelik |
| [Topluluk geri bildirim panosundan](https://github.com/langgenius/dify/discussions/categories/feedbacks) popüler özellik istekleri | Orta Öncelik |
| Temel olmayan özellikler ve küçük geliştirmeler | Düşük Öncelik |
| Değerli ama acil olmayan | Gelecek-Özellik |
## PR'nizi Göndermek
### Pull Request Süreci
1. Depoyu fork edin
1. Bir PR taslağı oluşturmadan önce, yapmak istediğiniz değişiklikleri tartışmak için lütfen bir sorun oluşturun
1. Değişiklikleriniz için yeni bir dal oluşturun
1. Lütfen değişiklikleriniz için uygun testler ekleyin
1. Kodunuzun mevcut testleri geçtiğinden emin olun
1. Lütfen PR açıklamasında sorunu bağlayın, `fixes #<sorun_numarası>`
1. Kodunuzu birleştirin!
### Projeyi Kurma
#### Frontend
Frontend hizmetini kurmak için, lütfen `web/README.md` dosyasındaki kapsamlı [rehberimize](https://github.com/langgenius/dify/blob/main/web/README.md) bakın. Bu belge, frontend ortamını düzgün bir şekilde kurmanıza yardımcı olacak ayrıntılı talimatlar sağlar.
#### Backend
Backend hizmetini kurmak için, lütfen `api/README.md` dosyasındaki detaylı [talimatlarımıza](https://github.com/langgenius/dify/blob/main/api/README.md) bakın. Bu belge, backend'i sorunsuz bir şekilde çalıştırmanıza yardımcı olacak adım adım bir kılavuz içerir.
#### Dikkat Edilecek Diğer Şeyler
Kuruluma geçmeden önce bu belgeyi dikkatlice incelemenizi öneririz, çünkü şunlar hakkında temel bilgiler içerir:
- Ön koşullar ve bağımlılıklar
- Kurulum adımları
- Yapılandırma detayları
- Yaygın sorun giderme ipuçları
Kurulum süreci sırasında herhangi bir sorunla karşılaşırsanız bizimle iletişime geçmekten çekinmeyin.
## Yardım Almak
Katkıda bulunurken takılırsanız veya acil bir sorunuz olursa, sorularınızı ilgili GitHub sorunu aracılığıyla bize gönderin veya hızlı bir sohbet için [Discord'umuza](https://discord.gg/8Tpq4AcN9c) katılın.

97
CONTRIBUTING_TW.md Normal file

@@ -0,0 +1,97 @@
# åƒčˆ‡č˛ĸįģ
我們垈é̘興äŊ æƒŗčρį‚ē Dify 做å‡ēč˛ĸįģīŧäŊœį‚ē䏀個躇æēæœ‰é™įš„æ–°å‰ĩ團隊īŧŒæˆ‘å€‘æœŸæœ›æ‰“é€ æœ€į›´č§€įš„ LLM æ‡‰į”¨é–‹į™ŧčˆ‡įŽĄį†åˇĨäŊœæĩį¨‹ã€‚į¤žįž¤ä¸­įš„æ¯ä¸€äģŊč˛ĸįģ對我們䞆čĒĒéƒŊ非常重čĻã€‚
äŊœį‚ē一個åŋĢ速į™ŧåą•įš„å°ˆæĄˆīŧŒæˆ‘們需čρäŋæŒæ•æˇä¸ĻåŋĢ速čŋ­äģŖīŧŒåŒæ™‚䚟希望čƒŊį‚ēč˛ĸįģč€…æäž›é †æšĸįš„åƒčˆ‡éĢ”éŠ—ã€‚æˆ‘å€‘æē–å‚™äē†é€™äģŊč˛ĸįģ指南īŧŒåšĢ劊äŊ äē†č§Ŗį¨‹åŧįĸŧåēĢå’Œæˆ‘å€‘čˆ‡č˛ĸįģč€…åˆäŊœįš„æ–šåŧīŧŒčŽ“äŊ čƒŊå¤ į›Ąåŋ̿Еå…Ĩ有čļŖįš„é–‹į™ŧåˇĨäŊœã€‚
這äģŊæŒ‡å—čˆ‡ Dify ä¸€æ¨ŖīŧŒéƒŊ在持įēŒåŽŒå–„ä¸­ã€‚åĻ‚æžœæŒ‡å—å…§åŽšæœ‰čŊ垌æ–ŧå¯Ļéš›å°ˆæĄˆįš„æƒ…æŗīŧŒé‚„čĢ‹čĻ‹čĢ’īŧŒä🿭ĄčŋŽæäž›æ”šé€˛åģēč­°ã€‚
關æ–ŧ授æŦŠéƒ¨åˆ†īŧŒčĢ‹čŠąéģžæ™‚é–“é–ąčŽ€æˆ‘å€‘į°ĄįŸ­įš„[授æŦŠå’Œč˛ĸįģč€…å”č­°](./LICENSE)ã€‚į¤žįž¤äšŸéœ€éĩ厈[行į‚ēæē–則](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md)。
## 開始䚋前
æƒŗæ‰žéģžäē‹åšīŧŸį€čĻŊæˆ‘å€‘įš„[æ–°æ‰‹å‹å–„č­°éĄŒ](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22)ä¸Ļ挑選一個開始īŧ
æœ‰é…ˇį‚Ģįš„æ¨Ąåž‹åŸˇčĄŒæ™‚æœŸæˆ–åˇĨå…ˇčĻæ–°åĸžīŧŸåœ¨æˆ‘å€‘įš„[外掛倉åēĢ](https://github.com/langgenius/dify-plugins)開啟 PR åą•į¤ēäŊ įš„äŊœå“ã€‚
需čĻæ›´æ–°įžæœ‰įš„æ¨Ąåž‹åŸˇčĄŒæ™‚æœŸã€åˇĨå…ˇæˆ–äŋŽåžŠéŒ¯čǤīŧŸå‰åž€æˆ‘å€‘įš„[厘斚外掛倉åēĢ](https://github.com/langgenius/dify-official-plugins)開始äŊ įš„é­”æŗ•äš‹æ—…īŧ
加å…Ĩ我們īŧŒä¸€čĩˇč˛ĸįģä¸Ļ打造äģ¤äēēéŠšč‰ˇįš„äŊœå“å§īŧđŸ’Ąâœ¨
åˆĨåŋ˜äē†åœ¨ PR 描čŋ°ä¸­é€Ŗįĩįžæœ‰č­°éĄŒæˆ–é–‹å•Ÿæ–°č­°éĄŒã€‚
### 錯誤回報
> [!IMPORTANT]\
> 提交錯誤回報時,請務必包含以下資訊:
- 清晰明確的標題
- 詳細的錯誤描述,包含任何錯誤訊息
- 重現錯誤的步驟
- 預期行為
- **日誌**,如果有的話。對後端問題來說這點很重要,你可以在 docker-compose logs 中找到
- 截圖或影片(如適用)
優先順序評估:
| 議題類型 | 優先級 |
| -------- | ------ |
| 核心功能錯誤(雲端服務、無法登入、應用程式無法運作、安全漏洞) | 緊急 |
| 非緊急錯誤、效能優化 | 中等 |
| 次要修正(拼字錯誤、介面混淆但可運作) | 低 |
### 功能請求
> [!NOTE]\
> 提交功能請求時,請務必包含以下資訊:
- 清晰明確的標題
- 詳細的功能描述
- 功能的使用情境
- 其他相關背景說明或截圖
優先順序評估:
| 功能類型 | 優先級 |
| -------- | ------ |
| 團隊成員標記為高優先級的功能 | 高 |
| 來自[社群回饋板](https://github.com/langgenius/dify/discussions/categories/feedbacks)的熱門功能請求 | 中 |
| 非核心功能和小幅改進 | 低 |
| 有價值但非急迫的功能 | 未來功能 |
## 提交 PR
### PR 流程
1. Fork 專案
1. 在開始撰寫 PR 前,請先建立議題討論你想做的更改
1. 為你的更改建立新分支
1. 請為你的更改新增相應的測試
1. 確保你的程式碼通過現有測試
1. 請在 PR 描述中連結相關議題,使用 `fixes #<issue_number>`
1. 等待合併!
### 專案設定
#### 前端
關於前端服務的設定,請參考 `web/README.md` 中的完整[指南](https://github.com/langgenius/dify/blob/main/web/README.md)。此文件提供詳細說明,幫助你正確設定前端環境。
#### 後端
關於後端服務的設定,請參考 `api/README.md` 中的詳細[說明](https://github.com/langgenius/dify/blob/main/api/README.md)。此文件包含逐步指引,幫助你順利啟動後端服務。
#### 其他注意事項
我們建議在開始設定前仔細閱讀此文件,因為它包含以下重要資訊:
- 前置需求和相依性
- 安裝步驟
- 設定細節
- 常見問題排解
如果在設定過程中遇到任何問題,歡迎隨時詢問。
## 尋求協助
如果你在貢獻過程中遇到困難或有急切的問題,可以透過相關的 GitHub 議題詢問,或加入我們的 [Discord](https://discord.gg/8Tpq4AcN9c) 進行即時交流。

97
CONTRIBUTING_VI.md Normal file

@@ -0,0 +1,97 @@
# ĐÓNG GÓP
Bạn đang muốn đóng góp cho Dify - thật tuyệt vời, chúng tôi rất mong được thấy những gì bạn sẽ làm. Là một startup với nguồn nhân lực và tài chính hạn chế, chúng tôi có tham vọng lớn trong việc thiết kế quy trình trực quan nhất để xây dựng và quản lý các ứng dụng LLM. Mọi sự giúp đỡ từ cộng đồng đều rất có ý nghĩa.
Chúng tôi cần phải nhanh nhẹn và triển khai nhanh chóng, nhưng cũng muốn đảm bảo những người đóng góp như bạn có trải nghiệm đóng góp thuận lợi nhất có thể. Chúng tôi đã tạo hướng dẫn đóng góp này nhằm giúp bạn làm quen với codebase và cách chúng tôi làm việc với người đóng góp, để bạn có thể nhanh chóng bắt đầu phần thú vị.
Hướng dẫn này, giống như Dify, đang được phát triển liên tục. Chúng tôi rất cảm kích sự thông cảm của bạn nếu đôi khi nó chưa theo kịp dự án thực tế, và hoan nghênh mọi phản hồi để cải thiện.
Về giấy phép, vui lòng dành chút thời gian đọc [Thỏa thuận Cấp phép và Người đóng góp](./LICENSE) ngắn gọn của chúng tôi. Cộng đồng cũng tuân theo [quy tắc ứng xử](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Trước khi bắt đầu
Đang tìm việc để thực hiện? Hãy xem qua [các issue dành cho người mới](https://github.com/langgenius/dify/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) và chọn một để bắt đầu!
Bạn có một model runtime hoặc công cụ mới thú vị để thêm vào? Mở PR trong [repo plugin](https://github.com/langgenius/dify-plugins) của chúng tôi và cho chúng tôi thấy những gì bạn đã xây dựng.
Cần cập nhật model runtime, công cụ hiện có hoặc sửa lỗi? Ghé thăm [repo plugin chính thức](https://github.com/langgenius/dify-official-plugins) và thực hiện phép màu của bạn!
Hãy tham gia, đóng góp và cùng nhau xây dựng điều tuyệt vời! 💡✨
Đừng quên liên kết đến issue hiện có hoặc mở issue mới trong mô tả PR.
### Báo cáo lỗi
> [!IMPORTANT]\
> Vui lòng đảm bảo cung cấp các thông tin sau khi gửi báo cáo lỗi:
- Tiêu đề rõ ràng và mô tả
- Mô tả chi tiết về lỗi, bao gồm các thông báo lỗi
- Các bước để tái hiện lỗi
- Hành vi mong đợi
- **Log**, nếu có, cho các vấn đề backend, điều này rất quan trọng, bạn có thể tìm thấy chúng trong docker-compose logs
- Ảnh chụp màn hình hoặc video, nếu có thể
Cách chúng tôi ưu tiên:
| Loại vấn đề | Mức độ ưu tiên |
| ----------- | -------------- |
| Lỗi trong các chức năng cốt lõi (dịch vụ đám mây, không thể đăng nhập, ứng dụng không hoạt động, lỗ hổng bảo mật) | Quan trọng |
| Lỗi không nghiêm trọng, cải thiện hiệu suất | Ưu tiên trung bình |
| Sửa lỗi nhỏ (lỗi chính tả, UI gây nhầm lẫn nhưng vẫn hoạt động) | Ưu tiên thấp |
### Yêu cầu tính năng
> [!NOTE]
> Vui lòng đảm bảo cung cấp các thông tin sau khi gửi yêu cầu tính năng:
- Tiêu đề rõ ràng và mô tả
- Mô tả chi tiết về tính năng
- Trường hợp sử dụng cho tính năng
- Bất kỳ ngữ cảnh hoặc ảnh chụp màn hình nào về yêu cầu tính năng
Cách chúng tôi ưu tiên:
| Loại tính năng | Mức độ ưu tiên |
| -------------- | -------------- |
| Tính năng ưu tiên cao được gắn nhãn bởi thành viên nhóm | Ưu tiên cao |
| Yêu cầu tính năng phổ biến từ [bảng phản hồi cộng đồng](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Ưu tiên trung bình |
| Tính năng không cốt lõi và cải tiến nhỏ | Ưu tiên thấp |
| Có giá trị nhưng không cấp bách | Tính năng tương lai |
## Gửi PR của bạn
### Quy trình tạo Pull Request
1. Fork repository
1. Trước khi soạn PR, vui lòng tạo issue để thảo luận về các thay đổi bạn muốn thực hiện
1. Tạo nhánh mới cho các thay đổi của bạn
1. Vui lòng thêm test cho các thay đổi tương ứng
1. Đảm bảo code của bạn vượt qua các test hiện có
1. Vui lòng liên kết issue trong mô tả PR, `fixes #<số_issue>`
1. Được merge!
### Thiết lập dự án
#### Frontend
Để thiết lập dịch vụ frontend, vui lòng tham khảo [hướng dẫn](https://github.com/langgenius/dify/blob/main/web/README.md) chi tiết của chúng tôi trong file `web/README.md`. Tài liệu này cung cấp hướng dẫn chi tiết để giúp bạn thiết lập môi trường frontend một cách đúng đắn.
#### Backend
Để thiết lập dịch vụ backend, vui lòng tham khảo [hướng dẫn](https://github.com/langgenius/dify/blob/main/api/README.md) chi tiết của chúng tôi trong file `api/README.md`. Tài liệu này chứa hướng dẫn từng bước để giúp bạn khởi chạy backend một cách suôn sẻ.
#### Các điểm cần lưu ý khác
Chúng tôi khuyến nghị xem xét kỹ tài liệu này trước khi tiến hành thiết lập, vì nó chứa thông tin thiết yếu về:
- Điều kiện tiên quyết và dependencies
- Các bước cài đặt
- Chi tiết cấu hình
- Các mẹo xử lý sự cố phổ biến
Đừng ngần ngại liên hệ nếu bạn gặp bất kỳ vấn đề nào trong quá trình thiết lập.
## Nhận trợ giúp
Nếu bạn bị mắc kẹt hoặc có câu hỏi cấp bách trong quá trình đóng góp, chỉ cần gửi câu hỏi của bạn thông qua issue GitHub liên quan, hoặc tham gia [Discord](https://discord.gg/8Tpq4AcN9c) của chúng tôi để trò chuyện nhanh.

114
Makefile

@@ -4,96 +4,10 @@ WEB_IMAGE=$(DOCKER_REGISTRY)/dify-web
API_IMAGE=$(DOCKER_REGISTRY)/dify-api
VERSION=latest
# Default target - show help
.DEFAULT_GOAL := help
# Backend Development Environment Setup
.PHONY: dev-setup prepare-docker prepare-web prepare-api
# Dev setup target
dev-setup: prepare-docker prepare-web prepare-api
@echo "✅ Backend development environment setup complete!"
# Step 1: Prepare Docker middleware
prepare-docker:
@echo "đŸŗ Setting up Docker middleware..."
@cp -n docker/middleware.env.example docker/middleware.env 2>/dev/null || echo "Docker middleware.env already exists"
@cd docker && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p dify-middlewares-dev up -d
@echo "✅ Docker middleware started"
# Step 2: Prepare web environment
prepare-web:
@echo "🌐 Setting up web environment..."
@cp -n web/.env.example web/.env.local 2>/dev/null || echo "Web .env.local already exists"
@pnpm install
@echo "✅ Web environment prepared (not started)"
# Step 3: Prepare API environment
prepare-api:
@echo "🔧 Setting up API environment..."
@cp -n api/.env.example api/.env 2>/dev/null || echo "API .env already exists"
@cd api && uv sync --dev
@cd api && uv run flask db upgrade
@echo "✅ API environment prepared (not started)"
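The `cp -n … 2>/dev/null || echo …` idiom used by the `prepare-*` recipes makes env-file creation idempotent: the copy only happens on the first run, and reruns neither overwrite an existing file nor abort make. A minimal standalone sketch of that behavior (the temp paths and `KEY` values are illustrative):

```shell
# `cp -n` copies only when the destination is missing; the `|| echo` keeps
# the recipe's exit status zero on reruns (newer coreutils make a skipped
# `cp -n` exit non-zero, which would otherwise stop make).
tmpdir=$(mktemp -d)
printf 'KEY=example\n' > "$tmpdir/.env.example"

cp -n "$tmpdir/.env.example" "$tmpdir/.env" 2>/dev/null || echo ".env already exists"
first=$(cat "$tmpdir/.env")

printf 'KEY=changed\n' > "$tmpdir/.env.example"   # the example file changes later
cp -n "$tmpdir/.env.example" "$tmpdir/.env" 2>/dev/null || echo ".env already exists"
second=$(cat "$tmpdir/.env")                      # second copy was a no-op
rm -rf "$tmpdir"
```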
# Clean dev environment
dev-clean:
@echo "⚠️ Stopping Docker containers..."
@cd docker && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p dify-middlewares-dev down
@echo "🗑️ Removing volumes..."
@rm -rf docker/volumes/db
@rm -rf docker/volumes/redis
@rm -rf docker/volumes/plugin_daemon
@rm -rf docker/volumes/weaviate
@rm -rf api/storage
@echo "✅ Cleanup complete"
# Backend Code Quality Commands
format:
@echo "🎨 Running ruff format..."
@uv run --project api --dev ruff format ./api
@echo "✅ Code formatting complete"
check:
@echo "🔍 Running ruff check..."
@uv run --project api --dev ruff check ./api
@echo "✅ Code check complete"
lint:
@echo "🔧 Running ruff format, check with fixes, import linter, and dotenv-linter..."
@uv run --project api --dev ruff format ./api
@uv run --project api --dev ruff check --fix ./api
@uv run --directory api --dev lint-imports
@uv run --project api --dev dotenv-linter ./api/.env.example ./web/.env.example
@echo "✅ Linting complete"
type-check:
@echo "📝 Running type checks (basedpyright + pyrefly + mypy)..."
@./dev/basedpyright-check $(PATH_TO_CHECK)
@./dev/pyrefly-check-local
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Type checks complete"
type-check-core:
@echo "📝 Running core type checks (basedpyright + mypy)..."
@./dev/basedpyright-check $(PATH_TO_CHECK)
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Core type checks complete"
test:
@echo "🧪 Running backend unit tests..."
@if [ -n "$(TARGET_TESTS)" ]; then \
echo "Target: $(TARGET_TESTS)"; \
uv run --project api --dev pytest $(TARGET_TESTS); \
else \
PYTEST_XDIST_ARGS="-n auto" uv run --project api --dev dev/pytest/pytest_unit_tests.sh; \
fi
@echo "✅ Tests complete"
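The `test` target's branch on `TARGET_TESTS` can be read as plain shell: a non-empty value runs just that path, an empty one runs the full parallel suite. A sketch of the equivalent logic, with `echo` standing in for the actual `uv run … pytest` invocations:

```shell
# Mirror of the Makefile conditional: targeted tests when TARGET_TESTS is
# set, otherwise the full unit-test run.
run_tests() {
  if [ -n "$1" ]; then
    echo "pytest $1"             # stand-in for: uv run --project api --dev pytest "$1"
  else
    echo "full suite (-n auto)"  # stand-in for dev/pytest/pytest_unit_tests.sh
  fi
}

targeted=$(run_tests "./api/tests/unit_tests")
full=$(run_tests "")
```

This is why `make test` alone runs everything, while `make test TARGET_TESTS=./api/tests/<target_tests>` narrows the run.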
# Build Docker images
build-web:
@echo "Building web Docker image: $(WEB_IMAGE):$(VERSION)..."
docker build -f web/Dockerfile -t $(WEB_IMAGE):$(VERSION) .
docker build -t $(WEB_IMAGE):$(VERSION) ./web
@echo "Web Docker image built successfully: $(WEB_IMAGE):$(VERSION)"
build-api:
@@ -125,29 +39,5 @@ build-push-web: build-web push-web
build-push-all: build-all push-all
@echo "All Docker images have been built and pushed."
# Help target
help:
@echo "Development Setup Targets:"
@echo " make dev-setup - Run all setup steps for backend dev environment"
@echo " make prepare-docker - Set up Docker middleware"
@echo " make prepare-web - Set up web environment"
@echo " make prepare-api - Set up API environment"
@echo " make dev-clean - Stop Docker middleware containers"
@echo ""
@echo "Backend Code Quality:"
@echo " make format - Format code with ruff"
@echo " make check - Check code with ruff"
@echo " make lint - Format, fix, and lint code (ruff, imports, dotenv)"
@echo " make type-check - Run type checks (basedpyright, pyrefly, mypy)"
@echo " make type-check-core - Run core type checks (basedpyright, mypy)"
@echo " make test - Run backend unit tests (or TARGET_TESTS=./api/tests/<target_tests>)"
@echo ""
@echo "Docker Build Targets:"
@echo " make build-web - Build web Docker image"
@echo " make build-api - Build API Docker image"
@echo " make build-all - Build all Docker images"
@echo " make push-all - Push all Docker images"
@echo " make build-push-all - Build and push all Docker images"
# Phony targets
.PHONY: build-web build-api push-web push-api build-all push-all build-push-all dev-setup prepare-docker prepare-web prepare-api dev-clean help format check lint type-check test
.PHONY: build-web build-api push-web push-api build-all push-all build-push-all


@@ -1,5 +1,9 @@
![cover-v5-optimized](./images/GitHub_README_if.png)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: Recreate Google NotebookLM Podcast</a>
</p>
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">Self-hosting</a> ·
@@ -32,35 +36,25 @@
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Health Score" src="https://insights.linuxfoundation.org/api/badge/health-score?project=langgenius-dify"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Contributors" src="https://insights.linuxfoundation.org/api/badge/contributors?project=langgenius-dify"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Active Contributors" src="https://insights.linuxfoundation.org/api/badge/active-contributors?project=langgenius-dify"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./docs/zh-TW/README.md"><img alt="繁體中文文件" src="https://img.shields.io/badge/繁體中文-d9d9d9"></a>
<a href="./docs/zh-CN/README.md"><img alt="简体中文文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./docs/ja-JP/README.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./docs/es-ES/README.md"><img alt="README en EspaÃąol" src="https://img.shields.io/badge/EspaÃąol-d9d9d9"></a>
<a href="./docs/fr-FR/README.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./docs/tlh/README.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./docs/ko-KR/README.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./docs/ar-SA/README.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./docs/tr-TR/README.md"><img alt="TÃŧrkçe README" src="https://img.shields.io/badge/TÃŧrkçe-d9d9d9"></a>
<a href="./docs/vi-VN/README.md"><img alt="README Tiáēŋng Viáģ‡t" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./docs/de-DE/README.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./docs/it-IT/README.md"><img alt="README in Italiano" src="https://img.shields.io/badge/Italiano-d9d9d9"></a>
<a href="./docs/pt-BR/README.md"><img alt="README em PortuguÃĒs do Brasil" src="https://img.shields.io/badge/Portugu%C3%AAs%20do%20Brasil-d9d9d9"></a>
<a href="./docs/sl-SI/README.md"><img alt="README SlovenÅĄÄina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="./docs/bn-BD/README.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
<a href="./docs/hi-IN/README.md"><img alt="README in हिन्दी" src="https://img.shields.io/badge/Hindi-d9d9d9"></a>
<a href="./README_TW.md"><img alt="繁體中文文件" src="https://img.shields.io/badge/繁體中文-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en EspaÃąol" src="https://img.shields.io/badge/EspaÃąol-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="TÃŧrkçe README" src="https://img.shields.io/badge/TÃŧrkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiáēŋng Viáģ‡t" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features (including [Opik](https://www.comet.com/docs/opik/integrations/dify), [Langfuse](https://docs.langfuse.com), and [Arize Phoenix](https://docs.arize.com/phoenix)) and more, letting you quickly go from prototype to production. Here's a list of the core features:
Dify is an open-source platform for developing LLM applications. Its intuitive interface combines agentic AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more—allowing you to quickly move from prototype to production.
## Quick start
@@ -69,7 +63,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
> - CPU >= 2 Core
> - RAM >= 4 GiB
<br/>
</br>
The easiest way to start the Dify server is through [Docker Compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
@@ -115,15 +109,15 @@ All of Dify's offerings come with corresponding APIs, so you could effortlessly
## Using Dify
- **Cloud <br/>**
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition<br/>**
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprise / organizations<br/>**
We provide additional enterprise-centric features. [Send us an email](mailto:business@dify.ai?subject=%5BGitHub%5DBusiness%20License%20Inquiry) to discuss your enterprise needs. <br/>
- **Dify for enterprise / organizations</br>**
We provide additional enterprise-centric features. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=%5BGitHub%5DBusiness%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
@@ -135,30 +129,7 @@ Star Dify on GitHub and be instantly notified of new releases.
## Advanced Setup
### Custom configurations
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
#### Customizing Suggested Questions
You can now customize the "Suggested Questions After Answer" feature to better fit your use case. For example, to generate longer, more technical questions:
```bash
# In your .env file
SUGGESTED_QUESTIONS_PROMPT='Please help me predict the five most likely technical follow-up questions a developer would ask. Focus on implementation details, best practices, and architecture considerations. Keep each question between 40-60 characters. Output must be JSON array: ["question1","question2","question3","question4","question5"]'
SUGGESTED_QUESTIONS_MAX_TOKENS=512
SUGGESTED_QUESTIONS_TEMPERATURE=0.3
```
See the [Suggested Questions Configuration Guide](docs/suggested-questions-configuration.md) for detailed examples and usage instructions.
### Metrics Monitoring with Grafana
Import the dashboard to Grafana, using Dify's PostgreSQL database as data source, to monitor metrics in granularity of apps, tenants, messages, and more.
- [Grafana Dashboard by @bowenliang123](https://github.com/bowenliang123/dify-grafana-dashboard)
### Deployment with Kubernetes
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.

196
README_AR.md Normal file

@@ -0,0 +1,196 @@
![cover-v5-optimized](./images/GitHub_README_if.png)
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">الاستضافة الذاتية</a> ·
<a href="https://docs.dify.ai">التوثيق</a> ·
<a href="https://dify.ai/pricing">نظرة عامة على منتجات Dify</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="join Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
  <a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
  <a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
  <a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
  <a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
  <a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
  <a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
  <a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
  <a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
  <a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
  <a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
<div style="text-align: right;">
Dify is an open-source AI application development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production. Here's a list of the core features:
</br> </br>
**1. Workflow**: Build and test powerful AI workflows on a visual canvas, leveraging all of the following features and beyond.
**2. Comprehensive model support**: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**: Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**: All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [quick-start guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprises / organizations</br>**
We provide additional enterprise-centric features. [Schedule a meeting with us](https://cal.com/guchenhe/30min) or [send us an email](mailto:business@dify.ai?subject=%5BGitHub%5DBusiness%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Quick start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
> - CPU >= 2 cores
> - RAM >= 4 GiB
</br>
The easiest way to start the Dify server is to run our [docker-compose.yml](docker/docker-compose.yaml) file. Before running the installation command, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)
## Next steps
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
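The `.env` customization described above can be sketched as a minimal shell session. `EXPOSE_NGINX_PORT` is used purely as an illustrative variable name here; check `docker/.env.example` in your checkout for the authoritative list.

```shell
# Illustrative sketch of the .env customization flow. EXPOSE_NGINX_PORT is an
# example variable name; verify it against docker/.env.example in your checkout.
mkdir -p dify-env-demo && cd dify-env-demo
printf 'EXPOSE_NGINX_PORT=80\n' > .env.example   # stand-in for docker/.env.example
cp .env.example .env
# Override values in .env rather than editing .env.example itself:
sed -i.bak 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT=8080/' .env
grep '^EXPOSE_NGINX_PORT=' .env   # → EXPOSE_NGINX_PORT=8080
```

After a change like this, re-running `docker compose up -d` recreates the affected containers with the new values.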
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (supporting Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Deploying with Terraform
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Deploying with AWS CDK
Deploy Dify to AWS using [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao (EKS based)](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
- [AWS CDK by @tmokmss (ECS based)](https://github.com/aws-samples/dify-self-hosted-on-aws)
#### Deploying with Alibaba Cloud
[One-click deploy Dify to Alibaba Cloud with Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Deploying with Alibaba Cloud Data Management
One-click deploy Dify to Alibaba Cloud using [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
#### Using Azure Devops Pipeline to deploy on AKS
One-click deploy Dify to AKS using [Azure Devops Pipeline Helm Chart by @LeoZhang](https://github.com/Ruiruiz30/Dify-helm-chart-AKS)
## Contributing
For those who'd like to contribute, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n-config/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Community & contact
- [GitHub Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star history
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to <security@dify.ai> and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

README_BN.md (new file, 206 lines)

@@ -0,0 +1,206 @@
![cover-v5-optimized](./images/GitHub_README_if.png)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: recreating the Google NotebookLM podcast</a>
</p>
<p align="center">
  <a href="https://cloud.dify.ai">Dify Cloud</a> ·
  <a href="https://docs.dify.ai/getting-started/install-self-hosted">Self-hosting</a> ·
  <a href="https://docs.dify.ai">Documentation</a> ·
  <a href="https://dify.ai/pricing">Dify product overview</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="join Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
  <a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
  <a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
  <a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
  <a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
  <a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
  <a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
  <a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
  <a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
  <a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
  <a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
  <a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, monitoring features, and more, helping you quickly go from prototype to production.
## Quick start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
> - CPU >= 2 cores
> - RAM >= 4 GiB
</br>
The easiest way to start the Dify server is through [docker compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd dify
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
#### Seeking help
Please refer to our [FAQ](https://docs.dify.ai/getting-started/install-self-hosted/faqs) if you encounter problems setting up Dify. Reach out to [the community and us](#community--contact) if you are still having trouble.
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)
## Key features
**1. Workflow**:
Build and test AI workflows on a visual canvas, leveraging all of the following features and beyond.
**2. Model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Using Dify
- **Cloud </br>**
You can try our [Dify Cloud](https://dify.ai) service with zero setup. It provides all the capabilities of the self-hosted version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprises / organizations</br>**
We provide enterprise/organization-centric services. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=%5BGitHub%5DBusiness%20License%20Inquiry) to discuss your needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
## āĻāĻ—āĻŋāϝāĻŧ⧇ āĻĨāĻžāϕ⧁āύ
GitHub-āĻ āĻĄāĻŋāĻĢāĻžāχāϕ⧇ āĻ¸ā§āϟāĻžāϰ āĻĻāĻŋā§Ÿā§‡ āϰāĻžāϖ⧁āύ āĻāĻŦāĻ‚ āύāϤ⧁āύ āϰāĻŋāϞāĻŋāĻœā§‡āϰ āĻ–āĻŦāϰ āϤāĻžā§ŽāĻ•ā§āώāĻŖāĻŋāĻ•āĻ­āĻžāĻŦ⧇ āĻĒāĻžāύāĨ¤
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Advanced Setup
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself based on your specific deployment environment and requirements, such as changing image versions, port mappings, or volume mounts.
After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
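One low-friction way to make the `docker-compose.yaml` adjustments mentioned above is a Compose override file, which `docker compose` merges automatically on `up`. This is a generic sketch: the `nginx` service name is an assumption; use the service names actually defined in `docker/docker-compose.yaml`.

```shell
# Generic Docker Compose override sketch; the "nginx" service name is an
# assumption -- match it to the services in docker/docker-compose.yaml.
cat > docker-compose.override.yaml <<'EOF'
services:
  nginx:
    ports:
      - "8080:80"
EOF
# docker compose picks up docker-compose.override.yaml automatically on `up`.
cat docker-compose.override.yaml
```

Note that Compose merges list-valued fields such as `ports`, so a remapped port may be published alongside the original unless the base entry is removed (or, on recent Compose versions, the `!override` YAML tag is used).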
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (supporting Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Deploy with Terraform
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Deploy with AWS CDK
Deploy Dify on AWS using [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao (EKS based)](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
- [AWS CDK by @tmokmss (ECS based)](https://github.com/aws-samples/dify-self-hosted-on-aws)
#### Deploy with Alibaba Cloud
One-click deploy Dify to Alibaba Cloud with [Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Deploy with Alibaba Cloud Data Management
One-click deploy Dify to Alibaba Cloud using [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
#### Using Azure Devops Pipeline to deploy on AKS
One-click deploy Dify to AKS using [Azure Devops Pipeline Helm Chart by @LeoZhang](https://github.com/Ruiruiz30/Dify-helm-chart-AKS)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n-config/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
## Community & contact
- [GitHub Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## āĻ¸ā§āϟāĻžāϰ āĻšāĻŋāĻ¸ā§āĻŸā§āϰāĻŋ
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## āύāĻŋāϰāĻžāĻĒāĻ¤ā§āϤāĻž āĻŦāĻŋāώ⧟āĻ•
āφāĻĒāύāĻžāϰ āĻ—ā§‹āĻĒāύ⧀āϝāĻŧāϤāĻž āϰāĻ•ā§āώāĻž āĻ•āϰāϤ⧇, āĻ…āύ⧁āĻ—ā§āϰāĻš āĻ•āϰ⧇ GitHub-āĻ āύāĻŋāϰāĻžāĻĒāĻ¤ā§āϤāĻž āϏāĻ‚āĻ•ā§āϰāĻžāĻ¨ā§āϤ āϏāĻŽāĻ¸ā§āϝāĻž āĻĒā§‹āĻ¸ā§āϟ āĻ•āϰāĻž āĻāĻĄāĻŧāĻŋāϝāĻŧ⧇ āϚāϞ⧁āύāĨ¤ āĻĒāϰāĻŋāĻŦāĻ°ā§āϤ⧇, āφāĻĒāύāĻžāϰ āĻĒā§āϰāĻļā§āύāϗ⧁āϞāĻŋ <security@dify.ai> āĻ āĻŋāĻ•āĻžāύāĻžāϝāĻŧ āĻĒāĻžāĻ āĻžāύ āĻāĻŦāĻ‚ āφāĻŽāϰāĻž āφāĻĒāύāĻžāϕ⧇ āφāϰāĻ“ āĻŦāĻŋāĻ¸ā§āϤāĻžāϰāĻŋāϤ āωāĻ¤ā§āϤāϰ āĻĒā§āϰāĻĻāĻžāύ āĻ•āϰāĻŦāĨ¤
## āϞāĻžāχāϏ⧇āĻ¨ā§āϏ
āĻāχ āϰāĻŋāĻĒā§‹āϜāĻŋāϟāϰāĻŋāϟāĻŋ [āĻĄāĻŋāĻĢāĻžāχ āĻ“āĻĒ⧇āύ āϏ⧋āĻ°ā§āϏ āϞāĻžāχāϏ⧇āĻ¨ā§āϏ](LICENSE) āĻāϰ āĻ…āϧāĻŋāύ⧇ , āϝāĻž āĻŽā§‚āϞāϤ āĻ…ā§āϝāĻžāĻĒāĻžāϚāĻŋ ⧍.ā§Ļ, āϤāĻŦ⧇ āĻ•āĻŋāϛ⧁ āĻ…āϤāĻŋāϰāĻŋāĻ•ā§āϤ āĻŦāĻŋāϧāĻŋāύāĻŋāώ⧇āϧ āϰāϝāĻŧ⧇āϛ⧇āĨ¤
